GeoIP Discontinuation - updates

I use the GeoIP Legacy code on my site. The way I'm reading this page is that the database for that version will no longer be updated. They say to use the GeoLite2 databases, and that "the GeoLite2 databases are free. However, you will need to update your GeoLite Legacy integrations to work with GeoLite2 databases."
But I can't find how to do that update. Does anyone know how to do this?
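For reference, my current Legacy lookup is the first snippet below, and my best guess at the GeoLite2 equivalent is the second, using the geoip2 Python package (MaxMind's own client library, if I understand correctly) against a downloaded GeoLite2-City.mmdb file. The file paths are from my setup, and I haven't confirmed this is the blessed migration path, so treat it as a sketch:

    # Old GeoLite Legacy lookup (pygeoip; path is from my setup)
    import pygeoip
    gi = pygeoip.GeoIP('/usr/share/GeoIP/GeoLiteCity.dat')
    record = gi.record_by_addr('128.101.101.101')
    print(record['country_code'], record['city'])

    # My guess at the GeoLite2 equivalent (geoip2 package) -- unverified
    import geoip2.database
    reader = geoip2.database.Reader('/usr/share/GeoIP/GeoLite2-City.mmdb')
    response = reader.city('128.101.101.101')
    print(response.country.iso_code, response.city.name)
    reader.close()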

Related

MySQL or PostgreSQL on AWS

I am trying to understand the trade-offs between going with MySQL or PostgreSQL on AWS.
Some considerations for me are that I am an amateur database user, so I need to be sure resources are available which allow me to overcome problems quickly. Along these lines, I bought the book 'PostgreSQL on the Cloud' and was all set to go with PostgreSQL since the book laid out a great use case.
One thing held me back, though: it is important for my work to be able to easily use Excel as a front end for importing and exporting data into and out of the database on AWS.
It looks like MySQL has an open extension which is fully integrated with Excel and is also well documented. My research into PostgreSQL uncovered a much more uneven integration with Excel, and a lot of long-standing group frustration that a closer integration has not already happened.
Right now I am leaning toward MySQL, but want to make sure I am not missing something.
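If the native Excel integration turns out to be rough, my fallback plan is to round-trip the data with a small script instead. A minimal sketch, assuming pandas, SQLAlchemy, and openpyxl are installed; the connection string and table names are placeholders:

    import pandas as pd
    from sqlalchemy import create_engine

    # Placeholder connection string; swap in postgresql+psycopg2://...
    # if going with PostgreSQL instead.
    engine = create_engine('mysql+pymysql://user:password@my-aws-host/mydb')

    # Export: pull a table into an Excel workbook.
    df = pd.read_sql('SELECT * FROM orders', engine)
    df.to_excel('orders.xlsx', index=False)

    # Import: push an edited spreadsheet back into a staging table.
    edited = pd.read_excel('orders.xlsx')
    edited.to_sql('orders_staging', engine, if_exists='replace', index=False)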
Thanks!
Microsoft touts a PostgreSQL plugin as well: https://support.office.com/en-us/article/connect-to-a-postgresql-database-power-query-bf941e52-066f-4911-a41f-2493c39e69e4. Never used it, so can't comment on it.
You mention you are a beginner, so I'll add: be careful about security with either of these options. There are options to encrypt the channel between the client and the server, which you indicate is running on AWS. If the channel is not secured, anyone on the network path could monitor the connections, extract credentials, and do whatever they like to your AWS-hosted DB. Generally, cloud-hosted DBs should be behind an authentication/authorization login process.
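For example, with MySQL the client can require an encrypted connection by handing it the server's CA certificate. A minimal sketch using mysql-connector-python; the host, credentials, and file path are placeholders:

    import mysql.connector

    # Placeholder credentials; ssl_ca points at the CA certificate for the
    # server (e.g. the RDS CA bundle if you use Amazon RDS).
    conn = mysql.connector.connect(
        host='mydb.example.us-east-1.rds.amazonaws.com',
        user='appuser',
        password='secret',
        database='mydb',
        ssl_ca='/path/to/ca-bundle.pem',  # enables TLS and verifies server
    )
    cur = conn.cursor()
    cur.execute("SHOW STATUS LIKE 'Ssl_cipher'")  # non-empty => encrypted
    print(cur.fetchone())
    conn.close()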

ETL between a MySQL primary Data Store and a MongoDB secondary Data Store

We have a Rails app with a MySQL backend; each client has one DB, and the schemas are identical. We use a custom gem to switch the DB based on the URL of the request (this is some legacy code that we are trying to move away from).
We need to capture some changes from those MySQL databases (changes in inventory, some order information, etc.), transform them, and store them in a single MongoDB database (a multitenant data store). This data will be used for analytics at first, but our idea is to eventually move everything there.
There was something in place to do this, using ActiveRecord callbacks and RabbitMQ, but to be honest it wasn't working correctly, and it looked like it was more trouble to fix it than to start over with a fresh approach.
We did some research and found some tools to do ETL but they are overkill for our needs.
Does anyone have experience with a similar problem? Any recommendations on how to architect and implement this simple ETL?
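To make the question concrete, the shape of what we need is roughly the following. This is only a sketch, assuming PyMySQL and pymongo; the table, collection, and column names are made up:

    import pymysql
    from datetime import datetime, timedelta
    from pymongo import MongoClient

    TENANT_DBS = ['client_a', 'client_b']      # one MySQL DB per client
    mongo = MongoClient('mongodb://localhost:27017')
    target = mongo.analytics.inventory         # single multitenant collection

    # In real life this watermark would be persisted between runs.
    last_run = datetime.utcnow() - timedelta(hours=1)

    for tenant in TENANT_DBS:
        conn = pymysql.connect(host='localhost', user='etl',
                               password='secret', database=tenant)
        with conn.cursor(pymysql.cursors.DictCursor) as cur:
            # Pull only the rows changed since the last run.
            cur.execute("SELECT id, sku, quantity, updated_at FROM inventory"
                        " WHERE updated_at > %s", (last_run,))
            for row in cur:
                # Tag each document with its tenant; upsert by (tenant, id).
                target.replace_one(
                    {'tenant': tenant, 'source_id': row['id']},
                    {'tenant': tenant, 'source_id': row['id'],
                     'sku': row['sku'], 'quantity': row['quantity'],
                     'updated_at': row['updated_at']},
                    upsert=True)
        conn.close()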
Pentaho provides a change-data-capture option which can solve data-synchronization problems.
If by overkill you mean setup and configuration, then yes, that is the common problem with ETL tools, and Pentaho is the easiest among them.
If you can provide more details, I'll be glad to provide an elaborate answer.

How to keep track of database updates associated with Jira issues?

We are using Jira as our issue tracker, and our team works with Mercurial repositories. When a developer makes a database change that is associated with a Jira issue, he adds the SQL as a comment on the issue. The problem with this is that when it comes time to push these issues to our production site, I need to browse through each issue going live to see which ones have DB updates in their comments. There has to be a better way!
Our production MySQL DB is on a shared host that does not allow us direct access. Any SQL updates I want to go live need to be emailed in an SQL file to be imported.
Thanks.
What you describe is a common problem when developing against a database. The usual solution is "database versioning".
The basic idea is that different states of your schema (i.e. your tables, columns, stored procedures etc.) get different version numbers. Then scripts for migrating between schema versions are created and stored.
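To illustrate the mechanics, here is a minimal sketch of such a migration runner in Python, assuming numbered .sql files in a migrations/ directory and PyMySQL; the directory layout and table name are conventions I made up:

    import os
    import pymysql

    conn = pymysql.connect(host='localhost', user='deploy',
                           password='secret', database='mydb')
    cur = conn.cursor()

    # Track which migration versions have already been applied.
    cur.execute("CREATE TABLE IF NOT EXISTS schema_version "
                "(version VARCHAR(255) PRIMARY KEY)")
    cur.execute("SELECT version FROM schema_version")
    applied = {row[0] for row in cur.fetchall()}

    # Apply outstanding migrations in order: 001_create_users.sql, 002_...
    for name in sorted(os.listdir('migrations')):
        if not name.endswith('.sql') or name in applied:
            continue
        with open(os.path.join('migrations', name)) as f:
            # Naive split on ';' -- fine for simple scripts, not for
            # stored procedures or strings containing semicolons.
            for statement in f.read().split(';'):
                if statement.strip():
                    cur.execute(statement)
        cur.execute("INSERT INTO schema_version (version) VALUES (%s)",
                    (name,))
        conn.commit()
        print('applied', name)

    conn.close()

On your shared host, the "apply" step would instead concatenate the not-yet-applied scripts into the file you email in, but the bookkeeping idea is the same.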
Be warned that you'll likely need to fundamentally change your workflow. I don't think keeping the SQL for migrations in Jira comments is a sustainable strategy. SQL is code, and belongs in the code repository.
See e.g. this question for details and techniques: Database Schema Versioning Strategies

MySQL Database offline use

Is there a way to use a MySQL database without the database management system? Like using the tables offline, without installing the DB management system on the machine?
If there is can you please point me in the right direction?
Thank you!
As far as I know, there is no way to do this.
However, there is a portable DBMS, SQLite. It ships in different forms and can be used on many platforms with different programming languages.
After reading your comment, I'm almost sure this is what you need.
It's not as fast as MySQL, I guess, but it works.
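For example, with Python's built-in sqlite3 module the entire database is a single file; nothing has to be installed or started:

    import sqlite3

    # The whole database lives in this one file; no server process needed.
    conn = sqlite3.connect('contacts.db')
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS contacts "
                "(id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
    cur.execute("INSERT INTO contacts (name, email) VALUES (?, ?)",
                ('Alice', 'alice@example.com'))
    conn.commit()
    print(cur.execute("SELECT * FROM contacts").fetchall())
    conn.close()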
You can use the embedded MySQL server library (libmysqld) to access MySQL data files without running the MySQL server.
You can set up a database to work on your localhost. This will be offline unless you set up the front-end stuff to let the internet interact with it.
What exactly do you mean "without the database management system"? You always need a way of interacting with it, even if it is offline. (Otherwise how can it work for you?)
The server-side piece of the application, mysql-server, is needed at a minimum to run MySQL. This server application comes with all the tools built in to manage the instance. I doubt you can avoid installing it.
If you've actually opened the table files in a hex or text editor, you'll see that you definitely need the MySQL application installed to make any sense of them. Sure, the records are all there in plain text (.MYD files for MyISAM, the ibdata1 file for InnoDB tables), but it would be a complete time-waster to devise a custom app to parse or update the file structure, let alone to tie in the table structure contained in the related files for each table.

Interface between CardDAV server and MySQL database

My web app uses MySQL to store contact data. I'd like to sync this data via CardDAV with mobile devices. I understand CardDAV is based on a file system, not a database. What software is available to act as an interface or wrapper to make a CardDAV server work with MySQL, or another relational database?
You might want to take a look at Bedework.
Baikal just added this feature!!!
Most DAV servers are file-system based. If you use SabreDAV you can build a virtual filesystem on top of your own backend. Baikal is a project that uses SabreDAV and a virtual file system. Until recently it stored its data in SQLite; now it supports both MySQL and SQLite.
It's still not 100% mature, but it's a great starting point. Playing around with it, I have been able to create contacts directly in the DB (by uploading vCard blobs to a table) and then have them show up in my iPad address book.
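Roughly what that experiment looked like, as a sketch. It assumes PyMySQL and a SabreDAV-style cards table (addressbookid, carddata, uri, lastmodified columns); check your installation's actual schema first (newer schemas may also require etag and size columns), and note you may have to bump the addressbook's ctag/synctoken before clients notice the new card:

    import uuid
    import pymysql

    # A bare-bones vCard blob; real ones can carry much more data.
    uid = str(uuid.uuid4())
    vcard = ("BEGIN:VCARD\r\n"
             "VERSION:3.0\r\n"
             f"UID:{uid}\r\n"
             "FN:Jane Doe\r\n"
             "TEL;TYPE=CELL:+1-555-0100\r\n"
             "END:VCARD\r\n")

    conn = pymysql.connect(host='localhost', user='baikal',
                           password='secret', database='baikal')
    with conn.cursor() as cur:
        # Column names follow the SabreDAV PDO backend; verify against
        # your own database before running this.
        cur.execute("INSERT INTO cards "
                    "(addressbookid, carddata, uri, lastmodified) "
                    "VALUES (%s, %s, %s, UNIX_TIMESTAMP())",
                    (1, vcard, uid + '.vcf'))
    conn.commit()
    conn.close()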
After evaluating many systems, I've found that ones built on SabreDAV, like Baikal, tend to be the simplest to build on. Fruxx is something else you may want to check out. It's a hosted system, but will soon have an API.
Lastly, if you are looking for a very elaborate system, take a look at Tine 2.0. It supports ActiveSync (illegally in the USA), CardDAV, and CalDAV, and has a decent ExtJS web UI. It natively stores contact information in its MySQL store, which is nice since you can update a contact through an SQL statement without having to build a VCF file. Where Tine doesn't make sense is that it uses a bit more resources because of all the features it offers, and the complexity has left it with a VERY complicated database schema. In other words, you are probably better off building a REST API on top of the Tine source code rather than doing bare SQL inserts.
http://baikal-server.com/