Run Flyway Migrations via Percona Toolkit - mysql

I'm looking into setting up FlywayDB as our migration toolkit for our webapp; however, some migrations (such as adding a column) on large tables (90 million rows) take many minutes to run.
Usually when this is the case we use Percona Toolkit to run the schema change, as it allows the application to continue running without blocking incoming queries. So my question is: is there a way to run FlywayDB migrations through Percona Toolkit or something similar? I have been unable to find much, if any, real documentation on such a situation.

There is no direct integration between Percona's online schema change tool and external sources. You would have to code hooks into FlywayDB to execute pt-online-schema-change (PT-OSC) for you during deployments/migrations, or you could write a plugin for PT-OSC that reads FlywayDB migration files.
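For illustration only, here is a minimal sketch of such a hook, assuming a deploy script written in Python: instead of letting the migration run a blocking ALTER, it shells out to pt-online-schema-change. The database, table, and ALTER clause are hypothetical placeholders; only PT-OSC's documented --alter/--execute options and D=...,t=... DSN syntax are assumed.

    import subprocess

    def run_online_alter(database, table, alter_clause, host="localhost", user="deploy"):
        # Run the schema change through pt-online-schema-change so the
        # table stays readable/writable while it is rebuilt.
        subprocess.run([
            "pt-online-schema-change",
            "--alter", alter_clause,          # e.g. "ADD COLUMN note TEXT"
            "--host", host,
            "--user", user,
            "--execute",                      # use --dry-run first to test
            "D={},t={}".format(database, table),
        ], check=True)

    # Hypothetical slow migration from the question:
    run_online_alter("appdb", "big_table", "ADD COLUMN note TEXT")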

Related

How to upgrade MySQL database schema?

How to upgrade a production DB schema from a dev DB schema automatically, via command line? The dev version has changes to the schema that need to be made in the production schema, but I cannot lose the data in production.
Schema Migrations
Most modern projects use a tool to track each individual change to the database, and associate some version number with the change. The database must also have some table to store its current version. That way the tool can query the current version and figure out which (if any) changes to apply.
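As a rough sketch of that mechanism (this is the general idea, not any particular tool's implementation; the table and file names are invented):

    import pathlib
    import sqlite3  # stand-in driver; with MySQL the logic is identical

    def apply_pending_migrations(conn, migrations_dir="migrations"):
        # The tool stores the current version in a table in the database itself.
        conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
        current = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
        # Files are named like 0002_add_user_model.sql; apply the newer ones in order.
        for path in sorted(pathlib.Path(migrations_dir).glob("*.sql")):
            version = int(path.name.split("_")[0])
            if version > current:
                conn.executescript(path.read_text())
                conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
        conn.commit()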
There are several free tools to do this, like:
Liquibase
Flyway
Rails Migrations
Doctrine Migrations
SQLAlchemy Migrate
All of these require that you meticulously write a code file for each change as you develop. It would be hard to reverse-engineer this for an existing project if you haven't been creating schema change code all along.
There are tools like mysqldbcompare that can help you generate the minimal ALTER TABLE statements to upgrade your production database.
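For example, an invocation along these lines (hosts, credentials, and the database name are placeholders) asks mysqldbcompare for SQL-formatted differences rather than a plain report:

    import subprocess

    subprocess.run([
        "mysqldbcompare",
        "--server1=deploy@prod-db.example.com",   # placeholder DSNs
        "--server2=deploy@dev-db.example.com",
        "--difftype=sql",       # emit ALTER statements instead of a diff
        "--run-all-tests",      # keep going past the first difference
        "appdb:appdb",
    ], check=True)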
There is also a newer free tool called Shift (I work with the engineer who created it), which helps to automate the process of upgrading your database. It even provides a nice web interface for entering your schema changes, running them as online changes, and monitoring their progress. But it requires quite a lot of experience; I wouldn't recommend it for a beginner.
You can also use MySQL Workbench (https://dev.mysql.com/downloads/workbench/) if you want an easy GUI; it's cross-OS compatible, too ;)

Phinx and pt-online-schema-change

Does Phinx support pt-online-schema-change?
I realize Phinx is supposed to handle DB migrations. But in the live environment, running a simple ALTER TABLE command on a huge table might lead to a table lock and temporary service unavailability.
There is a tool from Percona Toolkit called pt-online-schema-change which can handle the schema migration without any downtime: it creates a copy of the table, copies the data across, and keeps the copy in sync with triggers while it runs.
Is there a way to easily integrate these two, in order to get nice DB migration management from Phinx, and the production zero downtime from Percona Toolkit? Is there any other DB migration management tool, which supports pt-online-schema-change?
Phinx doesn't support pt-online-schema-change at the moment. You could try opening an issue on the GitHub project for future support (if it proves to be popular). Somebody has been hacking on something similar (see: https://github.com/masom/lhm_php); it is a port of a Ruby-based SoundCloud project.

Git + database auto generated script

I want to auto-generate a script of the database (with a .sql extension) using Git. I know it is a stupid question, but how comfortable would it be when junior developers are also working on the same database? I want to give them the freedom to use the live database as their own, without losing my important data and with the chance to revert changes on live.
The main constraint is that it must use Git: one command from me, and the script is ready.
Any suggestions will be helpful. Thanks in advance.
Here are some suggestions that can help with database management in a multi-developer, version-controlled environment.
Use a migrations library. There might be one for your stack (e.g. South if you're working with Django 1.6 or earlier, Doctrine Migrations for Doctrine, Active Record Migrations for Rails) or you could use a general purpose tool like Liquibase.
In general, when working with database migrations libraries you end up with a number of versioned files that modify your database in some way. For example you might have a migrations directory containing
0001_create_initial_schema.sql
0002_add_user_model.sql
0003_replace_plaintext_passwords_with_hashed_passwords.sql
Depending on your library, these scripts may logically be written at the database level or at the ORM level. In general, each script is reversible.
This lets developers easily upgrade their database schemas as development continues.
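To make the reversibility point concrete, here is a hedged sketch of how one migration might pair a change with its inverse (the representation is invented; real tools each have their own format):

    # One reversible migration: run "up" to apply, "down" to roll back.
    MIGRATION_0002_ADD_USER_MODEL = {
        "up": """
            CREATE TABLE user (
                id INTEGER PRIMARY KEY,
                email VARCHAR(255) NOT NULL
            );
        """,
        "down": "DROP TABLE user;",
    }

    def rollback(conn, migration):
        # sqlite3-style call; substitute your driver's equivalent for MySQL
        conn.executescript(migration["down"])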
Set each developer up with their own copy of the database. You could ask them to install MySQL locally and have initialization instructions along the lines of "create database, create user, add credentials to config file, run migrations".
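For the local-install option, those initialization instructions boil down to a few statements like these (names and passwords are placeholders):

    import subprocess

    setup_sql = """
    CREATE DATABASE app1;
    CREATE USER 'dev'@'localhost' IDENTIFIED BY 'devpass';
    GRANT ALL PRIVILEGES ON app1.* TO 'dev'@'localhost';
    """
    # Add a -p option if your local root account has a password.
    subprocess.run(["mysql", "-u", "root"], input=setup_sql, text=True, check=True)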
Alternatively you could use something like Vagrant to easily create self-contained virtual machines for developers to use. Vagrant can easily install MySQL and create a database.
Finally, you could create databases for your developers on a centralized server, e.g. app1_john, app1_cindy, app1_joel.
Using these guidelines, developers have the flexibility to modify their own databases as needed. They can share these modifications cleanly by adding migrations to the code repository. Any developer can reset their database at any time, either by running all the migrations in reverse or by simply dropping the database.
Using this model there should be no need for most developers to have administrative access to the production database. Only allow trusted users to modify the production database, and always upgrade your database schema using the well-tested migrations scripts that you have used everywhere else.
And, of course, keep regular backups.
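For instance, a scheduled backup can be as small as this sketch (paths and the database name are placeholders; only standard mysqldump options are used):

    import datetime
    import pathlib
    import subprocess

    def backup(database, backup_dir="/var/backups/mysql"):
        out = pathlib.Path(backup_dir) / "{}-{}.sql".format(database, datetime.date.today())
        with open(out, "w") as f:
            subprocess.run(
                ["mysqldump", "--single-transaction", "--routines", database],
                stdout=f, check=True,
            )

    backup("appdb")  # hypothetical database name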

How to execute MySQL scripts in Cloud Foundry

I have a .sql file (initial SQL scripts). I have recently deployed an application on Cloud Foundry, so I want to run these scripts to make the application work. The scripts will update more than 5 DB tables.
Is there any way to run the MySQL scripts from the Grails application on startup, or is there any provision for running the scripts in Cloud Foundry?
You have several options here.
The first one (which I recommend), is to use something like http://liquibase.org/ (there is a Grails plugin for it: http://grails.org/plugin/liquibase). This tool will make sure that any script you give it will run prior to the app starting, without running the same script twice, etc. This is great to keep track of your database changes.
This works independently of Cloud Foundry and would help anyone installing your app to have an up-to-date schema.
The second option would be to tunnel to the Cloud Foundry database and run the script against the DB. Have a look at http://docs.cloudfoundry.com/tools/vmc/caldecott.html or, even easier, with STS: http://blog.cloudfoundry.com/2012/07/31/cloud-foundry-integration-for-eclipse-now-supports-tunneling-to-services/
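Once the tunnel is open, it reports a local host/port and credentials to connect through; from there, running the script is an ordinary mysql client call. A sketch (every value below is a placeholder for whatever the tunnel actually prints):

    import subprocess

    # Placeholder host/port/credentials: use the ones the tunnel reports.
    with open("initial_scripts.sql") as f:
        subprocess.run(
            ["mysql", "-h", "127.0.0.1", "-P", "10000",
             "-u", "tunnel_user", "-ptunnel_password", "appdb"],
            stdin=f, check=True,
        )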
Yup, what ebottard said! :-) Although, personally I would opt for using the tunnel feature on VMC, but that said, I am a Ruby guy!
Be wary of the fact that there are timeouts on queries in MySQL if you are bootstrapping your database with large datasets!

Delivering a modified database from the local env to the production one

I'm working with MySQL databases.
To simplify the problem, let's say I have two environments : the local one (development) and the remote one (production mode).
In the database, I have some tables that contain configuration data.
How can I cleanly automate the delivery from development to production when I modify the database schema and the contents of the configuration tables?
For instance, I currently do it manually by diffing the local and remote databases. But I find that method not so clean, and I believe there is a good practice for this.
This might be helpful in cases where you have multiple environments and multiple developers making schema changes very often, using PHP: https://github.com/davejkiger/mysql-php-migrations
Introduce a "version" parameter for your database. This version should be recorded somewhere in your code and somewhere in your database. Your code will work with the database only if the two versions are equal.
Create a wrapper around your MySQL connection. This wrapper should check the versions, and if they are not compatible, it should start an upgrade.
An "upgrade" is the process of sequentially applying a list of *.sql files containing SQL commands that move your database from one state to another. These can be schema changes or data manipulation commands.
When you change the database, do it only by adding a new *.sql file and incrementing the version.
As a result, when you deploy your database from the development environment to production, it will be upgraded automatically, in the same way it was upgraded during development.
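A minimal sketch of that wrapper idea, assuming a DB-API style driver such as pymysql (every name here is invented):

    import pathlib

    APP_SCHEMA_VERSION = 12  # the version this build of the code expects

    def check_and_upgrade(conn, scripts_dir="db/upgrades"):
        cur = conn.cursor()
        cur.execute("SELECT version FROM db_version")
        db_version = cur.fetchone()[0]
        # Apply 0013.sql, 0014.sql, ... until code and database agree.
        for v in range(db_version + 1, APP_SCHEMA_VERSION + 1):
            sql = pathlib.Path(scripts_dir, "{:04d}.sql".format(v)).read_text()
            for statement in sql.split(";"):   # naive split; fine for simple files
                if statement.strip():
                    cur.execute(statement)
            cur.execute("UPDATE db_version SET version = %s", (v,))
            conn.commit()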
I've seen LiquiBase http://www.liquibase.org/ a lot in Java environments.
In most of my projects I use SQLAlchemy (a Python tool to manage databases, plus an ORM). If you have some experience with Python (a little more than beginner level), I highly recommend using it.
You can try this tool with just a little Python. It is also very useful for migrating your DB to another RDBMS (for example, MySQL to PostgreSQL or Oracle).
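As a tiny illustration of that cross-RDBMS point: with SQLAlchemy, the backend is largely a matter of the connection URL (credentials below are placeholders):

    from sqlalchemy import create_engine, text

    # Swap the dialect in the URL to target a different RDBMS.
    engine = create_engine("mysql+pymysql://user:password@localhost/appdb")
    # engine = create_engine("postgresql+psycopg2://user:password@localhost/appdb")

    with engine.connect() as conn:
        print(conn.execute(text("SELECT 1")).scalar())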