Does Phinx support pt-online-schema-change?
I realize Phinx is supposed to handle DB migrations, but in a live environment even a simple ALTER TABLE on a huge table can lock the table and cause temporary service unavailability.
There is a tool in Percona Toolkit called pt-online-schema-change which can perform the schema change without downtime, by creating a shadow copy of the table, copying the data across, and replaying any changes made in the meantime.
Is there a way to easily integrate these two, in order to get nice DB migration management from Phinx, and the production zero downtime from Percona Toolkit? Is there any other DB migration management tool, which supports pt-online-schema-change?
Phinx doesn't support pt-online-schema-change at the moment. You could try opening an issue on the GitHub project for future support (if it proves to be popular). Somebody has been hacking on something similar (see https://github.com/masom/lhm_php); it is a port of a Ruby-based SoundCloud project.
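For what it's worth, a migration step can simply shell out to pt-online-schema-change instead of issuing the ALTER itself. A minimal sketch of that idea (not a built-in Phinx feature; the host, user, table and column below are placeholders):

    import subprocess

    # Run the schema change online instead of issuing a blocking ALTER TABLE.
    # --alter, --execute and the D=database,t=table DSN are standard
    # pt-online-schema-change options; host, user and column are placeholders.
    subprocess.run(
        [
            "pt-online-schema-change",
            "--alter", "ADD COLUMN last_login DATETIME NULL",
            "--host", "127.0.0.1",
            "--user", "migrator",
            "--ask-pass",
            "--execute",      # try --dry-run first to preview the change
            "D=app,t=users",
        ],
        check=True,
    )

Running with --dry-run first is a good way to confirm the triggers and the table copy behave as expected before touching production.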
Related
How to upgrade a production DB schema from a dev DB schema automatically, via command line? The dev version has changes to the schema that need to be made in the production schema, but I cannot lose the data in production.
Schema Migrations
Most modern projects use a tool to track each individual change to the database, and associate some version number with the change. The database must also have some table to store its current version. That way the tool can query the current version and figure out which (if any) changes to apply.
There are several free tools to do this, like:
Liquibase
Flyway
Rails Migrations
Doctrine Migrations
SQLAlchemy Migrate
All of these require that you write meticulous code files for each change as you develop. It would be hard to reverse-engineer a project if you haven't been following the process of creating schema change code all along.
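The bookkeeping these tools share is simple, though. A rough sketch of the idea, assuming a hypothetical schema_version table (sqlite3 is used here only as a stand-in connection; the idea is the same on MySQL):

    import sqlite3

    # A tiny table records how far the schema has advanced; the tool queries it
    # and applies any change scripts numbered above the current version.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE schema_version (version INTEGER NOT NULL)")
    conn.execute("INSERT INTO schema_version VALUES (3)")  # three changes applied so far

    (current,) = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    print(f"database is at version {current}; apply change scripts numbered above {current}")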
There are tools like mysqldbcompare that can help you generate the minimal ALTER TABLE statements to upgrade your production database.
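For example, a rough sketch of driving mysqldbcompare from a script; the option names are recalled from the MySQL Utilities documentation, so verify them against your installed version:

    import subprocess

    # Diff the dev schema against production and emit the ALTER statements that
    # would bring production (server2) in line with dev. Hosts, users and the
    # "app" schema name are placeholders.
    subprocess.run(
        [
            "mysqldbcompare",
            "--server1=devuser@dev-host",
            "--server2=produser@prod-host",
            "--difftype=sql",
            "--changes-for=server2",
            "--run-all-tests",
            "app:app",
        ],
        check=True,
    )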
There is also a newer free tool called Shift (I work with the engineer who created it), which helps to automate the process of upgrading your database. It even provides a nice web interface for entering your schema changes, running them as online changes, and monitoring their progress. But it requires quite a lot of experience to use; I wouldn't recommend it for a beginner.
You can also use MySQL Workbench (https://dev.mysql.com/downloads/workbench/) if you want an easy GUI that works across operating systems. ;)
I'm looking into setting up FlywayDB as the migration toolkit for our webapp; however, some migrations (such as adding a column) on large tables (90 million rows) take many minutes to run.
Usually when this is the case we use Percona Toolkit to run the schema change, as it allows the application to keep running without blocking incoming queries. So my question is whether there is a way to run FlywayDB migrations through Percona Toolkit or something similar. I have been unable to find much, if any, real documentation on such a situation.
There is no direct integration between FlywayDB and Percona's pt-online-schema-change. You would have to code hooks into FlywayDB that execute pt-osc for you during deployments/migrations, or write a plugin for pt-osc that reads FlywayDB's migration files.
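As a rough illustration of the first option, a deploy script could apply the known large-table changes online first and then hand over to Flyway. This assumes the heavy changes are kept outside Flyway's versioned scripts; all names below are placeholders:

    import subprocess

    # Known large-table changes are applied online via pt-online-schema-change;
    # everything else still goes through Flyway's own CLI.
    LARGE_TABLE_CHANGES = [
        ("D=app,t=events", "ADD COLUMN processed_at DATETIME NULL"),
    ]

    def deploy():
        for dsn, alter in LARGE_TABLE_CHANGES:
            subprocess.run(
                ["pt-online-schema-change", "--alter", alter, "--execute", dsn],
                check=True,
            )
        subprocess.run(["flyway", "migrate"], check=True)

    if __name__ == "__main__":
        deploy()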
I want to auto-generate a script of the database (as a .sql file) and track it using Git. I know it may be a silly question, but how practical would this be? Junior developers are also working on the same database, and I would like to give them the freedom to use the live database as they see fit without losing my important data, while keeping the ability to revert changes on the live system.
The main requirement is that it must use Git: one command from me, and the script is generated and ready.
Any suggestions would be helpful. Thanks in advance.
Here are some suggestions that can help with database management in a multi-developer, version-controlled environment.
Use a migrations library. There might be one for your stack (e.g. South if you're working with Django 1.6 or earlier, Doctrine Migrations for Doctrine, Active Record Migrations for Rails) or you could use a general-purpose tool like Liquibase.
In general, when working with database migration libraries you end up with a number of versioned files that modify your database in some way. For example, you might have a migrations directory containing:
0001_create_initial_schema.sql
0002_add_user_model.sql
0003_replace_plaintext_passwords_with_hashed_passwords.sql
Depending on your library, these scripts may logically be written at the database level or at the ORM level. In general, each script is reversible.
This lets developers easily upgrade their database schemas as development continues.
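For illustration, such a script is typically just a pair of "up" and "down" steps. A generic sketch, not tied to any particular library, with made-up table and column names:

    # 0004_add_last_login_to_users.py -- a hypothetical reversible migration.
    # The migration runner calls up() to apply the change and down() to undo it.

    def up(conn):
        conn.execute("ALTER TABLE users ADD COLUMN last_login DATETIME NULL")

    def down(conn):
        conn.execute("ALTER TABLE users DROP COLUMN last_login")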
Set each developer up with their own copy of the database. You could ask them to install MySQL locally and have initialization instructions along the lines of "create database, create user, add credentials to config file, run migrations".
Alternatively you could use something like Vagrant to easily create self-contained virtual machines for developers to use. Vagrant can easily install MySQL and create a database.
Finally, you could create databases for your developers on a centralized server, e.g. app1_john, app1_cindy, app1_joel.
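Whichever route you pick, the initialization itself is only a few statements. A sketch assuming the mysql command-line client is available, with placeholder names and passwords:

    import subprocess

    # One-off setup of a personal database copy for one developer. Assumes an
    # admin account; CREATE USER IF NOT EXISTS needs MySQL 5.7 or newer.
    DEV = "john"
    SQL = f"""
    CREATE DATABASE IF NOT EXISTS app1_{DEV};
    CREATE USER IF NOT EXISTS '{DEV}'@'%' IDENTIFIED BY 'changeme';
    GRANT ALL PRIVILEGES ON app1_{DEV}.* TO '{DEV}'@'%';
    """

    subprocess.run(["mysql", "-u", "root", "-p", "-e", SQL], check=True)
    # ...then run the project's migrations against app1_john to bring it up to date.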
Using these guidelines, developers have the flexibility to modify their own databases as needed. They can share these modifications cleanly by adding migrations to the code repository. Any developer can reset their database at any time, either by running all the migrations in reverse or by simply dropping the database.
Using this model there should be no need for most developers to have administrative access to the production database. Only allow trusted users to modify the production database, and always upgrade your database schema using the well-tested migration scripts that you have used everywhere else.
And, of course, keep regular backups.
I'm working with MySQL databases.
To simplify the problem, let's say I have two environments: the local one (development) and the remote one (production).
In the database, I have some tables that contain configuration data.
How can I cleanly automate the delivery from development to production when I modify the database schema and the content of the configuration tables?
Right now I do it manually by diffing the local and remote databases, but I find that method unclean and I believe there is an established good practice for this.
This might be helpful in cases where you have multiple environments and multiple developers making schema changes very often, and you are using PHP: https://github.com/davejkiger/mysql-php-migrations
Introduce a "version" parameter for your database. This version should be recorded somewhere in your code and somewhere in your database. Your code should work with the database only if the two versions are equal.
Create a wrapper around your MySQL connection. This wrapper should check the versions and, if they are not compatible, start an upgrade.
An "upgrade" is the process of sequentially applying a list of *.sql files containing the SQL commands that move your database from one state to the next. These can be schema changes or data-manipulation commands.
Whenever you change the database, do it only by adding a new *.sql file and incrementing the version.
As a result, when you deploy your database from the development environment to production, it will be upgraded automatically in the same way it was upgraded during development.
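A minimal sketch of such a wrapper, assuming a hypothetical one-row db_version table, a migrations/ directory of numbered *.sql files, and the mysql-connector-python driver:

    import pathlib
    import mysql.connector  # assumption: mysql-connector-python is installed

    CODE_VERSION = 7  # the schema version this build of the code expects

    class VersionedConnection:
        """Wraps a MySQL connection and upgrades the schema if it is behind."""

        def __init__(self, **connect_args):
            self.conn = mysql.connector.connect(**connect_args)
            if self._db_version() < CODE_VERSION:
                self._upgrade()

        def _db_version(self):
            cur = self.conn.cursor()
            cur.execute("SELECT version FROM db_version")  # one-row bookkeeping table
            return cur.fetchone()[0]

        def _upgrade(self):
            cur = self.conn.cursor()
            for path in sorted(pathlib.Path("migrations").glob("*.sql")):
                version = int(path.stem.split("_")[0])  # e.g. 0008_add_index.sql
                if version <= self._db_version():
                    continue
                # Naive statement splitting; good enough for a sketch.
                for statement in path.read_text().split(";"):
                    if statement.strip():
                        cur.execute(statement)
                cur.execute("UPDATE db_version SET version = %s", (version,))
                self.conn.commit()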
I've seen Liquibase (http://www.liquibase.org/) a lot in Java environments.
In most of my projects I use SQLAlchemy (a Python toolkit for managing databases, plus an ORM). If you have some experience with Python (a little more than beginner level), I highly recommend using it.
You can pick up this tool with a little help from that experience. It is also very useful for migrating your database to another RDBMS (for example, MySQL to PostgreSQL or Oracle).
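As a small illustration of the portability point, the same SQLAlchemy table definition works against different backends just by swapping the engine URL (the connection strings below are placeholders):

    from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine

    metadata = MetaData()
    users = Table(
        "users",
        metadata,
        Column("id", Integer, primary_key=True),
        Column("email", String(255), nullable=False),
    )

    # The table definition stays the same; only the engine URL changes per backend.
    mysql_engine = create_engine("mysql+pymysql://user:pass@localhost/app")
    postgres_engine = create_engine("postgresql+psycopg2://user:pass@localhost/app")

    metadata.create_all(postgres_engine)  # emits PostgreSQL-flavoured DDL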
I want to migrate my production MySQL database to some other RDBMS. Someone suggested using SQLite. I have the following questions:
Is there any tool to migrate MySQL to SQLite?
Any GUI tool to manage SQLite databases?
How reliable is it for large production databases?
(I'm not sure about MySQL-to-SQLite migration tools. As always with SQL, there are dialect differences that may have to be taken into account; it really depends on your existing databases.)
MySQL and SQLite are fundamentally different in that MySQL is server-based and intended to be used by clients, whereas SQLite is file-based and intended to be used via an API that accesses the underlying files directly. As such, you don't need to manage a SQLite database the same way you would manage MySQL, because SQLite is an embedded database. There are useful tools for working with SQLite databases; one of them is SQLite Manager (it doesn't have to run within Firefox).
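To make the "embedded" point concrete, Python's standard library can open a SQLite database as an ordinary file, with no server process involved:

    import sqlite3

    # The whole database lives in this single file; there is no server process,
    # no user accounts and nothing to administer.
    conn = sqlite3.connect("app.db")
    conn.execute("CREATE TABLE IF NOT EXISTS settings (key TEXT PRIMARY KEY, value TEXT)")
    conn.execute("INSERT OR REPLACE INTO settings VALUES ('theme', 'dark')")
    conn.commit()

    for key, value in conn.execute("SELECT key, value FROM settings"):
        print(key, value)
    conn.close()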
The file-based design may be an issue for large production databases if you need concurrent access (see the SQLite FAQ).
Old question, but I needed to convert a MySQL database recently. I developed a small snippet in Lua to do the basic job of converting the CREATE and INSERT statements. I don't guarantee that it will work in all cases; just report it if it doesn't.
And, by the way, I wrote the same script in GNU awk some time ago. The Lua version is about two times faster! As a big supporter of gawk, that came as a surprise to me.
Lua version: https://gist.github.com/985257
Gawk version: https://gist.github.com/943776