Migrate a Development MySQL database to a Production database

I need to be able to make changes to my development DB, such as adding a table or adding a column.
Is it possible to take this new DB schema and merge it (or diff-and-merge it) with the production DB without having to rebuild/repopulate the production database?
Any tips welcome.

A simple way to do this is to keep track of your ALTERs and CREATEs in a file.
For example, if I were to add a column to a table on the development DB, I would copy-paste the SQL I used into a file called migrate.sql. I would keep doing this until I'm ready to migrate to production.
At this point the file would be a series of sql statements that could be run in order on the production db to "sync" it with the development environment.
If you're not writing the raw queries yourself, you can probably get the commands being run out of whatever GUI tool you're using.
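
To make that concrete, here is a rough sketch of the idea; every table, column, host and database name below is invented:

    # each change made on the dev DB gets appended to migrate.sql, for example:
    echo "ALTER TABLE users ADD COLUMN last_login DATETIME NULL;" >> migrate.sql
    echo "CREATE TABLE user_notes (id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY, user_id INT UNSIGNED NOT NULL, note TEXT);" >> migrate.sql

    # when it's time to migrate, replay the whole file against production
    mysql --host=prod-db --user=deploy -p myapp_production < migrate.sql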

Related

merge design of mysql between localhost and server?

I'm kinda new to this kind of problem. I'm developing a web app and changing the DB design as I try to improve it and add new tables.
Until a few days ago we hadn't published the app, so what I would do was dump all the tables on the server and import my local version. But now we've passed version 1 and users are starting to use it,
so I can't just dump the server database, yet I still need to update the design of the server DB whenever I publish a new version. What are the best practices here?
I'd like to know how I can manage the differences between local and server in MySQL.
I need to preserve the data on the server and just change the design; the data in the local DB is only for testing.
Before this, all my other apps were small and I would change a single table or column by hand, but I can't keep track of all the changes now, since I might revert many of them later, and coordinating this across all team members is impossible.
Assuming you are not using a framework that provides a database migration tool, you need to keep track of the changes manually.
Create a folder sql_upgrades (or whatever name you like) in your code repository.
Whenever a team member updates the SQL schema, they create a file in this folder with the corresponding ALTER statements, and possibly UPDATE, CREATE TABLE, etc. So basically the file contains all the statements used to update the dev database.
Name the files so that they're easy to manage and statements for the same feature are grouped together. I suggest something like YYYYMMDD-description.sql, e.g. 20150825-queries-for-feature-foobar.sql.
When you push to production, execute the files to upgrade your SQL schema in production. Only execute the files that have been created since your last deployment, and execute them in the order they were created.
Should you need to roll back a file, check the queries it contains and write queries to undo what was done (drop added columns, re-create dropped columns, etc.). Note that this is non-trivial, as many changes cannot be rolled back fully (e.g. you can re-create a dropped column, but you will have lost the data it contained).
Many web frameworks (such as Ruby on Rails) have tools that will do exactly this process for you. They usually work together with the ORM provided by the framework. Keeping track of the changes manually in SQL works just as well.
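
A rough illustration of that manual convention, with invented file names, statements and connection details:

    # sql_upgrades/20150825-queries-for-feature-foobar.sql would contain e.g.:
    #   ALTER TABLE orders ADD COLUMN foobar_flag TINYINT(1) NOT NULL DEFAULT 0;
    #   CREATE TABLE foobar_settings (id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY, value VARCHAR(255));

    # at deployment time, apply only the files created since the last deploy, in order
    for f in sql_upgrades/20150825-queries-for-feature-foobar.sql \
             sql_upgrades/20150826-add-index-on-users-email.sql; do
        echo "Applying $f"
        mysql --host=prod-db --user=deploy -p myapp < "$f"
    done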

How do you make changes to a production MySQL database?

I'm still fairly new to MySQL, and have an app which uses a MySQL database. What is the proper workflow for copying changes from a development copy to the production DB (indexes, new fields, etc)? So far, I've just used phpMyAdmin to make the changes one at a time, but it seems wrong to work this way.

Modify database schema with MySQL Workbench

Using MySQL Workbench, I created an ERD and database schema. I've deployed the database to my production server and have live data.
I now need to modify my schema. Instead of making changes on the live server database, I would like to modify the ERD, test it, and then create a modification script to deploy on the production server. Obviously, I do not wish to lose data, and thus cannot simply drop a table or column and then add new ones.
Is this possible to do with MySQL workbench? If so, how?
This is possible, and it's called "Synchronization": a two-way merge between a model and a live database. Synchronization doesn't touch the data in the schema, but as usual, when you modify a DB structure (removing tables or columns) the associated data is lost, regardless of how you do it. So make sure you have proper backups.
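
Whichever tool pushes the change, a plain mysqldump taken just before you synchronize is cheap insurance; a sketch with placeholder host, user and schema names:

    # full backup of the live schema and data before letting Workbench alter it
    mysqldump --host=prod-db --user=backup -p --single-transaction --routines \
        myapp_production > myapp_production-$(date +%Y%m%d).sql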

test and production server deployment with yii framework - syncing DB changes

I am working on a Yii framework based app where I have to test the app on my local machine and then, when ready, move the changes to the production server.
The app will be developed while people are using it and asking for new features. So when I make changes to my DB schema on the test machine, I have to apply these to the schema of the production DB without destroying the data there.
Is there a recommended and convenient way to deal with this? Syncing source code is less of an issue; I am using svn and can do svn export ; rsync ...
MySQL Workbench can be helpful for syncing DB schemas, as well as for other database design tasks.
Yii does support Migrations (since v1.1.6), although they can be more trouble than they're worth depending on how often you make changes and how collaborative your project is.
Another approach I've used is to keep a log of update statements in svn and basically handle the migrations manually.
The best approach is going to depend on the cost/benefits to your particular project/workflow.
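
If you do go the Yii migrations route, the workflow is roughly as follows (the migration name here is invented, and details vary between Yii versions):

    # generate a skeleton migration class under protected/migrations/
    cd protected
    ./yiic migrate create add_price_column_to_products
    # edit the generated class: put your ALTER statements in up(), their reverse in down()

    ./yiic migrate        # apply all pending migrations (run this on production at deploy time)
    ./yiic migrate down   # revert the most recent migration if something goes wrong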
You can try SQLyog's Schema Synchronization Tool, a visual comparison/synchronization tool designed for developers who work across different MySQL servers or need to keep the databases on two MySQL servers in sync. It reports the differences between the tables, indexes, columns and routines of two databases, and generates the scripts to bring them in sync. Only the schema is synced in the target.
For a similar project we:
1. use MySQL Workbench (MWB) to design and edit the schema;
2. share the .mwb file through a VCS;
3. when one of us is comfortable with a change, use mysqldump --complete-insert ... on the production and test schemas to generate a copy of the existing test and production data with field names;
4. pull out all the production server INSERT statements from (3) and put them in protected/data/insert.sql;
5. use the "forward engineer" menu item in MWB on the modified model to generate SQL, saved to a file called protected/data/create.sql, hand-editing as appropriate (be sure to use the IF EXISTS clause to reduce errors);
6. write a drop.sql file based on the DROP statements in (3);
7. using MWB, run the SQL files (drop.sql, create.sql, insert.sql) after issuing the appropriate "use database" command that identifies the production database;
8. deal with all the errors in (7) by getting rid of any invalid inserts due to columns/fields that are not needed in the new models/schema; rerun (7);
9. deal with new fields in (7) that need data other than NULL; rerun (7).
Now you have a snapshot of your schema (drop.sql, create.sql) and your data that should revive either your test or production server if you ever have a problem. And you have a "fixture" of the data that was in your production server (insert.sql) that can be used to bring your test server up to speed, or as a backup of the production server data (one that will quickly be outdated). Obviously all the foreign key relationships are what are really painful, so it's rare that insert.sql is useful for anything except upgrading the schema and restoring the production data after this change. Clearly it takes some time to work out the kinks in the process so that the delay between (3) and (9) is small enough that the production server users don't notice the downtime.
Clearly "Rerun (7)" gets repetitive and quickly turns into a shell script that calls mysql directly. Other steps in the SQL editing process likewise become sed scripts.
Have a look at the schema comparison tool in dbForge Studio for MySQL.
This tool will help you to compare and synchronize two databases, or a database project with a specified database.
There is also a separate tool, dbForge Schema Compare for MySQL.

Database Design: Separate Live and Test databases for PHP Website

My organization is rewriting our database-driven website from scratch including a brand new database schema.
In the current setup, both the live and test websites use the exact same database.
We would like the next version to have separate databases for the live and test versions of the website.
Updating the live version of the database with new schema changes from test isn't a problem.
The problem is data on the live database is constantly being changed. For example, users uploading images, modifying meta data, creating new objects, etc.
What is the proper way of keeping the data on the test server in sync with user-entered data on the live server?
Generally, you don't keep it in sync. The test database is a separate system, and the data in it should be generated - so you can dump it and restore its state between different tests.
You should generate "migrations" (popularised by RoR, but they're a good idea) against the test database whenever you change the schema. Migrations are ALTER TABLE statements that update the layout of the database to the new schema. You can then run the migrations against the live database when you're upgrading it.
Of course, beforehand, it'd no doubt be a good idea to dump the real database into another test database to ensure the migration goes smoothly.
I'd spin off a pair of databases and look at beginning to unit test properly. If you're looking for a more off-the-cuff solution, I'd copy the live database to the test database and run your migrations nightly with a cron job.
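
A sketch of what that nightly job could look like, with placeholder hosts, database names and migration file (credentials are assumed to live in ~/.my.cnf so cron can run it unattended):

    #!/bin/sh
    # refresh the test database from live, then dry-run the pending migration
    # cron entry, e.g.: 0 3 * * * /usr/local/bin/refresh_test_db.sh
    set -e
    mysqldump --host=live-db --single-transaction live_site > /tmp/live_snapshot.sql
    mysql --host=test-db -e "DROP DATABASE IF EXISTS test_site; CREATE DATABASE test_site"
    mysql --host=test-db test_site < /tmp/live_snapshot.sql
    # verify the migration against the copy before it ever touches the live database
    mysql --host=test-db test_site < migrations/20240101-new-schema.sql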
Good call on separating production and test.
This is a loaded question, and we might be able to point you in the right direction if you can tell us which database you are using and the amount of data you are expecting to migrate from production to test. For example, for a relatively small database you could just mysqldump out of production into a completely new database on your test database instance. Other solutions involve replication, but this has some overhead in terms of configuration, runtime and maintenance.
Do you really need to replicate production data for your test environment?
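
If the answer is yes and the database really is small, the mysqldump route mentioned above can be a simple one-off copy (every name here is a placeholder, and credentials are again assumed to be in ~/.my.cnf):

    # create an empty database on the test instance and stream production into it
    mysql --host=test-db -e "CREATE DATABASE site_test"
    mysqldump --host=prod-db --single-transaction site_prod | mysql --host=test-db site_test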