Track MySQL schema and procedure changes

Recently I changed some table schemas and procedures in the DEV environment. Before I deploy them to the production environment, I want to have a change list.
How can I track or show the schema and procedure change history in MySQL? For example, show which tables have been changed after 2011-06-10, or which procedures have been created or updated after 2011-06-08.

This is a longer route, but you could create a data dictionary. You can date the entries and represent the different schema versions, or even the changes to the schema. They are easily tracked with your favorite version control tool.

The information_schema tables have creation date fields, which would get you a list of tables, procedures, etc. added after a particular date, but modifications can't reliably be identified by this method. A much simpler method, however, is to restore a production backup under a different name and then use the mysqldiff utility to identify the changes.
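As a rough sketch (assuming the schema under inspection is named dev_db; substitute your own), the creation timestamps can be queried directly, and for routines information_schema even records the last modification:

    SELECT TABLE_NAME, CREATE_TIME, UPDATE_TIME
    FROM information_schema.TABLES
    WHERE TABLE_SCHEMA = 'dev_db'
      AND (CREATE_TIME > '2011-06-10' OR UPDATE_TIME > '2011-06-10');

    SELECT ROUTINE_NAME, ROUTINE_TYPE, CREATED, LAST_ALTERED
    FROM information_schema.ROUTINES
    WHERE ROUTINE_SCHEMA = 'dev_db'
      AND (CREATED > '2011-06-08' OR LAST_ALTERED > '2011-06-08');

Note that UPDATE_TIME is not reliably maintained for every storage engine, which is why the backup-and-diff approach is safer for spotting modifications.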

Related

Concept of schema in MySQL

Is there a suggested way to use "schemas" in MySQL? For example, if I have one database called events and then I want to have two environments, dev and prod, what might be a way to do that? Currently I add a table prefix, but it seems a bit hackish.
You create a separate database for that, because MySQL does not have the concept of a schema like, e.g., PostgreSQL does.
You create one database for production, e.g. prod_database, with the table names event and event_type, and one database for dev, e.g. dev_database, with the same table names event and event_type, since you always want the same table names across environments.
You could (and should) even use the same database name if you host the databases on different servers, which for production and development/staging would also make sense, e.g. to test server version updates on one setup without affecting production.
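A minimal sketch of that layout, borrowing the names from the question and answer (the column lists are invented for illustration):

    CREATE DATABASE dev_database;
    CREATE DATABASE prod_database;

    -- identical table names in both environments
    CREATE TABLE dev_database.event      (id INT PRIMARY KEY, event_type_id INT);
    CREATE TABLE dev_database.event_type (id INT PRIMARY KEY, name VARCHAR(64));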

MySQL: track DDL change history

I need to somehow store the changes made to the database structure (on CREATE, DROP, ALTER, RENAME). The goal is to create an API that will return to a client a list of changes made to the specified database structure since some date (user_who_made_changes, changes_date, DDL_query).
Is it possible with MySQL?
Unfortunately DDL triggers are not supported in MySQL.
I also looked at logging in MySQL (general or binary logging), but it seems there's no way to filter the logged statements by a specific schema.
Any ideas? I will be thankful for any advice.

Database Version / Change Control for Data not Schema?

After reading a few articles here and around, I have realised that database version control in a development team is actually of high importance.
Until now I have been using a simple dump of the whole database each time there is an update; if only one table was altered, we can sometimes get away with dumping just that table and reimporting it. Not the best, but it works quite well for additive changes, and we haven't had any hiccups yet.
Now, I save a .mwb (MySQL Workbench diagram) file in the git repository of the project I'm working on.
Then I also use dbv for schema management, along with git, with each branch named after the project, and it's working quite well. This allows me to version schema changes with the ability to revert or roll back.
However, what about the data contained in the tables? How can this be maintained? Maybe I'm better off just sticking with the old method. I understand that on projects with the same DB structure but different data this is fine, but what about sites with specific database data that needs to be versioned and managed?
Also, what about the case of already-deployed sites that need database changes; how can this be seamless? Some have suggested the use of update/alter scripts, and that works fine with default values and such. But what if I have made a change on a website platform that requires every website's database to be changed, while keeping the data intact?
I've worked mostly in business application development and configuration management. Your question is representative of the challenges in such an environment: when you upgrade, for instance, Microsoft Word, you don't need to change all documents right away from doc to docx. And those documents have a much simpler structure than a full relational database.
Not so for business applications; users skip releases, make unauthorized changes to the data model and the system needs to keep running and providing the correct numbers...
For our own applications (the largest one has around 600 tables) we use a self-developed CASE tool which includes branching/merging, but the approach can also be applied manually.
Versioning the Data Model
The data model can be written down in a structured way: for instance, as table contents (CSV to be loaded into a table with metadata) or as code that detects the version in use and adds columns and tables when they are missing, including non-trivial migrations.
This even allows multiple users to change the data model at the same time.
When you use auto-detection (for instance, we use a call named "verify_column" instead of "add_column"), this even allows smooth migration independent of the release number the customer is upgrading from. Such a procedure analyzes the table to be changed and issues the correct DDL, such as alter table t1 add col1 number not null when a column is missing, or alter table t1 modify col1 not null when the column is already present but nullable.
For Oracle and SQL Server I can provide you with a few sample procedures. In MySQL I would code this using a client-side language, preferably OS-independent, to allow installations to run on Windows and Linux. Maybe using Apache Ant, if you have experience with that.
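Still, to make the auto-detection idea concrete, here is a rough sketch of what a verify_column could look like as a MySQL stored procedure (the names and exact signature are illustrative, not the tool mentioned above):

    DELIMITER //
    CREATE PROCEDURE verify_column(
        IN p_table  VARCHAR(64),
        IN p_column VARCHAR(64),
        IN p_def    VARCHAR(255))
    BEGIN
        -- only issue the DDL when the column is actually missing
        IF NOT EXISTS (SELECT 1 FROM information_schema.COLUMNS
                       WHERE TABLE_SCHEMA = DATABASE()
                         AND TABLE_NAME   = p_table
                         AND COLUMN_NAME  = p_column) THEN
            SET @ddl = CONCAT('ALTER TABLE ', p_table,
                              ' ADD COLUMN ', p_column, ' ', p_def);
            PREPARE s FROM @ddl;
            EXECUTE s;
            DEALLOCATE PREPARE s;
        END IF;
    END //
    DELIMITER ;

A migration script then simply calls verify_column('t1', 'col1', 'INT NOT NULL') and is safe to run against any starting version.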
Versioning Data
We split the tables in four categories:
R: referential data; data the application site must provide before it can actually use the system. For instance, general ledger account codes. Referential data seldom changes after go-live and does not continuously grow in size. The contents reflect the business model of the site where the application is used.
T: transaction data; data the site registers, changes, and removes during use of the application. For instance, general ledger entries. The transaction data starts at 0 and grows continuously. When the company doubles in revenue, the transaction data also doubles.
S: seeded data; data NOT maintained by the user at the site but provided and maintained by the developing party. Essentially this is code turned into data. For example, 'F' stands for 'Female'. Errors in seeded data can lead to system errors.
O: the rest (ideally not needed, since such tables are purely technical, but some systems require a temporary table A or a scratch table B).
The contents of tables in category 'S' (seeded data) are placed under version control. We normally register these as metadata in our CASE tool, where they are named 'data sets', but you can also use, for instance, Microsoft Excel or even code.
For example, in Excel you would have a list of rows of seeded data. In column A you might enter an Excel function like =B..&"|"&C..& "|" & ... which concatenates everything and makes it suitable for loading by a loader tool.
For example in code, you might have a call like:
verifySeed('TABLE_A', 'CODE', 'VALUE')
The Excel approach is a little hard to bring under version control while allowing multiple users to change the contents at the same time. The approach with code is very simple.
Please remember to also add features to remove obsolete seeded data: for instance, by explicitly listing obsolete seeded data, or by automatically removing all seeded data present in the tables but not touched by the last installation.
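In MySQL, a call like the verifySeed above might boil down to a simple upsert; a sketch, with invented table and column names, assuming a unique key on code:

    -- insert the seed row if missing, refresh it if already present
    INSERT INTO table_a (code, description)
    VALUES ('F', 'Female')
    ON DUPLICATE KEY UPDATE description = VALUES(description);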
You would need to keep a journal of transactions on your data model that is synchronised to your code versions. For each update that adds information (i.e. a new field) you can simply enter statements like 'ALTER TABLE x ADD COLUMN y ...' and provide a DEFAULT value (with a function, perhaps) in an update script, and an 'ALTER TABLE x DROP COLUMN y ...' in the downgrade script. You would need to export your data before you truncate information in a table. You can convert the dumped table data to SQL for the inverse transaction, so that you can re-add the missing information from it.
You can use a 'journal' table within your data model to keep track of these transactions, using simple ordinals that denote the applied scripts. Whenever the software is installed, it can compare these numbers to create a list of transactions to play to move the database from state N to state X, backwards or forwards, without losing any data!
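A minimal sketch of such a journal table (names are illustrative):

    CREATE TABLE schema_journal (
        ordinal     INT PRIMARY KEY,      -- number of the applied script
        applied_at  DATETIME NOT NULL,
        description VARCHAR(255)
    );

    -- at install time, determine the state the database is currently in
    SELECT COALESCE(MAX(ordinal), 0) FROM schema_journal;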

Large scale MySQL changes to active sites

I'm just after some pointers here.
I am making fairly extensive modifications to a site, including the MySQL database.
My plan is to do everything on my development server, export the new MySQL structure for the db and import it onto the clients server.
Basically I need to know that performing a structure-only import will not overwrite or delete existing data. I am not making changes to the data types or field lengths.
In my experience, when you export a database (through phpMyAdmin for instance), part of the SQL script that is created includes a "DROP TABLE IF EXISTS 'table_name';" before doing a "CREATE TABLE 'table_name'...;" to build the new table.
My guess is that this is not what you want to do! Certainly use the dev system to alter the structure in order to make everything correct, but then look around for a database synchronisation tool to which you can provide the old structure and the new structure, and the software will create the appropriate "ALTER TABLE 'table_name'...;" scripts to make the required changes.
You should then really examine these change files before executing them on the live database, and of course BACKUP the live database, and ensure you are able to fully recover from the backup before starting any of the alterations!
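For example, instead of the dump's DROP/CREATE pair, a synchronisation tool would emit non-destructive statements along these lines (table and column names invented for illustration):

    ALTER TABLE orders
        ADD COLUMN shipped_at DATETIME NULL,
        MODIFY COLUMN status VARCHAR(32) NOT NULL;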
I've had to do this a lot, and it always goes like this:
Make a backup of the live database, complete with data.
Make a backup of the live database schema only.
Calculate the differences between the old (live) schema and the new (devel) schema.
Create all of the 'ALTER TABLE ...' DDL statements necessary to upgrade from the old schema to the new one. Keep in mind that if you rename a field, you probably won't be able to just rename it -- you'll need to create the new field, copy the data from the old field, and then drop the old field (see the sketch after this list).
If you changed relationships between tables, you'll probably need to drop indexes and foreign key relationships first, and then add them back afterwards.
You'll need to populate any new fields based upon their default values, if any.
Once you've got all the pieces working, you'll need to combine them into one large script, and then run it on a copy of the live database.
Dump the schema and compare it against the desired new schema -- if they don't match, go back to step 3 and repeat.
Dump the data and compare it against the expected changes -- again, if they don't match, go back to step 3 and repeat.
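As promised above, the rename-by-copy dance looks something like this (names invented for illustration):

    -- "rename" column name to full_name without losing its contents
    ALTER TABLE customers ADD COLUMN full_name VARCHAR(100);
    UPDATE customers SET full_name = name;
    ALTER TABLE customers DROP COLUMN name;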
You're going to learn a lot more about SQL DDL/DML during this process than you ever thought you'd learn. (For one project, where we were switching from natural keys to UUID keys for 50+ tables, I ended up writing programs to generate all of the DDL/DML.)
Good luck, and make frequent backups.
I'd recommend preparing an SQL script for every change you make on the development server, so you will be able to reproduce it on production. You shouldn't get to the point where you need to calculate differences between database structures.
This is how I do it: all changes are reflected in SQL scripts, and I can reconstruct the history of my database by running all these files if needed.
Test the final release version on a "staging" MySQL server: make a copy of your production server on another machine and test your script there to make sure everything's OK.
Of course, a preliminary database backup is a must.
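For instance, each change might live in its own numbered script (the naming scheme and statements here are just one possibility):

    -- 014_add_status_to_orders.sql
    ALTER TABLE orders ADD COLUMN status VARCHAR(32) NOT NULL DEFAULT 'new';

    -- 015_backfill_status.sql
    UPDATE orders SET status = 'shipped' WHERE shipped_at IS NOT NULL;

Run in order, the scripts take any copy of the database from its current state to the latest one.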

MySQL to SQL Server transferring data

I need to convert data that already exists in a MySQL database, to a SQL Server database.
The caveat here is that the old database was poorly designed, but the new one is in proper 3NF. Does anyone have any tips on how to go about doing this? I have SSMS 2005.
Can I use this to connect to the MySQL DB and create a DTS package? Or do I need to use SSIS?
Do I need to script out the MySQL DB and alter every statement to "insert" into the SQL Server DB?
Has anyone gone through this before? Please HELP!!!
See this link. The idea is to add your MySQL database as a linked server in SQL Server via the MySQL ODBC driver. Then you can perform any operations you like on the MySQL database via SSMS, including copying data into SQL Server.
Congrats on moving up in the RDBMS world!
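A rough sketch of the linked-server route (the server name and DSN are made up; you need the MySQL ODBC driver installed and a DSN configured first):

    -- register the MySQL database as a linked server
    EXEC sp_addlinkedserver
        @server     = 'MYSQL_SRC',
        @srvproduct = 'MySQL',
        @provider   = 'MSDASQL',
        @datasrc    = 'mysql_dsn';   -- an ODBC DSN pointing at the MySQL box

    -- pull a table across into SQL Server
    SELECT *
    INTO   stg_orders
    FROM   OPENQUERY(MYSQL_SRC, 'SELECT * FROM old_db.orders');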
SSIS is designed to do this kind of thing. The first step is to map out manually where each piece of data will go in the new structure. Say your old table had four fields: in your new structure, fields 1 and 2 go to table A and fields 3 and 4 go to table B, but you also need the autogenerated ID from table A. Make notes on where data types have changed and you may need to make adjustments, or where you have required fields where the data was not required before, etc.
What I usually do is create staging tables. Put the data in denormalized form into one staging table, then move it to normalized staging tables and do the clean-up there, adding the new IDs to the staging tables as soon as you have them. One thing you will need to do if you are moving from a denormalized database to a normalized one is eliminate the duplicates from the parent tables before inserting them into the actual production tables. You may also need to do data clean-up, as there may be required fields in the new structure that were not required in the old, or data conversion issues because of moving to better data types (for instance, if you stored dates in the old database in varchar fields but properly move to datetime in the new DB, you may have some records which don't have valid dates).
Another issue you need to think about is how you will convert from the old record IDs to the new ones.
This is not an easy task, but it is doable if you take your time and work methodically. Now is not the time to try shortcuts.
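To illustrate the de-duplication step (table and column names invented), you first build the parent table from the distinct values in the flat staging data, then carry the newly generated IDs back into the staging rows:

    -- build the parent table from the distinct values
    INSERT INTO customer (name, email)
    SELECT DISTINCT name, email
    FROM stg_flat;

    -- carry the new customer IDs back into the staging rows
    UPDATE s
    SET    s.customer_id = c.id
    FROM   stg_flat s
    JOIN   customer c ON c.email = s.email;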
What you need is an ETL (extract, transform, load) tool.
http://en.wikipedia.org/wiki/Extract,_transform,_load#Tools
I don't really know how far an ETL tool will get you, depending on the original and new database designs. In my career I've had to do more than a few data migrations, and we usually had to design a special utility which would update a fresh database with records from the old database; and yes, we coded it complete with all the update/insert statements that would transform the data.
I don't know how many tables your database has, but if there are not too many then you could consider going the grunt route. That's one technique that's guaranteed to work, after all.
If you go to your database in SSMS and right-click, under tasks should be an option for "Import Data". You can try to use that. It's basically just a wizard that creates an SSIS package for you, which it can then either run for you automatically or which you can save and then alter as needed.
The big issue is how you need to transform the data. This goes into a lot of specifics which you don't include (and which are probably too numerous for you to include here anyway).
I'm certain that SSIS can handle whatever transformations you need to change the data from the old format to the new. An alternative, though, would be to just import the tables into MS SQL as-is into staging tables, then use SQL code to transform the data into the 3NF tables. It's all a matter of what you're most comfortable with. If you go the second route, then the import process that I mentioned above in SSMS could be used. It will even create the destination tables for you. Just be sure that you give them unique names, maybe prefixing them with "STG_" or something.
Davud mentioned linked servers. That's definitely another way you can go (and got my upvote). Personally, I prefer to copy the tables over into MS SQL first, since linked servers can sometimes have weirdness, especially when it comes to data types not mapping cleanly between different providers. Having the tables all in MS SQL will also probably be a bit faster and save time if you have to rerun or correct portions of the data. As I said though, the linked server method would probably be fine too.
I have done this going the other direction and SSIS works fine, although I might have needed to use a script task to deal with slight data type weirdness. SSIS does ETL.