Synchronize the schemas of two MySQL databases

I was looking for a portable script or command-line program that can synchronize the schemas of two MySQL databases. I am not looking for a GUI-based solution because that can't be automated or run by the build/deployment tool.
Basically, it should scan database1 and database2, check the schema differences (tables and indexes), and propose a set of SQL statements to run on one so that it ends up with a structure similar to the other, minimizing data damage as much as possible.
If someone can point me to a PHP, Python, or Ruby package where this kind of solution is implemented, I can try to copy the code from there.
A lot of MySQL GUI tools can probably do this, but I am looking for a scriptable solution.
Edit: Sorry for not being clearer: what I am looking for is synchronization of table structure while keeping the data intact as far as possible, not data replication.
More info:
Why replication won't work.
The installation bases are spread around the state.
We want the installer to perform dynamic fixes on the DB based on changes made in the latest version, regardless of which older version the end user might be running.
Changes are mostly things like adding new columns to tables, creating new indexes or dropping indexes, and adding or dropping tables used internally by the system (we don't drop user data tables); see the example statements after this list.
If it's a GUI: no, it can't be used. We don't want to bundle a 20 MB app with our installer just for DB diff, especially when the original installer is less than 1 MB.
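For instance, the kinds of changes listed above would be expressed as plain DDL statements like these (table, column, and index names here are hypothetical):
ALTER TABLE app_settings ADD COLUMN theme VARCHAR(32) DEFAULT 'classic';
CREATE INDEX idx_orders_created ON orders (created_at);
DROP INDEX idx_orders_legacy ON orders;
DROP TABLE internal_cache;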

Have you considered using MySQL replication?

SQLyog does that and it is awesome. We use it in production often.

I know it's an old question, but it was the first result on Google for what I was searching for (the exact same thing as the initial question).
I found the answer somewhere, but I don't remember the URL.
It's a script that started from:
mysqldump --skip-comments --skip-extended-insert -u root -p dbName1>file1.sql
mysqldump --skip-comments --skip-extended-insert -u root -p dbName2>file2.sql
diff file1.sql file2.sql
and ended up more like this:
#!/bin/sh
# Diff two MySQL databases table by table.
echo "Usage: dbdiff [user1:pass1#dbname1] [user2:pass2#dbname2] [ignore_table1:ignore_table2...]"

# Dump one table (schema and data) from the "user:pass#dbname" spec in $1
# into the file named by $2. $table is set by the loop below.
dump () {
    up=${1%%#*}; user=${up%%:*}; pass=${up##*:}; dbname=${1##*#};
    mysqldump --opt --compact --skip-extended-insert -u "$user" -p"$pass" "$dbname" "$table" > "$2"
}

rm -f /tmp/db.diff

# Walk the tables of the first database and compare each one,
# skipping any table named in the ignore list ($3).
up=${1%%#*}; user=${up%%:*}; pass=${up##*:}; dbname=${1##*#};
for table in $(mysql -u "$user" -p"$pass" "$dbname" -N --batch -e "show tables"); do
    if echo "$3" | grep -q "$table"; then
        echo "Ignored '$table'..."
    else
        echo "Comparing '$table'..."
        dump "$1" /tmp/file1.sql
        dump "$2" /tmp/file2.sql
        diff -up /tmp/file1.sql /tmp/file2.sql >> /tmp/db.diff
    fi
done
less /tmp/db.diff
rm -f /tmp/file1.sql /tmp/file2.sql
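For example, with hypothetical credentials and an ignore list of two tables:
./dbdiff root:secret#db_prod root:secret#db_staging sessions:cache
When it finishes, the script pages through /tmp/db.diff with less.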

For a long-term, professional solution, you should keep an eye on Schemamatic (http://versabanq.com/products/schemamatic.php). That link shows a GUI app, but all it does is drive a command-line program. The page links to its Google Code site, where the C# .NET version of Schemamatic can be found. Your perfect solution would be to add MySQL support to Schemamatic. For SQL Server it's perfect and does exactly what you described.
Now, for a short-term solution, I would suggest dumping the data you want with MySQL's command-line tools, like:
mysqldump -A -c -uroot -ppassword >bkpmysql.sql
and playing with it, although it may take quite some time to achieve what you want. Schemamatic really seems to me to be your best choice. Let me know if you need any clarification when/if you try Schemamatic.

You might want to look at tools such as dbdeploy (there are Java and .NET versions) and Liquibase, among others.
Most of these, I think, apply sets of changes to a DB in a controlled manner; I don't know whether they can reverse-engineer existing schemas and compare them.
E.

Check this one out; it is a CodeIgniter database diff script generator:
https://github.com/vaimeo/ci-database-diff-generator

Related

Export/Import databases faster than currently

I have a database (MySQL) that is exported this way:
echo "Dump structure"
mysqldump -S /path/db.sock --user=${USER} --password=${PASSWORD} --single-transaction --no-data ${DATABASE} > ${DB_FILE}
echo "Dump content"
mysqldump -S /path/db.sock --user=${USER} --password=${PASSWORD} ${DATABASE} --no-create-info ${IGNORED_TABLES_STRING} >> ${DB_FILE}
What it does is export only the structure of the ignored tables and the full content of the others, so that the dump takes up less space.
I'm not sure if there are more optimal ways to do this.
My question is this:
In "Dump Content" I would like to just take the last 1000 results to reduce its content, but the problem is that not all tables are related or contain the same field to filter them.
How can I filter the latest records if I can not do it for a single field?
Can I achieve an export / import of the database in a faster way?
Short answer: you can't do that.
Long answer: you could do SELECTs with a "LIMIT 1000", for example, and write those into files. But whatever your reason behind this is, I strongly advise you not to do it. If you want to back up everything, do a full backup. If you want to keep the database fast, don't worry about its size and learn how to use indexes. If you want to do it just to keep things well organized and get rid of data you no longer need, keep the rows in the database anyway; they don't do any harm. If you want to synchronize data with another database, set up replication.
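For what it's worth, a minimal sketch of that "LIMIT 1000" idea using mysqldump's --where option, assuming every table has an auto-increment id column to order by - which, as the asker notes, is not true for all tables, and is one more reason to avoid this:
for table in $(mysql -u"${USER}" -p"${PASSWORD}" "${DATABASE}" -N -e "SHOW TABLES"); do
    # mysqldump appends the --where text after WHERE, so ORDER BY/LIMIT can be smuggled in
    mysqldump -S /path/db.sock -u"${USER}" -p"${PASSWORD}" "${DATABASE}" "$table" \
        --no-create-info --where="1 ORDER BY id DESC LIMIT 1000" >> "${DB_FILE}"
done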

How do I get a tab-delimited MySQL dump from a remote host?

A mysqldump command like the following:
mysqldump -u<username> -p<password> -h<remote_db_host> -T<target_directory> <db_name> --fields-terminated-by=,
will write out two files for each table (one is the schema, the other is CSV table data). To get CSV output you must specify a target directory (with -T). When -T is passed to mysqldump, it writes the data to the filesystem of the server where mysqld is running - NOT the system where the command is issued.
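For a hypothetical table named my_table, the target directory will then contain:
my_table.sql   (the CREATE TABLE statement)
my_table.txt   (the delimited row data)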
Is there an easy way to dump CSV files from a remote system ?
Note: I am familiar with using a simple mysqldump and handling the STDOUT output, but I don't know of a way to get CSV table data that way without doing some substantial parsing. In this case I will use the -X option and dump XML.
mysql -h remote_host -e "SELECT * FROM my_schema.my_table" --batch --silent > my_file.csv
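Note that --batch --silent output is tab-separated rather than comma-separated; if the data itself contains no tabs or commas (an assumption worth checking), a naive conversion to CSV is:
mysql -h remote_host -e "SELECT * FROM my_schema.my_table" --batch --silent | tr '\t' ',' > my_file.csv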
I want to add to codeman's answer. It worked but needed about 30 minutes of tweaking for my needs.
My web server uses CentOS 6/cPanel, and the flags and sequence codeman used above did not work for me; I had to rearrange them and use different flags, etc.
Also, I used this for a local file dump; it's not just useful for remote DBs. I had too many issues with SELinux and MySQL user permissions for SELECT INTO OUTFILE commands, etc.
What worked on my CentOS + cPanel server:
mysql -B -s -uUSERNAME -pPASSWORD < query.sql > /path/to/myfile.txt
Caveats
No Column Names
I can't get column names to appear at the top. I tried adding the flag:
--column-names
but it made no difference. I am still stuck on this one; I currently add the header to the file after processing.
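One workaround I have seen (table and column names here are hypothetical) is to prepend a literal header row inside the query itself with UNION ALL:
SELECT 'id', 'name', 'created_at'
UNION ALL
SELECT id, name, created_at FROM my_table;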
Selecting a Database
For some reason, I couldn't include the database name on the command line. I tried
-D databasename
on the command line, but I kept getting permission errors, so I ended up putting the following at the top of my query.sql:
USE database_name;
On many systems, MySQL runs as a distinct user (such as the "mysql" user), and your mysqldump will fail if that user does not have write permission in the dump directory - it doesn't matter what your own permissions there are. Making the directory world-writable (777), at least temporarily, will often fix the export problem.
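For example (as a temporary workaround; restore the stricter mode afterwards):
chmod 777 /path/to/dump_dir
mysqldump -u user -p -T /path/to/dump_dir db_name
chmod 755 /path/to/dump_dir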

How to export a MySQL database to an embedded database (for example H2)?

We have been working on a project. In the beginning we had some database issues, so we used a MySQL server database to work around them.
Now we really should get back to an embedded database; Access is out of the question (it has to be cross-platform).
Our mentor suggested using an embedded H2 database, but our SQL dump produces syntax errors when we try to run it in the H2 console.
Any thoughts?
Thanks in advance!
To generate an SQL script suitable for H2 on a Unix system, you may try the following:
mysqldump -u root -c -p --skip-opt db_name | sed \
    -e "/^--/d" \
    -e 's/`//g' \
    -e 's/bigint(20)/numeric(20)/g' \
    -e 's/double\(([^(]*)\)/double/' \
    -e 's/int(11)/integer/g' \
    -e '/^\/\*.*\*\//d' \
    -e '/^LOCK TABLES/d' \
    -e '/^\/\*/,/\*\//c\;' \
    -e '/^CREATE TABLE/,/);/{/^ KEY/d; /^ PRIMARY KEY/ s/,$//}' \
    > db.sql
It does not currently convert all MySQL-specific statements; feel free to edit it and add more conversions.
The SQL script generated by MySQL is made to run against MySQL. It contains options and features that other databases don't support.
As described in a related question, you could try creating the dump using the compatibility option, but you may still need to fix some problems manually.
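As a starting point, the compatibility option in MySQL 5.x looks like this (the exact mode you need may take some experimentation):
mysqldump --compatible=ansi -u root -p db_name > dump.sql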

How to copy a MySQL database to a machine that is not connected via a network?

MySQL n00b here. I had thought that copying the directory <mysql_path>/<data_base_name> to a USB stick and then copying it to the new PC would do it. It didn't.
Maybe I need to also copy the schema or some such?
I can't say at the time of copying where it will end up, and the machines might not be on the same LAN.
http://dev.mysql.com/doc/refman/5.0/en/copying-databases.html seems useful, but it's too tricky for n00bs. Similarly, using mysqldump and piping it in one line won't work, as I don't know the destination.
What's the simplest no-brain way?
What's wrong with:
$ mysqldump -u user -p db_name > /machine1/your/portable/media/mysqldump.sql
and later, on the other machine:
$ mysql -u user -p db_name < /machine2/your/portable/media/mysqldump.sql
?
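One gotcha: the target database must already exist before the import, so on the other machine you may first need something like:
mysql -u user -p -e "CREATE DATABASE db_name"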
Here's a simple guide.

How to put MySQL code into source control?

I know I can copy all my MySQL code manually to files and then put those files into source control. But is there any way to do this automatically?
I would like to do this to stored procedures, but also to table/event/trigger creation scripts.
You can create triggers that fire on data changes and store each change to some source control automatically. However, there is no automatic way to track structure changes (tables, stored procedures, and so on) that way. So probably the best approach is to dump the database and store the dumps in source control; you can do this periodically to automate things.
Based on Michal's answer, the solution I am using so far is:
#!/bin/bash
BACKUP_PATH=/root/database_name
DATABASE=database_name
PASSWORD=Password

# Remove old dumps (the glob must stay outside the quotes so it expands).
rm -f "$BACKUP_PATH"/*.sql

# Dump stored routines only: no table definitions, no data.
mysqldump -p$PASSWORD --routines --skip-dump-date --no-create-info --no-data --skip-opt $DATABASE > $BACKUP_PATH/$DATABASE.sql

# Dump one CREATE TABLE file per table (--tab makes the server write the
# files, so the MySQL server needs write access to $BACKUP_PATH).
mysqldump -p$PASSWORD --tab=$BACKUP_PATH --skip-dump-date --no-data --skip-opt $DATABASE

hg commit -Am "automatic commit" $BACKUP_PATH
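To run this periodically, as Michal suggests, a hypothetical crontab entry (assuming the script is saved as /root/bin/db_to_hg.sh) would be:
0 2 * * * /root/bin/db_to_hg.sh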
I don't really understand what you're trying to do.
Look at Liquibase, perhaps it will do what you need...