How to put MySQL code into source control? - mysql

I know I can copy all my MySQL code manually to files and then put those files into source control. But is there any way to do this automatically?
I would like to do this for stored procedures, but also for table/event/trigger creation scripts.

You can create triggers on data changes, which would store each change automatically to some source control. However, there is no automatic way to track structure changes (tables, stored procedures and so on) this way. So probably the best way is to dump the database and store these dumps in source control. You can do this periodically to automate things.

Based on Michal's answer, the solution I am using so far is:
#!/bin/bash
BACKUP_PATH=/root/database_name
DATABASE=database_name
PASSWORD=Password
# Remove stale dumps (the glob must stay outside the quotes to expand)
rm -f "$BACKUP_PATH"/*.sql
# Dump stored routines only (no table definitions, no data)
mysqldump -p"$PASSWORD" --routines --skip-dump-date --no-create-info --no-data --skip-opt "$DATABASE" > "$BACKUP_PATH/$DATABASE.sql"
# Dump one CREATE TABLE script per table via --tab (no data)
mysqldump -p"$PASSWORD" --tab="$BACKUP_PATH" --skip-dump-date --no-data --skip-opt "$DATABASE"
hg commit -Am "automatic commit" $BACKUP_PATH
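To run this periodically, the script can be scheduled from cron; a crontab fragment along these lines (the script path and log file are placeholders) would dump and commit every night:

```cron
# m h dom mon dow  command -- run the dump-and-commit script at 02:30 nightly
30 2 * * * /root/bin/mysql_to_hg.sh >> /var/log/mysql_to_hg.log 2>&1
```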

Don't really understand what you're trying to do.
Look at Liquibase, perhaps it will do what you need...

Related

MySQLDump backup and restore on different schema same server

I have a staging MySQL db where I had created intermediate tables for data load. I want to migrate this db to Prod but with only the tables needed in production (along with all the routines, events and triggers) using mysqldump.
Before I hand over the script to the platform team to carry this out in production I wanted to test it out locally, so we created a separate schema on the same server and we are trying out the following:
We tried the mysqldump command without --add-drop-database and used --no-create-db instead, because I was concerned that during the restore process the "USE stage" statements in the stored procedure definitions might switch databases and wipe out my stage db. The result was that the tables got copied over, but the routines did not.
Is there a way to make this work while:
A. select subset of tables along with all routines and events
B. restore them in a different schema
C. WITHOUT risking wipeout of the original schema?
Use:
mysqldump -u username -p'password' --hex-blob -R -e --triggers database_name > database_name.sql
(Note: there must be no space between -p and the password; with a space, mysqldump prompts for a password and treats the next word as a database name.)
You can add or remove options from the dump commands based on your needs.
--single-transaction --- > use this option if you don't want or can't take table locks
-d --- > short for --no-data; dumps object definitions only
-R / --routines --- > add this if your database has stored procedures/functions
-B dbname / --databases dbname --- > also includes the CREATE DATABASE statement
-r schema.sql --- > preferred over "> schema.sql", so the character set of your shell doesn't interfere with the encoding. It's also a good idea to specify the character set explicitly with --default-character-set=utf8 (or whatever); you'll still want to check the SET NAMES at the top of the dump file. I've been caught in MySQL charset encoding hell before.
--hex-blob --- > use this if you have BLOB columns in your database
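To address points A-C of the question directly, one sketch (all names below are placeholders) is to dump the table subset and the routines into separate files, and name the target schema on the mysql client at restore time rather than relying on any USE statements inside the dump. Note that routine bodies which reference the stage schema by name will still point at it after the restore:

```shell
# Sketch; adjust names to your environment. Triggers travel with their
# tables (--triggers is on by default), so they land in the first dump.
# 1. Dump only the needed tables; listing the database and tables
#    positionally emits no CREATE DATABASE or USE statements:
mysqldump -u user -p stage table1 table2 > tables.sql
# 2. Dump routines and events separately, with no table DDL or data:
mysqldump -u user -p --no-create-db --no-create-info --no-data \
          --routines --events stage > routines.sql
# 3. Restore both, naming the target schema on the client:
mysql -u user -p prod_copy < tables.sql
mysql -u user -p prod_copy < routines.sql
```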

Mysqldump/MySql avoid overwriting testers database development

So I'm working on this bash script, which has to create a backup of both the production database and work environment database, this is easy enough and done like this:
bin/mysqldump.exe -uUsername -p -hHost.com --lock-tables=false --single-transaction --routines --triggers production_database > backups/dump_production_database.sql
bin/mysqldump.exe -uUsername -p -hHost.com --lock-tables=false --single-transaction --routines --triggers testers_database > backups/dump_testers_database.sql
BUT the testers_database is at the moment updated manually, and I want it to be updated automatically.
Then I thought, let's just use mysql.exe like this to update the testers_database:
bin/mysql.exe -uUsername -p -hHost.com testers_database < backups/dump_production_database.sql
BUT then I thought of a problem. What if the developers have added new columns to tables, or made edits to existing functions? Those changes would be overwritten. How should I build this mysql.exe command so it doesn't overwrite anything valuable to the developers? Can I somehow run a validation check that detects new edits to functions or new columns in existing tables, and then skip dropping and recreating those tables? Or should I just forget about the automatic update, have the developers update the database themselves, and wait 30 minutes for it to complete?
Right now, when they develop, they test that everything works on the testers_database, and if it does, it's implemented in the production_database manually, which is fine. But I don't want this bash script to ruin their progress/changes in their test database; I only want to keep their database updated with the newest data all the time, so they don't have to do it themselves.
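One defensive pattern (a sketch; the flags mirror the commands above and the file names are placeholders) is to snapshot testers_database immediately before overwriting it, so any developer changes can be recovered from a timestamped dump instead of trying to detect them up front:

```shell
# Snapshot the testers database first; only then refresh it from production.
STAMP=$(date +%Y%m%d_%H%M%S)
bin/mysqldump.exe -uUsername -p -hHost.com --routines --triggers \
    testers_database > "backups/testers_before_refresh_$STAMP.sql"
bin/mysql.exe -uUsername -p -hHost.com testers_database \
    < backups/dump_production_database.sql
```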

MySQL data backup?

I have a database with tons of data in it.
I also wrote a new program that will change the data in the database.
Is there any way to make a copy of the database before I run my program?
Or is there any other solution?
What I'm thinking is making a copy of the database, then running the program, which modifies the main database. If things go wrong, how do I use the copied data to revert the main database?
Please provide steps and commands for Linux. I'm new to MySQL and its commands.
You can use the mysqldump command to make a backup of your database and overwrite the backup file every time:
mysqldump -u <user> -p <db> > dump.sql
Read the following link; it will tell you how to dump your database in different ways and restore it.
http://www.thegeekstuff.com/2008/09/backup-and-restore-mysql-database-using-mysqldump/
Basic command to dump a single database is:
mysqldump -u root -p[root_password] [database_name] > dumpfilename.sql
Is there any way to make a copy of the database before I run my program?
Yes, there is. You have to use the mysqldump client utility before running your app.
It is something like
shell> mysqldump [options] > dump.sql
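And the revert path is the mirror-image command; a minimal sketch (database and file names are placeholders):

```shell
mysqldump -u root -p mydb > dump.sql   # snapshot before running the program
mysql -u root -p mydb < dump.sql       # revert: reload the snapshot
```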

How to make dump of all MySQL databases besides two

This is probably massively simple; however, I will be doing this on a live server and don't want to mess it up.
Can someone please let me know how I can do a mysqldump of all databases, procedures, triggers etc except the mysql and performance_schema databases?
Yes, you can dump several schemas at the same time:
mysqldump --user=[USER] --password=[PASS] --host=[HOST] --databases mydb1 mydb2 mydb3 [...] --routines > dumpfile.sql
OR
mysqldump --user=[USER] --password=[PASS] --host=[HOST] --all-databases --routines > dumpfile.sql
Concerning the last command: if you don't want to dump the performance_schema (EDIT: as mentioned by @Barranka, by default mysqldump won't dump it), mysql, phpMyAdmin schema, etc., you just need to ensure that [USER] can't access them.
As stated in the reference manual:
mysqldump does not dump the INFORMATION_SCHEMA or performance_schema database by default. To dump either of these, name it explicitly on the command line and also use the --skip-lock-tables option. You can also name them with the --databases option.
So that takes care of your concern about dumping those databases.
Now, to dump all databases, I think you should do something like this:
mysqldump -h Host -u User -pPassword -A -R > very_big_dump.sql
To test it without dumping all data, you can add the -d flag to dump only database, table (and routine) definitions with no data.
As mentioned by Basile in his answer, the easiest way to omit the mysql database from the dump is to invoke mysqldump as a user that does not have access to it. So the punch line is: use or create a user that has access only to the databases you mean to dump.
There's no option in mysqldump that you could use to filter the databases list, but you can run two commands:
# DATABASES=$(mysql -N -B -e "SHOW DATABASES" | grep -Ev '^(mysql|performance_schema)$')
# mysqldump -B $DATABASES > dump.sql
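The filtering step can be sanity-checked without a live server by feeding the grep pattern a sample database list. Anchoring the pattern (^...$) avoids accidentally excluding user databases whose names merely contain "mysql":

```shell
# Simulated SHOW DATABASES output piped through the exclusion filter.
DATABASES=$(printf 'information_schema\nmysql\nperformance_schema\nmydb1\nmydb2\n' \
    | grep -Ev '^(mysql|performance_schema|information_schema)$')
echo "$DATABASES"   # only mydb1 and mydb2 remain
```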

Synchronize two databases schema in MySQL

I was looking for a portable script or command-line program that can synchronize the schemas of two MySQL databases. I am not looking for a GUI-based solution, because that can't be automated or run with the build/deployment tool.
Basically, what it should do is scan database1 and database2, check the schema differences (tables and indexes), and propose a bunch of SQL statements to run on one so that it gets a structure similar to the other, minimizing data damage as much as possible.
If someone can indicate a PHP, Python or Ruby package where this type of solution is implemented, I can try to copy the code from there.
A lot of MySQL GUI tools probably can do this, but I am looking for a scriptable solution.
Edit: Sorry for not being more clear: What I am looking for is synchronization in table structure while keeping data intact as far as possible. Not data replication.
More info:
Why replication won't work.
The installation bases are spread around the state.
We want the installer to perform dynamic fixes on the DB based on changes made in the latest version, regardless of which older version the end user might be using.
Changes are mostly things like adding a new column to a table, creating new indexes, dropping indexes, or adding/dropping tables used by the system internally (we don't drop user data tables).
If it's a GUI: no, it can't be used. We don't want to bundle a 20 MB app with our installer just for a DB diff, especially when the original installer is less than 1 MB.
Have you considered using MySQL replication ?
SQLyog does that and it is awesome. We use it in production often.
I know it's an old question, but it was the first result on Google for what I was searching for (the exact same thing as the initial question).
I still have the answer here, but I don't remember the URL it came from.
It's a script that started from:
mysqldump --skip-comments --skip-extended-insert -u root -p dbName1>file1.sql
mysqldump --skip-comments --skip-extended-insert -u root -p dbName2>file2.sql
diff file1.sql file2.sql
and ended up more like this:
#!/bin/sh
echo "Usage: dbdiff [user1:pass1#dbname1] [user2:pass2#dbname2] [ignore_table1:ignore_table2...]"
dump () {
up=${1%%#*}; user=${up%%:*}; pass=${up##*:}; dbname=${1##*#};
mysqldump --opt --compact --skip-extended-insert -u $user -p$pass $dbname $table > $2
}
rm -f /tmp/db.diff
# Compare
up=${1%%#*}; user=${up%%:*}; pass=${up##*:}; dbname=${1##*#};
for table in `mysql -u $user -p$pass $dbname -N -e "show tables" --batch`; do
if [ "`echo $3 | grep $table`" = "" ]; then
echo "Comparing '$table'..."
dump $1 /tmp/file1.sql
dump $2 /tmp/file2.sql
diff -up /tmp/file1.sql /tmp/file2.sql >> /tmp/db.diff
else
echo "Ignored '$table'..."
fi
done
less /tmp/db.diff
rm -f /tmp/file1.sql /tmp/file2.sql
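The argument parsing in this script leans on shell parameter expansion; pulled out on its own (the values below are hypothetical), the splitting of user:pass#dbname works like this:

```shell
# Split "user:pass#dbname" exactly as the script's dump() function does.
arg="alice:s3cret#mydb"
up=${arg%%#*}        # strip from the '#' to the end   -> "alice:s3cret"
user=${up%%:*}       # strip from the first ':' onward -> "alice"
pass=${up##*:}       # strip up to the last ':'        -> "s3cret"
dbname=${arg##*#}    # strip up to the last '#'        -> "mydb"
echo "$user $pass $dbname"   # alice s3cret mydb
```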
For a long-term, professional solution, you should keep an eye on Schemamatic (http://versabanq.com/products/schemamatic.php). This link shows a GUI app, but all it does is drive a command-line tool. On that page there is a link to its Google Code site, where the C# .NET version of Schemamatic can be found. Your perfect solution would be to add MySQL support to Schemamatic. For SQL Server it's perfect and does exactly what you mentioned.
Now, for a short-term solution I would suggest dumping the data you want with MySQL's command-line tools like:
mysqldump -A -c -uroot -ppassword >bkpmysql.sql
And play with it, although it should take quite some time to achieve what you want. Schemamatic really seems to me your best choice. Let me know if you need any clarification when/if trying Schemamatic.
You might want to look at tools such as dbdeploy (there are Java and .NET versions) and Liquibase, among others.
Although most of these, I think, apply sets of changes to a DB in a controlled manner; I don't know if they can reverse-engineer existing schemas and compare them.
E.
Check this one; it's a CodeIgniter database diff script generator:
https://github.com/vaimeo/ci-database-diff-generator