Mysqldump/MySql avoid overwriting testers database development - mysql

So I'm working on a bash script that has to create a backup of both the production database and the work environment (testers) database. This part is easy enough and done like this:
bin/mysqldump.exe -uUsername -p -hHost.com --lock-tables=false --single-transaction --routines --triggers production_database > backups/dump_production_database.sql
bin/mysqldump.exe -uUsername -p -hHost.com --lock-tables=false --single-transaction --routines --triggers testers_database > backups/dump_testers_database.sql
BUT the testers_database is currently updated manually, and I want it to be updated automatically.
Then I thought, let's just use mysql.exe like this to update the testers_database:
bin/mysql.exe -uUsername -p -hHost.com testers_database < backups/dump_production_database.sql
BUT then I thought of a problem. What if the developers have added new columns to tables, edited existing functions, or made similar changes? Those would be overwritten. How should I build this mysql.exe command so it doesn't overwrite anything valuable to the developers? Can I somehow run a validation check that detects new edits to functions or new columns on existing tables, and in that case skip dropping and recreating the affected table? Or should I just forget about the automatic update and have the developers refresh the database themselves and wait 30 minutes for it to complete?
Right now they develop and test against the testers_database, and when everything works there it is applied to the production_database manually, which is fine. I just don't want this bash script to ruin their progress/changes in their test database; the goal is only to keep that database updated with the newest production data all the time, so they don't have to do it themselves.
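One way to approach the validation-check idea is sketched below; it is only an illustration, not something from the answers here. The idea is to compare schema-only dumps of the two databases first and skip the refresh when they differ, since a difference suggests the developers have in-flight changes. Host, user, and file names are the same placeholders as above.
# Hypothetical pre-check: dump only the schema (no data) of both databases.
# --skip-dump-date plus the sed filter strip the dump timestamp and AUTO_INCREMENT
# counters, which would otherwise make every comparison look like a change.
bin/mysqldump.exe -uUsername -p -hHost.com --no-data --routines --triggers --skip-dump-date production_database | sed 's/ AUTO_INCREMENT=[0-9]*//' > backups/schema_production.sql
bin/mysqldump.exe -uUsername -p -hHost.com --no-data --routines --triggers --skip-dump-date testers_database | sed 's/ AUTO_INCREMENT=[0-9]*//' > backups/schema_testers.sql
# Refresh the testers database only when its schema still matches production;
# otherwise assume the developers have pending changes and skip the reload.
if diff -q backups/schema_production.sql backups/schema_testers.sql > /dev/null; then
    bin/mysql.exe -uUsername -p -hHost.com testers_database < backups/dump_production_database.sql
else
    echo "Schemas differ - skipping automatic refresh of testers_database"
fi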

Related

Complete database reset for MySQL dump?

This may seem like a very dumb question, but I never learned it any other way and I just want some clarification.
I started to use MySQL a while ago and in order to test various scenarios, I back up my databases. I used MySQL dump for that:
Export:
mysqldump -hSERVER -uUSER -pPASSWORD --all-databases > filename.sql
Import:
mysql -hSERVER -uUSER -pPASSWORD < filename.sql
Easy enough, and it worked quite well up until now, when I noticed a little problem with this "setup": it does not fully "reset" the databases and tables. If, for example, an additional table is added AFTER a dump file has been created, that additional table will not disappear when you import the same dump file. The import essentially only "corrects" tables that are already there and recreates any databases or tables that are missing, but it does not remove additional tables whose names are not in the dump file.
What I want to do is to completely reset all the databases on a server when I import such a dump file. What would be the best solution? Is there a special import function reserved for that purpose or do I have to delete the databases myself first? Or is that a bad idea?
You can use the parameter --add-drop-database to add a DROP DATABASE statement to the dump before each CREATE DATABASE statement.
e.g.
mysqldump -hSERVER -uUSER -pPASSWORD --all-databases --add-drop-database > filename.sql
See the mysqldump documentation for details.
There's nothing magic about the dump and restore processes you describe. mysqldump writes out SQL statements that describe the current state of the database or databases you are dumping. It has to fetch a list of tables in each database you're dumping, then it has to read the tables one by one and write them out as SQL. On databases of any size, this takes time.
So, if you create a new table while mysqldump is running, it may not pick up that new table. Similarly, if your application software changes contents of tables while mysqldump is running, those changes may or may not show up in the backup.
You can look at the .sql files mysqldump writes out to see what they have picked up. If you want to be sure that your dumped .sql files are perfect, you need to run mysqldump on a quiet server -- one where nobody is running data definition language.
MySQL hot backup solutions are available; you may want to look into those.
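As an aside (a sketch, not part of the original answer): for InnoDB tables you can usually get a consistent snapshot without quiescing the server by wrapping the dump in a single transaction, the same option used in the commands at the top of this page. It still does not protect against concurrent DDL, so schema changes made during the dump can cause problems.
# --single-transaction takes a consistent InnoDB snapshot while the server stays online;
# concurrent ALTER/CREATE/DROP statements can still interfere with the dump.
mysqldump -hSERVER -uUSER -pPASSWORD --single-transaction --all-databases > filename.sql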
The OP may want to look into mysql_install_db if they want a fresh start with the post-install default settings before restoring one or more dumped DBs. For production servers, another useful script is mysql_secure_installation. Also, they may prefer to dump the DB(s) they created separately:
mysqldump -hSERVER -uUSER -pPASSWORD --databases foo > foo.sql
to avoid inadvertently changing the internal DBs: mysql, information_schema, performance_schema.
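A minimal sketch of that per-database approach (SERVER, USER, and PASSWORD are placeholders; the system schemas are filtered out by name):
# List all databases, skip the built-in system schemas, and dump each one to its own file
for DB in $(mysql -hSERVER -uUSER -pPASSWORD -N -e "SHOW DATABASES" \
            | grep -vE '^(mysql|information_schema|performance_schema|sys)$'); do
    mysqldump -hSERVER -uUSER -pPASSWORD --databases "$DB" > "${DB}.sql"
done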

mysqlimport using dump

I need to restore a dumped database, but without discarding existing rows in tables.
To dump I use:
mysqldump -u root --password --databases mydatabase > C:\mydatabase.sql
To restore I don't want to use the mysql command, since it will discard all existing rows; instead, mysqlimport should do the trick, obviously. But how? Running:
mysqlimport -u root -p mydatabase c:\mydatabase.sql
says "table mydatabase.mydatabase does not exist". Why does it look for tables? How to restore dump with entire database without discarding existing rows in existing tables? I could dump single tables if mysqlimport wants it.
What to do?
If you are concerned with stomping over existing rows, you need to mysqldump it as follows:
MYSQLDUMP_OPTIONS="--no-create-info --skip-extended-insert"
mysqldump -uroot -ppassword ${MYSQLDUMP_OPTIONS} --databases mydatabase > C:\mydatabase.sql
This will do the following:
remove CREATE TABLE statements and use only INSERTs (--no-create-info)
INSERT exactly one row at a time (--skip-extended-insert), which helps mitigate rows with duplicate keys
With the mysqldump performed in this manner, you can now import like this:
mysql -uroot -p --force -Dtargetdb < c:\mydatabase.sql
Give it a Try !!!
WARNING: Dumping with --skip-extended-insert will make the mysqldump file much bigger, but at least you can deal with each duplicate row one by one. It will also increase the time it takes to reload the dump.
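As an alternative sketch (not part of the answer above): mysqldump can also write INSERT IGNORE or REPLACE statements directly, which avoids relying on --force to skip duplicate-key errors during the reload. Credentials and the database name are placeholders.
# Write INSERT IGNORE statements so duplicate-key rows are silently skipped on reload
mysqldump -uroot -p --no-create-info --skip-extended-insert --insert-ignore mydatabase > mydatabase_ignore.sql
# Or write REPLACE statements so duplicate-key rows overwrite the existing ones
mysqldump -uroot -p --no-create-info --skip-extended-insert --replace mydatabase > mydatabase_replace.sql
mysql -uroot -p mydatabase < mydatabase_ignore.sql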
I would edit the mydatabase.sql file in a text editor, dropping the lines that drop tables or delete rows, then import the file with the mysql command as normal.
mysql -u username -p databasename < mydatabase.sql
The mysqlimport command is designed for data files created with SELECT ... INTO OUTFILE (one tab-delimited file per table) rather than for SQL dumps produced by mysqldump.
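For illustration only (the table and path names are made up), the workflow mysqlimport expects looks roughly like this; the data file name has to match the table name, which is why the command above complained about a table called mydatabase.mydatabase.
# Export one table to a tab-delimited file (subject to the server's secure_file_priv setting)
mysql -u root -p mydatabase -e "SELECT * FROM mytable INTO OUTFILE '/tmp/mytable.txt'"
# Re-import the file; mysqlimport derives the target table name from the file name
mysqlimport -u root -p mydatabase /tmp/mytable.txt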
This sounds like it is much more complicated than you are describing.
If you do a backup the way you describe, it has all the records in your database. Then you say that you do not want to delete existing rows from your database and load from the backup? Why? The reason why the backup file (the output from mysqldump) has the drop and create table commands is to ensure that you don't wind up with two copies of your data.
The right answer is to load the mysqldump output file using the mysql client. If you don't want to do that, you'll have to explain why to get a better answer.

Importing incremental backups in MySQL

I'm using the following command to create an incremental backup in MySQL
mysqldump -uusername -ppassword db_name --flush-logs > D:\dbname_incremental_backup.sql
However the sql file is as big as a complete backup, and obviously importing it takes a long time as well. Could anybody tell me how to create incremental backups and import just the new data from each incremental backup rather than the whole database again?
I have read all the related articles in dev.mysql.com but still can not understand how to do it.
mysqldump only creates full backups. There's no built-in functionality for incremental backups.
For that sort of thing you probably want Percona XtraBackup, but that will only work with InnoDB tables. This is usually not an issue, since using MyISAM tables is considered extremely harmful.
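For reference, an incremental backup cycle with XtraBackup looks roughly like the sketch below; the paths are placeholders, and the exact options depend on your XtraBackup version, so check the Percona documentation.
# Full (base) backup
xtrabackup --backup --target-dir=/data/backups/base
# Incremental backup containing only the pages changed since the base backup
xtrabackup --backup --target-dir=/data/backups/inc1 --incremental-basedir=/data/backups/base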
By default a mysqldump will drop and recreate tables, making an incremental update impossible. If you open up the resulting file, you will see something like:
DROP TABLE IF EXISTS `some_table_name`;
You can create a dump without the table DROP/CREATE statements using the --no-create-info option. To make your dump friendly to incremental imports, you should also use --skip-extended-insert, which breaks the inserts out into one INSERT statement per row. Combined with --force on the import, inserts for rows that already exist will fail, but the import will continue. You will end up seeing errors in the logs for rows that already exist, but new rows will be inserted as desired.
You should be able to export with the following command (I also recommend not typing the password in the command so that it won't appear in your history)
mysqldump -u username -p --no-create-info --skip-extended-insert db_name --flush-logs > D:\dbname_incremental_backup.sql
You can then import with the following command:
mysql -u username -p --force db_name < D:\dbname_incremental_backup.sql

How do I do an incremental backup for a mysql database using a .sql file?

Situation: our production mysql database makes a daily dump into a .sql file. I'd like to keep a shadow database that is relatively up to date.
I know that to create a mysql database from a .sql file, one uses:
mysql -u USERNAME -p DATABASENAME < FILE.SQL
For our db, this took 4-5 hours. Needless to say, I'd like to cut that down, and I'm wondering if there's a way to just update the db with what's new/changed. On Day 2, is there a way to just update my shadow database with the new .sql file dumped from the production db?
MySQL Replication is the way to go.
But, in cases, where that is not possible, use the following procedure:
Have a modified timestamp column in all your tables and update this value whenever a row is inserted or changed.
Use the following mysqldump options to take the incremental SQL file (this uses REPLACE commands instead of INSERT commands, since the existing records will be updated in the backup database).
Keep a timestamp value somewhere in the file system and use it in the WHERE condition (a small wrapper sketch follows below). MDFD_DATE is the column name on which you need to filter. On successful backup, update the value stored in the file.
--skip-tz-utc prevents MySQL from automatically adjusting the timestamp values based on your timezone.
mysqldump --databases db1 db2 --user=user --password=password --no-create-info --no-tablespaces --replace --skip-tz-utc --lock-tables --add-locks --compact --where="MDFD_DATE>='2012-06-13 23:09:42'" --log-error=dump_error.txt --result-file=result.sql
Run the resulting SQL file on your backup server.
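A minimal wrapper sketch of the timestamp-tracking idea, assuming a file last_backup.txt holds the timestamp of the previous successful run (database names, credentials, and the file name are placeholders):
# Read the timestamp of the last successful incremental backup
LAST_TS=$(cat last_backup.txt)
NOW_TS=$(date '+%Y-%m-%d %H:%M:%S')
# Dump only rows modified since the last run, as REPLACE statements
mysqldump --databases db1 --user=user --password=password \
    --no-create-info --replace --skip-tz-utc --compact \
    --where="MDFD_DATE>='${LAST_TS}'" --result-file=result.sql
# Only move the timestamp forward if the dump succeeded
if [ $? -eq 0 ]; then
    echo "$NOW_TS" > last_backup.txt
fi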
Limitations:
This method will not work if some records are deleted in your database; you would need to remove them from the backup databases manually. Alternatively, keep a DEL_FLAG column, update it to 'Y' in production for deleted records, and use that condition to delete the corresponding records in the backup databases.
This problem can be solved using mysql synchronization.
Some links to guide you:
http://www.howtoforge.com/mysql_database_replication
Free MySQL synchronization tool
https://launchpad.net/mysql-proxy
https://www.google.com.br/search?q=mysql+synchronization

Replicate MYSQL data from stage to dev with a script

I have two versions of my application, one "stage" and one "dev."
Right now, "stage" is exposed to the real world for beta-testing.
From time to time, I want an exact replica of the data to be replicated into the "dev" database.
Both databases are on the same hosted Linux machine.
Sometimes I create "dummy" data in the development environment. At this stage, I'd be fine with that being overwritten by the stage data.
Thanks.
Be sure to add security to your script so only the user you are authorizing is able to run it. Basically you want to use the mysql and mysqldump commands.
mysqldump -u username --password=userpass --add-drop-database --add-locks --create-options --disable-keys --extended-insert --result-file=database.sql databasename
mysql -u username --password=userpass -e "source database.sql;"
The first command makes the backup; the second command loads that backup into another database. Be careful: if you run both against the same database, you are only backing up the database and then restoring it to itself, so you have to change the database name for the copy.
Hope this helps.
Just use mysqldump to create a backup of the staging database and then load the dump file over your dev database. This will give you an exact copy of the stage data.
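A minimal sketch of that refresh, assuming both databases live on the same server and that stage_db and dev_db are placeholder names:
# Dump the stage database (including routines and triggers) and reload it over the dev database
mysqldump -u username -p --routines --triggers stage_db > stage_dump.sql
mysql -u username -p dev_db < stage_dump.sql
# Note: tables that exist only in dev_db are not removed by this reload
# (the same limitation discussed in the "Complete database reset" question above).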