MySQL backup procedure for large databases

I currently make a backup of my 2.5GB (and growing) MySQL Database every day. I have over 100 tables that are backed up.
I use this command:
mysqldump --user=user --password=pass --host=localhost db_name | gzip > backup.sql.gz
This works great, but when I need to quickly restore data to a single table, it's a horrible process. I have to download the backup, extract the gzip archive, and wait forever for an editor to load the SQL file so that I can remove the other tables I don't require. When I need this done fast, I'm pulling my hair out.
Can anyone recommend a better way to store MySQL backups? Is there a command to split all the tables into their own sql files?
Appreciate your help!
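A sketch of one approach, assuming a Unix shell and reusing the placeholder credentials from the command above: list the tables, then dump and gzip each one into its own file.

for table in $(mysql --user=user --password=pass --host=localhost -N -B -e 'SHOW TABLES' db_name); do
    mysqldump --user=user --password=pass --host=localhost db_name "$table" | gzip > "backup_$table.sql.gz"
done

Restoring a single table is then just a matter of piping its file back in:

gunzip < backup_some_table.sql.gz | mysql --user=user --password=pass --host=localhost db_name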

Related

Which is faster for importing 50 GB of data into MySQL: the in-database source command, or reading the file from the shell?

I used this command mysqldump -u root -p etl_db > ~/backup.sql to get the backup data.
Now I want to import into a new remote MySQL database.
I saw there are 2 ways to do it.
https://dev.mysql.com/doc/refman/8.0/en/mysql-batch-commands.html
I wonder which one would be faster?
shell> mysql < dump.sql
or
mysql> source dump.sql
I've seen some people say that source is for small data sets, while others say it's fine for large ones. I couldn't find much documentation.
Thanks!
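For reference, the piped form works the same way against a remote server once the connection options are added (a sketch; the host and database names here are placeholders):

mysql -h remote.example.com -u root -p new_db < ~/backup.sql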

Complete database reset for MySQL dump?

This may seem like a very dumb question but I didn't learn it in any other way and I just want to have some clarification.
I started to use MySQL a while ago and in order to test various scenarios, I back up my databases. I used MySQL dump for that:
Export:
mysqldump -hSERVER -uUSER -pPASSWORD --all-databases > filename.sql
Import:
mysql -hSERVER -uUSER -pPASSWORD < filename.sql
Easy enough, and it worked quite well up until now, when I noticed a little problem with this "setup": it does not fully "reset" the databases and tables. If, for example, a table is added AFTER a dump file has been created, that table will not disappear when you import the same dump file. The import essentially only "corrects" tables that are already there and recreates any databases or tables that are missing, but it does not remove additional tables whose names are not in the dump file.
What I want to do is to completely reset all the databases on a server when I import such a dump file. What would be the best solution? Is there a special import function reserved for that purpose or do I have to delete the databases myself first? Or is that a bad idea?
You can use the parameter --add-drop-database to add a "drop database" statement to the dump before each "create database" statement.
e.g.
mysqldump -hSERVER -uUSER -pPASSWORD --all-databases --add-drop-database >filename.sql
See the mysqldump documentation for details.
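For illustration, the dump then contains statements along these lines before each database's objects (a sketch; recent mysqldump versions wrap them in versioned comments):

DROP DATABASE IF EXISTS `foo`;
CREATE DATABASE IF NOT EXISTS `foo`;
USE `foo`;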
There's nothing magic about the dump and restore processes you describe. mysqldump writes out SQL statements that describe the current state of the database or databases you are dumping. It has to fetch a list of tables in each database you're dumping, then it has to read the tables one by one and write them out as SQL. On databases of any size, this takes time.
So, if you create a new table while mysqldump is running, it may not pick up that new table. Similarly, if your application software changes contents of tables while mysqldump is running, those changes may or may not show up in the backup.
You can look at the .sql files mysqldump writes out to see what they have picked up. If you want to be sure that your dumped .sql files are perfect, you need to run mysqldump on a quiet server -- one where nobody is running data definition language.
MySQL hot backup solutions are available; you may want to look into those.
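For InnoDB tables, one related option is mysqldump's --single-transaction flag, which reads a consistent snapshot without blocking writers (a sketch; it does not protect against concurrent DDL such as ALTER TABLE):

mysqldump -hSERVER -uUSER -pPASSWORD --all-databases --single-transaction > filename.sql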
The OP may want to look into mysql_install_db if they want a fresh start with the post-install default settings before restoring one or more dumped DBs (a fresh-start sketch follows below). For production servers, another useful script is mysql_secure_installation.
Also, they may prefer to dump the DB(s) they created separately:
mysqldump -hSERVER -uUSER -pPASSWORD --databases foo > foo.sql
to avoid inadvertently changing the internal DBs: mysql, information_schema, performance_schema.
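A fresh-start sequence might look roughly like this (a sketch; the paths and service commands are assumptions, and on MySQL 5.7+ mysqld --initialize replaces mysql_install_db):

sudo service mysql stop
sudo mv /var/lib/mysql /var/lib/mysql.old            # keep the old data directory, just in case
sudo mkdir /var/lib/mysql && sudo chown mysql:mysql /var/lib/mysql
sudo mysql_install_db --user=mysql --datadir=/var/lib/mysql
sudo service mysql start
sudo mysql_secure_installation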

Fastest way of restoring MYSQL Dump

I have a dump about 2 GB in size, and it's taking more than 2 days to restore the whole database; it's still going.
I am using this command to restore my Database from dump file
mysql -uroot -ppassword new_db < old_db.sql
Is there any faster way to do this, so I can wrap up the restore task in one day or less?
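One commonly suggested speedup (a sketch, assuming an InnoDB database you can afford to reload from scratch if the import fails) is to relax per-statement checks for the duration of the import and commit once at the end:

{ echo "SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;"; cat old_db.sql; echo "COMMIT;"; } | mysql -uroot -ppassword new_db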

How do I do an incremental backup for a mysql database using a .sql file?

Situation: our production mysql database makes a daily dump into a .sql file. I'd like to keep a shadow database that is relatively up to date.
I know that to create a mysql database from a .sql file, one uses:
mysql -u USERNAME -p DATABASENAME < FILE.SQL
For our db, this took 4-5 hours. Needless to say, I'd like to cut that down, and I'm wondering if there's a way to just update the db with what's new/changed. On Day 2, is there a way to just update my shadow database with the new .sql file dumped from the production db?
MySQL Replication is the way to go.
But, in cases, where that is not possible, use the following procedure:
Have a modified timestamp column in all your tables and update this value whenever a row is inserted or changed.
Use the following mysqldump options to take the incremental SQL file (this uses REPLACE statements instead of INSERT statements, so existing records are updated in the backup database).
Keep a timestamp value stored somewhere in the file system and use it in the WHERE condition; MDFD_DATE is the column you need to filter on. On a successful backup, update the value stored in the file (see the sketch after these steps).
--skip-tz-utc prevents MySQL from automatically adjusting the timestamp values based on your timezone.
mysqldump --databases db1 db2 --user=user --password=password --no-create-info --no-tablespaces --replace --skip-tz-utc --lock-tables --add-locks --compact --where="MDFD_DATE >= '2012-06-13 23:09:42'" --log-error=dump_error.txt --result-file=result.sql
Run the resulting SQL file on your backup server.
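A wrapper for the timestamp bookkeeping could look roughly like this (a sketch; the marker file path is an assumption, and the databases and MDFD_DATE column come from the example above):

#!/bin/sh
# Read the timestamp of the last successful backup, defaulting to the epoch.
LAST=$(cat /var/backups/last_dump_ts 2>/dev/null || echo '1970-01-01 00:00:00')
NOW=$(date '+%Y-%m-%d %H:%M:%S')
mysqldump --databases db1 db2 --user=user --password=password \
  --no-create-info --no-tablespaces --replace --skip-tz-utc \
  --lock-tables --add-locks --compact \
  --where="MDFD_DATE >= '$LAST'" \
  --log-error=dump_error.txt --result-file=result.sql \
  && echo "$NOW" > /var/backups/last_dump_ts   # advance the marker only on success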
Limitations:
This method will not work if records are deleted from your database; you need to delete them manually from the backup database. Alternatively, keep a DEL_FLAG column, set it to 'Y' in production for deleted records, and use that flag to delete the corresponding records in the backup databases.
This problem can be solved using mysql synchronization.
Some links to guide you:
http://www.howtoforge.com/mysql_database_replication
Free MySQL synchronization tool
https://launchpad.net/mysql-proxy
https://www.google.com.br/search?q=mysql+synchronization

How to backup my MySQL's databases on Windows Vista?

How can I backup my MySQL's databases? I'm using Windows Vista and MySQL 5.1.
I have found the folder "C:\Users\All Users\MySQL\MySQL Server 5.1\data" with all my database files and can copy them, but how can I restore them if I need to?
Thank you.
You could use the mysqldump tool:
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
That way you'd get SQL files that you could just execute.
You can also browse to localhost/phpmyadmin, go to 'Export', and select the databases you want to export.
The backup process does not have anything to do with your operating system. Simply export your databases.
You can back up the database files directly, but this can be dangerous if the database is in active use at the time you do the backup. There's no guarantee that you'll make a consistent and valid backup if a query starts modifying on-disk data. You may end up with broken tables.
The safest route is to use mysqldump to output a set of sql statements which can recreate the database completely (table creation + data) in one go. Should you need to restore from backup, you can simply feed this dump file back to mysql:
mysqldump -p -u username nameofdatabase > backup.sql
and restore via:
mysql -p -u username nameofdatabase < backup.sql
The .sql file is just a plaintext dump of all the queries required to rebuild the table(s) and their data.
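On Windows Vista, the same commands work from a Command Prompt; you may just need the full path to the executable (a sketch; the installation path and output folder are assumptions):

"C:\Program Files\MySQL\MySQL Server 5.1\bin\mysqldump.exe" -u username -p nameofdatabase > C:\backups\backup.sql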