Restoring selective tables from an entire database dump? - mysql

I have a MySQL dump created with mysqldump that holds all the tables in my database and all their data. However, I only want to restore two tables (let's call them kittens and kittens_votes).
How would I restore those two tables without restoring the entire database?

Well, you have three main options.
You can manually find the SQL statements in the file relating to the backed-up tables and copy them out. This has the advantage of being simple, but for large backups it's impractical.
Restore the database to a temporary database. Basically, create a new db, restore the dump into that db, and then copy the data from there to the old one. This will work well only if you're doing single-database backups (i.e. there's no CREATE DATABASE command in the backup file).
Restore the database to a new database server, and copy from there. This works well if you take full server backups as opposed to single database backups.
Which one you choose will depend upon the exact situation (including how much data you have)...

You can parse out the CREATE TABLE kittens/kittens_votes and INSERT INTO ... statements using a regexp, for example, and only execute those statements. As far as I know, there's no other way to "partially restore" from a dump.
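For example, a rough sketch with sed (this assumes the dump uses mysqldump's default "-- Table structure for table" comment markers, so each table's statements sit between two consecutive markers; adjust names and paths to your setup):
$ sed -n '/^-- Table structure for table `kittens`/,/^-- Table structure for table/p' full_dump.sql > two_tables.sql
$ sed -n '/^-- Table structure for table `kittens_votes`/,/^-- Table structure for table/p' full_dump.sql >> two_tables.sql
$ mysql -u root -p your_database < two_tables.sql
Each sed range captures the DROP/CREATE and the INSERTs for one table; review two_tables.sql before loading it, since this is only a text-matching trick, not a real parser.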

Open the .sql file and copy the insert statements for the tables you want.

Create a new user with access to only those 2 tables. Then restore the DB with the -f (force) option, which will skip the failed statements and execute only those it has permission for.
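A rough sketch of that approach (user, password, and database names are placeholders, and the error output for the skipped tables can be noisy):
$ mysql -u root -p -e "CREATE USER 'restore_user'@'localhost' IDENTIFIED BY 'some_password'"
$ mysql -u root -p -e "GRANT ALL ON mydb.kittens TO 'restore_user'@'localhost'; GRANT ALL ON mydb.kittens_votes TO 'restore_user'@'localhost'"
$ mysql -f -u restore_user -p mydb < full_dump.sql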

What you want is a "Single Table Restore"
http://hashmysql.org/wiki/Single_table_restore
A few options are outlined above ... However the one which worked for me was:
Create a new DB
$ mysql -u root -p -e "CREATE DATABASE temp_db"
Import the .sql file (the one with the desired table) into the new DB
$ mysql -u root -p temp_db < ~/full/path/to/your_database_file.sql
Dump the desired table
$ mysqldump -u root -p temp_db awesome_single_table > ~/awesome_single_table.sql
Import the desired table
$ mysql -u root -p original_database < ~/awesome_single_table.sql
Then delete the temp_db and you're all golden!
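To clean up the temporary database afterwards, for example:
$ mysql -u root -p -e "DROP DATABASE temp_db"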

Related

How to update mySQL table starting from a dump?

I'm working with a mySQL database located on a separate cluster. Since the changes were few, I was just dumping the whole db and porting it to a fresh database each time. But now changes on the main db are more frequent, so I am looking for something that allows me to just "update" the tables of my existing db after having dumped it from the main site.
I am dumping the db using
mysqldump --master-data -h my_main_server -u my_dump_user -pmy_password mydb > dbdump.sql
How can I use it to "update" my current db?
Since you'd have the tables created already, the dump would fail whilst trying to create them, so for you to be able to execute the dump, you need to drop all the existing tables in the database.
You could have instructions in your dump to do that (mysqldump's --add-drop-table option, which is on by default, emits a DROP TABLE IF EXISTS before each CREATE TABLE), so you could execute the dump without a problem, or you can just reset the database first.
If you really need to update only some parts of the db with that dump, you could just strip out all the ALTER and CREATE TABLE instructions and keep only the INSERTs, as sketched below.
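A quick way to do that filtering from the shell (just a sketch; it assumes the default mysqldump output, where each INSERT sits on a single line, and drops everything else; names are placeholders):
grep -E '^(INSERT INTO|LOCK TABLES|UNLOCK TABLES)' dbdump.sql > inserts_only.sql
mysql -u username -p my_local_db < inserts_only.sql
Note that plain INSERTs for rows that already exist will fail on duplicate keys (you can add -f to the mysql call to skip those errors), and this won't remove rows deleted upstream, so it's only a partial substitute for a full reload or replication.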

MySQL: Create consistent dump of audit tables

I have set up a system whose purpose it is to generate incremental dumps of our production data to our data warehouse. "Incremental" in this sense means that we can "synchronize" the production database with our data warehouse every minute or so without having to generate a full dump. Instead, we are just dumping and inserting the new/changed data.
On our replication slave, I have set up a system where every relevant table of our production database has one INSERT trigger and one UPDATE trigger. These copy every row which is inserted or updated into an "audit table" in a different schema. This audit schema contains tables with the same structure as the production tables, but no indexes, and by using those triggers the audit tables will only contain the new or updated rows since the last export.
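For illustration, one of those triggers looks roughly like this (table and column names here are made up; the real ones depend on the schema):
CREATE TRIGGER clicks_audit_insert AFTER INSERT ON production.clicks
FOR EACH ROW
  INSERT INTO audit.clicks (id, user_id, clicked_at)
  VALUES (NEW.id, NEW.user_id, NEW.clicked_at);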
At the moment I'm using the mysql command line client to do the following for each of these audit tables:
LOCK TABLES table WRITE
SELECT * FROM table
DELETE FROM table
UNLOCK TABLES
I then pipe the output to some other scripts for further processing.
Now this works perfectly fine, however it creates the problem that while the state of every individual table will be consistent, the entire set of tables won't be. For example, if I have a clicks table and an impressions table and there is a 1 minute delay between dumping the former and the latter, the entire dump will be in a state which is inconsistent, obviously.
Now my question is: How do I do the following:
Lock ALL tables
Generate dumps for ALL tables
Delete all data from ALL tables
Unlock tables
I cannot use the mysql command line client because I cannot keep the lock across different sessions, and each table requires a new command. Also, I checked mysqldump which allows dumping multiple tables at a time, but I didn't find a way to delete all data from the tables before releasing the locks.
Any ideas?
To perform the first two points, the command could be this one:
mysqldump --lock-all-tables -u root -p DATABASENAME > nameofdumpfile.sql
Since it is not possible to perform steps 3 and 4 without releasing the lock, at least with the mysqldump utility, why not copy all the tables into another database (a backup db) and then export the dump file from it?
CREATE DATABASE backupdb;
USE originaldb;
FLUSH TABLES WITH READ LOCK;
Keep this prompt (Prompt 1) open and then clone the database from another command prompt (Prompt 2):
mysqldump --lock-all-tables -u root -p originaldb | mysql -u backup -ppassword backupdb
Drop the original database from Prompt 1:
USE backupdb;
DROP DATABASE originaldb;
Then restore the empty structure back under its original name (note the -d flag, which dumps the table definitions without data; you will need to CREATE DATABASE originaldb again first, since it was just dropped):
mysqldump --lock-all-tables -d -u root -p backupdb | mysql -u backup -ppassword originaldb
This could be an example of a workaround that you can apply to achieve what you need.

Importing incremental backups in MySQL

I'm using the following command to create an incremental backup in MySQL
mysqldump -uusername -ppassword db_name --flush-logs > D:\dbname_incremental_backup.sql
However the sql file is as big as a complete backup, and obviously importing it takes a long time as well. Could anybody tell me how to create incremental backups and import just the new data from each incremental backup rather than the whole database again?
I have read all the related articles in dev.mysql.com but still can not understand how to do it.
mysqldump only creates full backups. There's no built-in functionality for incremental backups.
For that sort of thing you probably want Percona xtrabackup but that will only work with InnoDB tables. This is usually not an issue since using MyISAM tables is considered extremely harmful.
By default a mysqldump will drop and recreate tables, making an incremental update impossible. If you open up the resulting file, you will see something like:
DROP TABLE IF EXISTS `some_table_name`;
You can create a dump without the DROP/CREATE TABLE statements by using the --no-create-info option. To make your dump friendly to incremental imports, you should also use --skip-extended-insert, which will break the inserts out into one INSERT statement per row. Combined with --force on the import, this means that inserts for rows that already exist will fail but the import will continue. You will end up seeing errors in the logs for rows that already exist, but new rows will be inserted as desired.
You should be able to export with the following command (I also recommend not typing the password in the command so that it won't appear in your history)
mysqldump -u username -p --flush-logs --no-create-info --skip-extended-insert db_name > D:\dbname_incremental_backup.sql
You can then import with the following command:
mysql -u username -p --force db_name < D:\dbname_incremental_backup.sql

How do I do an incremental backup for a mysql database using a .sql file?

Situation: our production mysql database makes a daily dump into a .sql file. I'd like to keep a shadow database that is relatively up to date.
I know that to create a mysql database from a .sql file, one uses:
mysql -u USERNAME -p DATABASENAME < FILE.SQL
For our db, this took 4-5 hours. Needless to say, I'd like to cut that down, and I'm wondering if there's a way to just update the db with what's new/changed. On Day 2, is there a way to just update my shadow database with the new .sql file dumped from the production db?
MySQL Replication is the way to go.
But, in cases, where that is not possible, use the following procedure:
Have a modified timestamp column in all your tables and update this value whenever a row is inserted/changed.
Use the following mysqldump options to take the incremental SQL file (this uses REPLACE commands instead of INSERT commands, since existing records will be updated in the backup database).
Keep a timestamp value somewhere in the file system and use it in the WHERE condition. MDFD_DATE is the column name on which you need to filter. On a successful backup, update the value stored in the file (see the sketch below).
--skip-tz-utc prevents MySQL from automatically adjusting the timestamp values based on your timezone.
mysqldump --databases db1 db2 --user=user --password=password --no-create-info --no-tablespaces --replace --skip-tz-utc --lock-tables --add-locks --compact --where="MDFD_DATE>='2012-06-13 23:09:42'" --log-error=dump_error.txt --result-file=result.sql
Use the new sql file and run it in your server.
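A minimal end-to-end sketch of the above (the state file ~/last_dump_ts.txt, the credentials, and the database names are placeholders; MDFD_DATE is the modified-timestamp column described above):
#!/bin/sh
# Timestamp of the previous successful dump
LAST_TS=$(cat ~/last_dump_ts.txt)
# Capture "now" before dumping so rows changed during the dump are not skipped next time
NOW=$(date '+%Y-%m-%d %H:%M:%S')
mysqldump --databases db1 db2 --user=user --password=password \
  --no-create-info --no-tablespaces --replace --skip-tz-utc \
  --lock-tables --add-locks --compact \
  --where="MDFD_DATE>='$LAST_TS'" \
  --log-error=dump_error.txt --result-file=result.sql \
&& mysql --user=user --password=password < result.sql \
&& echo "$NOW" > ~/last_dump_ts.txt   # only advance the timestamp on success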
Limitations:
This method will not work if some records are deleted in your database. You need to manually delete them from the backup databases. Otherwise, keep a DEL_FLAG column and update it to 'Y' in production for deleted records and use this condition to delete records in the backup databases.
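For the DEL_FLAG variant, the idea is roughly this (table name and WHERE condition are just placeholders; DEL_FLAG is assumed to exist in both databases):
-- On production: mark the row instead of deleting it, and bump the modified timestamp
UPDATE mytable SET DEL_FLAG = 'Y', MDFD_DATE = NOW() WHERE id = 42;
-- On the backup, after applying the incremental dump:
DELETE FROM mytable WHERE DEL_FLAG = 'Y';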
This problem can be solved using mysql synchronization.
Some links to guide you:
http://www.howtoforge.com/mysql_database_replication
Free MySQL synchronization tool
https://launchpad.net/mysql-proxy
https://www.google.com.br/search?q=mysql+synchronization

MySQL restore: restore structure but no data from a given backup (schema.sql)

Hi, I use MySQL Administrator and have restored backup files (backup.sql). I would like to restore the structure without the data, but it is not giving me an option to do so. I understand phpMyAdmin provides this, but I cannot use it. Can anyone tell me an easy way?
Dump database structure only:
cat backup.sql | grep -v ^INSERT | mysql -u $USER -p
This will execute everything in the backup.sql file except the INSERT lines that would have populated the tables. After running this you should have your full table structure along with any stored procedures / views / etc. that were in the original database, but your tables will all be empty.
You can change the ENGINE to BLACKHOLE in the dump using sed
cat backup.sql | sed -E 's/ENGINE=(MyISAM|InnoDB)/ENGINE=BLACKHOLE/g' > backup2.sql
This engine will just "swallow" the INSERT statements and the tables will remain empty. Of course you must change the ENGINE again using:
ALTER TABLE `mytable` ENGINE=MYISAM;
IIRC the backup.sql files (if created by mysqldump) are just SQL commands in a text file. Just copy-paste all the "CREATE ..." statements from the beginning of the file, but not the "INSERT" statements, into another file, then run "mysql < newfile" and you should have the empty database without any data in it.
There is no way to tell the mysql client to skip the INSERT commands. The least-hassle way to do this is to run the script as-is and let it load the data, then just TRUNCATE all of the tables.
You can write a script to do the following (a sketch follows below):
1. Import the dump into a new database.
2. Truncate all the tables with a loop.
3. Export the db again.
4. Now you just have the structure.
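A rough sketch of such a script (database name and credentials are placeholders; you will be prompted for the password at each step unless you configure it elsewhere):
#!/bin/sh
# 1: import the dump into a scratch database
mysql -u root -p -e "CREATE DATABASE scratch_db"
mysql -u root -p scratch_db < backup.sql
# 2: truncate every table in a loop
for t in $(mysql -N -u root -p -e "SHOW TABLES" scratch_db); do
  mysql -u root -p -e "SET FOREIGN_KEY_CHECKS=0; TRUNCATE TABLE \`$t\`" scratch_db
done
# 3: export the now-empty database again
mysqldump -u root -p scratch_db > structure_only.sql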
You can back up your MySQL database structure with
mysqldump -u username -p -d database_name > backup.sql
(You should not supply the password on the command line, as it leads to security risks; MySQL will ask for the password by default.)