Restore data from sql.gz file, but skip a single table - mysql

I took a backup of my database from Engine Yard, which was downloaded as an sql.gz file. One of the tables in my database is very large, so I want to skip it while restoring the backup on my local system.
I restore the backup with the command gunzip < file_name.sql.gz | mysql -u user_name -ppassword database_name.

You may have found a solution by now, but I figured I'd add some insight. MySQL does not currently offer an easy way to exclude a single table from a restore, but there are a few options you could consider:
If your local MySQL server offers the Blackhole engine, you could use gawk to alter the ENGINE definition for that table when it is created. This would be something like:
gunzip < file_name.sql.gz | gawk -v RS='' '{print gensub(/(CREATE TABLE .[table_to_be_skipped].*) ENGINE=InnoDB/, "\\1 ENGINE=Blackhole", 1)}' | mysql -u user_name -ppassword database_name
This instructs the database to simply accept and discard the row inserts against this table during the reload. Once the load completes you can switch it back to the InnoDB engine with ALTER TABLE table_to_be_skipped ENGINE=InnoDB;. The drawback is that you are still downloading and parsing the larger backup.
Another option would be to use the Awk method described here to create two backup files, one representing the tables and data prior to the excluded table, and another representing everything that follows it. The tricky part here is that if you add tables on either side of the excluded table you would have to update this script. This also has the consequence of having to download and parse a larger backup file but tends to be a bit easier to remember from a syntax perspective.
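As a rough sketch of that splitting idea, here is a one-pass variation that filters the excluded table out instead of producing two files. It assumes the dump contains mysqldump's usual "-- Table structure for table ..." section comments, and table_to_be_skipped, user_name, and database_name are placeholders:
# skip everything from the excluded table's section header up to the next table's header
gunzip < file_name.sql.gz | awk '
  /^-- Table structure for table `table_to_be_skipped`/ { skip = 1 }
  /^-- Table structure for table `/ && $0 !~ /`table_to_be_skipped`/ { skip = 0 }
  skip == 0 { print }
' | mysql -u user_name -p database_name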
By far the best way to address this is simply to do a manual backup on your source database that ignores this table. Use a replica if possible, and use the --single-transaction option to mysqldump if you are using all InnoDB tables or if consistency of non-InnoDB tables is of minimal importance to your local environment. The following should do the trick:
mysqldump -u user_name -p --single-transaction --ignore-table=database_name.table_to_be_skipped database_name | gzip > file_name.sql.gz
This has the obvious benefit of not requiring any complex parsing or larger file downloads.

Not sure if this will help, but we have documentation available for dealing with database backups. You may also want to talk to someone from support through a ticket, or in #engineyard on Freenode IRC.

Related

Data changes during a database backup

I have a question regarding MySQL DB backups:
1. At 8:00:00 AM, I back up the database using the mysqldump command. It sometimes takes 5 seconds to finish.
2. While the backup is in progress (at 8:00:01 AM), someone makes some changes to the DB.
Will the backup contain the data changes from step 2?
I have googled but not found an explanation yet. Please help!
Percona provides a free tool for this purpose called xtrabackup.
For InnoDB tables MySQL uses log files to store DML commands, so that commands can be rolled back, among other things.
You can back up your database (in a non-locking way), and after you have created the backup you can apply the logs, so that you have a backup that reflects the database state at the moment the backup finished. I don't know the exact commands off-hand, sorry; you'll have to have a look at the documentation.
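For reference, a minimal sketch of the Percona XtraBackup workflow mentioned above (the target directory is a placeholder; check the XtraBackup documentation for the exact invocation on your version):
# take a non-locking physical backup of the running server
xtrabackup --backup --target-dir=/data/backups/full
# apply the InnoDB logs so the backup is consistent as of the moment it finished
xtrabackup --prepare --target-dir=/data/backups/full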
It depends on your mysqldump command and which table the dump is on at the time of the update:
If you used the --single-transaction option, then, no, it will not contain the late change.
If you did not use that option then:
If the table being updated has not been dumped yet, then yes, it will have the change.
If the table being updated has already been dumped, then no, it won't have the change.
Chances are, you don't want that change in your backup, because you would like your backup to be consistent to a point in time. Here is some more discussion about all those kinds of things:
How to obtain a correct dump using mysqldump and single-transaction when DDL is used at the same time?
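As a hedged example, a dump that is consistent to a single point in time for InnoDB tables looks roughly like this (user and database names are placeholders):
# the dump reflects the data as of the moment the dump's transaction starts,
# so changes made while the dump is running are not included
mysqldump -u user_name -p --single-transaction database_name > backup.sql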

How can I remove unmapped MYSQL databases using InnoDB tables with a safe bash command?

$ user=jocular; cat ~/list|while read db; do echo rm -vi /var/lib/mysql/$user_$db; done
That is what I came up with, but my instructor gave me this feedback:
Removing MySQL databases in this manner can cause catastrophic problems for the MySQL server that can lead to loss of the MySQL server. Remove a database using InnoDB tables in this fashion and attempt to restore it from a backup to learn more.
What would be the safest command to remove the unmapped databases ?
Your instructor is right, InnoDB makes it harder to manipulate tables and databases using shell tools. The reason is that InnoDB manages a "data dictionary" inside the ibdata1 file, which catalogs the databases and tables and which tablespace files they belong in. If you move or rename or delete files in the shell, InnoDB's data dictionary now references out-of-date information, and subsequently trying to use those table names or database names runs into conflicts.
Sort of like when you get a new phone, and you keep getting calls from friends of the former owner of that phone number.
If you use SQL or other MySQL commands to drop the database, InnoDB makes sure to update its data dictionary and keep it in sync with reality.
user=jocular; cat ~/list | while read db; do mysqladmin drop "${user}_${db}" ; done
Mysqladmin prompts you before dropping a database, since there's no undoing that change. But you can also use the -f option to force dropping without prompting. Just be careful you don't drop the wrong database!
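Putting that together, a minimal sketch (it assumes the names in ~/list map to databases prefixed with the user name, and that credentials are supplied via ~/.my.cnf or extra -u/-p options):
# drop each listed database through the server so InnoDB's data dictionary
# stays in sync; -f skips the confirmation prompt, which would otherwise
# try to read its answer from the redirected stdin
user=jocular
while read -r db; do
    mysqladmin -f drop "${user}_${db}"
done < ~/list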

Moving an InnoDB DB

I have an InnoDB database, innodb_db_1, and I have turned on innodb_file_per_table.
If I go to /var/lib/mysql/innodb_db_1/ I find the files table_name.ibd, table_name.frm, and db.opt.
Now I'm trying to copy these files to another DB, for example innodb_db_2 (/var/lib/mysql/innodb_db_2/), but nothing happens.
If the DB were MyISAM, I could copy it this way and everything would be fine.
Any suggestions for moving an InnoDB DB by copying its files?
Even when you use file-per-table, the tables keep some of their data and metadata in /var/lib/mysql/ibdata1. So you can't just move .ibd files to a new MySQL instance.
You'll have to backup and restore your database. You can use:
mysqldump, included with MySQL; reliable but slow.
mydumper, a community-contributed substitute for mysqldump; it supports compression, parallel execution, and other neat features.
Percona XtraBackup, which is free and performs high-speed physical backups of InnoDB (and also supports other storage engines). This is recommended to minimize interruption to your live operations, and also if your database is large.
Re your comment:
No, you cannot just copy .ibd files. You cannot turn off the requirement for ibdata1. This file includes, among other things, a data dictionary which you can think of like a table of contents for a book. It tells InnoDB what tables you have, and which physical file they reside in.
If you just move a .ibd file into another MySQL instance, this does not add it to that instance's data dictionary. So InnoDB has no idea it should look in the new file, or which logical table it belongs to.
If you want a workaround, you could ALTER TABLE mytable ENGINE=MyISAM, move that file and its .frm to another instance, and then ALTER TABLE mytable ENGINE=InnoDB to change it back. Remember to FLUSH TABLES WITH READ LOCK before you move MyISAM files.
But these steps are not for beginners. It would be a lot safer for you to use the backup & restore method unless you know what you're doing. I'm trying to save you some grief.
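A minimal sketch of the backup-and-restore route with mysqldump (database names and credentials are placeholders; mydumper or XtraBackup follow the same idea with their own commands):
# on the source instance: dump the database
mysqldump -u user_name -p --single-transaction innodb_db_1 > innodb_db_1.sql
# on the destination instance: create the target database and load the dump
mysql -u user_name -p -e 'CREATE DATABASE IF NOT EXISTS innodb_db_2'
mysql -u user_name -p innodb_db_2 < innodb_db_1.sql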
There is an easy procedure to move a whole MySQL InnoDB installation from PC A to PC B.
The conditions for performing the procedure are:
You need to have the innodb_file_per_table option set
You need to be able to shut down the database
In my case I had to move a whole 150 GB MySQL database (the biggest table was approximately 60 GB). Making SQL dumps and loading them back was not an option (too slow).
So what I did was make a "cold backup" of the MySQL database (see the MySQL documentation) and then simply move the files to another computer.
The steps to take after moving the database are described on DBA Stack Exchange.
I am writing this because (assuming you can meet the conditions above) this is by far the fastest (and probably the easiest) method to move a (large) MySQL InnoDB database, and nobody had mentioned it yet.
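A rough sketch of that cold-backup move (it assumes you can stop MySQL on both machines, that versions and configuration are compatible, and that the whole data directory moves; paths and host names are placeholders):
# on PC A: stop MySQL cleanly so all InnoDB files are consistent on disk
sudo systemctl stop mysql
# copy the entire data directory to PC B
rsync -a /var/lib/mysql/ pcB:/var/lib/mysql/
# on PC B: fix ownership and start the server
sudo chown -R mysql:mysql /var/lib/mysql
sudo systemctl start mysql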
You can copy MyISAM tables all day long (safely, as long as they are flushed and locked or the server is stopped) but you can't do this with InnoDB, because the two storage engines handle tables and tablespaces very differently.
MyISAM automatically discovers tables by iterating the files in the directory named for the database.
InnoDB has an internal data dictionary stored in the system tablespace (ibdata1). Not only do the tables have to be consistent, there are identifiers in the .ibd files that must match what the data dictionary has stored internally.
Prior to MySQL 5.6, which introduced transportable tablespaces, this wasn't a supported operation. If you are using MySQL 5.6, the link provides you with information on how this works.
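As a hedged sketch of the MySQL 5.6+ transportable-tablespace flow on the receiving side (table, database, and path names are placeholders; the .ibd/.cfg files are assumed to have been exported on the source with FLUSH TABLES t1 FOR EXPORT, copied out, and the lock then released with UNLOCK TABLES):
# the destination database must already have an identical table definition for t1
mysql -u user_name -p db2 -e 'ALTER TABLE t1 DISCARD TABLESPACE'
cp /backups/t1.ibd /backups/t1.cfg /var/lib/mysql/db2/
chown mysql:mysql /var/lib/mysql/db2/t1.ibd /var/lib/mysql/db2/t1.cfg
mysql -u user_name -p db2 -e 'ALTER TABLE t1 IMPORT TABLESPACE'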
The alternatives:
use mysqldump [options] database_name > dumpfile.sql without the --databases option, which dumps the tables in the specified database but omits any database-level statements (DROP DATABASE, CREATE DATABASE, and USE), some or all of which are normally added to the dump file depending on the options specified. You can then import it into the target database with mysql [options] target_database < dumpfile.sql.
CREATE TABLE db2.t1 LIKE db1.t1; INSERT INTO db2.t1 SELECT * FROM db1.t1; (for each table; you'll have to add any foreign key constraints back in)
ALTER TABLE on each table, changing it to MyISAM, then flushing and locking the tables with FLUSH TABLES WITH READ LOCK;, copying them over, then altering everything back to InnoDB. Not the greatest idea, since you'll lose all foreign key declarations and have to add them back on the original tables, but it is an alternative.
As far as I know, "hot copying" table files is a very bad idea (I've done it twice, and only made it work with MyISAM tables, and I did it only because I had no other choice).
My personal recommendation is: use mysqldump. On your shell:
mysqldump -h yourHost -u yourUser -pYourPassword yourDatabase yourTable > dumpFile.sql
To copy the data from a dump file to another database, on your shell:
mysql -h yourHost -u yourUser -pYourPassword yourNewDatabase < dumpFile.sql
Check: mysqldump — A Database Backup Program.
If you insist on copying InnoDB files by hand, please read this: Backing Up and Recovering an InnoDB Database

MYSQLDUMP failing. Couldn't execute 'SHOW TRIGGERS LIKE ...' errors like (Errcode: 13) (6) and (1036) [duplicate]

This question already has answers here:
mysqldump doing a partial backup - incomplete table dump
Does anyone know why MYSQLDUMP would only perform a partial backup of a database when run with the following instruction:
"C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqldump" databaseSchema -u root --password=rootPassword > c:\backups\daily\mySchema.dump
Sometimes a full backup is performed; at other times the backup stops after including only a fraction of the database, and that fraction varies.
The database does have several thousand tables totalling about 11 GB. But most of these tables are quite small, with only about 1,500 records; many have only 150-200 records. The column counts of these tables can be in the hundreds, though, because of the frequency data stored.
But I am informed that the number of tables in a schema in MySQL is not an issue. There are also no performance issues during normal operation.
And the alternative of using a single table is not really viable because all of these tables have different column name signatures.
I should add that the database is in use during the backup.
Well, after running the backup with this instruction:
"C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqldump" mySchema -u root --password=xxxxxxx -v --debug-check --log-error=c:\backups\daily\mySchema_error.log > c:\backups\daily\mySchema.dump
I get this:
mysqldump: Couldn't execute 'SHOW TRIGGERS LIKE '\_dm\_10730\_856956\_30072013\_1375194514706\_keyword\_frequencies'': Error on delete of 'C:\Windows\TEMP\#sql67c_10_8c5.MYI' (Errcode: 13) (6)
Which I think is a permissions problem.
I doubt any one table in my schema is in the 2GB range.
I am using MySQL Server 5.5 on a Windows 7 64 bit server with 8 Gb of memory.
Any ideas?
I am aware that changing the number of files which MySQL can open, the open_files_limit parameter, may cure this matter.
Another possibility is interference from anti virus products as described here:
How To Fix Intermittent MySQL Errcode 13 Errors On Windows
There are a few possibilities for this issue that I have run into; here is my workup:
First: Enable error/debug logging and/or verbose output, otherwise we won't know of an error that could be causing the issue:
"c:\path\to\mysqldump" -b yourdb -u root -pRootPasswd -v --debug-check --log-error=c:\backup\mysqldump_error.log > c:\backup\visualRSS.dump
So long as debug is enabled in your distribution, you should now be able both to log errors to a file and to view output on a console. The issue is not always clear here, but it is a great first step.
Have you reviewed your error or general logs? They don't often hold useful information for this issue, but sometimes they do, and every little bit helps with tracking these problems down.
Also watch SHOW PROCESSLIST while you are running this. See whether you are seeing status columns like WAITING FOR ... LOCK / METADATA LOCK, which would indicate the operation is unable to acquire a lock because of another operation.
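For example, from a second session you might check the process list while the dump runs (a sketch; credentials are placeholders):
# run this from a second connection while the dump is in progress
mysql -u root -p -e "SHOW FULL PROCESSLIST"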
Depending on the info gathered above: assuming I found nothing and had to shoot blind, here is what I would do next, covering some common cases I have experienced:
Max packet size errors: If you receive an error regarding max_allowed_packet, you can add --max_allowed_packet=160M to your parameters to see if you can get it large enough:
"c:\path\to\mysqldump" -b yourdb -u root -pRootPasswd -v --debug-check
--log-error=c:\backup\mysqldump_error.log --max_allowed_packet=160M > c:\backup\visualRSS.dump
Try to reduce run time and file size using the --compact flag. By default, mysqldump adds everything you need to create the schema and insert the data, along with other information; you can significantly reduce run time and file size by stripping comments, lock statements, and other non-critical statements from the dump (and, if you only need the data, skipping the schema statements with --no-create-info). This can mitigate a lot of problems where appropriate, but you will then want a separate dump with --no-data on each run to export your schema, so you can recreate all the empty tables, etc.
(Create raw data; exclude add-drop-table, comment, lock and key-check statements:)
"c:\path\to\mysqldump" -b yourdb -u root -pRootPasswd -v --debug-check --log-error=c:\backup\mysqldump_error.log --compact > c:\backup\visualRSS.dump
(Create schema dump with no data:)
"c:\path\to\mysqldump" -b yourdb -u root -pRootPasswd -v --debug-check --log-error=c:\backup\mysqldump_error.log --no-data > c:\backup\visualRSS.dump
Locking issues: By default, mysqldump uses LOCK TABLES (unless you specify --single-transaction) to read a table while dumping it, and it wants to acquire a read lock on the table; DDL operations and your global lock type may create this situation. Without seeing the hung query you will typically see a small backup file, as you described, and usually the mysqldump operation will sit until you kill it or the server closes the idle connection. You can use the --single-transaction flag to set REPEATABLE READ isolation for the transaction and essentially take a snapshot of the table without blocking operations or being blocked, save for some older server versions that have issues with ALTER/TRUNCATE TABLE while in this mode.
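A hedged variation of the earlier invocation with --single-transaction added (same placeholder paths and credentials as above):
"c:\path\to\mysqldump" -u root -pRootPasswd --single-transaction -v --debug-check --log-error=c:\backup\mysqldump_error.log yourdb > c:\backup\visualRSS.dump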
File size issues: If I read incorrectly and this backup HAS NOT successfully run before, a potential 2 GB file size limit could be the issue; you can try piping the mysqldump output straight into something like 7-Zip on the fly:
mysqldump | 7z.exe a -siname_in_outfile output_path_and_filename
If you continue to have issues, or there is an unavoidable issue prohibiting mysqldump from being used, Percona XtraBackup is what I prefer, and there is also MySQL Enterprise Backup from Oracle. XtraBackup is open source, far more versatile than mysqldump, has a very reliable group of developers working on it, and has many great features that mysqldump does not have, like streaming/hot backups. Unfortunately the Windows build is old, unless you can compile from source or run a local Linux VM to handle that for you.
Very important: I noticed that you are not backing up your information_schema database; it needs to be mentioned explicitly if it is of significance to your backup scheme.

mysql sync two tables from 2 databases

I have a query. I have two tables on two different servers, and both tables have the same structure. The master table on one server gets updated on a daily basis, so I want a cron job, or a PHP script run from cron, to update the second (slave) table on the other server.
I have seen a lot of scripts, but none of them met my requirements.
I can't believe you didn't find a suitable script to do this. Depending on server-to-server bandwidth and connectivity, and table data size, you can:
directly transfer the whole table:
mysqldump [options] sourcedatabase tablename \
| mysql [options] --host remoteserver --user username ...
transfer the table with MySQL compression
# same as above, but pass mysql the -C (compression) flag
transfer using SSH encryption and compression; mysql is executed remotely
mysqldump [options] sourcedatabase tablename \
| ssh -C user@remoteserver 'mysql [options]'
transfer using intermediate SQL file and rsync to transfer only modifications
mysqldump [options] sourcedb tbl > dump.sql
rsync [-z] dump.sql user@remoteserver:/path/to/remote/dump.sql
ssh user@remoteserver "mysql [options] < /path/to/remote/dump.sql"
The above are all simple table overwrites: remote data is LOST and replaced by the master copy. The mysqldump-plus-rsync-plus-ssh variant runs in a time roughly proportional to the modifications, which means that if you have a 10 GB SQL dump and add a dozen INSERTs, the transfer stage will need at most a couple of seconds to synchronize the two SQL files.
To optimize the insert stage too, either go for full MySQL replication, or you will need a way to identify operations on the table so you can replay them manually at sync time. This may require alterations to the table structure, e.g. adding "last-synced-on" and "needs-deleting" columns, or even introducing ancillary tables.
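If full overwrites get too heavy, here is one hedged sketch of a manual incremental push (it assumes the table has an indexed updated_at column, that primary keys match on both sides, and that deletes are handled separately; names are placeholders):
# read the last successful sync time from wherever you record it
LAST_SYNC='2013-01-01 00:00:00'
# dump only the rows changed since then, as REPLACE statements, and apply
# them on the remote copy; --no-create-info skips the CREATE TABLE statement
mysqldump --no-create-info --replace \
  --where="updated_at > '${LAST_SYNC}'" sourcedatabase tablename \
  | ssh -C user@remoteserver 'mysql [options] remotedatabase'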
You can use one simple solution: the Data Synchronization tool in dbForge Studio for MySQL.
Create a Data Comparison project that compares and synchronizes the two tables on the two different MySQL servers.
Then run the application in command-line mode using the created Data Comparison project document (*.dcomp file).