I have a RHEL 5 system with a fresh hard drive I just dedicated to the MySQL server. To get things started, I used "mysqldump --host otherhost -A | mysql", even though I noticed the manpage never explicitly recommends trying this (dumping to a file is a no-go; we're talking about 500 GB of database).
This process fails at random intervals, complaining that too many files are open (at which point mysqld gets the relevant signal, dies, and respawns).
I tried raising the limit with sysctl and ulimit, but the problem persists. What can I do about it?
mysqldump by default takes a per-table lock on every table involved. If you have many tables, that can exceed the number of file descriptors available to the mysql server process.
Try --skip-lock-tables, or, if locking is imperative, --lock-all-tables.
From http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html:
--lock-all-tables, -x
Lock all tables across all databases. This is achieved by acquiring a global read lock for the duration of the whole dump. This option automatically turns off --single-transaction and --lock-tables.
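A minimal sketch of the two alternatives, reusing the pipeline from the question (the host name is a placeholder):
# skip per-table locking entirely (no consistency guarantee for non-transactional tables)
mysqldump --host otherhost --skip-lock-tables -A | mysql
# or take a single global read lock instead of opening every table
mysqldump --host otherhost --lock-all-tables -A | mysql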
mysqldump has been reported to yield that error for larger databases (1, 2, 3). Explanation and workaround from MySQL Bugs:
[3 Feb 2007 22:00] Sergei Golubchik
This is not really a bug.
mysqldump by default has --lock-tables enabled, which means it tries to lock all tables to be dumped before starting the dump. And doing LOCK TABLES t1, t2, ... for a really big number of tables will inevitably exhaust all available file descriptors, as LOCK needs all tables to be opened.
Workarounds: --skip-lock-tables will disable such locking completely. Alternatively, --lock-all-tables will make mysqldump use FLUSH TABLES WITH READ LOCK, which locks all tables in all databases (without opening them). In this case mysqldump will automatically disable --lock-tables because it makes no sense when --lock-all-tables is used.
Edit: Please check Dave's workaround for InnoDB in the comment below.
If your database is that large you've got a few issues.
1. You have to lock the tables to dump the data.
2. mysqldump will take a very, very long time, and your tables will need to be locked during this time.
3. Importing the data on the new server will also take a long time.
Since your database is going to be essentially unusable while #1 and #2 are happening I would actually recommend stopping the database and using rsync to copy the files to the other server. It's faster than using mysqldump and much faster than importing because you don't have the added IO and CPU of generating indexes.
In production environments on Linux many people put Mysql data on an LVM partition. Then they stop the database, do an LVM snapshot, start the database, and copy off the state of the stopped database at their leisure.
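A rough sketch of that snapshot workflow, assuming the data directory sits on a logical volume /dev/vg0/mysql (volume names, sizes, and paths are all illustrative):
service mysqld stop                                  # or however your distribution stops MySQL
lvcreate --snapshot --size 10G --name mysql-snap /dev/vg0/mysql
service mysqld start                                 # downtime ends here
mount /dev/vg0/mysql-snap /mnt/mysql-snap
rsync -av /mnt/mysql-snap/ otherhost:/var/lib/mysql/ # copy the frozen state at your leisure
umount /mnt/mysql-snap && lvremove -f /dev/vg0/mysql-snap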
I just restarted the MySQL server, and then I could use the mysqldump command flawlessly.
Thought this might be helpful tip here.
Related
We have a production MySQL DB of around 35 GB (InnoDB), and we have noticed that when mysqldump starts, the application becomes unstable. We use the following command to run the dump:
mysqldump --password=XXXX --add-drop-table foo | gzip -c > foo.dmp.gz
After googling, people said mysqldump locks tables before dumping data, so they suggested using the --single-transaction flag for InnoDB.
So as an experiment I started mysqldump manually and ran some read/write queries on the tables, and it let me perform all operations while mysqldump was running. So how do I reproduce the behavior where mysqldump really locks my tables and causes the application accessibility problem?
Does mysqldump block read operations on a table, or just writes?
We have a few DBs using MyISAM; in that case, what should we do to avoid locks?
Use --single-transaction to avoid table locks on InnoDB tables.
There's nothing you can really do about MyISAM, though you shouldn't be using MyISAM anyway. The best workaround is to create a read replica and make backups from the replica so that the locks don't impact the application.
What you should find is that while a backup is running, a READ LOCAL lock is held on the tables in the single database that is currently being backed up. That means you can read from the tables, but writes (insert/update/delete) will block, except for certain inserts on MyISAM that can be done without disturbing the lock; those may be allowed. The easiest way to see this happening is to repeatedly query SHOW FULL PROCESSLIST; to find threads that are blocking.
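For example, a hedged adaptation of the command from the question with --single-transaction added (password and schema name are placeholders):
mysqldump --single-transaction --password=XXXX --add-drop-table foo | gzip -c > foo.dmp.gz
While it runs, repeatedly checking SHOW FULL PROCESSLIST; in a second session, as described above, will show whether anything is actually blocking.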
I need to set up working MySQL replication from master to slave (I have already tried it once).
The database is quite large (over 100 GB), and it will take some hours to make it ready for a new slave.
The database has both MyISAM and InnoDB tables, and both are being written to.
I think my only choice is to copy the data files from the master to a new slave? (Or make a database dump, which I refer to later in the topic under ROUND 2.)
Before that, do I have to shut down all the services which use the database and take a write lock on the tables, or should I shut down the whole database?
After syncing the data directory to the new replication server, I started it up and the database with its tables was there. The first error I got rid of by changing the bin-log to 007324 and the position to 0.
Error 1:
140213 4:52:07 [ERROR] Got fatal error 1236: 'Could not find first log file name in binary log index file' from master when reading data from binary log
140213 4:52:07 [Note] Slave I/O thread exiting, read up to log 'bin-log.007323', position 46774422
After that I got new problems from the database, and this error came up for every table.
Error 2:
Error 'Incorrect information in file: './database/table.frm'' on query. Default database: 'database'.
Seems that something went wrong.
ROUND 2!
After this I started to wonder whether this can be done without a long service break.
The master database has already been configured, and it replicates fine to another slave.
So I did some googling, and this is what I came up with.
Taking a read lock on the tables:
FLUSH TABLES WITH READ LOCK;
Taking dump:
mysqldump --skip-lock-tables --single-transaction --flush-logs --master-data=2 -A > dbdump.sql
Packaging and moving:
gzip (pigz) the dbdump and move it to the slave server; after that, find the MASTER_LOG_FILE and MASTER_LOG_POS in the dump.
After that I don't think that I want to import the dbdump.sql, because it's over 100 GB and will take time. So I think SOURCE would be an OK option for it.
On SLAVE server:
CREATE DATABASE dbdump;
USE dbdump;
SOURCE dbdump.sql;
CHANGE MASTER TO MASTER_HOST='x.x.x.x',MASTER_USER='replication',MASTER_PASSWORD='slavepass',
MASTER_LOG_FILE='mysql-bin.000001',MASTER_LOG_POS=X;
START SLAVE;
SHOW SLAVE STATUS \G
I haven't tested this yet, am I on to something?
--bp
Realize that issuing a SOURCE command is the same as running an import of the dumped SQL from the shell. Either way, it is going to take a long time. Outside of that, you have the steps correct: FLUSH TABLES WITH READ LOCK on the master, make a database dump of the master, make sure you note the master binlog coordinates, import the dump on the slave, set the binlog coordinates, start replication. Do not work with the raw binaries unless you REALLY know what you are doing (especially for InnoDB tables).
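Since the dump was taken with --master-data=2, the coordinates are also written into the file as a commented-out CHANGE MASTER TO statement near the top, so you can recover them with something like:
grep -m 1 "CHANGE MASTER TO" dbdump.sql
(or run SHOW MASTER STATUS; on the master while the read lock is still held).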
If you have a number of large tables (i.e. not just one big one), you could consider parallelizing your dumps/imports by table (or groups of tables) to speed things along. There are actually tools out there to help you do this.
You CAN work with the raw binaries, but it is not for the faint of heart. In the past, I have used rsync to differentially update the raw binaries between master and slave (you still must use FLUSH TABLES WITH READ LOCK and gather the master binlog coordinates before doing this). For MyISAM tables this works pretty well, actually. For InnoDB, it can be more tricky. I prefer to use the option that makes InnoDB write index and data files per table (innodb_file_per_table). You would need to rsync the ibdata* files. You would delete the ib_logfile* files from the slave.
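A bare-bones sketch of that rsync step (paths and host name are illustrative; the read lock must already be held, or the server stopped, and the binlog coordinates recorded first):
rsync -av --delete /var/lib/mysql/ slavehost:/var/lib/mysql/
ssh slavehost 'rm -f /var/lib/mysql/ib_logfile*'   # as noted above, before starting the slave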
This whole thing is a bit of a high wire act, so I would not resort to doing this unless you have no other viable options. Absolutely take a traditional SQL dump before even thinking about attempting a binary file sync, and each time until you are VERY comfortable that you actually know what you are doing.
Does anyone know why mysqldump would only perform a partial backup of a database when run with the following command:
"C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqldump" databaseSchema -u root --password=rootPassword > c:\backups\daily\mySchema.dump
Sometimes a full backup is performed, at other times the backup will stop after including only a fraction of the database. This fraction is variable.
The database does have several thousand tables totalling about 11 GB. But most of these tables are quite small, with only about 1,500 records; many have only 150 to 200 records. The column counts of these tables can be in the hundreds, though, because of the frequency data stored.
But I am informed that the number of tables in a schema in MySQL is not an issue. There are also no performance issues during normal operation.
And the alternative of using a single table is not really viable because all of these tables have different column name signatures.
I should add that the database is in use during the backup.
Well, after running the backup with the following command:
"C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqldump" mySchema -u root --password=xxxxxxx -v --debug-check --log-error=c:\backups\daily\mySchema_error.log > c:\backups\daily\mySchema.dump
I get this:
mysqldump: Couldn't execute 'SHOW TRIGGERS LIKE '\_dm\_10730\_856956\_30072013\_1375194514706\_keyword\_frequencies'': Error on delete of 'C:\Windows\TEMP\#sql67c_10_8c5.MYI' (Errcode: 13) (6)
Which I think is a permissions problem.
I doubt any one table in my schema is in the 2GB range.
I am using MySQL Server 5.5 on a Windows 7 64-bit server with 8 GB of memory.
Any ideas?
I am aware that changing the number of files which MySQL can open, the open_files_limit parameter, may cure this matter.
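To check the current value and raise it, a rough sketch (the number here is only illustrative):
SHOW VARIABLES LIKE 'open_files_limit';
Then set, for example, open_files_limit = 8192 in the [mysqld] section of my.ini and restart the MySQL service.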
Another possibility is interference from anti virus products as described here:
How To Fix Intermittent MySQL Errcode 13 Errors On Windows
There are a few possibilities for this issue that I have run into and here is my workup:
First: Enable error/debug logging and/or verbose output, otherwise we won't know of an error that could be creating the issue:
"c:\path\to\mysqldump" -b yourdb -u root -pRootPasswd -v --debug-check --log-error=c:\backup\mysqldump_error.log > c:\backup\visualRSS.dump
So long as debug is enabled in your distribution, you should now be able both to log errors to a file and to view output on a console. The issue is not always clear here, but it is a great first step.
Have you reviewed your error or general logs? Not often useful information for this issue, but sometimes there is, and every little bit helps with tracking these problems down.
Also watch SHOW PROCESSLIST while you are running this. See if you are seeing status columns like WAITING FOR...LOCK / METADATA LOCK, which would indicate the operation is unable to acquire a lock because of another operation.
Depending on info gathered above: Assuming I found nothing and had to shoot blind, here is what I would do next with some common cases I have experienced:
Max packet size errors: If you receive an error regarding max_allowed_packet, you can add --max_allowed_packet=160M to your parameters to see if that is large enough:
"c:\path\to\mysqldump" -b yourdb -u root -pRootPasswd -v --debug-check
--log-error=c:\backup\mysqldump_error.log --max_allowed_packet=160M > c:\backup\visualRSS.dump
Try to reduce run time and file size using the --compact flag. mysqldump adds everything you need to create the schema and insert the data along with other information; you can significantly reduce run time and file size by requiring the dump to contain only the INSERTs into your schema, avoiding the schema-creation statements and other non-critical info around each insert. This can mitigate a lot of problems where it is appropriate, but you will want a separate dump made with --no-data each run to export your schema, so you can create all the empty tables, etc.
Create a raw data dump, excluding add-drop-table, comment, lock, and key-check statements:
"c:\path\to\mysqldump" -b yourdb -u root -pRootPasswd -v --debug-check
--log-error=c:\backup\mysqldump_error.log --compact > c:\backup\visualRSS.dump
Create a schema-only dump with no data:
"c:\path\to\mysqldump" -b yourdb -u root -pRootPasswd -v --debug-check
--log-error=c:\backup\mysqldump_error.log --no-data > c:\backup\visualRSS.dump
Locking issues: By default, mysqldump uses LOCK TABLES (unless you specify --single-transaction) to read a table while it is dumping and wants to acquire a read lock on the table; DDL operations and your global lock type may create this situation. Without seeing the hung query, you will typically see a small backup file size, as you described, and usually the mysqldump operation will sit until you kill it or the server closes the idle connection. You can use the --single-transaction flag to set REPEATABLE READ isolation for the transaction and essentially take a snapshot of the table without blocking operations or being blocked, save for some older server versions that have issues with ALTER/TRUNCATE TABLE while in this mode.
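For example, reusing the command from the question with --single-transaction added (credentials and paths are placeholders):
"C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqldump" mySchema -u root --password=xxxxxxx --single-transaction -v --debug-check --log-error=c:\backups\daily\mySchema_error.log > c:\backups\daily\mySchema.dump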
File size issues: If I have it right that this backup has NOT successfully run before, which would point to a potential 2 GB file-size issue, you can try piping mysqldump output straight into something like 7-Zip on the fly:
mysqldump | 7z.exe a -si name_in_outfile output_path_and_filename
If you continue to have issues, or there is an unavoidable problem prohibiting mysqldump from being used: Percona XtraBackup is what I prefer, or there is Enterprise Backup for MySQL from Oracle. XtraBackup is open source, far more versatile than mysqldump, has a very reliable group of developers working on it, and has many great features that mysqldump does not have, like streaming/hot backups, etc. Unfortunately the Windows build is old, unless you can compile it yourself or run a local Linux VM to handle that for you.
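As a rough illustration of the XtraBackup approach via its innobackupex wrapper (paths are placeholders and flags vary by version, so treat this as a sketch to verify against your release):
innobackupex --user=root --password=xxxx /backups/
# then prepare the timestamped directory the backup created, e.g.:
innobackupex --apply-log /backups/2013-08-01_00-00-00/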
Very important: I noticed that you are not backing up your information_schema table; it needs to be named explicitly if it is of significance to your backup scheme.
I have a large (about 10 GB with a 20 GB innodb buffer pool) database, and have noticed that when I start it, for about the first half hour it's running, the database will periodically lock and unlock all tables, making it quite unpleasant for users who attempt to access our site for the first half hour after a database restart.
While I can't be 100% certain of the causal relationship, I notice that the time that the database is locking and unlocking itself coincides with the time that
/usr/bin/mysqlcheck --defaults-file=/etc/mysql/debian.cnf --all-databases --fast --silent
and
perl -e $_=join("", <>); s/^[^\n]+\n(error|note)\s+: The (handler|storage engine) for the table doesn.t support check\n//smg;print;
are running on my database server. My question is whether it's really necessary to run mysqlcheck (which is run from the /etc/init.d/mysql script by default) upon starting MySQL. The Google results I've found on mysqlcheck indicate that it "fixes & optimizes" tables, but I don't expect my tables to be broken, and I'm skeptical of the optimization advantages afforded by this utility.
If it matters, I'm running MySQL 5.0.32.
No.
This is a Debian-specific change to the stock MySQL packaging. It is designed to stop people from getting into problems with corrupted MyISAM tables. Since you are using InnoDB, I would recommend you remove it. I believe it is part of the init.d script, so you could edit that to remove it.
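If you go that route, a quick hedged way to find where the check is invoked (exact paths vary by release):
grep -rn mysqlcheck /etc/init.d/mysql /etc/mysql/
# then comment out the offending call and restart the service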
I would like to create a copy of a database with approximately 40 InnoDB tables and around 1.5GB of data with mysqldump and MySQL 5.1.
What are the best parameters (i.e. --single-transaction) that will result in the quickest dump and load of the data?
As well, when loading the data into the second DB, is it quicker to:
1) pipe the results directly to the second MySQL server instance and use the --compress option
or
2) load it from a text file (i.e. mysql < my_sql_dump.sql)
QUICKLY dumping a quiesced database:
Using the "-T " option with mysqldump results in lots of .sql and .txt files in the specified directory. This is ~50% faster for dumping large tables than a single .sql file with INSERT statements (takes 1/3 less wall-clock time).
Additionally, there is a huge benefit when restoring if you can load multiple tables in parallel, and saturate multiple cores. On an 8-core box, this could be as much as an 8X difference in wall-clock time to restore the dump, on top of the efficiency improvements provided by "-T". Because "-T" causes each table to be stored in a separate file, loading them in parallel is easier than splitting apart a massive .sql file.
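A rough sketch of such a parallel restore from a -T dump directory, assuming mysqlimport and GNU xargs are available (database name, directory, and worker count are placeholders):
# create the tables from the per-table .sql files first
for f in /dumpdir/*.sql; do mysql mydb < "$f"; done
# then load the per-table .txt data files across 8 parallel workers
ls /dumpdir/*.txt | xargs -P 8 -n 1 mysqlimport mydb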
Taking the strategies above to their logical extreme, one could create a script to dump a database widely in parallel. Well, that's exactly what the Maatkit mk-parallel-dump (see http://www.maatkit.org/doc/mk-parallel-dump.html) and mk-parallel-restore tools are: perl scripts that make multiple calls to the underlying mysqldump program. However, when I tried to use these, I had trouble getting the restore to complete without duplicate key errors that didn't occur with vanilla dumps, so keep in mind that your mileage may vary.
Dumping data from a LIVE database (w/o service interruption):
The --single-transaction switch is very useful for taking a dump of a live database without having to quiesce it or taking a dump of a slave database without having to stop slaving.
Sadly, -T is not compatible with --single-transaction, so you only get one.
Usually, taking the dump is much faster than restoring it. There is still room for a tool that take the incoming monolithic dump file and breaks it into multiple pieces to be loaded in parallel. To my knowledge, such a tool does not yet exist.
Transferring the dump over the Network is usually a win
To listen for an incoming dump on one host run:
nc -l 7878 > mysql-dump.sql
Then on your DB host, run
mysqldump $OPTS | nc myhost.mydomain.com 7878
This reduces contention for the disk spindles on the master from writing the dump to disk, slightly speeding up your dump (assuming the network is fast enough to keep up, a fairly safe assumption for two hosts in the same datacenter). Plus, if you are building out a new slave, this saves the step of having to transfer the dump file after it is finished.
Caveats - obviously, you need to have enough network bandwidth not to slow things down unbearably, and if the TCP session breaks, you have to start all over, but for most dumps this is not a major concern.
Lastly, I want to clear up one point of common confusion.
Despite how often you see these flags in mysqldump examples and tutorials, they are superfluous because they are turned ON by default:
--opt
--add-drop-table
--add-locks
--create-options
--disable-keys
--extended-insert
--lock-tables
--quick
--set-charset
From http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html:
Use of --opt is the same as specifying --add-drop-table, --add-locks, --create-options, --disable-keys, --extended-insert, --lock-tables, --quick, and --set-charset. All of the options that --opt stands for also are on by default because --opt is on by default.
Of those behaviors, "--quick" is one of the most important (it skips caching the entire result set in mysqld before transmitting the first row), and it can be used with "mysql" (which does NOT turn --quick on by default) to dramatically speed up queries that return a large result set (e.g. dumping all the rows of a big table).
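For instance, a hedged example of pairing --quick with the mysql client for a big result set (database and table names are placeholders):
mysql --quick -e "SELECT * FROM big_table" mydb > big_table.tsv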
Pipe it directly to another instance, to avoid disk overhead. Don't bother with --compress unless you're running over a slow network, since on a fast LAN or loopback the network overhead doesn't matter.
I think it will be a lot faster and save you disk space if you tried database replication as opposed to using mysqldump. Personally, I use SQLyog Enterprise for my really heavy lifting, but there are also a number of other tools that can provide the same services. Unless, of course, you would like to use only mysqldump.
For InnoDB, --order-by-primary --extended-insert is usually the best combo. If you're after every last bit of performance and the target box has many CPU cores, you might want to split the resulting dump file and do parallel inserts in many threads, up to innodb_thread_concurrency/2.
Also, tweak innodb_buffer_pool_size on the target to the max you can afford, and increase innodb_log_file_size to 128 or 256 MB (be careful with this: you need to remove the old log files before restarting the mysql daemon, otherwise it won't restart).
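A minimal my.cnf sketch of those target-side settings (the sizes are illustrative; remember the ib_logfile caveat above):
[mysqld]
innodb_buffer_pool_size = 8G
# remove the old ib_logfile* files before restarting mysqld after changing this:
innodb_log_file_size = 256M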
Use the mk-parallel-dump tool from Maatkit.
At least that would probably be faster. I'd trust mysqldump more.
How often are you doing this? Is it really an application performance problem? Perhaps you should design a way of doing this which doesn't need to dump the whole data (replication?)
On the other hand, 1.5G is quite a small database so it probably won't be much of a problem.
mydumper is a good choice, with parallel export, even parallel threads per table, and compressed files; see: