Do named FIFO pipes use disk writes and reads? - mysql

I want to parse MySQL general log and store that information on another server.
I was wondering whether there would be a performance benefit in having MySQL write its log to a Linux named pipe (FIFO) instead of just moving the log file and then parsing it.
My goal is to remove the hard disk access and increase the performance of the MySQL server.
This is all done on Linux centos.
So does a FIFO use disk access, or is everything done in memory?
If I had MySQL write to a FIFO, and had a process parsing that information in memory and then sending it to a different server, would that save on disk writes?
Also, would this be better than storing the MySQL general log in a MySQL table?
I've noticed that INSERT statements can add 0.2 seconds to a script, so I am wondering whether turning on logging for MySQL is going to add 0.2 seconds to every query that's run.

From the fifo(7) man-page:
When processes are exchanging data via the FIFO, the kernel passes all data internally without writing it to the filesystem. Thus, the FIFO special file has no contents on the filesystem; the filesystem entry merely serves as a reference point so that processes can access the pipe using a name in the filesystem.
Whether it is a good idea to use a FIFO in an attempt to increase MySQL performance is another question.
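As a rough, untested sketch of the idea (all paths and the parse_and_forward.sh reader script are placeholders, and it assumes MySQL will accept a FIFO as its general_log_file target):

# Create the FIFO and make it writable by the mysqld process.
mkfifo /var/lib/mysql/general_log.fifo
chown mysql:mysql /var/lib/mysql/general_log.fifo

# Keep a reader attached at all times: writes to a FIFO block when nobody is reading,
# which would stall MySQL. parse_and_forward.sh is a hypothetical script that ships
# each log line to the other server.
cat /var/lib/mysql/general_log.fifo | ./parse_and_forward.sh &

# Point the general log at the FIFO (log_output must be FILE, not TABLE).
mysql -u root -p -e "SET GLOBAL log_output = 'FILE';
                     SET GLOBAL general_log_file = '/var/lib/mysql/general_log.fifo';
                     SET GLOBAL general_log = 'ON';"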

Related

MySQL, recover database after mysqlbinlog loading error?

I am copying a MySQL DB from an initial dump file and the set of binlogs created after the dump.
The initial load from the dump is fine. Then, while loading the binlogs using mysqlbinlog, one of the files fails, for example with a "MySQL server has gone away" error.
Is there any way to recover from a failed mysqlbinlog run, or is the database copy now irreparably corrupted? I know which log has failed, but I can't just rerun that log since the error could have occurred at any query within the log.
Is there a way to handle this moving forward?
I can look into minimizing the chances that there will be an error in the first place, but it doesn't seem like much of a recovery process (or master/slave process) if any MySQL issue during the loading completely ruins the database. I feel that I must be missing something.
I'd check the configuration value for max_allowed_packet. This is pretty small by default (4MB or 64MB depending on MySQL version). You might need to increase it.
Note that you need to increase that option both in the server and in the client that is applying binlogs. The effective limit on packet size is the lesser of the server and client's configuration value.
Even if the same events succeeded when they were originally replicated or logged, they might not succeed when replaying the binlogs, because the mysql client doing the replay also needs the --max-allowed-packet option specified.
See https://dev.mysql.com/doc/refman/8.0/en/gone-away.html for more explanation of the error you got.
If you don't know the binlog coordinates of the last binlog event that succeeded, you'll have to start over: remove the partially-restored instance and restore from the backup again, then apply the binlog.
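As a sketch (the 1 GB value, the binlog file name, and the credentials are placeholders):

# Raise the limit on the running server; persist it in my.cnf under [mysqld] as well.
mysql -u root -p -e "SET GLOBAL max_allowed_packet = 1073741824;"

# When replaying, pass the same limit to the mysql client that applies the events.
mysqlbinlog binlog.000123 | mysql -u root -p --max-allowed-packet=1G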

Moving of large MySQL database from limited resource server

I have a Windows Server with MySQL Database Server installed.
Multiple databases exist; among them, database A contains a huge table named 'tlog', about 220 GB in size.
I would like to move over database A to another server for backup purposes.
I know I can do SQL Dump or use MySQL Workbench/SQLyog to do table copy.
But due to limited disk storage on the server (less than 50 GB free), a SQL dump is not possible.
The server is serving other workloads, so the CPU & RAM are limited too. As a result, copying the table without using up CPU & RAM is not possible.
Is there any other method that can do the moving of the huge database A over to another server please?
Thanks in advance.
You have a few ways:
Method 1
Dump and compress at the same time: mysqldump ... | gzip > blah.sql.gz
This method is good because chances are the compressed dump will come in under 50GB: the dump itself is plain text, and you're compressing it on the fly.
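A sketch of that, streaming straight to the other server so nothing large ever lands on the local disk (host, user and paths are placeholders; --single-transaction assumes InnoDB tables):

# Dump, compress on the fly, and stream to the backup server over ssh.
mysqldump -u root -p --single-transaction A | gzip \
  | ssh backup@otherserver 'cat > /backups/A.sql.gz'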
Method 2
You can use slave replication; this method will require a dump of the data.
Method 3
You can also use xtrabackup.
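Roughly like this (a sketch only; paths and hosts are placeholders, and the exact options depend on your XtraBackup version), streaming the physical backup so it never needs 220 GB of local staging space:

# Stream a physical backup straight to the target host.
xtrabackup --backup --stream=xbstream --target-dir=/tmp \
  | ssh backup@otherserver 'xbstream -x -C /data/restore'

# On the target (which also needs xtrabackup installed), prepare the backup
# before starting a MySQL server on it.
ssh backup@otherserver 'xtrabackup --prepare --target-dir=/data/restore'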
Method 4
You can shut down the database and rsync the data directory.
Note: you don't actually have to shut down the database; you can instead do multiple rsyncs, and eventually little or nothing will change between passes (unlikely if the database is busy, so do it during a slow time), which means the data directory will have synced over.
I've had to use this method with fairly large PostgreSQL databases (1TB+). It takes a few rsyncs, but hey, that's the cost of zero downtime.
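A sketch of the multi-pass approach (paths, host and service names are placeholders, and it assumes a Linux-style data directory; the final pass is taken with MySQL stopped so the copy is consistent):

# First pass while MySQL is still running: moves the bulk of the data.
rsync -av /var/lib/mysql/ backup@otherserver:/var/lib/mysql/

# Repeat until the delta is small, then take a brief outage for the final consistent pass.
systemctl stop mysqld
rsync -av --delete /var/lib/mysql/ backup@otherserver:/var/lib/mysql/
systemctl start mysqld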
Method 5
If you're in a virtual environment you could:
Clone the disk image.
If you're in AWS you could create an AMI.
You could add another disk and just sync locally; then detach the disk, and re-attach to the new VM.
If you're worried about consuming resources during the dump or transfer, you can use ionice and renice to limit the priority of the dump/transfer.
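For example (a sketch; the database name, paths and the PID are placeholders, and ionice only has an effect with an I/O scheduler that supports priorities):

# Start the dump at the lowest CPU and I/O priority so it doesn't starve the live workload.
nice -n 19 ionice -c2 -n7 mysqldump -u root -p A | gzip > /backups/A.sql.gz

# Or lower the priority of an already-running dump/transfer by its PID (12345 is a placeholder).
renice +19 -p 12345
ionice -c2 -n7 -p 12345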

mysql binary logs, specifying the database for each command

I am looking to write a backup / restore script based on MYSQL's binary logging.
I have a database on a mysql server, and my colleague also has his own database on the same mysql server.
Looking at the binary logs, I see the statements are logged for both these databases.
Is the database being written to specified in the logs?
Can I safely replay a binary log containing an extra database in it? I.e. I want to replicate database_A, and my binary log file contains commands sent to database_A as well as database_B; can I replay these commands into a copy of database_A safely, or do I need to ask my sysadmin to only log things for database_A?
OK, studying the log files a bit more, it seems that the mysqlbinlog utility adds the USE `database` statements in the appropriate places. I added a part to my script that effectively grepped out the statements for the relevant database.
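Instead of grepping, mysqlbinlog can also do the filtering itself with its --database option (a sketch; the binlog file name is a placeholder, and note that the option filters on the default database in effect when a statement was logged, so cross-database statements need care):

# Replay only the events whose default database was database_A.
mysqlbinlog --database=database_A binlog.000042 | mysql -u root -p database_A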

Fastest method of producing a copy of a MySQL database

We have a very large database that we need to occasionally replicate on our dev+staging machines.
At the moment we use mysqldump and then import the database script using "mysql -u xx -p dbname < dumpscript.sql"
This would be fine if it didn't take 2 days to complete!
Is it possible to simply copy the entire database as a file from one server to another and skip the whole export/import nonsense?
Cheers
There are a couple of solutions:
Have a separate replication slave that you can stop at any time to take a file-level backup.
If you use the InnoDB engine, you can take a filesystem-level snapshot (e.g. with LVM) and then copy the files over to your test environment; see the sketch after this list.
If you have plenty of tables/databases, you can parallelize the dumping and restoring process to speed things up.
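A rough sketch of the LVM snapshot idea (volume group, sizes, hosts and mount points are placeholders; writes are only locked for the moment the snapshot is created):

# In an open mysql session, lock writes and keep the session open:
#   mysql> FLUSH TABLES WITH READ LOCK;
# From a shell, snapshot the volume holding the data directory:
lvcreate --size 10G --snapshot --name mysql_snap /dev/vg0/mysql_data
# Back in the mysql session, release the lock:
#   mysql> UNLOCK TABLES;

# Mount the snapshot, copy it to the dev/staging box, then drop it.
mount /dev/vg0/mysql_snap /mnt/snap
rsync -av /mnt/snap/ backup@devbox:/var/lib/mysql/
umount /mnt/snap && lvremove -f /dev/vg0/mysql_snap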
I have many restrictions on where I can run scripts, access sources and targets, and have enough space to prepare the data for the task.
I get my zipped database dump from the hosting provider.
I split the unzipped commands so INSERT INTO lines get put into one file, and all the others go into a second one.
Then I create the database structures from the second one.
I convert the INSERT INTO statements to table-related CSV files.
Finally, I upload the CSV files in parallel (up to 50 tables concurrently), and this way a 130GB text file dump is cloned in 3 hours, instead of the 17 hours it would take using the statement-by-statement method.
The 3 hours include:
the copy over (10 minutes),
sanity check (10 minutes) and
filtering of logs (10 minutes), as the log entries need to be from the latest academic year only.
The remote zipped file is between 7GB and 13GB, transferred over a 40MBps line.
The upload is to a remote server via a 40MBps line.
If your mysql server is local, the speed of uploading can be faster.
I utilise scp, gzip, zgrep, sed, awk, ps, mysqlimport, mysql and some other utilities (pv, rg, pigz, if available) to speed up decompression and filtering.
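A much-simplified sketch of that pipeline (file names, the target database and the concurrency level are placeholders; the INSERT-to-CSV conversion and the log filtering are omitted, and credentials are assumed to be in ~/.my.cnf so nothing prompts mid-run):

# Split the dump: INSERT statements into one file, everything else (schema etc.) into another.
zcat dump.sql.gz | awk '/^INSERT INTO/ { print > "inserts.sql"; next } { print > "schema.sql" }'

# Create the table structures first.
mysql target_db < schema.sql

# After converting inserts.sql into one CSV file per table (step omitted here),
# load up to 50 tables concurrently; mysqlimport maps each file to the table of the same name.
ls *.csv | xargs -n1 -P50 mysqlimport --local --fields-terminated-by=',' target_db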
If I had direct access to the database server, an LVM with folder level snapshot abilities would be the preferred solution, giving you speeds restricted only by the copy speed of the media.

What's the quickest way to dump & load a MySQL InnoDB database using mysqldump?

I would like to create a copy of a database with approximately 40 InnoDB tables and around 1.5GB of data with mysqldump and MySQL 5.1.
What are the best parameters (ie: --single-transaction) that will result in the quickest dump and load of the data?
As well, when loading the data into the second DB, is it quicker to:
1) pipe the results directly to the second MySQL server instance and use the --compress option
or
2) load it from a text file (ie: mysql < my_sql_dump.sql)
QUICKLY dumping a quiesced database:
Using the "-T " option with mysqldump results in lots of .sql and .txt files in the specified directory. This is ~50% faster for dumping large tables than a single .sql file with INSERT statements (takes 1/3 less wall-clock time).
Additionally, there is a huge benefit when restoring if you can load multiple tables in parallel, and saturate multiple cores. On an 8-core box, this could be as much as an 8X difference in wall-clock time to restore the dump, on top of the efficiency improvements provided by "-T". Because "-T" causes each table to be stored in a separate file, loading them in parallel is easier than splitting apart a massive .sql file.
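A hedged sketch of what that looks like (the directory, database name and parallelism level are placeholders; the dump directory must be one the mysqld server itself can write to, since the server produces the .txt files):

# Dump: one tablename.sql (schema) plus one tablename.txt (tab-separated data) per table.
mysqldump -u root -p --tab=/var/lib/mysql-files mydb

# Restore: create the schemas, then load the data files several tables at a time
# (credentials assumed to be in ~/.my.cnf so the parallel mysqlimport runs don't prompt).
cat /var/lib/mysql-files/*.sql | mysql -u root -p mydb
ls /var/lib/mysql-files/*.txt | xargs -n1 -P8 mysqlimport --local mydb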
Taking the strategies above to their logical extreme, one could create a script to dump a database widely in parallel. Well, that's exactly what the Maatkit mk-parallel-dump (see http://www.maatkit.org/doc/mk-parallel-dump.html) and mk-parallel-restore tools are: Perl scripts that make multiple calls to the underlying mysqldump program. However, when I tried to use these, I had trouble getting the restore to complete without duplicate key errors that didn't occur with vanilla dumps, so keep in mind that your mileage may vary.
Dumping data from a LIVE database (w/o service interruption):
The --single-transaction switch is very useful for taking a dump of a live database without having to quiesce it or taking a dump of a slave database without having to stop slaving.
Sadly, -T is not compatible with --single-transaction, so you can only have one or the other.
Usually, taking the dump is much faster than restoring it. There is still room for a tool that takes the incoming monolithic dump file and breaks it into multiple pieces to be loaded in parallel. To my knowledge, such a tool does not yet exist.
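For instance (a sketch; the database name is a placeholder, --single-transaction only gives a consistent snapshot for InnoDB tables, and --master-data requires binary logging to be enabled on the source):

# Consistent dump of a live InnoDB database; --master-data=2 records the binlog
# coordinates as a comment, handy if the copy will later become a slave.
mysqldump --single-transaction --master-data=2 mydb | gzip > mydb.sql.gz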
Transferring the dump over the Network is usually a win
To listen for an incoming dump on one host run:
nc -l 7878 > mysql-dump.sql
Then on your DB host, run
mysqldump $OPTS | nc myhost.mydomain.com 7878
This reduces contention for the disk spindles on the master from writing the dump to disk slightly speeding up your dump (assuming the network is fast enough to keep up, a fairly safe assumption for two hosts in the same datacenter). Plus, if you are building out a new slave, this saves the step of having to transfer the dump file after it is finished.
Caveats - obviously, you need to have enough network bandwidth not to slow things down unbearably, and if the TCP session breaks, you have to start all over, but for most dumps this is not a major concern.
Lastly, I want to clear up one point of common confusion.
Despite how often you see these flags in mysqldump examples and tutorials, they are superfluous because they are turned ON by default:
--opt
--add-drop-table
--add-locks
--create-options
--disable-keys
--extended-insert
--lock-tables
--quick
--set-charset
From http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html:
Use of --opt is the same as specifying --add-drop-table, --add-locks, --create-options, --disable-keys, --extended-insert, --lock-tables, --quick, and --set-charset. All of the options that --opt stands for also are on by default because --opt is on by default.
Of those behaviors, "--quick" is one of the most important (it skips caching the entire result set in mysqld before transmitting the first row), and can be used with "mysql" (which does NOT turn --quick on by default) to dramatically speed up queries that return a large result set (e.g. dumping all the rows of a big table).
Pipe it directly to another instance, to avoid disk overhead. Don't bother with --compress unless you're running over a slow network, since on a fast LAN or loopback the network overhead doesn't matter.
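A sketch of the direct pipe (host and database names are placeholders, the target database is assumed to already exist, and credentials are left to option files or -u/-p flags):

# Dump straight into the second server; nothing is written to disk in between.
mysqldump --single-transaction sourcedb | mysql -h target-host targetdb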
I think it will be a lot faster and save you disk space if you try database replication as opposed to using mysqldump. Personally I use SQLyog Enterprise for my really heavy lifting, but there are also a number of other tools that can provide the same services - unless of course you would like to use only mysqldump.
For InnoDB, --order-by-primary --extended-insert is usually the best combo. If you're after every last bit of performance and the target box has many CPU cores, you might want to split the resulting dump file and do parallel inserts in many threads, up to innodb_thread_concurrency/2.
Also, tweak innodb_buffer_pool_size on the target to the max you can afford, and increase innodb_log_file_size to 128 or 256 MB (careful with this one: you need to remove the old log files before restarting the mysql daemon, otherwise it won't restart).
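As a rough illustration of those target-side settings (the values are placeholders to be sized to the target machine's RAM, and the log-file caveat above applies when innodb_log_file_size changes):

[mysqld]
# Size the buffer pool to most of the free RAM on the restore target.
innodb_buffer_pool_size = 8G
# Larger redo logs mean fewer checkpoints during the bulk load.
innodb_log_file_size = 256M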
Use mk-parallel-dump tool from Maatkit.
At least that would probably be faster. I'd trust mysqldump more.
How often are you doing this? Is it really an application performance problem? Perhaps you should design a way of doing this which doesn't need to dump the whole data (replication?)
On the other hand, 1.5G is quite a small database so it probably won't be much of a problem.
mydumper is a good choice, with parallel export, even parallel threads per table, and compressed files; see: