MySQL dump: Is copying MYI files in production safe?

I want to copy some big tables that are used very often from the main server to a local server, in production of course.
Is it safe?
Suggestions and tools are welcome. :)

I'm guessing from your asking about MYI files that all of your tables are MyISAM tables and not InnoDB ones. Each MyISAM table is made up of three files: .frm, .MYD and .MYI, which contain the structure, data and index respectively.
People advise against copying these raw files from running systems but I've found that as long as you're sure nothing's writing to the tables then copying them works fine (I've done this more times than I care to remember).
If you're doing this on a replica, just stop replication on the slave before copying. If it's just a single server or the master, I'd recommend running FLUSH TABLES WITH READ LOCK before you start copying files; this will prevent any process from writing to the tables. When you're done, release the lock with UNLOCK TABLES.
I would always recommend doing a CHECK TABLE on the tables you've copied in this manner. I think this is the command I've used in the past: mysqlcheck --all-tables --fast --auto-repair
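A minimal sketch of that flow (mydb, bigtable and /backup are placeholders for your setup): open a mysql session and run FLUSH TABLES WITH READ LOCK;, then, while that session stays open, copy the files from a shell:

cp /var/lib/mysql/mydb/bigtable.frm /var/lib/mysql/mydb/bigtable.MYD /var/lib/mysql/mydb/bigtable.MYI /backup/

then run UNLOCK TABLES; in the original session. After dropping the files into the local server's data directory, verify them:

mysqlcheck --fast --auto-repair mydb bigtable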
If you're using LVM on the server then taking an LVM snapshot could be another way of getting hold of a clean snapshot.
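A rough sketch of the LVM route, assuming the data directory lives on a logical volume named /dev/vg0/mysql (volume names, mount point and snapshot size are placeholders); the mysql client's system command runs a shell command without closing the session, so the read lock stays held while the snapshot is taken:

mysql> FLUSH TABLES WITH READ LOCK;
mysql> system lvcreate --snapshot --size 1G --name mysql-snap /dev/vg0/mysql
mysql> UNLOCK TABLES;

mkdir -p /mnt/mysql-snap
mount /dev/vg0/mysql-snap /mnt/mysql-snap
rsync -a /mnt/mysql-snap/ /backup/mysql/
umount /mnt/mysql-snap && lvremove -f /dev/vg0/mysql-snap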
If you're going to be doing this regularly, I'd recommend either using replication to keep the local server up to date, or setting up a slave which you can use for taking backups (as it's not the main database, there's no problem with stopping it, dumping tables, etc.).

Yes, it's all right if you:
Shut the MySQL server down cleanly before you copy the files, or at least take a global read lock
Take the .frm, .MYI and .MYD files at the same time
Have an identical my.cnf and run the same MySQL version on each system
So it's basically OK, but not necessarily a good idea.
If you have different versions of MySQL (you can generally only move to a more recent version, not an older one), or are running with a significantly different my.cnf (i.e. if any of the full-text indexing parameters differ and you have full-text indexes), you may need to rebuild the table. Rebuilding can be done on the destination server with ALTER TABLE blah ENGINE=MyISAM; this may still be a bit faster than a mysqldump and restore.

Related

Moving an InnoDB DB

I have an InnoDB database, innodb_db_1, and I have turned on innodb_file_per_table.
In var/lib/mysql/innodb_db_1/ I find the files table_name.ibd, table_name.frm and db.opt.
Now I'm trying to copy these files to another DB, for example innodb_db_2 (var/lib/mysql/innodb_db_2/), but nothing happens.
If the DB were MyISAM, I could copy it this way and everything would be OK.
How can I move the database by copying the files of an InnoDB DB?
Even when you use file-per-table, the tables keep some of their data and metadata in /var/lib/mysql/ibdata1. So you can't just move .ibd files to a new MySQL instance.
You'll have to back up and restore your database. You can use:
mysqldump, included with MySQL; reliable but slow.
mydumper, a community-contributed substitute for mysqldump; it supports compression, parallel execution and other neat features.
Percona XtraBackup, which is free and performs high-speed physical backups of InnoDB (it also supports other storage engines). This is recommended to minimize interruption to your live operations, especially if your database is large.
Re your comment:
No, you cannot just copy .ibd files. You cannot turn off the requirement for ibdata1. This file includes, among other things, a data dictionary which you can think of like a table of contents for a book. It tells InnoDB what tables you have, and which physical file they reside in.
If you just move a .ibd file into another MySQL instance, this does not add it to that instance's data dictionary. So InnoDB has no idea it should look in the new file, or which logical table it belongs to.
If you want a workaround, you could ALTER TABLE mytable ENGINE=MyISAM, move that file and its .frm to another instance, and then ALTER TABLE mytable ENGINE=InnoDB to change it back. Remember to FLUSH TABLES WITH READ LOCK before you move MyISAM files.
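A sketch of that round trip (mytable is a placeholder, and the file copy in the middle is whatever transfer method you prefer):

-- on the source instance
ALTER TABLE mytable ENGINE=MyISAM;
FLUSH TABLES WITH READ LOCK;
-- keep this session open while mytable.frm, mytable.MYD and mytable.MYI
-- are copied into the other instance's database directory, then:
UNLOCK TABLES;
-- on the destination instance
ALTER TABLE mytable ENGINE=InnoDB;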
But these steps are not for beginners. It would be a lot safer for you to use the backup & restore method unless you know what you're doing. I'm trying to save you some grief.
There is an easy procedure for moving a whole MySQL InnoDB database from PC A to PC B.
The conditions for performing the procedure are:
You need to have the innodb_file_per_table option set
You need to be able to shut the database down
In my case I had to move a whole 150 GB MySQL database (the biggest table was approx. 60 GB). Making SQL dumps and loading them back was not an option (too slow).
So what I did was make a "cold backup" of the MySQL database (see the MySQL docs) and then simply move the files to the other computer.
The steps to take after moving the database are described here on DBA Stack Exchange.
I am writing this because (assuming you can meet the conditions above) this is by far the fastest (and probably the easiest) method for moving a (large) MySQL InnoDB database, and nobody had mentioned it yet.
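In outline it was something like this (paths, host name and init script vary by distro; run as root):

# on PC A: clean shutdown, then copy the whole data directory
/etc/init.d/mysql stop
rsync -a /var/lib/mysql/ pcB:/var/lib/mysql/
# on PC B: fix ownership, then start
chown -R mysql:mysql /var/lib/mysql
/etc/init.d/mysql start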
You can copy MyISAM tables all day long (safely, as long as they are flushed and locked or the server is stopped) but you can't do this with InnoDB, because the two storage engines handle tables and tablespaces very differently.
MyISAM automatically discovers tables by iterating the files in the directory named for the database.
InnoDB has an internal data dictionary stored in the system tablespace (ibdata1). Not only do the tables have to be consistent, there are identifiers in the .ibd files that must match what the data dictionary has stored internally.
Prior to MySQL 5.6, which introduced transportable tablespaces, this wasn't a supported operation. If you are using MySQL 5.6, the link provides you with information on how this works.
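For reference, the 5.6 flow looks roughly like this (t1 is a placeholder):

-- on the destination: create the table with the identical definition, then
ALTER TABLE t1 DISCARD TABLESPACE;
-- on the source: quiesce the table and write its .cfg metadata file
FLUSH TABLES t1 FOR EXPORT;
-- copy t1.ibd and t1.cfg into the destination's database directory, then on the source:
UNLOCK TABLES;
-- on the destination: adopt the copied tablespace
ALTER TABLE t1 IMPORT TABLESPACE;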
The alternatives:
use mysqldump [options] database_name > dumpfile.sql without the --databases option, which will dump the tables in the specified database but will omit any database-level statements (DROP DATABASE, CREATE DATABASE and USE), some or all of which, based on the combination of options specified, are normally added to the dump file. You can then import it into another database with mysql [options] other_database < dumpfile.sql.
CREATE TABLE db2.t1 LIKE db1.t1; INSERT INTO db2.t1 SELECT * FROM db1.t1; (for each table; you'll have to add any foreign key constraints back in)
ALTER TABLE on each table, changing it to MyISAM, then flushing and locking the tables with FLUSH TABLES WITH READ LOCK;, copying them over, then altering everything back to InnoDB. Not the greatest idea, since you'll lose all foreign key declarations and have to add them back on the original tables, but it is an alternative.
As far as I know, "hot copying" table files is a very bad idea (I've done it twice, only made it work with MyISAM tables, and I did it only because I had no other choice).
My personal recommendation is: use mysqldump. In your shell:
mysqldump -h yourHost -u yourUser -pYourPassword yourDatabase yourTable > dumpFile.sql
To load the data from the dump file into another database, in your shell:
mysql -h yourHost -u yourUser -pYourPassword yourNewDatabase < dumpFile.sql
Check: mysqldump — A Database Backup Program.
If you insist on copying InnoDB files by hand, please read this: Backing Up and Recovering an InnoDB Database

Backing up a huge MySQL database frequently

I have a huge MySQL InnoDB database (about 15 GB and 100M rows) on a Debian server.
I have to back up my database to another server every two hours, without affecting performance.
I looked at MySQL replication, but it isn't quite what I'm looking for, because I also want to be protected against problems the application itself could cause.
What would be the best way of dealing with this?
Thank you very much!
I think you need incremental backups.
You can use Percona XtraBackup to make fast incremental backups. This works only if your database uses only InnoDB tables.
Refer to the documentation about how to create incremental backups:
http://www.percona.com/doc/percona-xtrabackup/howtos/recipes_ibkx_inc.html
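The recipe boils down to something like this (directory names are placeholders, and the exact flags depend on your XtraBackup version):

innobackupex /data/backups
innobackupex --incremental /data/backups --incremental-basedir=/data/backups/BASE-DIR

The first run takes a full base backup; each later run stores only the pages changed since the backup named in --incremental-basedir.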
Have you looked at writing a script that uses mysqldump to dump the contents of the DB, transfers it over to the backup DB (piping it to SSH would work), and inserts it via the command line?
There are options so that mysqldump won't lock the tables and so won't degrade performance too much.
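For example (host, credentials and database name are placeholders), something like this pipes a non-blocking InnoDB dump straight into the backup server:

mysqldump --single-transaction -u backupuser -pSecret mydb | gzip | ssh backup@backuphost 'gunzip | mysql -u backupuser -pSecret mydb'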

Big Database backup best practice

I maintain a big MySQL database. I need to back it up every night, but the DB is active all the time; there are queries from users.
Right now I just disable the website and then do the backup, but this is very bad, as the service goes down and users don't like it.
What is a good way to back up the data if it changes during the backup?
What is best practice for this?
I've implemented this scheme using a read-only replication slave of my database server.
MySQL Database Replication is pretty easy to set up and monitor. You can set it up to get all changes made to your production database, then take it off-line nightly to make a backup.
The Replication Slave server can be brought up as read-only to ensure that no changes can be made to it directly.
There are other ways of doing this that don't require the replication slave, but in my experience that was a pretty solid way of solving this problem.
Here's a link to the docs on MySQL Replication.
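A nightly backup job against the slave can then be as simple as this sketch (the backup path is a placeholder):

mysql -e 'STOP SLAVE SQL_THREAD;'
mysqldump --all-databases > /backup/nightly-$(date +%F).sql
mysql -e 'START SLAVE SQL_THREAD;'

Stopping only the SQL thread keeps the I/O thread fetching the master's binlog, so the slave catches up quickly once it resumes.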
If you have a really large (50 GB+, like me) MySQL database with only MyISAM tables, you can use locks and rsync. According to the MySQL documentation, you can safely copy the raw files while a read lock is active, and you cannot do this with InnoDB.
So if the goal is zero downtime and you have extra disk space, create a script:
rsync -aP --delete /var/lib/mysql/* /tmp/mysql/sync
Then do the following:
Run FLUSH TABLES;
Run the script
Run FLUSH TABLES WITH READ LOCK; (and keep that session open so the lock stays held)
Run the script again
Run UNLOCK TABLES; in the same session
On the first run, rsync will copy a lot without stopping MySQL. The second run will be very short and only delays write queries, so it is a real zero-downtime solution.
Do another rsync from /tmp/mysql/sync to a remote server, compress, keep incremental versions, anything you like.
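Put concretely, the locked pass has to happen while a mysql session holds the lock, e.g.:

rsync -aP --delete /var/lib/mysql/* /tmp/mysql/sync    # long first pass, server stays up
# now, in a mysql session that you keep open:
#   FLUSH TABLES WITH READ LOCK;
rsync -aP --delete /var/lib/mysql/* /tmp/mysql/sync    # short second pass, from another shell
# back in the same mysql session:
#   UNLOCK TABLES;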
This partly depends on whether you use InnoDB or MyISAM. For InnoDB, MySQL has its own solution for this (InnoDB Hot Backup, which costs money), but there is an open-source version from Percona you may want to look at:
http://www.percona.com/doc/percona-xtrabackup/
What you want to do is called "online backup". Here's a pointer to a matrix of possible options with more information:
http://www.zmanda.com/blogs/?p=19
It essentially boils down to the storage backend that you are using and how much hardware you have available.

Fastest way to reload mysql databases in perl

I have a script that needs to add data to a folder created in the MySQL data folder and then somehow force the MySQL server to reload the data so the database reliably shows up. I am currently using a system call to the /etc/init.d/mysql script to restart the server, but this is quite slow, and I couldn't find anything in MySQL that would reload the database through a query of some kind. I was just wondering if there is a way to do this that is slightly quicker?
This depends on the storage engine you're using.
MyISAM tables will be reloaded automatically as soon as they are next touched by MySQL (obviously, you need to make sure the .MYI, .MYD and .frm files are consistent).
InnoDB is a bit more complex, and depends on whether it's a traditional configuration with all tables in the same tablespace, or if you're using innodb_file_per_table. With the former, you pretty much have to restart the server. With file-per-table, you can copy individual .ibd files, but then you have to use an ALTER TABLE statement to import that tablespace.
See here for information on working with individual .ibd files without restarting the server.
This page is also useful for seeing how to move InnoDB data from place to place.
What do you mean by 'quite slow'?
You could dump/load the new DB into one named original_db_name_temp, rename the original DB, and then rename original_db_name_temp to original_db_name; then you will have the new DB in place.
However, I don't know which method is faster in your case, because a dump-and-load round trip will probably be slower than a file copy. Renaming the DBs, though, will probably be faster than restarting the server.
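A sketch of the rename shuffle (MySQL has no RENAME DATABASE statement, but RENAME TABLE works across schemas; all names here are placeholders):

mysql -e 'CREATE DATABASE mydb_temp; CREATE DATABASE mydb_old;'
mysql mydb_temp < new_data_dump.sql
mysql -e 'RENAME TABLE mydb.t1 TO mydb_old.t1, mydb_temp.t1 TO mydb.t1;'

Repeat the RENAME TABLE pair for each table; the renames themselves are nearly instant.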

How do I back up a MySQL database?

What do I have to consider when backing up a database with millions of entries? Are there any tools (maybe bundled with the MySQL server) that I could use?
Depending on your requirements, there's several options that I have been using myself:
if you don't need hot backups, take down the DB server and back up at the file-system level, e.g. using tar, rsync or similar.
if you do need the database server to keep running, you can start out with the mysqlhotcopy tool (a Perl script), which locks the tables that are being backed up and allows you to select single tables and databases.
if you want the backup to be portable, you might want to use mysqldump, which creates SQL scripts to recreate the data, but which is slower than mysqlhotcopy.
if you have a copy of the db at a certain point in time, you could also just keep the binlogs (starting at that point in time) somewhere safe. This can be very easy to do and doesn't interfere with the server's operation, but might not be the fastest to restore, and you have to make sure you don't miss part of the logs.
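A sketch of that binlog variant (assuming binary logging is enabled, the logs are named mysql-bin.NNNNNN under /var/lib/mysql, and GNU coreutils is available):

mysqladmin flush-logs     # close the current binlog and start a new one
ls /var/lib/mysql/mysql-bin.[0-9]* | head -n -1 | xargs -r cp -t /backup/binlogs/

Everything except the newest (still-active) file is safe to copy; to restore, replay the logs with mysqlbinlog ... | mysql on top of your base copy.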
Methods I haven't tried, but that make sense to me:
if you have a filesystem like ZFS or are running on LVM, it might be a good idea to snapshot the database via a filesystem snapshot, because these are very, very quick. Just remember to ensure a consistent state of your DB during the whole operation, e.g. by doing FLUSH TABLES WITH READ LOCK (and of course, don't forget UNLOCK TABLES afterwards).
Additionally:
you can use a master-slave setup to replicate your production server to either a different machine or a second instance on the same machine, and do any of the above on the replicated copy, leaving your production machine alone. Instead of running it continuously, you can also fire up the slave at regular intervals, let it read the binlog, and switch it off again.
I think MySQL Cluster and the enterprise-licensed version have more tools, but I have never tried them.
Mysqlhotcopy is badly described - it only works if you use MyISAM, and it's not hot.
The problem with mysqldump is the time it takes to restore the backup (though the dump itself can be made hot if you have all-InnoDB tables; see --single-transaction).
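For example, with all-InnoDB tables this takes a consistent dump without locking out writers:

mysqldump --single-transaction --all-databases > backup.sql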
I recommend using a hot backup tool, like what is available in XtraBackup:
http://www.percona.com/docs/wiki/percona-xtrabackup:start
Watch out if you're using mysqldump on large tables that use the MyISAM storage engine; it blocks SELECTs on each table while that table is being dumped, and this can take down busy sites for 5-10 minutes in some cases.
Using InnoDB, by comparison, you get non-blocking backups because of its row-level locking, so this is not such an issue.
If you need to use MyISAM, a common strategy is to replicate to a second MySQL instance and do the mysqldump against the replicated copy instead.
Use the Export tab in phpMyAdmin. phpMyAdmin is a free, easy-to-use web interface for MySQL administration.
I think mysqldump is the proper way of doing it.