I need to dump a database from a shared hosting that somehow doesn't have mysqldump installed. In fact, I only have mysql and mysqladmin available from the whole set of MySQL utilities.
Is it doable, or will I need to resort to installing something like phpMyAdmin?
You could use the following methods (from Database Backups in the documentation):
Making Backups by Copying Files
MyISAM tables are stored as files, so it is easy to do a backup by copying files. To get a consistent backup, do a LOCK TABLES on the relevant tables, followed by FLUSH TABLES for the tables. You need only a read lock; this allows other clients to continue to query the tables while you are making a copy of the files in the database directory. The FLUSH TABLES statement is needed to ensure that all active index pages are written to disk before you start the backup.
FLUSH TABLES WITH READ LOCK;
Closes all open tables and locks all tables for all databases with a read lock until you explicitly release the lock by executing UNLOCK TABLES. This is a very convenient way to get backups if you have a file system such as Veritas that can take snapshots in time.
UNLOCK TABLES;
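Put together, a rough sketch of the file-copy method might look like this (the database name and paths are just placeholders, and this is only reliable for MyISAM tables; note that the read lock lasts only as long as the client session that took it, so keep that session open while copying):
# terminal 1: take the lock and leave this mysql session open while copying
mysql -u root -p
mysql> FLUSH TABLES WITH READ LOCK;
# terminal 2: copy the raw table files while the lock is held
cp -a /var/lib/mysql/mydb /backups/mydb-$(date +%F)
# terminal 1 again: release the lock
mysql> UNLOCK TABLES;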
Making Delimited-Text File Backups
To create a text file containing a table's data, you can use:
SELECT * INTO OUTFILE 'file_name' FROM tbl_name
This method works regardless of storage engine, but it saves only table data, not the table structure.
To reload the output file, use:
LOAD DATA INFILE
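For example, a round trip for one table could look like this using only the mysql client (the table and file names are placeholders; the output file is written on the database server, and the MySQL user needs the FILE privilege):
# dump the data of one table to a file on the server
mysql -u someuser -p mydb -e "SELECT * INTO OUTFILE '/tmp/mytable.txt' FROM mytable"
# later, reload it into an existing (empty) table with the same structure
mysql -u someuser -p mydb -e "LOAD DATA INFILE '/tmp/mytable.txt' INTO TABLE mytable"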
How about shutting down the server and copying the datadir itself?
You can get SQLyog. It has a "Backup Database as SQL Dump" option for each database.
Anyway,
I had to resort to using Sypex Dumper, a web-based tool for fast (really fast, much faster than phpMyAdmin, for instance) MySQL database dumping. It's in Russian, but the interface is fairly obvious.
You can connect to a server remotely with mysqldump. For example:
mysqldump -u poweruser -h remote.mysql.host database
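In full, run from any machine that does have mysqldump installed and is allowed to connect to the host (user, host and database names are placeholders):
mysqldump -u poweruser -p -h remote.mysql.host database > database.sql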
Maatkit seems quite appropriate for this with mk-parallel-dump and mk-parallel-restore.
Related
I have a question regarding MySQL DB backups:
1. At 8:00:00 AM, I back up the database using the mysqldump command. It sometimes takes 5 seconds to finish.
2. While the backup is in progress (at 8:00:01 AM), someone makes some changes to the DB.
Will the backup contain the data changes from step 2?
I have Googled but not found an explanation yet. Please help me!
Percona provides a free tool for this purpose called xtrabackup.
For InnoDB tables, MySQL uses log files to record DML changes, so that statements can be rolled back, among other things.
You can back up your database (in a non-locking way), and after you have created the backup you can apply the logs, so that the backup reflects the database state at the moment the backup finished. I don't know the commands off the top of my head, sorry; you'll have to look them up in the documentation.
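With Percona XtraBackup (mentioned above) that two-step flow looks roughly like this, though check which version you have, since the binary name and options have changed across releases and older versions used the innobackupex wrapper instead (the target directory is a placeholder):
# 1. take a physical backup of the running server without locking InnoDB tables
xtrabackup --backup --target-dir=/data/backups/full
# 2. apply the redo logs copied during the backup, so the data files are
#    consistent as of the moment the backup finished
xtrabackup --prepare --target-dir=/data/backups/full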
It depends on your mysqldump command and which table the dump is on at the time of the update:
If you used the --single-transaction option, then, no, it will not contain the late change.
If you did not use that option then:
If the table being updated has not been dumped yet, then yes, it will have the change.
If the table being updated has been dumped already, then no it won't have the change.
Chances are, you don't want that change in your backup, because you would like your backup to be consistent to a point in time. Here is some more discussion about all those kinds of things:
How to obtain a correct dump using mysqldump and single-transaction when DDL is used at the same time?
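For InnoDB tables, the usual way to get such a point-in-time-consistent dump is something like the following (credentials and the database name are placeholders):
mysqldump --single-transaction -u backupuser -p mydatabase > mydatabase.sql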
$ user=jocular; cat ~/list|while read db; do echo rm -vi /var/lib/mysql/$user_$db; done
That is what I came up with, but my instructor gave me this feedback:
Removing MySQL databases in this manner can cause catastrophic problems for the MySQL server that can lead to loss of the MySQL server. Remove a database using InnoDB tables in this fashion and attempt to restore it from a backup to learn more.
What would be the safest command to remove the unmapped databases?
Your instructor is right, InnoDB makes it harder to manipulate tables and databases using shell tools. The reason is that InnoDB manages a "data dictionary" inside the ibdata1 file, which catalogs the databases and tables and which tablespace files they belong in. If you move or rename or delete files in the shell, InnoDB's data dictionary now references out-of-date information, and subsequently trying to use those table names or database names runs into conflicts.
Sort of like when you get a new phone, and you keep getting calls from friends of the former owner of that phone number.
If you use SQL or other MySQL commands to drop the database, InnoDB makes sure to update its data dictionary and keep it in sync with reality.
user=jocular; cat ~/list | while read db; do mysqladmin drop "${user}_${db}" ; done
Mysqladmin prompts you before dropping a database, since there's no undoing that change. But you can also use the -f option to force dropping without prompting. Just be careful you don't drop the wrong database!
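For example, the non-interactive variant of that loop might look like this (same hypothetical ~/list file as in the question; connection options such as -u and -p are omitted here just as in the original):
user=jocular; cat ~/list | while read db; do mysqladmin -f drop "${user}_${db}"; done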
I have an InnoDB database, innodb_db_1, and I have turned on innodb_file_per_table.
If I go to /var/lib/mysql/innodb_db_1/ I find the files table_name.ibd, table_name.frm and db.opt.
Now, I'm trying to copy these files to another DB, for example to innodb_db_2 (/var/lib/mysql/innodb_db_2/), but nothing happens.
But if my DB were MyISAM, I could copy it this way and everything would be OK.
Any suggestions on how to move an InnoDB database by copying its files?
Even when you use file-per-table, the tables keep some of their data and metadata in /var/lib/mysql/ibdata1. So you can't just move .ibd files to a new MySQL instance.
You'll have to backup and restore your database. You can use:
mysqldump, included with MySQL, reliable but slow.
mydumper, a community-contributed substitute for mysqldump; it supports compression, parallel execution and other neat features.
Percona XtraBackup, which is free and performs high-speed physical backups of InnoDB (and also supports other storage engines). This is recommended to minimize interruption to your live operations, and also if your database is large.
Re your comment:
No, you cannot just copy .ibd files. You cannot turn off the requirement for ibdata1. This file includes, among other things, a data dictionary which you can think of like a table of contents for a book. It tells InnoDB what tables you have, and which physical file they reside in.
If you just move a .ibd file into another MySQL instance, this does not add it to that instance's data dictionary. So InnoDB has no idea it should look in the new file, or which logical table it belongs to.
If you want a workaround, you could ALTER TABLE mytable ENGINE=MyISAM, move that file and its .frm to another instance, and then ALTER TABLE mytable ENGINE=InnoDB to change it back. Remember to FLUSH TABLES WITH READ LOCK before you move MyISAM files.
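A rough sketch of that workaround (the table, database, host and path names are placeholders; remember that converting to MyISAM drops InnoDB-only features such as foreign keys):
# on the source server: convert to MyISAM, then lock and flush so the files are safe to copy
mysql> ALTER TABLE mydb.mytable ENGINE=MyISAM;
mysql> FLUSH TABLES WITH READ LOCK;   -- keep this session open during the copy
# copy the MyISAM files into the other instance's datadir
scp /var/lib/mysql/mydb/mytable.{frm,MYD,MYI} otherhost:/var/lib/mysql/mydb/
# release the lock on the source, then convert back on the destination
mysql> UNLOCK TABLES;
mysql> ALTER TABLE mydb.mytable ENGINE=InnoDB;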
But these steps are not for beginners. It would be a lot safer for you to use the backup & restore method unless you know what you're doing. I'm trying to save you some grief.
There is an easy procedure to move a whole MySQL InnoDB data directory from PC A to PC B.
The conditions to perform the procedure are:
You need to have the innodb_file_per_table option set
You need to be able to make a shutdown of the database
In my case I had to move a whole 150 GB MySQL database (the biggest table was approx. 60 GB). Making SQL dumps and loading them back was not an option (too slow).
So what I did was make a "cold backup" of the MySQL database (see the MySQL docs) and then simply move the files to another computer.
The steps to take after moving the database are described here: dba.stackexchange.
I am writing this because (assuming you can meet the conditions above) this is by far the fastest (and probably the easiest) method to move a (large) MySQL InnoDB database, and nobody has mentioned it yet.
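Roughly, the cold-backup-and-move procedure above went like this (the service name and paths are assumptions for a typical Linux install; both machines should run the same MySQL version):
# on the source machine: clean shutdown, then copy the data files across
sudo service mysql stop
rsync -a /var/lib/mysql/ newhost:/var/lib/mysql/
# on the destination machine: fix ownership and start the server
sudo chown -R mysql:mysql /var/lib/mysql
sudo service mysql start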
You can copy MyISAM tables all day long (safely, as long as they are flushed and locked or the server is stopped) but you can't do this with InnoDB, because the two storage engines handle tables and tablespaces very differently.
MyISAM automatically discovers tables by iterating the files in the directory named for the database.
InnoDB has an internal data dictionary stored in the system tablespace (ibdata1). Not only do the tables have to be consistent, there are identifiers in the .ibd files that must match what the data dictionary has stored internally.
Prior to MySQL 5.6, which introduced transportable tablespaces, this wasn't a supported operation. If you are using MySQL 5.6, the link provides you with information on how this works.
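With MySQL 5.6 transportable tablespaces, moving a single InnoDB table goes roughly like this (database, table, host and path names are placeholders; see the linked documentation for the exact procedure):
# on the destination: create the table with the identical definition, then discard its empty tablespace
mysql> USE db2;
mysql> CREATE TABLE t1 (...);                -- same definition as on the source
mysql> ALTER TABLE t1 DISCARD TABLESPACE;
# on the source: quiesce the table; keep this session open while the files are copied
mysql> USE db1;
mysql> FLUSH TABLES t1 FOR EXPORT;
scp /var/lib/mysql/db1/t1.ibd /var/lib/mysql/db1/t1.cfg otherhost:/var/lib/mysql/db2/
mysql> UNLOCK TABLES;
# on the destination: adopt the copied tablespace
mysql> ALTER TABLE t1 IMPORT TABLESPACE;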
The alternatives:
use mysqldump [options] database_name > dumpfile.sql without the --databases option, which dumps the tables in the specified database but omits the database-level statements (DROP DATABASE, CREATE DATABASE and USE), some or all of which, depending on the options specified, are normally added to the dump file. You can then import it with mysql [options] other_database < dumpfile.sql (see the sketch after this list).
CREATE TABLE db2.t1 LIKE db1.t1; INSERT INTO db2.t1 SELECT * FROM db1.t1; (for each table; you'll have to add any foreign key constraints back in)
ALTER TABLE on each table, changing it to MyISAM, then flushing and locking the tables with FLUSH TABLES WITH READ LOCK;, copying them over, then altering everything back to InnoDB. Not the greatest idea, since you'll lose all foreign key declarations and have to add them back on the original tables, but it is an alternative.
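A sketch of the first alternative, dumping one database and loading it into a differently named one (database names and credentials are placeholders):
# dump only the tables of db1 (no CREATE DATABASE / USE statements in the output)
mysqldump -u someuser -p db1 > dumpfile.sql
# create the target database and load the dump into it
mysql -u someuser -p -e "CREATE DATABASE db2"
mysql -u someuser -p db2 < dumpfile.sql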
As far as I know, "hot copying" table files is a very bad idea (I've done it twice, and only made it work with MyISAM tables, and I did it only because I had no other choice).
My personal recommendation is: use mysqldump. On your shell:
mysqldump -h yourHost -u yourUser -pYourPassword yourDatabase yourTable > dumpFile.sql
To copy the data from a dump file to another database, on your shell:
mysql -h yourHost -u yourUser -pYourPassword yourNewDatabase < dumpFile.sql
Check: mysqldump — A Database Backup Program.
If you insist on copying InnoDB files by hand, please read this: Backing Up and Recovering an InnoDB Database
I want to copy some big tables which are used very often from the main server to a local server, in production mode of course.
Is it safe?
Any suggestions or tools are welcome. :)
I'm guessing, since you ask about MYI tables, that all of your tables are MyISAM tables and not InnoDB ones. Each MyISAM table is made up of three files: .frm, .MYD and .MYI, which contain the structure, data and index respectively.
People advise against copying these raw files from running systems but I've found that as long as you're sure nothing's writing to the tables then copying them works fine (I've done this more times than I care to remember).
If you're doing this on a replica then just stop replication on the slave before copying. If it's just a single server or the master I'd recommend running FLUSH TABLES WITH READ LOCK before you start copying files, this will prevent any process from writing to the tables. When you're done release the lock with UNLOCK TABLES.
I would always recommend doing a CHECK TABLE on the tables you've copied in this manner. I think this is what I've used in the past: mysqlcheck --all-tables --fast --auto-repair
If you're using LVM on the server then taking an LVM snapshot could be another way of getting hold of a clean snapshot.
If you're going to be doing this regularly I'd recommend either using replication to keep the local server up to date, or setting up a slave which you can use for taking backups (as it's not the main database, there's no problem with you stopping it, dumping tables, etc.).
Yes, it's alright if you:
Shut the mysql server down cleanly before you copy the files, or at least take a global read lock
Take the .frm, .MYI and .MYD files at the same time.
Have identical my.cnf and run the same MySQL version on each system
So it's basically ok, but not necessarily a good idea.
If you have different versions of MySQL (you can generally only move to a more recent version, not an older one), or are running with a significantly different my.cnf (e.g. any of the fulltext indexing parameters differ and you have fulltext indexes), you may need to rebuild the table. Rebuilding the table can be done on the destination server with ALTER TABLE blah ENGINE=MyISAM; this might still be a bit faster than a mysqldump / restore.
The databases are prohibitively large (> 400 MB), so dump > SCP > source is proving to be hours and hours of work.
Is there an easier way? Can I connect to the DB directly and import from the new server?
You can simply copy the whole /data folder.
Have a look at High Performance MySQL - transferring large files
You can use SSH to pipe your data directly over the Internet. First set up SSH keys for password-less login. Next, try something like this:
$ mysqldump -u db_user -p some_database | gzip | ssh someuser@newserver 'gzip -d | mysql -u db_user --password=db_pass some_database'
Notes:
The basic idea is that you are just dumping standard output straight into a command on the other side, which SSH is perfect for.
If you don't need encryption then you can use netcat but it's probably not worth it
The SQL text data goes over the wire compressed!
Obviously, change db_user to your MySQL user and some_database to your database. someuser is the (Linux) system user, not the MySQL user.
You will also have to use --password the long way, because having mysql prompt for it in the middle of a pipeline would be a headache.
You could set up MySQL slave replication and let MySQL copy the data, and then make the slave the new master.
400 MB is really not a large database; transferring it to another machine will only take a few minutes over a 100 Mbit network. If you do not have a 100 Mbit network between your machines, you are in big trouble!
If they are running the exact same version of MySQL and have identical (or similar ENOUGH) my.cnf and you just want a copy of the entire data, it is safe to copy the server's entire data directory across (while both instances are stopped, obviously). You'll need to delete the data directory of the target machine first of course, but you probably don't care about that.
Backup/restore is usually slowed down by the restoration having to rebuild the table structure, rather than the file copy. By copying the data files directly, you avoid this (subject to the limitations stated above).
If you are migrating a server:
The dump files can be very large, so it is better to compress them before sending, or use the -C flag of scp. Our methodology for transferring files is to create a full dump in which the incremental logs are flushed (use --master-data=2 --flush-logs, and please check you don't mess up any slave hosts if you have them). Then we copy the dump and play it. Afterwards we flush the logs again (mysqladmin flush-logs), take the most recent incremental log (which shouldn't be very large) and play only that. Keep doing this until the last incremental log is very small, so that you can stop the database on the original machine, copy the last incremental log and then play it - it should take only a few minutes.
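In command form, that methodology is roughly the following (host names, the binlog file name and paths are placeholders; mysqlbinlog is used to replay the incremental logs on the new host):
# full dump, recording the binlog position and rotating to a fresh binlog
mysqldump --master-data=2 --flush-logs --all-databases | gzip > full.sql.gz
scp -C full.sql.gz newhost:
gunzip -c full.sql.gz | mysql                 # run this on the new host
# repeat until the remaining binlog is tiny: rotate, copy, replay the increment
mysqladmin flush-logs
scp -C /var/lib/mysql/mysql-bin.000123 newhost:
mysqlbinlog mysql-bin.000123 | mysql          # run this on the new host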
If you just want to copy data from one server to another:
mysqldump -C --host=oldhost --user=xxx --databases yyy -p | mysql -C --host=newhost --user=aaa -p
You will need to set the db users correctly and provide access to external hosts.
Try importing the dump on the new server using the mysql console, not auxiliary software.
I have no experience doing this with MySQL, but to me it seems the bottleneck is transferring the actual data.
400 MB isn't that much. But if dump -> SCP is slow, I don't think connecting to the DB server from the remote box would be any faster.
I'd suggest dumping, compressing, then copying over the network, or burning to disk and manually transferring the data.
Compressing such a dump will most likely give you quite a good compression ratio, since there's probably a lot of repetitive data.
If you are copying all the databases on the server, copy the entire /data directory.
If you are just copying one or more databases and adding them to an existing mysql server:
create the empty database in the new server, set up the permissions for users etc.
copy the folder for the database in /data/databasename to the new server /data/databasename
I like to use BigDump: Staggered MySQL Dump Importer after exporting my database from the old server.
http://www.ozerov.de/bigdump/
One thing to note, though: if you don't set the export options (namely the maximum length of created queries) appropriately for the load your new server can handle, it'll just fail and you will have to try again with different parameters. Personally, I set mine to about 25,000, but that's just me. Test it out a bit and you'll get the hang of it.