Importing all MySQL databases

I run mysqldump --all-databases nightly as a backup. But on importing this dump into a clean installation, I obviously run into a couple of issues.
I obviously can't (and don't want to) overwrite the new information_schema.
All my users and permission settings are lost unless I overwrite the mysql database.
What is standard practice in this situation? Parse information_schema out of the .sql file before importing? And do I overwrite the mysql database or not?

You will not have problems with information_schema.
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
mysqldump does not dump the INFORMATION_SCHEMA database. If you name that database explicitly on the command line, mysqldump silently ignores it.

To exclude databases, try this bash script:
for DB in $(echo "show databases" | mysql -u <username> -p'<password>' | grep -v Database | grep -v <some_db_to_exclude>)
do
    mysqldump -u <username> -p'<password>' ${DB} > "${DB}.sql"
done
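To import those per-database dumps into the clean installation, a minimal sketch along the same lines (credentials and file names follow the loop above):
for FILE in *.sql
do
    DB=$(basename "$FILE" .sql)
    # create the target database if it is missing, then load its dump
    mysql -u <username> -p'<password>' -e "CREATE DATABASE IF NOT EXISTS \`$DB\`"
    mysql -u <username> -p'<password>' "$DB" < "$FILE"
done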

mysqldump fails with "Skipping dump data for table 'table1', it has no fields"

I'm running mysqldump against an older MySQL database. The mysqldump binary is part of a MariaDB distribution, if it matters.
When I run mysqldump locally, it's fine. When I run it on a remote system, I get no data dumped. If I run it with mysqldump -v, the last line is:
Skipping dump data for table 'table1', it has no fields
From some googling and a reddit thread, I determined that you need to set the default character set.
So the command that worked for me was:
mysqldump --default-character-set=latin1 --lock-tables=false --single-transaction=TRUE --host=$HOST --user=$USER --password=$PASSWORD $DB
I used both --lock-tables=false and --single-transaction because I have a mix of MyISAM and InnoDB tables.
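To confirm a character-set mismatch first, the server's settings can be inspected along these lines (same connection variables as above):
mysql --host=$HOST --user=$USER --password=$PASSWORD -e "SHOW VARIABLES LIKE 'character_set%'"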

How to make a dump of all MySQL databases besides two

This is probably massively simple; however, I will be doing this on a live server and don't want to mess it up.
Can someone please let me know how I can do a mysqldump of all databases, procedures, triggers, etc., except the mysql and performance_schema databases?
Yes, you can dump several schemas at the same time:
mysqldump --user=[USER] --password=[PASS] --host=[HOST] --databases mydb1 mydb2 mydb3 [...] --routines > dumpfile.sql
OR
mysqldump --user=[USER] --password=[PASS] --host=[HOST] --all-databases --routines > dumpfile.sql
Concerning the last command: if you don't want to dump the performance_schema (EDIT: as mentioned by @Barranka, by default mysqldump won't dump it), mysql, or phpMyAdmin schemas, you just need to ensure that [USER] can't access them.
As stated in the reference manual:
mysqldump does not dump the INFORMATION_SCHEMA or performance_schema database by default. To dump either of these, name it explicitly on the command line and also use the --skip-lock-tables option. You can also name them with the --databases option.
So that takes care of your concern about dumping those databases.
Now, to dump all databases, I think you should do something like this:
mysqldump -h Host -u User -pPassword -A -R > very_big_dump.sql
To test it without dumping all data, you can add the -d flag to dump only database, table (and routine) definitions with no data.
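For example, a schema-only test run with the same connection parameters would be:
mysqldump -h Host -u User -pPassword -A -R -d > schema_only_dump.sql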
As mentioned by Basile in his answer, the easiest way to omit dumping the mysql database is to invoke mysqldump with a user that does not have access to it. So the punch line is: use or create a user that has access only to the databases you mean to dump.
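A sketch of creating such a user (the user name, password, and database names are placeholders; the listed privileges are a typical set for mysqldump, covering tables, views, triggers, and events):
mysql -u root -p <<'SQL'
CREATE USER 'dumpuser'@'localhost' IDENTIFIED BY 'secret';
GRANT SELECT, LOCK TABLES, SHOW VIEW, EVENT, TRIGGER ON mydb1.* TO 'dumpuser'@'localhost';
GRANT SELECT, LOCK TABLES, SHOW VIEW, EVENT, TRIGGER ON mydb2.* TO 'dumpuser'@'localhost';
FLUSH PRIVILEGES;
SQL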
There's no option in mysqldump that you could use to filter the database list, but you can run two commands:
DATABASES=$(mysql -N -B -e "SHOW DATABASES" | grep -Ev '^(mysql|information_schema|performance_schema)$')
mysqldump -B $DATABASES > dump.sql

How to migrate a large database to a new server

I need to migrate my database from my old server to my new server. Transferring it is a big problem because the database is large (5 GB). I tried the cPanel transfer, but it was not useful. I need a more efficient way to transfer the data.
Can anyone guide me through the full transfer process? Should I transfer using import and export, or do I need to use another method?
The MySQL storage engine is MyISAM and the size is 5 GB.
You can use the command line if you have SSH access to both servers, using the commands below. If not, you can try the Navicat application to sync the databases.
SSH commands
Take a mysqldump of the database:
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
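For example (the user and database placeholders match the import command at the end of these steps):
mysqldump -u{username} -p {database} > db.sql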
Create a tarball of the SQL dump file using:
tar -zcvf db.tar.gz db.sql
Now upload the tar.gz file to the other server using the scp command:
scp -Cp db.tar.gz {username}@{server}:{path}
Now log in to the other server using SSH.
Untar the file using the Linux command:
tar -zxvf db.tar.gz
Import into the database:
mysql -u{username} -p {database} < db.sql
Please double-check the syntax; the commands will work, but consider this as direction only.
Thanks.
For large databases I would suggest using mysqldump if you have SSH access to the server.
From the manual:
Use mysqldump --help to see what options are available.
The easiest (although not the fastest) way to move a database between two machines is to run the following commands on the machine on which the database is located:
shell> mysqladmin -h 'other_hostname' create db_name
shell> mysqldump db_name | mysql -h 'other_hostname' db_name
If you want to copy a database from a remote machine over a slow network, you can use these commands:
shell> mysqladmin create db_name
shell> mysqldump -h 'other_hostname' --compress db_name | mysql db_name
You can also store the dump in a file, transfer the file to the target machine, and then load the file into the database there. For example, you can dump a database to a compressed file on the source machine like this:
shell> mysqldump --quick db_name | gzip > db_name.gz
Transfer the file containing the database contents to the target machine and run these commands there:
shell> mysqladmin create db_name
shell> gunzip < db_name.gz | mysql db_name
You can also use mysqldump and mysqlimport to transfer the database. For large tables, this is much faster than simply using mysqldump. In the following commands, DUMPDIR represents the full path name of the directory you use to store the output from mysqldump.
First, create the directory for the output files and dump the database:
shell> mkdir DUMPDIR
shell> mysqldump --tab=DUMPDIR db_name
Then transfer the files in the DUMPDIR directory to some corresponding directory on the target machine and load the files into MySQL there:
shell> mysqladmin create db_name # create database
shell> cat DUMPDIR/*.sql | mysql db_name # create tables in database
shell> mysqlimport db_name DUMPDIR/*.txt # load data into tables
Do not forget to copy the mysql database because that is where the grant tables are stored. You might have to run commands as the MySQL root user on the new machine until you have the mysql database in place.
After you import the mysql database on the new machine, execute mysqladmin flush-privileges so that the server reloads the grant table information.
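For example, keeping the manual's prompt convention (run as the MySQL root user on the new machine):
shell> mysqladmin -u root -p flush-privileges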

Split large text file intelligently

I have a large SQL file (500 MB) and want to split it into chunks.
I used the shell command split, but it is not context-aware: it does not split before a given pattern (e.g. INSERT) and therefore breaks SQL statements.
The aim is to have two 250 MB files, both still containing only valid SQL statements. Is this possible?
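One way to get a context-aware split is with awk, starting a new chunk only at a line that begins a new INSERT statement. A minimal sketch, assuming the dump keeps each statement on its own line (mysqldump's default extended-insert output does) and a rough 250 MB chunk target:
awk 'BEGIN { n = 1 }
     /^INSERT/ && size > 250 * 1024 * 1024 { n++; size = 0 }
     { print > ("chunk" n ".sql"); size += length($0) + 1 }' dump.sql
Each line is appended to the current chunk, and the chunk counter only advances at an INSERT boundary, so no statement is cut in half.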
Use:
mysqldump -u admin -p database1 > /backup/db/database1.sql
or
mysqldump -u admin -p --all-databases > /backup/db/all_databases.sql
If you have only MyISAM tables you can use:
mysqlhotcopy -u admin -p password123 database1 /backup
for faster backups. mysqlhotcopy doesn't generate SQL; it copies the database files directly.
For recovery of mysqldumped databases use:
mysql -u admin -p database1 < database.sql
or
mysql -u admin -p <all_databases.sql
For mysqlhotcopy:
To restore the backup from mysqlhotcopy, simply copy the files from the backup directory to the /var/lib/mysql/{db-name} directory. Just to be on the safe side, make sure to stop MySQL before you restore (copy) the files. After you copy the files to /var/lib/mysql/{db-name}, start MySQL again.
See here: http://www.thegeekstuff.com/2008/07/backup-and-restore-mysql-database-using-mysqlhotcopy/
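A sketch of that restore sequence (paths and the init script name assume a typical Linux install; adjust for your distribution):
sudo /etc/init.d/mysql stop                          # stop the server first
sudo cp -R /backup/database1 /var/lib/mysql/         # copy the files back
sudo chown -R mysql:mysql /var/lib/mysql/database1   # restore file ownership
sudo /etc/init.d/mysql start                         # start the server again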

Copying a mysql database from localhost to remote server using mysqldump.exe

I want to copy a mysql database from my local computer to a remote server.
I am trying to use the mysql dump command. All the examples on the internet suggest doing something like
The initial mysql> is just the prompt I get after logging in.
mysql> mysqldump -u user -p pass myDBName | NewDBName.out;
But when I do this I get You have an error in your SQL syntax; check the manual that corresponds ... to use near 'mysqldump -u user -p pass myDBName | NewDBName.out'
Since I have already logged in, do I need to use -u and -p? Not doing so gives me the same error. Can you see what is wrong?
In addition to what Alexandre said, you probably don't want to pipe (|) output to NewDBName.out, but rather redirect it there (>).
So from the Windows/Unix command line:
mysqldump -u user -ppass myDBName > NewDBName.out
Note that if you have large binary fields (e.g. BLOBs) in some columns, you may need to set an additional option (I think it was --hex-blob, but there might have been another option too). If that applies to you, add a comment and I'll research the setting.
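For reference, --hex-blob is indeed a mysqldump option; with the same placeholders it would look like:
mysqldump --hex-blob -u user -ppass myDBName > NewDBName.out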
mysqldump is not an SQL statement that you execute inside a mysql session but a distinct binary that should be started from your OS shell.
There are a few ways to use this. One of them is to pipe the output of mysqldump to another MySQL instance:
echo "CREATE DATABASE remote_db" | mysql -h remote_host -u remote_user -premote_password
mysqldump -h source_host -u root -ppassword source_db | mysql -h remote_host -u remote_user -premote_password -D remote_db
I have had to dump large sets of data recently. Here is what I found on a 200 MB database with 10,000+ records in many of the tables. I used the Linux time command to measure actual time.
12 minutes using:
mysqldump -u user -ppass myDBName > db-backups.sql
7 minutes to clone the database:
mysqldump -u user -ppass myDBName | mysql -u user -ppass cloneDBName
And in less than a second:
mysqlhotcopy -u user -p pass myDBName cloneDBName
The last one blew my mind, but you have to be logged in locally on the machine where the database server resides. Personally I think this is much faster than doing a dump remotely; then you can compress the .sql file and transfer it manually.
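For example (host and destination path are placeholders):
gzip db-backups.sql
scp db-backups.sql.gz user@newserver:/backups/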