What's the easiest way to move mysql schemas (tables, data, everything) from one server to another?
Is there an easy way to move all of this from one server running MySQL to another that is also already running MySQL?
If you are using SSH keys:
$ mysqldump --all-databases -u[user] -p[pwd] | ssh [host/IP] mysql -u[user] -p[pwd]
If you are NOT using SSH keys:
$ mysqldump --all-databases -u[user] -p[pwd] | ssh user@[host/IP] mysql -u[user] -p[pwd]
WARNING: You'll want to clear your history after this to avoid anyone finding your passwords.
$ history -c
Dump the database either using mysqldump or, if you are using phpMyAdmin, export the structure and data.
For mysqldump you will need a console; use the following command:
mysqldump -u <user> -p -h <host> <dbname> > /path/to/dump.sql
Then in the other server:
mysql -u <user> -p <dbname> < /path/to/dump.sql
If you're moving from the same architecture to the same architecture (x86->x86, x86_64 -> x86_64), you can just rsync your MySQL datadir from one server to the other. Obviously, you should not run this while your old MySQL daemon is running.
If your databases are InnoDB-based, then you will want to make sure that your InnoDB log files have been purged and their contents merged to disk before you copy files. Set innodb_fast_shutdown to 0 (the default is 1, which does not flush the logs to disk); the log files will then be flushed on the next server shutdown. To set it, log in to MySQL as root and, in the MySQL shell, run:
SET GLOBAL innodb_fast_shutdown=0;
Or set the option in your my.cnf, restart the server to pick up the change, then shut down again to flush the logs.
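For reference, the equivalent my.cnf setting would look like this (a minimal sketch, assuming your config already has a [mysqld] section; the file location varies by platform):
[mysqld]
# flush the InnoDB logs fully to disk on the next clean shutdown
innodb_fast_shutdown = 0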
Do something like:
#On old server (notice the ending slash and lack thereof, it's very important)
rsync -vrplogDtH /var/mysql root@other.server:/var/mysql/
#Get your my.cnf
scp /etc/my.cnf root@other.server:/etc/my.cnf
After that you might want to run mysql_upgrade [-p your_root_password] to make sure the databases are up-to-date.
I will say it's worked for me in the (very recent) past (moving from an old server to a new one, both running FreeBSD 8.x), but YMMV depending on how many versions you were in the past.
I want to dump a specific table from my remote server's database, which works fine, but one of the tables is 9M rows and I get:
Lost connection to MySQL server during query when dumping table `table_name` at row: 2002359
After reading online, I understood that I need to increase max_allowed_packet, and that it's possible to add it to my command.
So I'm running the following command to dump my table:
mysqldump -uroot -h my.host -p'mypassword' --max_allowed_packet=512M db_name table_name | gzip > dump_test.sql.gz
and for some reason, I still get:
Lost connection to MySQL server during query when dumping table `table_name` at row: 2602499
Am I doing something wrong?
It's weird, it's only 9M records... not too big.
Try adding the --quick option to your mysqldump command; it works better with large tables. It streams the rows from the resultset to the output rather than slurping the whole table, then writing it out.
mysqldump -uroot -h my.host -p'mypassword' --quick --max_allowed_packet=512M db_name table_name | \
gzip > dump_test.sql.gz
You can also try adding the --compress option to your mysqldump command. That makes it use the more network-friendly compressed connection protocol to your MySQL server. Notice that you still need the gzip pipe; MySQL's compressed protocol doesn't cause the dump to come out of mysqldump compressed.
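For example, the same command from the question with --compress added (a sketch; credentials and names are the ones used above):
mysqldump -uroot -h my.host -p'mypassword' --quick --compress --max_allowed_packet=512M db_name table_name | \
gzip > dump_test.sql.gz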
It's also possible the server is timing out its connection to the mysqldump client. You can try resetting the timeout durations. Connect to your server via some other means and issue these queries, then run your mysqldump job.
These set the timeouts to one calendar day.
SET GLOBAL wait_timeout=86400;
SET GLOBAL interactive_timeout=86400;
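Before kicking off the dump again, you can double-check that the new values took effect (an optional sanity check):
SHOW GLOBAL VARIABLES LIKE 'wait_timeout';
SHOW GLOBAL VARIABLES LIKE 'interactive_timeout';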
Finally, if your server is far away from your machine (through routers and firewalls) something may be disrupting mysqldump's connection. Some inferior routers and firewalls have time limits on NAT (network address translation) sessions. They're supposed to keep those sessions alive while they are in use, but some don't. Or maybe you're hitting a time or size limit configured by your company for external connections.
Try logging into a machine closer to the server and running mysqldump on it.
Then use some other means (sftp?) to copy your gz file to your own machine.
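For example (a sketch assuming a hypothetical intermediate host called dumphost where you ran the dump, and the dump_test.sql.gz filename from the question):
# pull the compressed dump from the intermediate machine to your own
scp dumphost:~/dump_test.sql.gz .
# or interactively, with sftp:
# sftp dumphost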
Or, you may have to segment the dump of this file. You can do something like this (not debugged).
mysqldump -uroot -h my.host -p'mypassword' \
db_name table_name --skip-create-options --skip-add-drop-table \
--where="id>=0 AND id < 1000000" | \
gzip....
Then repeat that with these lines.
--where="id>=1000000 AND id < 2000000" | \
--where="id>=2000000 AND id < 3000000" | \
...
until you get all the rows. Pain in the neck, but it will work.
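If you'd rather not type each range by hand, a rough bash loop along these lines could drive it (a sketch; the 1,000,000-row chunk size and the 0..10,000,000 id range are illustrative, adjust them to your table):
#!/bin/bash
# dump the table in 1M-wide id windows, one gzipped file per window
for start in $(seq 0 1000000 9000000); do
  end=$((start + 1000000))
  mysqldump -uroot -h my.host -p'mypassword' \
    db_name table_name --skip-create-options --skip-add-drop-table \
    --where="id>=$start AND id<$end" | gzip > "dump_part_${start}.sql.gz"
done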
For me, everything worked fine when I skipped locking tables:
mysqldump -u xxxxx --password=xxxxx --quick --max_allowed_packet=512M --skip-lock-tables --verbose -h xxx.xxx.xxx.xxx > db.sql
It may create problems with consistency, but it allowed me to back up a 5 GB database without any issue.
Other options to try:
net_read_timeout=3600
net_write_timeout=3600
on my.ini/my.cnf or via SET GLOBAL ...
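Spelled out, that would be (a sketch; 3600 seconds is one hour, pick a value that covers your dump duration):
# in my.cnf / my.ini, under [mysqld]
net_read_timeout=3600
net_write_timeout=3600
-- or at runtime, from a client session with sufficient privileges
SET GLOBAL net_read_timeout=3600;
SET GLOBAL net_write_timeout=3600;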
Following JohnBigs's comment above, the --compress flag was what worked for me.
I had previously tried --single-transaction, --skip-extended-insert, and --quick without success.
Also, make sure your mysql.exe client is the same version as your MySQL server.
So if your server version is 8.0.23 but your client version is 8.0.17 or 8.0.25, you may have issues. I ran into this problem using a version 8.0.17 client against a MySQL server 8.0.23; changing the client version to match the server version resolved the issue.
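To compare the two versions, something like this works (the first command prints the client's version; the query returns the server's):
mysql --version
mysql -u root -p -e "SELECT VERSION();"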
I had a similar problem on my server, where MySQL would apparently restart during the nightly backups. It was always the same database, but the actual table sometimes varied.
I tried several suggestions from the other answers here, but in the end it was just a cronjob executing queries that didn't finish. The queries didn't use enough CPU and RAM to trigger the monitoring, but apparently enough that compressing the dump caused the OOM killer to become active. Fixing the cronjob made the next backup run fine again.
Things to look for:
OOM? dmesg | grep invoked
Process killed? grep killed /var/log/kern.log
If none of the others work, you can use mysqldump's --where feature to break your huge dump into multiple smaller queries.
It might be tedious, but it will most likely work.
e.g.
"C:\Program Files\MySQL\MySQL Workbench 8.0 CE\mysqldump.exe" --defaults-file="C:\...\my_password.cnf"
--host=localhost --protocol=tcp --user=mydbuser --compress=TRUE --port=16861 --default-character-set=utf8 --quick --complete-insert --replace
--where="last_modify > '2022-01-01 00:00:00'"
> "C:\...\dump.txt"
my_password.cnf
[client]
password=xxxxxxxx
[mysqldump]
ignore-table=db.table1
ignore-table=db.table2
Then you just shift the last_modify date in the --where clause for each run, and your huge table dump is split into many smaller ones.
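One way to read that advice: keep everything else the same and run the command once per date window, for example (illustrative ranges, adjust to your data):
--where="last_modify >= '2022-01-01 00:00:00' AND last_modify < '2022-04-01 00:00:00'"
--where="last_modify >= '2022-04-01 00:00:00' AND last_modify < '2022-07-01 00:00:00'"
...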
Okay, I have a little problem.
My password has expired and my mysql.user table is corrupted. I can log in via
mysql -u root -p
but on every action I perform I get the following error:
Column count of mysql.user is wrong. Expected 45, found 46. The table is probably corrupted.
I have read that you can fix the mysql.user table with the following command:
mysql_upgrade -u root -p
But when I do that I get the following error:
mysql_upgrade: Got error: 1862: Your password has expired. To log in you
must change it using a client that supports expired passwords. while
connecting to the MySQL server
Upgrade process encountered error and will not continue.
So, how do I fix this?
I have backups of all my tables, so it won't be a problem if I have to reset all my databases.
EDIT:
I know my password. That's not the problem at all.
My problem is that the password has expired and I am not able to do anything because my mysql.user table is corrupted!
Try to disable the password expiration option: edit the my.cnf and put
[mysqld]
default_password_lifetime=0
then restart the MySQL server and try logging in again.
The source is here: https://dev.mysql.com/doc/refman/5.7/en/password-expiration-policy.html
To repair a database, run mysqlcheck --repair --databases db_name, or mysqlcheck --repair --all-databases to repair all databases.
The source is here https://dev.mysql.com/doc/refman/5.7/en/rebuilding-tables.html
You could first try to repair the database, then try to disable the password lifetime.
I had the same issue when restoring an old backup from 2018; reinstalling MySQL as you said in a comment didn't solve it.
How I did:
Stop MySQL service
Run mysqld_safe --skip-grant-tables --skip-networking &
(if you get an error you may need to manually create and chown the directory /run/mysqld)
--skip-grant-tables will allow passwordless logins and will also disable any check on the password expiration
Now run mysql_upgrade --force and mysqlcheck --repair --all-databases
You can now kill the running mysqld_safe (ps aux | grep mysql to find the PID to kill) and then start the server normally with service mysql start.
In my case it didn't work and I still had the "Expected 45, found 46" error. In that case go ahead:
Stop the server again and restart it in safe mode as point 2 above
Now you should be able to dump the content, but we must exclude the mysql schema from being dumped.
Since mysqldump doesn't have a --exclude-database option, we need to get the list of databases to dump. To get the list of existing databases, except system schemas, run:
mysql -Nse "SELECT GROUP_CONCAT(SCHEMA_NAME SEPARATOR ' ') FROM information_schema.SCHEMATA WHERE SCHEMA_NAME NOT IN ('mysql','information_schema','performance_schema','sys');"
Remove from the list any other db you don't need, and run the dump:
mysqldump --databases db1 db2 ... db50 > mysqldump.sql
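If you want to glue the two steps together, something like this could work (a sketch; review the contents of $DBS before running the dump):
# collect non-system schemas into a shell variable, then dump them
DBS=$(mysql -Nse "SELECT GROUP_CONCAT(SCHEMA_NAME SEPARATOR ' ') FROM information_schema.SCHEMATA WHERE SCHEMA_NAME NOT IN ('mysql','information_schema','performance_schema','sys');")
# $DBS is intentionally unquoted so the space-separated list expands into arguments
mysqldump --databases $DBS > mysqldump.sql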
Kill mysqld, move the datadir away and create an empty one (mv /var/lib/mysql /var/lib/mysql-old && mkdir /var/lib/mysql && chown mysql:mysql /var/lib/mysql)
service mysql start and a fresh datadir will be populated.
Run mysql_secure_installation to set a new root password
Import the dump file:
cat mysqldump.sql | mysql -u root -p
After that, the server is UP and running without issues.
I have to set up a backup strategy.
I chose innobackupex to do so, running on Debian 6 Squeeze. There are two servers: the production server and a backup server (that should take over if the production server crashes). There is no replication; I use rsync to transfer the backups.
I have a PHP script that looks in a conf file to know when it has to do backups.
My question is: how do I skip the mysql database or the user table with innobackupex?
On the master, I run the following commands:
innobackupex --user=root --password=xxx --no-timestamp /opt/backups/full/
rsync -avz --progress -e 'ssh -i -p 1000 ' /opt/backups/full/ user#xxx:/home/backups/full/
this works fine
On the backup server, I just have to prepare and restore the files:
innobackupex --apply-log --redo-only --user=xxx --password=xxx /home/backups/full/
innobackupex --copy-back /home/backups/full --user=root --password=xxx
Everything is alright, but on the backup server the root user's password changes, and even the debian-sys-maint password.
The root user's password becomes the one from the master.
I did a script to correct this.
The debian-sys-maint password is written in clear text in the /etc/mysql/debian.cnf file, so I extract it, but from PHP (I use a PDO object) I can't change this password, so I can't restart the MySQL server.
Sometimes I can't retrieve the root user's password; it's not the master server's root password.
Sometimes I can stop/start MySQL with /etc/init.d/mysql stop/start, sometimes with service mysql stop/start; if this doesn't work I try mysqladmin -u root -p shutdown (if I could change the password).
If I really can't stop MySQL I do killall mysql (I know it's wrong),
and then I change the root password:
/usr/bin/mysqld_safe --skip-grant-tables &
mysql --user=root mysql
Has anyone had a problem similar to mine? How do I skip the mysql database with innobackupex?
The innobackupex tool is part of Percona XtraBackup. I work for Percona and I have developed training on Percona XtraBackup.
There are options for innobackupex to back up specific databases by name or by a regular expression. @YaK gives one option, or you can see other options here: http://www.percona.com/doc/percona-xtrabackup/innobackupex/partial_backups_innobackupex.html
However, --copy-back assumes you're restoring a full backup to an empty datadir. I.e. if the destination directory is not empty, --copy-back will give an error and refuse to overwrite files.
If you are trying to restore InnoDB tables to an instance where you already have a mysql database, you'll have to do the restore manually. This can be as simple as using mv of the files into your existing datadir (with the MySQL server shut down of course). Also remember to use chown mysql:mysql on the files before you start mysqld.
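Roughly, following that suggestion (a sketch assuming the default /var/lib/mysql datadir, file-per-table InnoDB files, and a database directory named mydb; adjust paths to your setup):
# with mysqld stopped on the destination server
mv /path/to/restored/backup/mydb /var/lib/mysql/
chown -R mysql:mysql /var/lib/mysql/mydb
service mysql start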
PS: You don't need to use --redo-only before you restore. That option is for doing incremental backups, and even then you'd skip that option before you do the final restore.
I believe you are using this tool from Percona.
Then the only helpful option I can find is --databases. Assuming you do not want to maintain a list of databases in your script, you can build the list dynamically with a command like this:
shell > mysql [options] -NBe \
"SELECT schema_name FROM information_schema.schemata WHERE schema_name NOT IN ('mysql', 'information_schema')"
You should be able to integrate this call in something like this:
shell > innobackupex \
--databases=`mysql [options] -NBe \
"SELECT schema_name FROM information_schema.schemata WHERE schema_name NOT IN ('mysql', 'information_schema')"`
(some extra double quotes may be required, sorry, I do not have access to the tool at this time)
Thank you for the answers.
Yes all my tables are working on InnoDB engine.
Thank you for your advice Bill Karwin, but innobackupex is a great tool; I'll use --databases to skip the mysql database.
Yes, I read that --redo-only is for incrementals; I do incrementals too.
On the master server, I have a directory named full and one named incremental; after the last incremental, I delete the directories.
After testing, I'll let you know if everything was ok.
I need to copy an entire database from a mysql installation on a remote machine via SSH to my local machines mysql.
I know the SSH credentials and both the local and remote MySQL admin users and passwords.
Is this enough information, and how is it done?
From remote server to local machine
ssh {ssh.user}@{remote_host} \
  'mysqldump -u {remote_dbuser} --password={remote_dbpassword} {remote_dbname} | bzip2 -c' \
  | bunzip2 -dc | mysql -u {local_dbuser} --password={local_dbpassword} -D {local_dbname}
This will dump the remote DB into your local MySQL via pipes:
ssh mysql-server "mysqldump --all-databases --quote-names --opt --hex-blob --add-drop-database" | mysql
You should take care with the users in the mysql.user table.
Moreover, to avoid typing users and passwords for mysqldump and mysql on local and remote hosts, you can create a file ~/.my.cnf :
[mysql]
user = dba
password = foobar
[mysqldump]
user = dba
password = foobar
See http://dev.mysql.com/doc/refman/5.1/en/option-files.html
The following is modified from http://www.cyberciti.biz/tips/howto-copy-mysql-database-remote-server.html, because I prefer to use .sql as the extension for SQL files:
Usually you run mysqldump to create a database copy and backups as
follows:
$ mysqldump -u user -p db-name > db-name.sql
Copy the db-name.sql file to the remote MySQL server using sftp/ssh:
$ scp db-name.sql user@remote.box.com:/backup
Restore database at remote server (login over ssh):
$ mysql -u user -p db-name < db-name.sql
Basically you'll use mysqldump to generate a dump of your database, copy it to your local machine, then pipe the contents into mysql to regenerate the DB.
You can copy the DB files themselves, rather than using mysqldump, but only if you can shutdown the MySQL service on the remote machine.
I would recommend the Xtrabackup tool by Percona. It has support for hot copying data via SSH and has excellent documentation. Unlike using mysqldump, this will copy all elements of the MySQL instance including user permissions, triggers, replication, etc...
ssh into the remote machine
make a backup of the database using mysqldump
transfer the file to the local machine using scp
restore the database to your local MySQL (see the sketch below)
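A rough sketch of those four steps (remote.host, remote_user, local_user and the database names are placeholders, not values from the question):
# 1-2: dump the database on the remote machine (-t keeps the password prompt working)
ssh -t user@remote.host "mysqldump -u remote_user -p remote_db > /tmp/remote_db.sql"
# 3: copy the dump to the local machine
scp user@remote.host:/tmp/remote_db.sql .
# 4: restore it into the local MySQL
mysql -u local_user -p local_db < remote_db.sql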
I'm creating a snippet to be used in my Mac OS X terminal (bash) which will allow me to do the following in one step:
Log in to my server via ssh
Create a mysqldump backup of my Wordpress database
Download the backup file to my local harddrive
Replace my local Mamp Pro mysql database
The idea is to create a local version of my current online site to do development on. So far I have this:
ssh server 'mysqldump -u root -p'mypassword' --single-transaction wordpress_database > wordpress_database.sql' && scp me#myserver.com:~/wordpress_database.sql /Users/me/Downloads/wordpress_database.sql && /Applications/MAMP/Library/bin/mysql -u root -p'mylocalpassword' wordpress_database < /Users/me/Downloads/wordpress_database.sql
Obviously I'm a little new to this, and I think I've got a lot of unnecessary redundancy in there. However, it does work. Oh, and the ssh command ssh server is working because I've created an alias in a local .ssh file to do that bit.
Here's what I'd like help with:
Can this be shortened? Made simpler?
Am I doing this in a good way? Is there a better way?
How could I add gzip compression to this?
I appreciate any guidance on this. Thank you.
You can dump it out of your server and into your local database in one step (with a hint of gzip for compression):
ssh server "mysqldump -u root -p'mypassword' --single-transaction wordpress_database | gzip -c" | gunzip -c | /Applications/MAMP/Library/bin/mysql -u root -p'mylocalpassword' wordpress_database
The double-quotes are key here, since you want gzip to be executed on the server and gunzip to be executed locally.
I also store my mysql passwords in ~/.my.cnf (and chmod 600 that file) so that I don't have to supply them on the command line (where they would be visible to other users on the system):
[mysql]
password=whatever
[mysqldump]
password=whatever
That's the way I would do it too.
To answer your question #3:
Q: How could I add gzip compression to this?
A: You can run gzip wordpress_database.sql right after the mysqldump command and then scp the gzipped file instead (wordpress_database.sql.gz)
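Something along these lines (a sketch reusing the hostnames and filenames from the question):
# compress the dump on the server, then copy and unpack it locally
ssh server 'gzip wordpress_database.sql'
scp me@myserver.com:~/wordpress_database.sql.gz /Users/me/Downloads/
gunzip /Users/me/Downloads/wordpress_database.sql.gz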
There is a Python script that downloads the SQL dump file locally. You can take a look at the script and modify it a bit to fit your requirements:
download-remote-mysql-dump-local-using-python-script