I need to do a mysqldump directly on a remote server with SSH.
The main reason is that I do not have enough space on the server to do it normally and then copy it over with SSH.
Somehow I need to pipe the output of mysqldump command to SSH.
Ideally this would be a one line command.
Thanks
You can try:
ssh -t user@server \
"mysqldump \
-B database \
--add-drop-table \
--ignore-table database.logs" > ~/mydatabase.sql
Notice that in this example you don't need to log in to MySQL interactively, and you do not need sudo permissions.
I also added the --add-drop-table and --ignore-table options, since these are pretty common.
You can change > ~/mydatabase.sql into | gzip -9 > ~/mydatabase.sql.gz to compress the file.
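For example, the compressed variant of the command above would be:
# same dump as above, compressed on the fly
ssh -t user@server "mysqldump -B database --add-drop-table --ignore-table database.logs" | gzip -9 > ~/mydatabase.sql.gz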
You want to store the dump file on your local system, not on the server running MySQL.
If you execute the mysqldump command through ssh and redirect the output on the remote side, the dump file will end up staying on that server.
You can instead connect to MySQL through an SSH tunnel and run mysqldump locally.
On Linux the standard ssh client can create the tunnel with -L; on Windows, PuTTY can do the same.
https://www.linode.com/docs/databases/mysql/create-an-ssh-tunnel-for-mysql-remote-access/
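A minimal sketch of that tunneling approach with OpenSSH on Linux (user, host, and database names are placeholders):
# Forward local port 3307 to port 3306 on the MySQL server, in the background
ssh -f -N -L 3307:127.0.0.1:3306 user@remotehost
# Run mysqldump locally through the tunnel, so the dump file lands on the local disk
mysqldump -h 127.0.0.1 -P 3307 -u dbuser -p database > database.sql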
I'm trying to use plink on Windows to create a tunnel to a Linux machine and have the dump file end up on the Windows machine. It would appear that this answer would work and is the basis of my question. But trying it out and looking at other answers I find that the dump file is still on the Linux machine. I'm trying this out in my local environment with Windows and Ubuntu 14.04 before moving to production. In Windows 8.1:
plink sam@192.168.0.20 -L 3310:localhost:3306
mysqldump --port=3310 -h localhost -u sam -p --all-databases > outfile.sql
I've tried swapping localhost in the second command with 127.0.0.1, adding -N to the tail of the tunnel setup, and using a single table in the dump command, but despite my tunnel it's as if the first command is ignored. Other answers suggest adding more commands to the script so that I can use pscp to copy the file, but that also means reconnecting just to delete this outfile.sql. Not ideal for getting other dumps from other servers. If that's the case, why use the first command at all?
What am I overlooking? In plink, the first command opens a shell on the Linux server where I can run the mysqldump command, but the tunnel it sets up seems to be ignored. What do you think?
You have several options:
Dump the database remotely to a remote file and download it to your machine afterwards:
plink sam@192.168.0.20 "mysqldump -u sam -p --all-databases > outfile.sql"
pscp sam@192.168.0.20:outfile.sql .
The redirect > is inside the quotes, so you are redirecting mysqldump on the remote machine to the remote file.
This is probably the easiest solution. If you compress the dump before downloading, it is probably also the fastest, particularly if you connect over a slow network.
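For example, a compressed variant of this first option (assuming gzip is available on the remote host):
plink sam@192.168.0.20 "mysqldump -u sam -p --all-databases | gzip > outfile.sql.gz"
pscp sam@192.168.0.20:outfile.sql.gz .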
Execute mysqldump remotely, but redirect its output locally:
plink sam@192.168.0.20 "mysqldump -u sam -p --all-databases" > outfile.sql
Note that the redirect > is outside of the quotes, compared to the previous case, so you are redirecting the output of plink, i.e. the output of the remote shell, which contains the output of the remote mysqldump.
Tunnel connection to the remote MySQL database and dump the database locally using a local installation of MySQL (mysqldump):
plink sam@192.168.0.20 -L 3310:localhost:3306
In a separate local console (cmd.exe):
mysqldump --port=3310 -h localhost -u sam -p --all-databases > outfile.sql
In this case nothing is running remotely (except for a tunnel end).
Is it possible to dump a database from a remote host through an SSH connection and have the backup file end up on my local computer?
If so how can this be achieved?
I am assuming it will be some combination of piping the output from ssh to the dump, or vice versa, but I can't figure it out.
This will dump, compress, and stream the output over ssh into your local file:
ssh -l user remoteserver "mysqldump -mysqldumpoptions database | gzip -3 -c" > /localpath/localfile.sql.gz
Starting from @MichelFeldheim's solution, I'd use:
$ ssh user@host "mysqldump -u user -p database | gzip -c" | gunzip > db.sql
ssh -f user@server.com -L 3306:server.com:3306 -N
then:
mysqldump -hlocalhost > backup.sql
assuming you do not have MySQL running locally on port 3306. If you do, you can forward a different local port instead, as in the sketch below.
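For example, if a local MySQL already occupies port 3306, a variant with a different forwarded port (3307 is arbitrary here, and MySQL is assumed to listen on the server's loopback) could look like this:
ssh -f user@server.com -L 3307:127.0.0.1:3306 -N
# 127.0.0.1 forces a TCP connection through the tunnel; plain "localhost" may use the local socket instead
mysqldump -h 127.0.0.1 -P 3307 -u dbuser -p database > backup.sql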
I have created a script to make it easier to automate mysqldump commands on remote hosts using the answer provided by Michel Feldheim as a starting point:
mysqldump-remote
The script allows you to fetch a database dump from a remote host with or without SSH and optionally using a .env file containing environment variables.
I plan to use the script for automated database backups. Feel free to create issues / contribute - hope this helps others as well!
I need to copy an entire database from a MySQL installation on a remote machine, via SSH, to my local machine's MySQL.
I know the SSH and both local and remote MYSQL admin user and password.
Is this enough information, and how is it done?
From remote server to local machine
ssh {ssh.user}@{remote_host} \
  'mysqldump -u {remote_dbuser} --password={remote_dbpassword} {remote_dbname} | bzip2 -c' \
  | bunzip2 -dc \
  | mysql -u {local_dbuser} --password={local_dbpassword} -D {local_dbname}
That will load the remote DB into your local MySQL via pipes:
ssh mysql-server "mysqldump --all-databases --quote-names --opt --hex-blob --add-drop-database" | mysql
You should take care with the accounts in the mysql.user table, since --all-databases includes the mysql system database.
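For example, a quick sanity check (not from the original answer) to see which accounts the dump will carry over:
# List the accounts defined on the remote server before importing --all-databases
ssh mysql-server "mysql -e 'SELECT User, Host FROM mysql.user'"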
Moreover, to avoid typing users and passwords for mysqldump and mysql on the local and remote hosts, you can create a file ~/.my.cnf:
[mysql]
user = dba
password = foobar
[mysqldump]
user = dba
password = foobar
See http://dev.mysql.com/doc/refman/5.1/en/option-files.html
Try reading here:
Modified from http://www.cyberciti.biz/tips/howto-copy-mysql-database-remote-server.html, because I prefer to use .sql as the extension for SQL files:
Usually you run mysqldump to create a database copy and backups as
follows:
$ mysqldump -u user -p db-name > db-name.sql
Copy the db-name.sql file to the remote MySQL server using scp over ssh:
$ scp db-name.sql user#remote.box.com:/backup
Restore database at remote server (login over ssh):
$ mysql -u user -p db-name < db-name.sql
Basically you'll use mysqldump to generate a dump of your database, copy it to your local machine, then pipe the contents into mysql to regenerate the DB.
You can copy the DB files themselves, rather than using mysqldump, but only if you can shutdown the MySQL service on the remote machine.
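A rough sketch of that file-level copy, assuming root SSH access, matching MySQL versions on both machines, and the default /var/lib/mysql data directory:
# Stop MySQL on the remote machine so the data files are consistent
ssh root@remotehost "systemctl stop mysql"
# Stop the local MySQL, then pull the data directory over
sudo systemctl stop mysql
sudo rsync -avz root@remotehost:/var/lib/mysql/ /var/lib/mysql/
# Fix ownership locally and restart both servers
sudo chown -R mysql:mysql /var/lib/mysql
sudo systemctl start mysql
ssh root@remotehost "systemctl start mysql"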
I would recommend the XtraBackup tool by Percona. It has support for hot copying data via SSH and has excellent documentation. Unlike using mysqldump, this will copy all elements of the MySQL instance including user permissions, triggers, replication settings, etc.
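A hedged sketch of streaming a hot backup over SSH with XtraBackup (credentials, hosts, and paths are placeholders; check the Percona documentation for the exact options your version supports):
# Stream a full backup in xbstream format and store it on a backup host
xtrabackup --backup --user=USER --password=PASS --stream=xbstream --target-dir=/tmp \
  | ssh user@backuphost "cat > /backups/full.xbstream"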
ssh into the remote machine
make a backup of the database using mysqldump
transfer the file to local machine using scp
restore the database to your local mysql
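A minimal sketch of those four steps (host, user, and database names are placeholders):
ssh user@remotehost                        # 1. log in to the remote machine
mysqldump -u dbuser -p mydb > mydb.sql     # 2. back up the database (run on the remote host)
exit
scp user@remotehost:mydb.sql .             # 3. copy the file to the local machine (run locally)
mysql -u localuser -p mydb < mydb.sql      # 4. restore into the local MySQL (create mydb locally first if needed)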
I'm creating a snippet to be used in my Mac OS X terminal (bash) which will allow me to do the following in one step:
Log in to my server via ssh
Create a mysqldump backup of my Wordpress database
Download the backup file to my local harddrive
Replace my local Mamp Pro mysql database
The idea is to create a local version of my current online site to do development on. So far I have this:
ssh server 'mysqldump -u root -p'mypassword' --single-transaction wordpress_database > wordpress_database.sql' && scp me#myserver.com:~/wordpress_database.sql /Users/me/Downloads/wordpress_database.sql && /Applications/MAMP/Library/bin/mysql -u root -p'mylocalpassword' wordpress_database < /Users/me/Downloads/wordpress_database.sql
Obviously I'm a little new to this, and I think I've got a lot of unnecessary redundancy in there. However, it does work. Oh, and ssh server works because I've created an alias for that host in my local SSH config file.
Here's what I'd like help with:
Can this be shortened? Made simpler?
Am I doing this in a good way? Is there a better way?
How could I add gzip compression to this?
I appreciate any guidance on this. Thank you.
You can dump it out of your server and into your local database in one step (with a hint of gzip for compression):
ssh server "mysqldump -u root -p'mypassword' --single-transaction wordpress_database | gzip -c" | gunzip -c | /Applications/MAMP/Library/bin/mysql -u root -p'mylocalpassword' wordpress_database
The double-quotes are key here, since you want gzip to be executed on the server and gunzip to be executed locally.
I also store my mysql passwords in ~/.my.cnf (and chmod 600 that file) so that I don't have to supply them on the command line (where they would be visible to other users on the system):
[mysql]
password=whatever
[mysqldump]
password=whatever
That's the way I would do it too.
To answer your question #3:
Q: How could I add gzip compression to this?
A: You can run gzip wordpress_database.sql right after the mysqldump command and then scp the gzipped file (wordpress_database.sql.gz) instead.
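For example, using the same names as in the question, that could look like:
ssh server "mysqldump -u root -p'mypassword' --single-transaction wordpress_database > wordpress_database.sql && gzip wordpress_database.sql"
scp me@myserver.com:~/wordpress_database.sql.gz /Users/me/Downloads/
gunzip /Users/me/Downloads/wordpress_database.sql.gz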
There is a Python script that will download the SQL dump file to your local machine. You can take a look at the script and modify it a bit to fit your requirements:
download-remote-mysql-dump-local-using-python-script
I know how to import an SQL file via the CLI:
mysql -u USER -p DBNAME < dump.sql
But that's if the dump.sql file is local. How could I use a file on a remote server?
You didn't say what network access you have to the remote server.
Assuming you have SSH access to the remote server, you could pipe the results of a remote mysqldump to the mysql command. I just tested this, and it works fine:
ssh remote.com "mysqldump remotedb" | mysql localdb
I put stuff like user, password, and host into .my.cnf so I'm not constantly typing them; it's annoying, and on multi-user systems it's bad for security, since you would be putting passwords in cleartext into your bash_history! But you can easily add the -u, -p, and -h options back in on both ends if you need them:
ssh remote.com "mysqldump -u remoteuser -p'remotepass' remotedb" | mysql -u localuser -p'localpass' localdb
Finally, you can pipe through gzip to compress the data over the network:
ssh remote.com "mysqldump remotedb | gzip" | gzip -d | mysql localdb
Just thought I'd add to this, as I was seriously low on space on my local VM: if the .sql file already exists on the remote server, you could do:
ssh <ip-address> "cat /path/to/db.sql" | mysql -u <user> -p<password> <dbname>
I'd use wget to either download it to a file or pipe it in.
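For example (the URL is hypothetical and assumes the dump is reachable over HTTP):
# Pipe the remote dump straight into mysql without saving it locally
wget -qO- http://example.com/dump.sql | mysql -u USER -p DBNAME
# Or download it to a file first
wget -O dump.sql http://example.com/dump.sql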