I want to do a mysqldump directly to my remote host. I've seen suggestions to use the -c switch or to use gzip to compress the data on the fly (and not into a file first). What's the difference between the two? How do I know if both machines support the -C switch? How would I do a gzip on the fly? I am using Linux on both machines.
mysqldump -C -u root -p database_name | mysql -h other-host.com database_name
The -C option enables compression in the MySQL client/server protocol, provided both ends support it. Gzip'ing would use the gzip utility in a pipeline. I'm pretty sure the latter would not do any good here, since the compression and decompression would occur on the same machine. And if the machine that you are dumping from is local, the -C option is probably just wasting CPU cycles - it compresses the protocol messages between mysqldump and the mysqld daemon.
The only command pipeline that might make sense here is something like:
mysqldump -u root -p database_name | mysql -C -h other-host -Ddatabase_name -B
The output of mysqldump goes into the pipeline, where the mysql command-line client reads it. The -C option tells mysql to compress the messages it sends to other-host. The -B option disables buffering and interactive behavior in the mysql client, which might speed things up a little more.
It would probably be faster to do something like:
mysqldump -u root -p database_name | gzip > dump.gz
scp dump.gz user@other-host:/tmp
ssh user@other-host "gunzip -c /tmp/dump.gz | mysql -Ddatabase_name -B; rm /tmp/dump.gz"
Provided that you have SSH running on the other machine anyway.
I always read the man pages for these types of things. If you look at the man page for mysqldump you can see that the -C flag (that's a capital C) makes mysqldump compress all data in transit only. This makes it stream compressed but arrive, as you will see it, uncompressed. You could also dump to a file on the local system and then transfer a gzip of everything at once; a sketch of that follows the man-page excerpt below.
from the man page:
o --compress, -C
Compress all information sent between the client and the server if
both support compression.
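A minimal sketch of that dump-locally-then-transfer approach (the host name and paths are placeholders):
mysqldump -u root -p database_name | gzip > /tmp/database_name.sql.gz
scp /tmp/database_name.sql.gz user@other-host.com:/tmp/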
I need your help configuring mysqldump to automatically download backups from one server to an external disk at a fixed time (every day at 6:00 PM). If mysqldump is unable to do that, can you suggest other software?
You can use your operating system's scheduler to automatically run the mysqldump command at your desired times, e.g. cron on Linux and the Task Scheduler on Windows.
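For example, on Linux a crontab entry along these lines (a sketch; the user, password, database, and output path are placeholders) would run a dump every day at 6:00 PM:
# minute hour day-of-month month day-of-week command; note that % must be escaped as \% inside crontab
0 18 * * * mysqldump -u backupuser -p'secret' mydb --result-file=/path/to/external-disk/mydb-$(date +\%F).sql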
To have mysqldump write to the external disk, mount the disk so that its path is reachable from the machine running mysqldump, and use that path as the output destination. To specify the output path, you can either use > redirection or the --result-file option.
Both approaches work on Linux/Unix, but the distinction matters on Windows, where redirection from PowerShell does not work properly: PowerShell redirection produces UTF-16 output, which mysql cannot read back in. The --result-file option writes ASCII output that can be restored.
Windows Examples
C:\> mysqldump.exe -e -u[username] -p[password] -h[hostname] [dbname] > [C:\path\to\mybackup.sql]
C:\> mysqldump.exe -e -u[username] -p[password] -h[hostname] [dbname] --result-file=[C:\path\to\mybackup.sql]
Linux/Unix Examples
# mysqldump -u[username] -p[password] -h[hostname] [dbname] > [/path/to/mybackup.sql]
# mysqldump -u[username] -p[password] -h[hostname] [dbname] --result-file=[/path/to/mybackup.sql]
More information is available here:
Complete: http://dev.mysql.com/doc/refman/5.6/en/mysqldump.html
Quick: http://www.thegeekstuff.com/2008/09/backup-and-restore-mysql-database-using-mysqldump/
Depending on your backup requirements, you may also want to consider other options such as Percona XtraBackup, LVM snapshots (e.g. mylvmbackup), etc. This Percona XtraBackup presentation has a comparison table on slide 3.
Server1 has a MySQL server, Server2 has a file which I need to import into a table on Server1's MySQL server. I can only access Server2 and its files using SSH.
Now what would be the best solution for this? One inefficient method would be to scp the file onto Server1's hard disk and then execute a LOAD DATA INFILE for that file. But since the file is large, I want to avoid doing this.
Is there a way to load the file into Server1's MySQL directly from Server2?
cat file.sql | ssh -C -c blowfish username@myserver mysql -u username -p database_name
In this command, -C enables compression during the transfer and -c blowfish selects a cipher that uses less CPU than the default one.
Common sense would suggest transferring a compressed file so that you can verify its checksum (with MD5, for example) and then run the import from the compressed file.
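A rough sketch of that approach (the host, file, and database names are placeholders, and MySQL credentials are assumed to be stored in ~/.my.cnf on the database host):
gzip -c file.sql > file.sql.gz
md5sum file.sql.gz                                # note the checksum locally
scp file.sql.gz username@myserver:/tmp/
ssh username@myserver "md5sum /tmp/file.sql.gz && gunzip -c /tmp/file.sql.gz | mysql database_name"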
Hope this helps.
You want to use ssh tunneling to access your mysql server remotely and securely.
The following command will create the port forwarding:
$ ssh -f -L <[local address:]local port>:<remote host>:<remote port> -N <user>@<ssh host>
For example:
$ ssh -f -L 45678:localhost:3306 -N foo@localhost
will make the default MySQL server port (3306) on localhost reachable through local port 45678, using foo's credentials (this may be useful for testing purposes).
Then, you may simply connect to the server with your local program (using -h 127.0.0.1 forces a TCP connection, so the forwarded port is actually used instead of the local socket):
$ mysql -p -u foo -h 127.0.0.1 -P 45678
At the mysql prompt, it is possible to bulk load a data file using the LOAD DATA statement, which takes the optional keyword LOCAL to indicate that the file is located on the client end of the connection.
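For example, a non-interactive sketch of such a load through the tunnel above (the table name, file path, and field separator are placeholders; local_infile must be enabled on both the client and the server):
mysql --local-infile=1 -h 127.0.0.1 -P 45678 -u foo -p database_name -e "LOAD DATA LOCAL INFILE '/path/to/data.csv' INTO TABLE my_table FIELDS TERMINATED BY ','"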
Documentation:
ssh manual page
LOAD DATA statement
Is it possible to dump a database from a remote host through an SSH connection and have the backup file end up on my local computer?
If so, how can this be achieved?
I am assuming it will be some combination of piping output from the ssh to the dump, or vice versa, but I can't figure it out.
This will dump, compress, and stream over ssh into your local file:
ssh -l user remoteserver "mysqldump -mysqldumpoptions database | gzip -3 -c" > /localpath/localfile.sql.gz
Starting from @MichelFeldheim's solution, I'd use:
$ ssh user#host "mysqldump -u user -p database | gzip -c" | gunzip > db.sql
ssh -f user@server.com -L 3306:server.com:3306 -N
then:
mysqldump -h 127.0.0.1 -P 3306 database_name > backup.sql
assuming you also do not have MySQL running locally (otherwise -h localhost would use the local socket and the tunnel would never be hit). If you do, you can adjust the tunnel's local port to something else.
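For instance, a sketch with the tunnel moved to local port 3307 (chosen arbitrarily):
ssh -f user@server.com -L 3307:server.com:3306 -N
mysqldump -h 127.0.0.1 -P 3307 database_name > backup.sql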
I have created a script to make it easier to automate mysqldump commands on remote hosts using the answer provided by Michel Feldheim as a starting point:
mysqldump-remote
The script allows you to fetch a database dump from a remote host with or without SSH and optionally using a .env file containing environment variables.
I plan to use the script for automated database backups. Feel free to create issues / contribute - hope this helps others as well!
I am trying to understand how mysqldump works:
if I execute mysqldump on my pc and connect to a remote server:
mysqldump -u mark -h 34.32.23.23 -pxxx --quick | gzip > dump.sql.gz
will the server compress it and send it over to me as gzip or will my computer receive all the data first and then compress it?
Because I have a very large remote db to export, and I would like to know the fastest way to do it over a network!
You should make use of ssh + scp, because dumping on localhost (on the server itself) is faster, and you then only need to scp the gzip over (less network overhead).
You can likely do this:
ssh $username@34.32.23.23 "mysqldump -u mark -h localhost -pxxx --quick | gzip > /tmp/dump.sql.gz"
scp $username@34.32.23.23:/tmp/dump.sql.gz .
(the /tmp directory is optional; change it to whatever directory you are comfortable with)
Have you tried the --compress parameter?
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_compress
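For reference, a sketch of the original command with --compress added (it compresses the client/server protocol traffic between your machine and the remote server, while gzip then compresses the file written locally; the database name is a placeholder):
mysqldump --compress -u mark -h 34.32.23.23 -pxxx --quick database_name | gzip > dump.sql.gz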
This is how I do it:
Do a partial export using SELECT INTO OUTFILE and create the files on the same server.
If your table contains 10 million rows, do a partial export of 1 million rows at a time, each time to a separate file.
Once the first file is ready you can compress and transfer it. In the meantime MySQL can continue exporting data to the next file.
On the other server you can start loading the file into the new database.
BTW, a lot of this can be scripted; a rough sketch follows below.
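A minimal sketch of the chunked export step (the table name, key column, chunk size, and /backup path are assumptions; the MySQL server must be allowed to write to that path, cf. secure_file_priv):
for i in 0 1 2 3 4 5 6 7 8 9; do
  mysql -u user -p'secret' mydb -e \
    "SELECT * INTO OUTFILE '/backup/big_table_part$i.csv'
       FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'
     FROM big_table
     WHERE id >= $((i * 1000000)) AND id < $(((i + 1) * 1000000))"
  gzip /backup/big_table_part$i.csv &   # compress the finished chunk while the next one exports
done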
I'm creating a snippet to be used in my Mac OS X terminal (bash) which will allow me to do the following in one step:
Log in to my server via ssh
Create a mysqldump backup of my Wordpress database
Download the backup file to my local harddrive
Replace my local Mamp Pro mysql database
The idea is to create a local version of my current online site to do development on. So far I have this:
ssh server 'mysqldump -u root -p'mypassword' --single-transaction wordpress_database > wordpress_database.sql' && scp me#myserver.com:~/wordpress_database.sql /Users/me/Downloads/wordpress_database.sql && /Applications/MAMP/Library/bin/mysql -u root -p'mylocalpassword' wordpress_database < /Users/me/Downloads/wordpress_database.sql
Obviously I'm a little new to this, and I think I've got a lot of unnecessary redundancy in there. However, it does work. Oh, and the ssh command ssh server works because I've created a Host entry for it in my local ~/.ssh/config.
Here's what I'd like help with:
Can this be shortened? Made simpler?
Am I doing this in a good way? Is there a better way?
How could I add gzip compression to this?
I appreciate any guidance on this. Thank you.
You can dump it out of your server and into your local database in one step (with a hint of gzip for compression):
ssh server "mysqldump -u root -p'mypassword' --single-transaction wordpress_database | gzip -c" | gunzip -c | /Applications/MAMP/Library/bin/mysql -u root -p'mylocalpassword' wordpress_database
The double-quotes are key here, since you want gzip to be executed on the server and gunzip to be executed locally.
I also store my mysql passwords in ~/.my.cnf (and chmod 600 that file) so that I don't have to supply them on the command line (where they would be visible to other users on the system):
[mysql]
password=whatever
[mysqldump]
password=whatever
That's the way I would do it too.
To answer your question #3:
Q: How could I add gzip compression to this?
A: You can run gzip wordpress_database.sql right after the mysqldump command and then scp the gzipped file instead (wordpress_database.sql.gz)
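For example, a sketch of the original one-liner with that change applied (same placeholder paths and credentials as in the question):
ssh server "mysqldump -u root -p'mypassword' --single-transaction wordpress_database > wordpress_database.sql && gzip -f wordpress_database.sql" && \
  scp me@myserver.com:~/wordpress_database.sql.gz /Users/me/Downloads/ && \
  gunzip -c /Users/me/Downloads/wordpress_database.sql.gz | /Applications/MAMP/Library/bin/mysql -u root -p'mylocalpassword' wordpress_database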
There is a Python script that will download the SQL dump file to your local machine. You can take a look at the script and modify it a bit to meet your requirements:
download-remote-mysql-dump-local-using-python-script