I have a database server (MySQL) that does not have enough free space to save a dump of MySQL. I want to create the dump and send it straight to an archive server, without saving it on the local host, because of the lack of space. How can I do this on my local database server?
All of my servers are Linux-based.
Thanks in advance
You can do it like this:
mysqldump -u MYSQL_USER -p'MYSQL_PASSWORD' YOUR_DATABASE | gzip -c | ssh you@your_host 'cat > ~/backup.sql.gz'
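Nothing is ever written to local disk: the dump streams through gzip into the ssh pipe and lands only on the archive server. To confirm the archive arrived intact without unpacking it, you can test the gzip stream on the far side (a small follow-up, reusing the same host and path):
ssh you@your_host 'gunzip -t ~/backup.sql.gz'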
Related
Is it possible to dump a database from a remote host through an ssh connection and have the backup file on my local computer.
If so how can this be achieved?
I am assuming it will be some combination of piping output from ssh to the dump, or vice versa, but I can't figure it out.
This would dump, compress and stream over ssh into your local file
ssh -l user remoteserver "mysqldump [mysqldump options] database | gzip -3 -c" > /localpath/localfile.sql.gz
Starting from @MichelFeldheim's solution, I'd use:
$ ssh user@host "mysqldump -u user -p database | gzip -c" | gunzip > db.sql
ssh -f user@server.com -L 3306:localhost:3306 -N
then:
mysqldump -h 127.0.0.1 -P 3306 -u user -p database > backup.sql
(Use -h 127.0.0.1 rather than -h localhost so the client connects over TCP through the tunnel instead of the local Unix socket.) This assumes you do not also have MySQL running locally; if you do, you can bind the tunnel to a different local port.
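For instance, a minimal sketch using local port 3307 to avoid a clash with a local MySQL instance (user, host, and database names are placeholders):
ssh -f user@server.com -L 3307:localhost:3306 -N
mysqldump -h 127.0.0.1 -P 3307 -u user -p database > backup.sql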
I have created a script to make it easier to automate mysqldump commands on remote hosts using the answer provided by Michel Feldheim as a starting point:
mysqldump-remote
The script allows you to fetch a database dump from a remote host with or without SSH and optionally using a .env file containing environment variables.
I plan to use the script for automated database backups. Feel free to create issues / contribute - hope this helps others as well!
I am trying to understand how mysqldump works:
if I execute mysqldump on my PC and connect to a remote server:
mysqldump -u mark -h 34.32.23.23 -pxxx --quick | gzip > dump.sql.gz
will the server compress it and send it over to me as gzip or will my computer receive all the data first and then compress it?
I have a very large remote DB to export, and I would like to know the fastest way to do it over the network!
You should make use of ssh + scp: in your command, mysqldump and gzip both run on your local machine, so the data crosses the network uncompressed before it is compressed. Dumping on the remote host is faster, and you only need to scp the gzip over (less network overhead).
Likely you can do this:
ssh $username@34.32.23.23 "mysqldump -u mark -h localhost -pxxx --quick | gzip > /tmp/dump.sql.gz"
scp $username@34.32.23.23:/tmp/dump.sql.gz .
(/tmp is optional; change it to whatever directory you are comfortable with)
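Once the copy has finished, you may want to free the space on the remote server again (a small follow-up, assuming the path above):
ssh $username@34.32.23.23 "rm /tmp/dump.sql.gz"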
Have you tried the --compress parameter?
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_compress
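--compress enables compression of the MySQL client/server protocol, so the data is compressed in transit even though mysqldump itself runs on your machine. A minimal sketch reusing the command from the question (the database name is a placeholder):
mysqldump --compress -u mark -h 34.32.23.23 -pxxx --quick database | gzip > dump.sql.gz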
This is how I do it:
Do a partial export using SELECT INTO OUTFILE and create the files on the same server.
If your table contains 10 million rows, do a partial export of 1 million rows at a time, each in a separate file. Once the first file is ready you can compress and transfer it; in the meantime MySQL can continue exporting data to the next file.
On the other server you can start loading the file into the new database.
BTW, a lot of this can be scripted - for example, see the sketch below.
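A minimal sketch of such a script, assuming an integer id column on a table called big_table in database mydb, and that the MySQL server may write to /tmp (check the secure_file_priv setting); all names, hosts, and credentials are placeholders:

#!/bin/bash
# Export big_table in 1M-row chunks; compress and ship each chunk
# in the background while the next chunk is being exported.
for i in $(seq 0 9); do
  start=$((i * 1000000 + 1))
  end=$(((i + 1) * 1000000))
  # INTO OUTFILE writes the file on the MySQL server host, not the client
  mysql -u user -p'pass' -e "SELECT * FROM mydb.big_table WHERE id BETWEEN $start AND $end INTO OUTFILE '/tmp/big_table_$i.csv' FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'"
  (gzip /tmp/big_table_$i.csv && scp /tmp/big_table_$i.csv.gz user@archive:/backup/) &
done
wait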
I need to copy an entire database from a MySQL installation on a remote machine, via SSH, to my local machine's MySQL.
I know the SSH credentials and both the local and remote MySQL admin user and password.
Is this enough information, and how is it done?
From remote server to local machine
ssh {ssh.user}@{remote_host} \
  'mysqldump -u {remote_dbuser} --password={remote_dbpassword} {remote_dbname} | bzip2 -c' \
  | bunzip2 -dc | mysql -u {local_dbuser} --password={local_dbpassword} -D {local_dbname}
That will dump the remote DB into your local MySQL via pipes:
ssh mysql-server "mysqldump --all-databases --quote-names --opt --hex-blob --add-drop-database" | mysql
You should take care with the users in the mysql.user table: --all-databases includes the mysql schema, so the import will overwrite the local accounts.
Moreover, to avoid typing users and passwords for mysqldump and mysql on the local and remote hosts, you can create a file ~/.my.cnf:
[mysql]
user = dba
password = foobar
[mysqldump]
user = dba
password = foobar
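Since this file stores a password in plain text, it is good practice to make it readable only by you:
chmod 600 ~/.my.cnf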
See http://dev.mysql.com/doc/refman/5.1/en/option-files.html
Modified from http://www.cyberciti.biz/tips/howto-copy-mysql-database-remote-server.html, because I prefer to use .sql as the extension for SQL files:
Usually you run mysqldump to create a database copy and backups as follows:
$ mysqldump -u user -p db-name > db-name.sql
Copy the db-name.sql file using sftp/ssh to the remote MySQL server:
$ scp db-name.sql user@remote.box.com:/backup
Restore the database at the remote server (log in over ssh):
$ mysql -u user -p db-name < db-name.sql
Basically you'll use mysqldump to generate a dump of your database, copy it to your local machine, then pipe the contents into mysql to regenerate the DB.
You can copy the DB files themselves, rather than using mysqldump, but only if you can shut down the MySQL service on the remote machine.
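A minimal sketch of that file-copy approach, assuming the default data directory /var/lib/mysql, the same MySQL version on both machines, and a systemd-based distro (all of these are assumptions to verify first):

# stop MySQL on both machines so the data files are consistent
sudo systemctl stop mysql
# from the local machine, pull the remote data directory over SSH
sudo rsync -av user@remote.example.com:/var/lib/mysql/ /var/lib/mysql/
# fix ownership and restart the local server
sudo chown -R mysql:mysql /var/lib/mysql
sudo systemctl start mysql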
I would recommend the XtraBackup tool by Percona. It has support for hot copying data via SSH and has excellent documentation. Unlike mysqldump, this will copy all elements of the MySQL instance including user permissions, triggers, replication, etc.
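For instance, a hot backup can be streamed over SSH roughly like this (a sketch based on xtrabackup's xbstream streaming mode; hosts, paths, and credentials are placeholders, so check the documentation for your version):
xtrabackup --backup --stream=xbstream --user=dba --password=foobar | ssh user@archive 'xbstream -x -C /data/backups/'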
1. SSH into the remote machine
2. Make a backup of the database using mysqldump
3. Transfer the file to the local machine using scp
4. Restore the database to your local MySQL
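Concretely, those steps might look like this (hostnames, credentials, and database names are placeholders):
ssh user@remote.example.com
mysqldump -u dbuser -p mydb > mydb.sql    # on the remote machine
exit
scp user@remote.example.com:mydb.sql .    # back on the local machine
mysql -u dbuser -p mydb < mydb.sql        # restore locally (create mydb first)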
I have 3 huge 5GB .sql files which I need to upload to my server so I can process them. They are too large to run on my computer, so I can't process the information offline.
So the question: how can I upload a huge 5GB SQL file to the MySQL server without using PHP, since its upload limit is 2MB?
MySQL is hosted on a remote server miles away, so I can't just plug it in, but I have turned remote access on, which should be of some help.
All ideas welcome.
Thank you for your time,
Paul
You cannot use PHP to import such huge files. You need to use the command line to do so. If you have access to the server then it's quite easy. You can log in to the server through SecureCRT (paid) or PuTTY (free).
Use this command to import the SQL database to the server, using the IP address:
$ mysql -u username -p -h 202.54.1.10 databasename < data.sql
Or, using the hostname:
$ mysql -u username -p -h mysql.hosting.com database-name < data.sql
If you want to try it out locally first to get used to the command line, you can use this:
$ mysql -u username -p -h localhost data-base-name < data.sql
Are you using something like phpMyAdmin? If you have FTP/SFTP access, upload the file and then run it with phpMyAdmin.
I'm hoping I can use a shell script that will pull an SQL dump down from my production site and into my local database. Ideally, I'd like to be able to run something like this:
sync_site example_com example_local
Where the first argument is the production database and the second is the local database. The remote database is always on the same server, behind SSH, with known MySQL credentials.
Figured it out:
ssh me@myserver.com mysqldump -u user -ppass remote_db | mysql -u user -ppass local_db
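A minimal sketch of the sync_site script itself, hardcoding the host and credentials from the one-liner above (all placeholders):

#!/bin/bash
# sync_site: pull a dump of a remote database into a local one.
# Usage: sync_site remote_db local_db
set -euo pipefail
remote_db="$1"
local_db="$2"
ssh me@myserver.com "mysqldump -u user -ppass $remote_db" | mysql -u user -ppass "$local_db"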