I cannot perform a mysqldump on the local host due to space restrictions. I can only find methods for doing a dump to a local file or directly into a sql database on another server.
Is there a way to make a dump to a file on a remote server?
I'm using MySQL 5.1.
If you have SSH access to the remote server, you can pipe the output to the ssh command:
mysqldump <params> | ssh user@server 'cat > dumpfile.sql'
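If local space is the constraint, the dump can also be compressed in flight so nothing touches the local disk; the host, user, database name, and paths below are placeholders:

```shell
# Stream the dump through gzip and straight to the remote host.
# "dbuser", "dbname", "user" and "backup.example.com" are placeholders.
mysqldump -u dbuser -p dbname \
  | gzip -c \
  | ssh user@backup.example.com 'cat > /backups/dumpfile.sql.gz'
```

Restoring later is the reverse pipeline: `ssh user@backup.example.com 'cat /backups/dumpfile.sql.gz' | gunzip -c | mysql -u dbuser -p dbname`.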
Related
I tried to copy a SQL file from one server into another server's MySQL database
ssh -i keylocation user@host 'mysql --user=root --password="pass" --host=ipaddress additional_content Additional_Content' | < databasedump.sql
databasedump.sql is a file on server A. I want to copy the data from that dump file into a database on server B. I tried to connect via SSH to that server (I need the key file for that) and then copy the data, but when I run this command in the console, nothing happens. Any help?
Are you able to secure copy the file over to server B first, and then ssh in for the mysql dump? Example:
scp databasedump.sql user@server-B:/path/to/databasedump.sql
ssh -i keylocation user@host 'mysql --user=root --password="pass" --database=db_name < /path/to/databasedump.sql'
Edit: fixed a typo
I'm not entirely sure how mysql handles stdin, so one thing you can do that should work in a single command is:
ssh -i keylocation user@host 'cat - | mysql --user=root --password="pass" --host=ipaddress additional_content Additional_Content' < databasedump.sql
However, it's better to copy the file over with scp first and then import it with mysql. See Dan's answer.
I took over a website at a less-than-optimal hosting provider with no backups yet.
I do have FTP access, and I know the database access parameters the installed web app uses for the MySQL server, but I don't have access to the MySQL interface or the underlying server.
I would like to do an automated backup to a Linux server under my control.
I can download all data via FTP, zip it and store it on a backed up storage.
How to do this for the database?
As an initial solution I installed phpMyAdmin and did a manual backup, but I would like to automate this process.
You can use mysqldump to back up a remote MySQL database.
Suppose your MySQL database is on a host called "dbhost". You can reach that host over the network from your new Linux host.
Run this command on your new Linux host:
$ mysqldump --single-transaction --all-databases --host dbhost > datadump.sql
(You might also need to add the --user and --password options.)
You can automate any command you can run at the command line. Put it in a shell script, then invoke the script, for example, from cron.
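A minimal sketch of such a script, assuming credentials live in ~/.my.cnf; the backup directory, retention count, and "dbhost" are made-up values you would adapt:

```shell
#!/bin/sh
# Nightly backup sketch; dbhost, paths and retention count are assumptions.
set -e
BACKUP_DIR=/var/backups/mysql
STAMP=$(date +%Y%m%d-%H%M%S)
mkdir -p "$BACKUP_DIR"
mysqldump --single-transaction --all-databases --host dbhost \
  | gzip -c > "$BACKUP_DIR/datadump-$STAMP.sql.gz"
# Keep the 14 most recent dumps, delete the rest.
ls -1t "$BACKUP_DIR"/datadump-*.sql.gz | tail -n +15 | xargs -r rm -f
```

A crontab entry such as `0 3 * * * /usr/local/bin/backup-db.sh` would then run it nightly at 03:00.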
In order to load a database into MySQL using the terminal, as shown here, does the SQL file already have to be on the server? Or can I upload it from my local PC?
If you have mysql installed on your local machine: you can run the import command straight from your local machine's command line
mysql -hmy_server -umy_user -p database_name < full/path/to/file.sql
or you can just go to the directory where your sql file exists and run
mysql -hmy_server -umy_user -p database_name < file.sql
(the password will be required after hitting Enter). This way the file is imported into the remote database from your local computer.
If mysql is not installed on your local machine: in this case you have to upload the file to the server manually with some FTP client, then connect remotely (e.g. with PuTTY) and run a similar command
mysql -umy_user -p database_name < path/to/sql/file.sql
Yes. You can SCP or SFTP it to the server, but that command expects a file on the machine. In fact, that particular command expects it to be in the same directory that you are in.
Assuming the file contains SQL statements it can go anywhere. What is important is that the MySQL client you use to read the file can access the database server - usually on port 3306 for MySQL.
It must be on the server, unless your local directory is accessible over the network to the server (i.e. a network share on another machine that is mapped to some path on your server - convoluted but essentially possible).
Basically, just upload the file to the server and run the command locally there.
No, the SQL file or SQL script file can generally be present on your own machine. When you execute the command, the script is executed against the (possibly remote) database.
mysql -u user -p -h servername DB < [Full Path]/test.sql
Server1 has a MySQL server; Server2 has a file which I need to import into a table on Server1's MySQL server. I can only access Server2 and its files using SSH.
Now what can be the best solution for this? One inefficient method would be to scp the file onto Server1's hard disk and then execute a LOAD DATA INFILE for that file. But since the file is large, I want to avoid doing this.
Is there a way to directly load the file into Server1's Mysql from Server2?
cat file.sql | ssh -C -c blowfish username@myserver mysql -u username -p database_name
In this command, -C enables compression during transfer and -c blowfish selects a cipher that uses less CPU than the default one (note that blowfish is no longer offered by default in modern OpenSSH releases).
Common sense would suggest transferring the compressed file, so that you can verify its checksum (for example with md5sum) and then run the import from the compressed file.
Hope this helps.
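A sketch of that compressed, checksum-verified transfer; the host, user, and file names are placeholders, and `gzip -k` needs a reasonably recent gzip (1.6+):

```shell
# On Server2: compress the file, record its checksum, copy both over.
gzip -k file.sql                          # creates file.sql.gz, keeps original
md5sum file.sql.gz > file.sql.gz.md5
scp file.sql.gz file.sql.gz.md5 username@myserver:/tmp/

# On Server1 (myserver): verify the checksum before importing.
cd /tmp && md5sum -c file.sql.gz.md5 \
  && gunzip -c file.sql.gz | mysql -u username -p database_name
```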
You want to use ssh tunneling to access your mysql server remotely and securely.
The following command will create the port forwarding:
$ ssh -f -L <[local address:]port>:<remote address>:<remote port> -N <user>@<host>
For example:
$ ssh -f -L 45678:localhost:3306 -N foo@localhost
will forward the default mysql server port on localhost to port 45678 using foo's credentials (this may be useful for testing purposes).
Then, you may simply connect to the server with your local program:
$ mysql -h 127.0.0.1 -P 45678 -p -u foo
(the -h 127.0.0.1 forces a TCP connection; without it the client would use the local socket and ignore -P)
At the mysql prompt, it is possible to bulk load a data file using the LOAD DATA statement, which takes the optional keyword LOCAL to indicate that the file is located on the client end of the connection.
Documentation:
ssh manual page
LOAD DATA statement
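With the tunnel up, a bulk load of a client-side file might look like this; the table name, CSV path, and delimiters are assumptions, and newer servers require LOCAL infile to be enabled on both client and server:

```shell
# Load a file that lives on the client machine through the tunnel.
# -h 127.0.0.1 forces TCP so the forwarded port is used.
# "my_table" and the CSV path are placeholders.
mysql --local-infile=1 -h 127.0.0.1 -P 45678 -u foo -p database_name -e "
  LOAD DATA LOCAL INFILE '/path/to/data.csv'
  INTO TABLE my_table
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n';"
```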
I need to copy an entire database from a MySQL installation on a remote machine via SSH to my local machine's MySQL.
I know the SSH and both local and remote MYSQL admin user and password.
Is this enough information, and how is it done?
From remote server to local machine
ssh {ssh.user}@{remote_host} \
  'mysqldump -u {remote_dbuser} --password={remote_dbpassword} {remote_dbname} | bzip2 -c' \
  | bunzip2 -dc | mysql -u {local_dbuser} --password={local_dbpassword} -D {local_dbname}
This will dump the remote DB into your local MySQL via pipes:
ssh mysql-server "mysqldump --all-databases --quote-names --opt --hex-blob --add-drop-database" | mysql
You should take care with the accounts in the mysql.user table, since --all-databases also dumps the mysql system database.
Moreover, to avoid typing users and passwords for mysqldump and mysql on local and remote hosts, you can create a file ~/.my.cnf :
[mysql]
user = dba
password = foobar
[mysqldump]
user = dba
password = foobar
See http://dev.mysql.com/doc/refman/5.1/en/option-files.html
Try reading here:
Modified from http://www.cyberciti.biz/tips/howto-copy-mysql-database-remote-server.html - modified because I prefer to use .sql as the extension for SQL files:
Usually you run mysqldump to create a database copy and backups as follows:
$ mysqldump -u user -p db-name > db-name.sql
Copy the db-name.sql file using sftp/ssh to the remote MySQL server:
$ scp db-name.sql user@remote.box.com:/backup
Restore database at remote server (login over ssh):
$ mysql -u user -p db-name < db-name.sql
Basically you'll use mysqldump to generate a dump of your database, copy it to your local machine, then pipe the contents into mysql to regenerate the DB.
You can copy the DB files themselves, rather than using mysqldump, but only if you can shutdown the MySQL service on the remote machine.
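A raw file copy along those lines could be sketched as follows; the service name and datadir path are assumptions, and both servers should be stopped (and running the same MySQL version) for the copy to be consistent:

```shell
# Stop MySQL on both ends so the data files are not changing mid-copy.
sudo service mysql stop
ssh user@remote 'sudo service mysql stop'
# Copy the raw data directory (default /var/lib/mysql location assumed).
rsync -az user@remote:/var/lib/mysql/ /var/lib/mysql/
ssh user@remote 'sudo service mysql start'
sudo service mysql start
```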
I would recommend the XtraBackup tool by Percona. It has support for hot copying data via SSH and has excellent documentation. Unlike using mysqldump, this will copy all elements of the MySQL instance including user permissions, triggers, replication, etc...
ssh into the remote machine
make a backup of the database using mysqldump
transfer the file to the local machine using scp
restore the database to your local mysql