Server1 has a MySQL server, and Server2 has a file which I need to import into a table on Server1's MySQL server. I can only access Server2 and its files using SSH.
Now what would be the best solution for this? One inefficient method would be to scp the file onto Server1's hard disk and then execute a LOAD DATA INFILE for that file. But since the file is large, I want to avoid doing this.
Is there a way to directly load the file into Server1's MySQL from Server2?
cat file.sql | ssh -C -c blowfish username@myserver mysql -u username -p database_name
In this command, -C enables compression during transfer and -c blowfish selects a cipher that uses less CPU than the default one.
Common sense would suggest transferring the compressed file, so that you can verify its checksum (for example with MD5) and then run the import from the compressed file.
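A minimal sketch of that variant (the host name, user, and paths are placeholders; it assumes gzip and md5sum are available on both ends):
gzip -c file.sql > file.sql.gz
md5sum file.sql.gz                               # note the checksum
scp file.sql.gz username@myserver:/tmp/
ssh username@myserver 'md5sum /tmp/file.sql.gz'  # should match the value above
ssh -t username@myserver 'gunzip -c /tmp/file.sql.gz | mysql -u username -p database_name'
The -t flag gives mysql a terminal so it can prompt for the password.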
Hope this helps.
You want to use ssh tunneling to access your mysql server remotely and securely.
The following command will create the port forwarding:
$ ssh -f -L [<local address>:]<local port>:<remote address>:<remote port> -N <user>@<host>
For example:
$ ssh -f -L 45678:localhost:3306 -N foo@localhost
will forward the default mysql server port on localhost to the port 45678 using foo's credentials (this may be useful for testing purposes).
Then, you may simply connect to the server with your local client; use 127.0.0.1 rather than localhost so the connection goes over TCP through the tunnel:
$ mysql -h 127.0.0.1 -P 45678 -u foo -p
At the mysql prompt, it is possible to bulk load a data file using the LOAD DATA statement, which takes the optional keyword LOCAL to indicate that the file is located on the client end of the connection.
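For example, combined with the tunnel above (the table, file name, and field separators here are made up for illustration, and local_infile must be enabled on both the client and the server):
$ mysql --local-infile=1 -h 127.0.0.1 -P 45678 -u foo -p \
    -e "LOAD DATA LOCAL INFILE '/path/to/data.csv' INTO TABLE mytable FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';" \
    database_name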
Documentation:
ssh manual page
LOAD DATA statement
I tried to copy a SQL file from one server to another server's MySQL database.
ssh -i keylocation user@host 'mysql --user=root --password="pass" --host=ipaddress additional_content Additional_Content' | < databasedump.sql
databasedump.sql is a file on server A. I want to copy the data from that dump file into a database on server B. I tried to connect via SSH to that server (I need a key file for that) and then copy the data, but when I run this command in the console, nothing happens. Any help?
Are you able to secure copy the file over to server B first, and then SSH in to run the MySQL import? Example:
scp databasedump.sql user@server-B:/path/to/databasedump.sql
ssh -i keylocation user@host 'mysql --user=root --password="pass" --database=db_name < /path/to/databasedump.sql'
Edit: typo
I'm not entirely sure how mysql handles stdin, so one thing you can do that should work in one command is
ssh -i keylocation user@host 'cat - | mysql --user=root --password="pass" --host=ipaddress additional_content Additional_Content' < databasedump.sql
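Since mysql reads SQL from its standard input anyway, the cat - stage should not be strictly necessary; a shorter form along these lines (db_name standing in for the database name) ought to behave the same:
ssh -i keylocation user@host 'mysql --user=root --password="pass" --host=ipaddress db_name' < databasedump.sql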
However it's better to copy the files first with scp, and then import it with mysql. See Dan's answer.
I'm trying to use plink on Windows to create a tunnel to a Linux machine and have the dump file end up on the Windows machine. It would appear that this answer would work and is the basis of my question. But trying it out and looking at other answers I find that the dump file is still on the Linux machine. I'm trying this out in my local environment with Windows and Ubuntu 14.04 before moving to production. In Windows 8.1:
plink sam@192.168.0.20 -L 3310:localhost:3306
mysqldump --port=3310 -h localhost -u sam -p --all-databases > outfile.sql
I've tried swapping localhost in the second command for 127.0.0.1, adding -N to the tail of the tunnel setup, and dumping just one table, but despite my tunnel it's as if the first command is ignored. Other answers suggest adding more commands to the script so that I can use pscp to copy the file, but that also means re-connecting just to delete outfile.sql afterwards. Not ideal for getting other dumps from other servers. If that's the case, why use the first command at all?
What am I overlooking? In plink, the first command opens a session on the Linux server where I can run the mysqldump command, but the second command seems to ignore the first. What do you think?
You have several options:
Dump the database remotely to a remote file and download it to your machine afterwards:
plink sam@192.168.0.20 "mysqldump -u sam -p --all-databases > outfile.sql"
pscp sam@192.168.0.20:outfile.sql .
The redirect > is inside the quotes, so you are redirecting mysqldump on the remote machine to the remote file.
This is probably the easiest solution. If you compress the dump before downloading, it would even be the fastest one, particularly if you connect over a slow network.
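A hedged sketch of that compressed variant, assuming gzip is available on the remote Linux machine:
plink sam@192.168.0.20 "mysqldump -u sam -p --all-databases | gzip > outfile.sql.gz"
pscp sam@192.168.0.20:outfile.sql.gz .
Decompress locally (for example with 7-Zip or gunzip) before importing.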
Execute mysqldump remotely, but redirect its output locally:
plink sam@192.168.0.20 "mysqldump -u sam -p --all-databases" > outfile.sql
Note that the redirect > is outside of the quotes, compared to the previous case, so you are redirecting the output of plink, i.e. the output of the remote shell, which contains the output of the remote mysqldump.
Tunnel connection to the remote MySQL database and dump the database locally using a local installation of MySQL (mysqldump):
plink sam@192.168.0.20 -L 3310:localhost:3306
In a separate local console (cmd.exe):
mysqldump --port=3310 -h localhost -u sam -p --all-databases > outfile.sql
In this case nothing is running remotely (except for a tunnel end).
Is it possible to dump a database from a remote host through an SSH connection and have the backup file end up on my local computer?
If so how can this be achieved?
I am assuming it will be some combination of piping output from the ssh to the dump, or vice versa, but I can't figure it out.
This will dump, compress, and stream the output over SSH into your local file:
ssh -l user remoteserver "mysqldump -mysqldumpoptions database | gzip -3 -c" > /localpath/localfile.sql.gz
Starting from @MichelFeldheim's solution, I'd use:
$ ssh user@host "mysqldump -u user -p database | gzip -c" | gunzip > db.sql
ssh -f user@server.com -L 3306:server.com:3306 -N
then:
mysqldump -h 127.0.0.1 -u user -p --all-databases > backup.sql
assuming you also do not have MySQL running locally. If you do, you can adjust the port to something else.
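For instance, a hypothetical variant that forwards local port 3307 instead (and uses -h 127.0.0.1 to force a TCP connection, so the dump really goes through the tunnel), assuming MySQL on the server listens on its own loopback port 3306:
ssh -f user@server.com -L 3307:127.0.0.1:3306 -N
mysqldump -h 127.0.0.1 -P 3307 -u user -p --all-databases > backup.sql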
I have created a script to make it easier to automate mysqldump commands on remote hosts using the answer provided by Michel Feldheim as a starting point:
mysqldump-remote
The script allows you to fetch a database dump from a remote host with or without SSH and optionally using a .env file containing environment variables.
I plan to use the script for automated database backups. Feel free to create issues / contribute - hope this helps others as well!
I need to copy an entire database from a MySQL installation on a remote machine via SSH to my local machine's MySQL.
I know the SSH login and both the local and remote MySQL admin usernames and passwords.
Is this enough information, and how is it done?
From the remote server to the local machine:
ssh {ssh.user}@{remote_host} \
  'mysqldump -u {remote_dbuser} --password={remote_dbpassword} {remote_dbname} | bzip2 -c' \
  | bunzip2 -dc | mysql -u {local_dbuser} --password={local_dbpassword} -D {local_dbname}
That will dump the remote DB into your local MySQL via pipes:
ssh mysql-server "mysqldump --all-databases --quote-names --opt --hex-blob --add-drop-database" | mysql
Be careful with the accounts in the mysql.user table: --all-databases also dumps the mysql system schema, so the import will overwrite the local users and grants.
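A quick, hedged way to check what you are about to overwrite, and to reload the grant tables after the import (credentials via the usual -u/-p flags or the ~/.my.cnf described below):
mysql -e "SELECT User, Host FROM mysql.user;"   # list local accounts before/after the import
mysql -e "FLUSH PRIVILEGES;"                    # reload the grant tables after restoring the mysql schema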
Moreover, to avoid typing users and passwords for mysqldump and mysql on the local and remote hosts, you can create a file ~/.my.cnf:
[mysql]
user = dba
password = foobar
[mysqldump]
user = dba
password = foobar
See http://dev.mysql.com/doc/refman/5.1/en/option-files.html
Try reading here:
Modified from http://www.cyberciti.biz/tips/howto-copy-mysql-database-remote-server.html, because I prefer to use .sql as the extension for SQL files:
Usually you run mysqldump to create a database copy and backups as
follows:
$ mysqldump -u user -p db-name > db-name.sql
Copy the db-name.sql file to the remote MySQL server using sftp/ssh:
$ scp db-name.sql user@remote.box.com:/backup
Restore database at remote server (login over ssh):
$ mysql -u user -p db-name < db-name.sql
Basically you'll use mysqldump to generate a dump of your database, copy it to your local machine, then pipe the contents into mysql to regenerate the DB.
You can copy the DB files themselves, rather than using mysqldump, but only if you can shutdown the MySQL service on the remote machine.
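A rough sketch of that approach, assuming the default datadir /var/lib/mysql, the same MySQL version on both machines, and sudo rights on both:
ssh user@remote 'sudo systemctl stop mysql'      # stop the remote server so its files are consistent
sudo systemctl stop mysql                        # stop the local server before overwriting its datadir
rsync -avz user@remote:/var/lib/mysql/ /var/lib/mysql/   # needs read access to the remote datadir (often root)
sudo chown -R mysql:mysql /var/lib/mysql
sudo systemctl start mysql
ssh user@remote 'sudo systemctl start mysql'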
I would recommend the XtraBackup tool by Percona. It has support for hot copying data via SSH and has excellent documentation. Unlike mysqldump, this will copy all elements of the MySQL instance including user permissions, triggers, replication settings, etc.
ssh into the remote machine
make a backup of the database using mysqldump
transfer the file to local machine using scp
restore the database to your local mysql
As per the title, I need to import, but PG Backups gives me strict PostgreSQL SQL that doesn't work with MySQL, with an unspecified encoding that I guess is UTF-16. Using db:pull takes ages and errors out before finishing.
I'd appreciate any suggestion. Thanks.
Set up PostgreSQL locally, use PG backups to copy the data from Heroku to your local machine, then pg_restore to import it into your new local PostgreSQL. Then you can copy it from PostgreSQL to MySQL or SQLite locally without having to worry about timeouts. Or, since you'd have a functional PostgreSQL installation after that, just start developing on top of PostgreSQL so that your development stack better matches your deployment stack; developing and deploying on the same database is a good idea.
You're probably getting binary dumps (i.e. pg_dump -Fc) from Heroku, that would explain why the dump looks like some sort of UTF-16 nonsense.
You can use the pgbackups addon to export the database dump
$ heroku addons:add pgbackups # To install the addon
$ curl -o latest.dump `heroku pgbackups:url` # To download a dump
Heroku has instructions on how to do this: https://devcenter.heroku.com/articles/heroku-postgres-import-export#restore-to-local-database, which boil down to:
$ pg_restore --verbose --clean --no-acl --no-owner -h localhost -U myuser -d mydb latest.dump
where myuser is the current user and mydb is the current database.
If you're using Postgres.app, it's pretty trivial to copy your production database locally. You can leave out -U myuser unless you have configured it otherwise, and create a database by running $ psql -h localhost and then CREATE DATABASE your_database_name; (from the Postgres.app documentation). Then run the above command and you're set.
Follow these 4 simple steps in your terminal (Heroku Dev Center):
Install the Heroku Backup tool:
$ heroku addons:add pgbackups
Start using it:
$ heroku pgbackups:capture
Download the remote db on Heroku (to your local machine) using curl:
$ curl -o latest.dump `heroku pg:backups public-url`
Load it*:
$ pg_restore --verbose --clean --no-acl --no-owner -h localhost -U YOUR_USERNAME -d DATABASE_NAME latest.dump
Get your username and choose the desired database from your config/database.yml file.
DATABASE_NAME can be your development/test/production db (Ex. mydb_development)
That's it!
UPDATE: Updated to the new interface.