Wait for SSH tunnel before continuing a script - mysql

I have a script that dumps data from a cloud foundry db, and it works in the following way:
cf ssh -L 33001:db.host:3306 --skip-remote-execution App &
TUNNEL_PID=$!
mysqldump --protocol TCP --port=33001 ..... db_name > /tmp/my-db-dump.sql
kill $TUNNEL_PID
The problem is that mysqldump fails with
mysqldump: Got error: 2003: Can't connect to MySQL server on 'localhost' (61) when trying to connect
I expect the problem is that the tunnel is not established yet. When I add a sleep 5 before mysqldump, everything works, but I don't want to rely on an arbitrary 5 seconds. Is it possible to wait for the tunnel to be established?
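One way to do that (a sketch, not from the original thread) is to poll the forwarded port until it accepts a TCP connection instead of sleeping a fixed time. This assumes a netcat (nc) that supports -z; the port and app name are taken from the question:
cf ssh -L 33001:db.host:3306 --skip-remote-execution App &
TUNNEL_PID=$!
# Poll the forwarded port for up to ~15 seconds instead of a fixed sleep.
for i in $(seq 1 30); do
    nc -z 127.0.0.1 33001 && break   # succeeds once the tunnel is listening
    sleep 0.5
done
mysqldump --protocol TCP --port=33001 db_name > /tmp/my-db-dump.sql
kill $TUNNEL_PID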

Can you run mysqldump via the ssh command, instead of opening a tunnel?
Mysqldump will write to its stdout, which will be transferred back to your client host via the ssh command.
ssh App "mysqldump db_name" > /tmp/my-db-dump.sql
Or you could even dump to a compressed file on the server, and then fetch the dump file with scp. That will make the transfer faster.
ssh App "mysqldump db_name | gzip -c > /tmp/my-db-dump.sql.gz"
scp App:/tmp/my-db-dump.sql.gz .
ssh App "rm /tmp/my-db-dump.sql.gz"
This is untested, but I hope it gives you some ideas to experiment with.
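Note that the question uses cf ssh rather than plain ssh; the same idea should carry over with cf ssh's --command flag, assuming mysqldump is available inside the app container (untested):
cf ssh App --command "mysqldump db_name" > /tmp/my-db-dump.sql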

Related

How to copy a SQL file from one server to another server's MySQL database

I tried to copy a SQL file from one server into another server's MySQL database:
ssh -i keylocation user@host 'mysql --user=root --password="pass" --host=ipaddress additional_content Additional_Content' | < databasedump.sql
databasedump.sql is a file on server A. I want to copy the data from that dump file into a database on server B. I tried to connect to that server via ssh (I need the key file for that) and then copy the data, but when I run this command in the console, nothing happens. Any help?
Are you able to secure copy the file over to server B first, and then ssh in for the mysql dump? Example:
scp databasedump.sql user@server-B:/path/to/databasedump.sql
ssh -i keylocation user@host 'mysql --user=root --password="pass" --database=db_name < /path/to/databasedump.sql'
Edit: fixed a typo
I'm not entirely sure how mysql handles stdin, so one thing you can do that should work in one command is
ssh -i keylocation user@host 'cat - | mysql --user=root --password="pass" --host=ipaddress additional_content Additional_Content' < databasedump.sql
However, it's better to copy the file over first with scp and then import it with mysql. See Dan's answer.
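For what it's worth, the mysql client does read SQL from stdin when it isn't attached to a terminal, so the cat - should not be needed; a simpler untested variant, with db_name standing in for the database name:
ssh -i keylocation user@host 'mysql --user=root --password="pass" --host=ipaddress db_name' < databasedump.sql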

mysqldump over SSH

I need to do a mysqldump directly on a remote server with SSH.
The main reason is that I do not have enough space on server to do it normally then copy it over with SSH.
Somehow I need to pipe the output of mysqldump command to SSH.
Ideally this would be a one line command.
Thanks
You can try:
ssh -t user@server \
"mysqldump \
-B database \
--add-drop-table \
--ignore-table=database.logs" > ~/mydatabase.sql
Notice that in this example you don't need to log in to MySQL interactively, and you don't need sudo permissions.
I also added the --add-drop-table and --ignore-table options, since these are pretty common.
You can change > ~/mydatabase.sql into | gzip -9 > ~/mydatabase.sql.gz to compress the file.
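Putting that together, the compressed variant would look like this (same placeholder names as above):
ssh -t user@server \
"mysqldump \
-B database \
--add-drop-table \
--ignore-table=database.logs" | gzip -9 > ~/mydatabase.sql.gz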
You want to store the dump file on the remote system, not on the server running MySQL.
If you execute the mysqldump command through ssh, the dump file will end up on the server you connected to.
Instead, you can connect to MySQL through an SSH tunnel and run mysqldump on your own machine.
I'm not sure which Linux tools support SSH tunneling, but on Windows PuTTY can create one.
https://www.linode.com/docs/databases/mysql/create-an-ssh-tunnel-for-mysql-remote-access/
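For what it's worth, the stock OpenSSH client on Linux can create the same kind of tunnel with -L; a minimal sketch, where the host names and the local port 3307 are placeholders:
# Open a background tunnel: local port 3307 forwards to port 3306 on the MySQL host.
ssh -f -N -L 3307:127.0.0.1:3306 user@mysql-server
# Dump through the tunnel from the machine where the file should end up.
mysqldump --protocol TCP -h 127.0.0.1 -P 3307 -u dbuser -p db_name > ~/mydatabase.sql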

Import mysql dump from local to SSH

I can't find a solution to this particular problem.
I have a MySQL dump on my computer and I want to import it into a web server using SSH.
How do I do that?
Can I add the SSH connection to the mysql command?
Edit:
I did it with scp:
scp -r -p /Users/me/files/dump.sql user@server:/var/www/private
mysql -hxxx -uxxx -pxxx dbname < dump.sql
As the comment above says, the simplest solution is to scp the whole dump file up to your server, and then restore it normally. But that means you have to have enough free disk space to store the dump file on your webserver. You might not.
An alternative is to set up a temporary ssh tunnel to your web server. Read https://www.howtogeek.com/168145/how-to-use-ssh-tunneling/ for full instructions, but it would look something like this:
nohup ssh -L 8001:localhost:3306 -N user@webserver >/dev/null 2>&1 &
This means when I connect to port 8001 on my local host (you can pick any unused port number here), it's really being given a detour through the ssh tunnel to the webserver, where it connects to port 3306, the MySQL default port.
In the example above, user@webserver is just a placeholder, so you must replace it with your username and your webserver hostname.
Then restore your dump file as if you're restoring to a hypothetical MySQL instance running on port 8001 on the local host. This way you don't have to scp the dump file up to your webserver. It will be streamed up to the webserver via the ssh tunnel, and then applied to your database directly.
pv -pert mydumpfile.sql | mysql -h 127.0.0.1 -P 8001
You have to specify 127.0.0.1, because the MySQL client uses "localhost" as a special name for a non-network connection.
I like to use pv to read the dumpfile, because it outputs a progress bar.
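One caveat: the nohup'ed tunnel from the first step keeps running in the background after the restore finishes, so you may want to close it. A hedged way to do that, assuming nothing else matches the same command line:
pkill -f "ssh -L 8001:localhost:3306"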
You can try this solution for your problem:
Log in using the SSH details:
SSH host name: test.com
SSH user: root
SSH password: 123456
Connect via SSH:
ssh root@test.com
enter password: 123456
Log in to MySQL:
mysql -u [MySQL User] -p
Enter password: [MySQL Password]
Use the following commands to import the database:
show databases;    -- list the databases
use databasename;  -- select the database to import into
source path;       -- path to the dump file, e.g. /home/databased/import.sql
I hope this helps.
Yes, you can do it with one command, just use 'Pipeline' or 'Process Substitution'
For your example with 'Pipeline':
ssh user@server "cat /Users/me/files/dump.sql" | mysql -hxxx -uxxx -pxxx dbname
or use 'Process Substitution':
mysql -hxxx -uxxx -pxxx dbname < <(ssh user@server "cat /Users/me/files/dump.sql")
Example 2, get database dump from remote server1 and restore on remote server2 with 'Pipeline':
ssh user@server1 "mysqldump -uroot -p'xxx' dbname" | ssh user@server2 "mysql -uroot -p'xxx' dbname"
or 'Process Substitution':
ssh user@server2 "mysql -uroot -p'xxx' dbname" < <(ssh user@server1 "mysqldump -uroot -p'xxx' dbname")
Additional links:
what is 'Process Substitution':
http://www.gnu.org/software/bash/manual/html_node/Process-Substitution.html
what is 'Pipeline':
http://www.gnu.org/software/bash/manual/html_node/Pipelines.html

Automate mysqldump to local Windows computer

I'm trying to use plink on Windows to create a tunnel to a Linux machine and have the dump file end up on the Windows machine. It would appear that this answer would work and is the basis of my question. But trying it out and looking at other answers I find that the dump file is still on the Linux machine. I'm trying this out in my local environment with Windows and Ubuntu 14.04 before moving to production. In Windows 8.1:
plink sam@192.168.0.20 -L 3310:localhost:3306
mysqldump --port=3310 -h localhost -u sam -p --all-databases > outfile.sql
I've tried swapping localhost in the second command for 127.0.0.1, adding -N to the tail of the tunnel setup, and dumping just one table, but despite my tunnel it's as if the first command is ignored. Other answers suggest adding more commands to the script so that I can use pscp to copy the file, but that also means re-connecting just to delete outfile.sql, which is not ideal when taking dumps from several servers. If that's the case, why use the first command at all?
What am I overlooking? In plink, the first command opens a shell on the Linux server where I can run the mysqldump command, but the second command seems to ignore the tunnel. What do you think?
You have several options:
Dump the database remotely to a remote file and download it to your machine afterwards:
plink sam@192.168.0.20 "mysqldump -u sam -p --all-databases > outfile.sql"
pscp sam@192.168.0.20:outfile.sql .
The redirect > is inside the quotes, so you are redirecting mysqldump on the remote machine to the remote file.
This is probably the easiest solution. If you compress the dump before downloading, it would be even the fastest, particularly if you connect over a slow network.
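A compressed variant of this option might look like the following sketch, assuming gzip exists on the server:
plink sam@192.168.0.20 "mysqldump -u sam -p --all-databases | gzip > outfile.sql.gz"
pscp sam@192.168.0.20:outfile.sql.gz .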
Execute mysqldump remotely, but redirect its output locally:
plink sam@192.168.0.20 "mysqldump -u sam -p --all-databases" > outfile.sql
Note that the redirect > is outside the quotes, compared to the previous case, so you are redirecting the output of plink, i.e. the output of the remote shell, which contains the output of the remote mysqldump.
Tunnel connection to the remote MySQL database and dump the database locally using a local installation of MySQL (mysqldump):
plink sam@192.168.0.20 -L 3310:localhost:3306
In a separate local console (cmd.exe):
mysqldump --port=3310 -h localhost -u sam -p --all-databases > outfile.sql
In this case nothing is running remotely (except for a tunnel end).
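To actually automate this (the question's goal), any of the options can go into a batch file run by Task Scheduler; a sketch of option 2, where -batch suppresses interactive prompts and the inline passwords are assumptions (key-based auth or a .my.cnf would be cleaner):
rem backup.bat - nightly dump over plink; host, user and passwords are placeholders
plink -batch -pw SSHPASS sam@192.168.0.20 "mysqldump -u sam -pDBPASS --all-databases" > outfile.sql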

mysqldump from remote host

Is it possible to dump a database from a remote host through an ssh connection and have the backup file end up on my local computer?
If so, how can this be achieved?
I am assuming it will be some combination of piping output from ssh to the dump or vice versa, but I can't figure it out.
This will dump, compress, and stream the output over ssh into your local file:
ssh -l user remoteserver "mysqldump -mysqldumpoptions database | gzip -3 -c" > /localpath/localfile.sql.gz
Starting from @MichelFeldheim's solution, I'd use:
$ ssh user@host "mysqldump -u user -p database | gzip -c" | gunzip > db.sql
ssh -f user@server.com -L 3306:server.com:3306 -N
then:
mysqldump -h 127.0.0.1 -P 3306 db_name > backup.sql
assuming you do not also have mysql running locally. If you do, you can adjust the port to something else.
I have created a script to make it easier to automate mysqldump commands on remote hosts using the answer provided by Michel Feldheim as a starting point:
mysqldump-remote
The script allows you to fetch a database dump from a remote host with or without SSH and optionally using a .env file containing environment variables.
I plan to use the script for automated database backups. Feel free to create issues / contribute - hope this helps others as well!