I'm able to connect to my company's MySQL server via SSH in Terminal
ssh -L 3306:localhost:3306 me@jumphost.mycompany.com
mysql -h mysql.mycompany.com -ume -pmy_password
I'm struggling to find a way to do this in an R Script. Any suggestions appreciated.
If I try to connect using DBI (after connecting to ssh in Terminal):
con <- DBI::dbConnect(RMariaDB::MariaDB(),
                      host = "localhost",
                      user = "me",
                      password = "my_password")
I get this error: Error: Failed to connect: Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)
Use 127.0.0.1 instead of localhost.
If you use localhost, the client does not connect to port 3306 using TCP. It tries to connect to a UNIX socket, which only reaches an instance running on your client host.
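Before pointing a client at the forwarded port, it can help to confirm that something is actually listening on it over TCP. A quick probe, assuming the `nc` (netcat) utility is available:

```shell
# succeeds only if a TCP listener is bound on the forwarded port
if nc -z 127.0.0.1 3306; then
    echo "tunnel is up"
else
    echo "tunnel is down"
fi
```

If this reports "tunnel is down", the problem is the tunnel itself, not the MySQL credentials or driver.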
I answered the same question about MySQL in the past (Accessing SQL through SSH Tunnel). MariaDB is not MySQL, but in this case they should work the same.
Since you can successfully run the mysql CLI in the same session as the ssh port forwarding, consider running R at the command line as well, using its CLI, Rscript, or the interactive shell, R. Alternatively, try running the SSH port forwarding directly inside your R code using the command line caller, system.
The challenge you are encountering is that the SSH tunneling occurs in a different session from your R session. Your R environment must run in the same session as the SSH port forwarding.
Bash Command Line (using Rscript)
ssh -L 3306:localhost:3306 me@jumphost.mycompany.com
Rscript my_database_connect_script.R
# OR /path/to/R/installation/bin/Rscript my_database_connect_script.R
ssh -O cancel -L 3306:localhost:3306 me@jumphost.mycompany.com
R script (using system)
library(DBI)
library(RMariaDB)
# COMMAND LINE INTERFACE CALL
system("ssh -f -N -L 3306:localhost:3306 me@jumphost.mycompany.com") # -f -N backgrounds the tunnel so system() returns
# OPEN DB CONNECTION
con <- DBI::dbConnect(
  RMariaDB::MariaDB(),
  host = "mysql.mycompany.com", # SAME HOST AS SUCCESSFUL mysql CLI
  user = "me",
  password = "my_password"
)
dbGetQuery(con, "SHOW DATABASES")
# CLOSE DB CONNECTION
dbDisconnect(con)
# CANCEL PORT FORWARDING
system("ssh -O cancel -L 3306:localhost:3306 me@jumphost.mycompany.com")
Related
I have 2 servers - serverA and serverB, both have mysql server and mysql client. I have a reverse SSH tunnel set up from serverB to serverA so that I don't have to open ports up on server B to the internet. I access serverB from serverA by doing mysql --host 127.0.0.1 --port 50003
If I am logged into serverA and do mysql --host 127.0.0.1 --port 3306 I want the command line prompt to be me#serverA [dbname].
If I am logged into serverA and do mysql --host 127.0.0.1 --port 50003 I want the command line prompt to be me#serverB [dbname].
For these examples, I am always logged in to serverA and am connecting the mysql client to serverA or serverB.
Using the prompt directive in /etc/my.cnf on serverA and serverB,
if I do
[mysql]
prompt = \u@\h [\d]
then I get me@127.0.0.1 [dbname] on both of them.
If I do prompt = \u@serverA [\d] on serverA and prompt = \u@serverB [\d] on serverB, then I get me@serverA [dbname] whether I'm trying to connect to serverA or serverB.
So it looks like the command prompt is picking up the details for the server on serverA regardless of what I'm actually connecting to.
Is there any way I can make the prompt reflect what I'm actually connected to?
The prompt setting is a client option. For connections from serverA, the only config file that matters is the one on serverA. The reason you see 127.0.0.1 in your prompt for both connections is that, from the client's perspective, both connections go to 127.0.0.1.
The MySQL client provides a mechanism for specifying option groups (the command line argument --defaults-group-suffix), and you can use this to approximate what you are looking for. For your needs, you can add the following to your ~/.my.cnf file:
[clientserverA]
prompt = \u@serverA [\d]>
host = 127.0.0.1
[clientserverB]
prompt = \u@serverB [\d]>
host = 127.0.0.1
port = 50003
Using this, you can connect to MySQL using mysql --defaults-group-suffix=serverA or mysql --defaults-group-suffix=serverB. The client will use the appropriate host, port, and prompt for your connection based on the suffix you provide.
You can create a shell function that takes a suffix and creates the appropriate command line if you don't want to type that all out.
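For example, a small wrapper along these lines (the function name is just an illustration) saves retyping the long flag:

```shell
# mysql_to serverA  ->  mysql --defaults-group-suffix=serverA
mysql_to() {
    mysql --defaults-group-suffix="$1"
}
```

Then `mysql_to serverA` or `mysql_to serverB` picks up the matching host, port, and prompt from ~/.my.cnf.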
I'm trying to connect Tableau to a SQL database that I have hosted on pythonanywhere.com, but it's not working and I believe the reason is that my remote connection to pythonanywhere.com has no process listening on port 3306. In fact I think it's actively being killed, but I'm not sure if that's true.
Right now I'm on a Windows 10 Machine and I'm connecting through git bash in the following way:
ssh -L 3306:jonathanbechtel.mysql.pythonanywhere-services.com:3306 jonathanbechtel@ssh.pythonanywhere.com
After I do this I run the command netstat -an and see the following:
My understanding is that the TIME_WAIT status means that something locally killed the connection.
My understanding is that I need to have the local address 127.0.0.1:3306 to have a status of LISTENING in order for me to use the tunnel as a connection to anything else.
I've also connected to this database externally in a jupyter notebook, as well as through MySQL workbench, so I know it can be done in some capacity.
But I don't know why in this instance the connection is getting killed.
The command telnet 127.0.0.1 3306 says the connection worked:
UPDATE:
The problem is that when I try to connect through my PuTTY tunnel (via Tableau) I get the following error message:
[MySQL][ODBC 8.0(w) Driver]Access denied for user 'myusername'@'localhost' (using password: YES)
Invalid username or password.
However, if I run the following python code, I can connect:
import mysql.connector
import pandas as pd  # needed for read_sql_query below
import sshtunnel
sshtunnel.SSH_TIMEOUT = 5.0
sshtunnel.TUNNEL_TIMEOUT = 5.0
with sshtunnel.SSHTunnelForwarder(
    ('ssh.pythonanywhere.com'),
    ssh_username=info['username'], ssh_password=info['password'],
    remote_bind_address=(info['db_address'], 3306)
) as tunnel:
    connection = mysql.connector.connect(
        user=info['username'], password=info['db_password'],
        host=info['ssh_address'], port=tunnel.local_bind_port,
        database=info['db_name'],
    )
    df = pd.read_sql_query('SELECT * FROM classes', connection)
    connection.close()
Normally, the Database tool in PhpStorm works well, but with one particular provider (1u1.de) I'm currently having trouble getting it to work.
I can connect to the provider via SSH. To connect to the MySQL database, I have to use:
mysql --host=localhost --user=dbo123123123 -S /tmp/mysql5.sock --password='123123123';
That works well via the CLI on the server, but I haven't found a way to connect to this database from PhpStorm.
It seems to me that the socket connection may be the problem. Does anybody have a clue how to get this to work?
Part of the Solution (?!):
As a possible first step, I found that you can forward the remote socket to your local PC as a local socket this way:
ssh -nNT -L $(pwd)/yourLocal.sock:/var/run/mysqlREMOTEMYSQL.sock user@somehost
Source of Information
This shows me that the socket is established:
netstat -ln | grep mysql
unix 2 [ ACC ] STREAM LISTENING 3713865 /myFolder/mysql5.sock
But I'm still unable to connect to this socket with:
mysql -h localhost --protocol=SOCKET -u'username' -p'mypassword' -S /myFolder/mysql5.sock
I get this error:
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 95 "Operation not supported"
ssh -L /tmp/mysql.sock:/var/run/mysqld/mysqld.sock sshuser@remotehost
and then
mysql -h localhost --protocol=SOCKET -u'username' -p'mypassword' -S /tmp/mysql.sock
seems to work fine for me
Use SSH to set up a port forward; this will allow you to connect securely to your database without exposing it to the world.
With ssh, use the -L argument to establish the tunnel:
ssh -L <local_port>:<remote_host>:<remote_port> user@host
This will open <local_port> on your local machine and redirect all packets out the other side of the tunnel, destined for <remote_host>:<remote_port>.
In your case, you might want to try something like this:
ssh -L 3306:127.0.0.1:3306 user@mybox.1u1.de
After establishing the tunnel, you will be able to connect to the database through a local port.
From your local machine, not the 1u1 host,
mysql -u <user> -p --host 127.0.0.1 --port 3306
If this works properly, you should be able to configure PhpStorm to use the same address, 127.0.0.1:3306
The SSH tunnel will need to remain open the entire time you need to be connected to the database.
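If retyping the tunnel command becomes tedious, the forward can also live in your SSH client config. A sketch, where the host alias `1u1-db` and the user/host names are placeholders for your own:

```
Host 1u1-db
    HostName mybox.1u1.de
    User user
    LocalForward 3306 127.0.0.1:3306
    ServerAliveInterval 60
    ExitOnForwardFailure yes
```

With this in ~/.ssh/config, `ssh -N 1u1-db` opens the same tunnel; ServerAliveInterval helps keep an idle session from being dropped, and ExitOnForwardFailure makes ssh fail loudly if the local port is already taken.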
I have a remote webserver running with a MySQL database.
Right now I'm using SSH to do any server-side management, and I access MySQL often. I wondered if it's possible to make a script that would ssh into the server and, if run with "-sql" (subject to change) on the command line, would instead go into mysql.
What I've made so far:
#!/bin/bash
if [ "$1" == "-l" ]; then
    ssh user@192.168.0.101  # local address for privacy
    mysql -u root -p
else
    ssh user@192.168.0.101
fi
This results in an SSH session, and when it ends my computer tries to create a mysql connection on the local machine.
#!/bin/bash
if [ "$1" == "-l" ]; then
    ssh user@192.168.0.101 'mysql -u root -p'
else
    ssh user@192.168.0.101
fi
This results in a password request and then nothing. I'm assuming it's because ssh run with a command expects a response and then shuts down the connection.
Is there any way to do this? I realise it's not of any significant importance, but it is fun to play around with.
Thanks in advance
The mysql command only runs interactively if its input is a terminal. When you run ssh with a command argument, it doesn't normally allocate a pseudo-tty on the server, so mysql just processes its standard input without displaying a prompt.
Use the -t option to force this:
ssh -t user@192.168.0.101 'mysql -u root -p'
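Folding that fix back into the original script, a corrected sketch wrapped as a shell function (the address is the same placeholder used in the question):

```shell
# wrapper around the question's script, with -t added for the mysql branch
sqlssh() {
    if [ "$1" = "-l" ]; then
        # -t forces a pseudo-tty so mysql runs interactively on the server
        ssh -t user@192.168.0.101 'mysql -u root -p'
    else
        ssh user@192.168.0.101
    fi
}
```

Calling `sqlssh -l` drops you into mysql on the server; `sqlssh` alone gives a plain shell.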
One option you might want to consider for solving this kind of access problem is through the use of the tunneling facility in ssh:
ssh -l user -L 33306:192.168.0.101:3306 192.168.0.101
or maybe
ssh -l user -L 33306:127.0.0.1:3306 192.168.0.101
This creates a port on your local machine (33306) which tunnels to the mysql port (3306) on the remote machine.
Then on your local machine you run a local copy of mysql:
mysql --port=33306 -u root -p
which should connect through the tunneled port to your database.
Try it like this, feeding the mysql password with the command so you don't have to enter it:
ssh user@192.168.0.101 'mysql --user="root" --password="password" --database="database" --execute="My Query;"'
I also suggest you set up key-based ssh authentication, so you can avoid typing the ssh password every time as well.
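Setting up key-based authentication is a one-time step; a sketch, using an example key path (normally you'd use ~/.ssh/id_ed25519):

```shell
# remove any stale demo key, then generate a passphrase-less key pair
rm -f /tmp/demo_ed25519 /tmp/demo_ed25519.pub
ssh-keygen -t ed25519 -N "" -f /tmp/demo_ed25519 -q
# install the public key on the server (asks for the ssh password one last time):
# ssh-copy-id -i /tmp/demo_ed25519.pub user@192.168.0.101
```

After that, ssh (and the script above) will log in without prompting for a password.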
The Setup
I am currently using the Premium Wordpress Hosting provided by MediaTemple. I have a very large data-set to import and I was hoping to get direct access to the database via an SSH tunnel.
--------------                      ------------------             ------------
| My Machine | ---- SSH TUNNEL ---- | Hosting Server | ---- ? ---- | Database |
--------------                      ------------------             ------------
What Works
If I ssh into the Hosting Server and, from a shell there, connect to mysql like this, I am able to get into MySQL:
mysql -uuser -ppassword -h123.456.789.1 -P3308
What Does Not Work
However, if I try to connect to MySQL using the -L flag with SSH to create a tunnel, I am unable to connect to the server.
ssh me@hostingserver.net -L 7002:123.456.789.1:3308
From a shell on My Machine:
mysql -uuser -ppassword -h127.0.0.1 -P7002
I get the following error:
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0
From reading other answers (StackOverflow, StackOverflow), I have reasoned that the issue stems from the IP address with which the MySQL client tries to connect. I think that the IP address attached to the connection request, when executed on my machine, is not on the Database Server's white-list.
Is there any way to get direct access to the MySQL Database from My Machine? From a system administration perspective, I obviously have enough access to connect to the MySQL database from the shell, but I cannot run the client on My Machine. I have a very large dataset that I would like to transfer from My Machine to the Database, and I would also like to be able to access the database and execute SQL whenever I need to. This, together with the large dataset, pretty much eliminates the possibility of just using the source command from the MySQL client on the Hosting Server. What is the best workaround to give me something close to the ability to run SQL on the Database from My Machine?
I encountered roughly the same issue. That is, I simply could not connect to the MySQL server, even though I had successfully tunneled to the remote host.
TLDR: it was an iptables issue involving the loopback interface
In my situation, mysqld was running on the same VPS as sshd. However, the MySQL instance was bound only to 127.0.0.1 and listening on the default port. As you did, I confirmed that I could connect to the mysqld instance on the remote machine using the credentials used locally.
Here is the tunnel:
ssh -v -N -L 33306:127.0.0.1:3306 sshuser@sshanddbvps.org
Here is the connection string to the mysqld instance using the mysql client:
mysql -umysqluser -h127.0.0.1 -P 33306 -p
Even though ssh indicated that the connection was successful...
debug1: Connection to port 33306 forwarding to 127.0.0.1 port 3306 requested.
...the mysql client connection would error out after accepting the correct password with the message you mentioned:
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet'...
To check that data was flowing across the loopback interface, I logged into the remote server and ran three commands in three separate shells:
while true; do echo -e "HTTP/1.1 200 OK\n\n $(date)" | nc -l 127.0.0.1 1234; done
tcpdump -i lo src 127.0.0.1 or dst 127.0.0.1
nc 127.0.0.1 1234
After running the third, output from the second command appeared:
13:59:14.474552 IP localhost.36146 > localhost.1234: Flags [S], seq 1149798272, win 43690, options [mss 65495,sackOK,TS val 48523264 ecr 0,nop,wscale 7], length 0
But nothing indicating that packets were flowing in the reverse direction.
Inserting a rule in the INPUT chain of the firewall that allowed traffic from the default loopback address solved the issue:
iptables -I INPUT 4 -i lo -s 127.0.0.1 -j ACCEPT
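Note that a plain `iptables -I` inserts the rule again on every run. A guarded variant of the same rule (firewall configuration; requires root, so only a sketch here):

```
iptables -C INPUT -i lo -s 127.0.0.1 -j ACCEPT 2>/dev/null \
    || iptables -I INPUT 4 -i lo -s 127.0.0.1 -j ACCEPT
```

The -C flag checks whether the rule already exists, and the rule is only inserted when that check fails.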