Cannot set tcp_keepalive_time on a Google Cloud Compute Engine instance

We are running a Node.js server that needs to connect to a MySQL database. We hosted our database on Amazon RDS, but now we've moved it over to Google Cloud SQL, and we're having trouble with the server randomly dropping the connection after 10 minutes.
Apparently that's a feature, not a bug, and the workaround is setting a low TCP keepalive on the machine we're connecting from, as described here: https://cloud.google.com/sql/docs/diagnose-issues
The code should be:
echo 'net.ipv4.tcp_keepalive_time = 60' | sudo tee -a /etc/sysctl.conf
sudo /sbin/sysctl --load=/etc/sysctl.conf
Unfortunately, when running the code I get:
sysctl: cannot stat /proc/sys/net/ipv4/tcp_keepalive_time: No such file or directory
We have root access to this machine, but we can't even manually create a file named tcp_keepalive_time in that directory.
We're extremely puzzled, as the solution comes from the official Google Cloud SQL docs and should therefore work as described.
Has anyone got any insights to share? Thanks in advance :)

Self-answer:
You can't access the filesystem as admin (apparently) from the web/cloud console.
We used gcloud auth (from the gcloud SDK) to log in from the terminal, PuTTYgen to create an SSH key, and then PuTTY to SSH into the machine from a proper SSH client (instead of the cloud SSH console), and sure enough it worked.
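For reference, once you're in over a proper SSH session, the change can be applied and verified with standard sysctl commands (nothing here is specific to GCE):

# Apply the setting from /etc/sysctl.conf and read it back
sudo /sbin/sysctl --load=/etc/sysctl.conf
sysctl net.ipv4.tcp_keepalive_time
# Should print: net.ipv4.tcp_keepalive_time = 60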
Weird, hope this helps someone else with the same issue!

Related

MySQL server not showing proper databases from Ubuntu server

I'm using WSL2 on a Windows machine. I want to view the databases I have on my MySQL server in Ubuntu in a GUI such as MySQL Workbench (on Windows), but it seems the two are not linked. In the pictures provided you can see that when I log in as root, it displays different databases; I also use different passwords for root on the two servers. When I try to use the root password from the Ubuntu server in Workbench, I get the error that I cannot connect to the database server.
(Screenshots: Ubuntu databases, MySQL Workbench databases, MySQL Workbench config, MySQL Workbench error.)
UPDATE 2022
I found myself with this same need and came across a good resource that tackles the issue rather nicely. Funnily enough, the solution itself predates this question.
Long story short, check the following GitHub repository. Instructions are available and I can confirm it works on Windows 10.0.19041.1415 and WSL2.
https://github.com/shayne/go-wsl2-host
========================================================
WSL doesn't use the same IP as Windows, meaning you can't access it using localhost. Also, the WSL IP changes every time you boot it, meaning that a saved connection will only work until the next restart.
In the sister community Super User this has been discussed and some workarounds are available, but I can't tell whether they will work specifically with MySQL Workbench, as they often require you to use PowerShell/CMD.
Please refer to the following discussions, which also provide further sources on the topic (there is one in particular that might be useful if you are running Windows 10 Pro):
Make IP address of WSL2 static
localhost and 127.0.0.1 working but not ip address in wsl windows 10
There are several requests to allow setting the WSL IP statically, so we could register it as a host in the Windows hosts file and use that alias instead of the IP while setting up a connection (or use the IP itself, since it would be static anyway), but AFAIK that is not ready yet.
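Until then, a manual stopgap is possible. Here is a rough sketch, run from inside WSL, that appends the current WSL IP to the Windows hosts file under a made-up alias (wsl2.local is hypothetical, and writing to the hosts file normally requires an elevated/admin context on the Windows side):

# Grab the first IP reported by WSL and register it as a hosts-file alias.
# "wsl2.local" is an arbitrary name; use whatever you register in Workbench.
WSL_IP=$(hostname -I | awk '{print $1}')
echo "$WSL_IP wsl2.local" >> /mnt/c/Windows/System32/drivers/etc/hosts

Note that this has to be re-run (and the stale entry removed) after every reboot, which is exactly the chore the go-wsl2-host tool above automates.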
After reading the answer from #Jetto, I thought you could create a batch file like this:
@ECHO OFF
REM Rewrite the stored WSL IP in Workbench's connections.xml to the current one.
wsl export wsl=$(hostname -I); sed -i -e "s/172.[0-9]*.[0-9]*.[0-9]*/${wsl/ /}/g" /mnt/c/Users/*username*/AppData/Roaming/MySQL/Workbench/connections.xml
This will replace the IP address in connections.xml with the current IP address of your WSL instance (relying on the fact that it starts with 172.).
If you start MySQL Workbench after running this script, you should be able to connect to MySQL (or MariaDB) which is running in the WSL2 session.
Disclaimer: I am not responsible for the fact that you did not make a backup of the file connections.xml 😉
P.S. In case you wonder: yes, this instance on my computer uses port 3356, but 3306 should work too if you do not have a local MySQL running.

Connecting WordPress on Google Cloud Compute to CloudSQL DB

I've tried and tried to get this to work, to no avail.
I have WordPress running on Google Compute Engine, and I have my database on Google CloudSQL. Both are in the same project, and I have managed to connect to MySQL via the CloudSQL Proxy with:
./cloud_sql_proxy -dir=/cloudsql -instances=[CLOUDSQL INSTANCE CONNECTION] & mysql -u [CLOUDSQL USER] -S /cloudsql/[CLOUDSQL INSTANCE CONNECTION]
This brings up the mysql prompt, where I can show my databases over that remote connection.
I am not sure if I need to put something in my wp-config.php file to pick up the CloudSQL database, or what.
I already have the scope set to allow CloudSQL access, and I am able to actually connect from GCE over to the CloudSQL DB, but I am not sure how to get WordPress to access the DB.
I saw this here: Connecting Google Cloud SQL with Wordpress on Google Compute Engine, but it didn't help me because I wasn't sure exactly what needed to be done.
I would be EXTREMELY grateful for any help.
Although you use Google Compute Engine instead of Google App Engine to host your WordPress, your wp-config.php should be very similar to the code in https://github.com/GoogleCloudPlatform/appengine-php-wordpress-starter-project/blob/master/wp-config.php, as described in http://googlecloudplatform.github.io/appengine-php-wordpress-starter-project/. You should set DB_HOST to ":/cloudsql/[CLOUDSQL INSTANCE CONNECTION]".
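For example, assuming a default install path of /var/www/html (adjust to wherever your WordPress lives), the relevant define can be patched in place with something like:

# /var/www/html is an assumption; the placeholder is the same one used above.
sudo sed -i "s|define( *'DB_HOST',.*|define('DB_HOST', ':/cloudsql/[CLOUDSQL INSTANCE CONNECTION]');|" /var/www/html/wp-config.php

Keep the CloudSQL Proxy running (e.g., via a startup script) so the /cloudsql socket exists whenever WordPress handles a request.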

Download RDS snapshot

I recently downgraded my EC2 instance. I can no longer connect to RDS. I think it might be that the internal IP is different and now the logins are attached to that specific IP. I haven't been able to figure it out. I would like to be able to get a backup from the snapshot. Is there a way to download it through AWS?
You can't download an RDS snapshot. You can however connect to it and export your databases. Downgrading your instance should not affect connectivity unless you had set up your security groups incorrectly (Opening ports to an IP instead of another security group).
The accepted answer is not up-to-date anymore. Instead of using command line tools, you can use the AWS console.
Navigate to RDS -> Snapshots -> Manual/System -> Select Snapshot -> Actions -> Export to S3
Going through S3 is common in most production environments, as you won't have direct access to the DB instance.
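If you'd rather script it, the same export can be started from the AWS CLI. A sketch, where the bucket, IAM role, and KMS key are placeholders you must create beforehand (the export feature requires all three), and note the export lands in S3 as Parquet files, not a raw dump:

aws rds start-export-task \
    --export-task-identifier my-snapshot-export \
    --source-arn arn:aws:rds:us-east-1:123456789012:snapshot:my-snapshot \
    --s3-bucket-name my-export-bucket \
    --iam-role-arn arn:aws:iam::123456789012:role/my-export-role \
    --kms-key-id my-kms-key-id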
In addition to datasage's answer:
As an option for a production instance, you can create a read-only replica in RDS and make dumps from that replica. This way you avoid freezing the production DB.
We use this scheme for PostgreSQL + pg_dump. Hope it will be helpful to somebody else too.
I use:
pg_dump -v -h RDS_URL -Fc -o -U username dbname > your_dump.sql
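Since -Fc writes a custom-format archive, the matching restore is pg_restore rather than psql. A sketch, assuming the target database already exists locally:

# Restore the custom-format dump into a local database.
pg_restore -v -h localhost -U username -d dbname your_dump.sql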
I also needed to do this, so I created a dump of the DB (MySQL) by logging into my app server, which has permission to access the DB. I then downloaded the dump to my local machine using scp.
I used:
mysqldump -uroot -p -h<HOST> --single-transaction <DBNAME> > output.sql
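The scp step would then look something like this, run from the local machine (user, host, and path are placeholders):

# Pull the dump down from the app server to the current directory.
scp appuser@app-server:output.sql .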
Another option is to share your snapshot if you don't need to download it and just want to share it with a different AWS account ID.
It sounds like your RDS instance is within a VPC, inside a private subnet with a security group and ACL. The only way to solve your issue is to take a snapshot and create a new DB instance from it within the default VPC, where all connections are allowed. After that you can take a classic backup using a DB client or the CLI.

Update my remote MySQL database with my local MySQL database

I have a local Perl script that does a lot of parsing of web pages and then successfully updates my local MySQL database (WAMP server). I now want to send this local data to my remote server, but remotely connecting to my database isn't allowed with my hosting company. Unfortunately I never thought of that problem.
So, I now need to find an automated way to update my remote server (every 15mins). I mistakenly thought I could just edit my Perl script with the details of the remote server.
I am aware that I could use CGI or PHP to do the parsing on the server, but I really want to keep the parsing local for now.
Summary:
Local MySQL database -> remote MySQL database every 15mins ??
Any ideas what I can do?
Thanks :-)
If replication is not an option but you can still establish an SSH connection from the local box to the remote box, then:
1. Run mysqldump to export the data into a file (see http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_where).
2. scp the file to the remote box.
3. Import it there:
mysql -u username -p database_name < dumpfile.sql
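Put together, the whole round trip could be a small script run from the local box. A sketch; hosts, users, and database names are placeholders:

#!/bin/sh
# Dump the local DB, copy it over SSH, and load it on the remote box.
# For unattended runs you would use a ~/.my.cnf and SSH keys instead of
# the interactive password prompts below.
mysqldump -u localuser -p mydb > /tmp/mydb_dump.sql
scp /tmp/mydb_dump.sql remoteuser@remote-box:/tmp/mydb_dump.sql
ssh -t remoteuser@remote-box 'mysql -u remoteuser -p mydb < /tmp/mydb_dump.sql'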
If your server does not accept remote connections to MySQL, you can create an SSH tunnel. Then you can apply the replication solution proposed by matcheek.
Here is a hint: http://realprogrammers.com/how_to/set_up_an_ssh_tunnel_with_putty.html
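If you're on Linux/macOS rather than PuTTY, the plain OpenSSH equivalent is a sketch like this (3307 is an arbitrary local port):

# Forward local port 3307 to MySQL (3306) on the remote host.
ssh -N -L 3307:127.0.0.1:3306 user@remote-server &
# Then connect through the tunnel as if the database were local:
mysql -h 127.0.0.1 -P 3307 -u username -p database_name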
Based on the responses I've received, I think the answer to my original question is to stop using a cheap shared hosting company (no remote access to the server, no cron jobs, etc.) and start using a VPS hosting company. That will give me the freedom to remotely connect to my server, etc.
Thanks again to those who replied.
From how you described the problem, replication seems to be the way to go:
http://dev.mysql.com/doc/refman/4.1/en/replication-howto.html
Using a cron job could be another option: it would read a file from your local machine and import the data into the remote box.
I suggest the following:
1. On every local run, also write the SQL statements (sans SELECTs) that you run against your copy of the DB into a file.
2. On your WAMP server, create a small PHP script that gives back the oldest file from step 1 (with some auth, of course).
3. On your remote server, run a cron job that fetches this file from your local server, runs the SQL against the DB, and then acknowledges it.
4. On acknowledgement, your WAMP server drops that file and gives back the next one.
While this seems complicated, it allows for a restart after connectivity loss, something that I consider important.
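A hedged sketch of what the remote cron job from step 3 might look like (the URL, script name, and credentials are all hypothetical, the WAMP box must be reachable from the remote server, and the acknowledgement call is left out for brevity):

# Runs every 15 minutes; fetches the oldest statement file and applies it.
# Storing the password in ~/.my.cnf would be safer than putting it here.
*/15 * * * * curl -s https://your-wamp-host/next_sql.php | mysql -u dbuser -p'SECRET' dbname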

Execute shell command over MySQL on remote host

Is it possible to log in to a remote MySQL machine and execute commands using 'system' on the remote machine?
I can log into the remote machine, but commands using 'system' are executed on my local machine.
Thanks indeed!
I'm using mysql to connect from 'Host1' to 'Host2' with the command:
mysql -uUsername -p data_base_name -h Host2
When I execute 'system hostname' after I'm connected, I get 'Host1'.
I cannot log into my remote host using SSH, and I don't know why. I need to do some log analysis, and the only option I have is to connect to that machine using mysql. I can connect to that machine!
As far as I know, this is definitely not possible. It's far beyond the scope of MySQL, and there would be immense security implications if it were.
I don't think there is an alternative to getting SSH (or some other service that might help) running again.
Consider doing a SELECT ... INTO OUTFILE and writing script code to a place where it will be executed on the server. For example, if mysqld is running as root on the server, you may be able to add something to /etc/rc2.d, which gets executed during boot.
Alternatively, if there is a file which is used as a source for scheduling tasks, you may be able to write to that, again using SELECT ... INTO OUTFILE.
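For illustration only, the mechanism looks like this (heavily caveated: INTO OUTFILE refuses to overwrite an existing file, the file is created with the mysqld process's privileges, and secure_file_priv may restrict or forbid the target path entirely):

# Writes one line of script text to a file on Host2, the database server, via SQL.
mysql -u username -p -h Host2 -e "SELECT 'hostname' INTO OUTFILE '/tmp/probe.sh'"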
system runs local commands on your box. If you need to do anything with the logs, contact your hoster to provide a way to download them or access them.