RethinkDB cluster on different Google Compute Engine VMs

I am trying to create a RethinkDB cluster from multiple VMs on Google Compute Engine. The RethinkDB docs say to do this on the first machine:
$ rethinkdb --bind all
And then on the second machine to do this:
$ rethinkdb --join IP_OF_FIRST_MACHINE:29015 --bind all
I did this, and it seems to work; it returns:
Listening for intracluster connections on port 29015
But my admin panel does not show another server connected.
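For context, the two VMs also need to be able to reach each other on port 29015; a rough sketch of a Compute Engine firewall rule that would allow the intracluster traffic (the rule name and source range are placeholders for whatever fits your network):
$ gcloud compute firewall-rules create rethinkdb-intracluster --allow tcp:29015 --source-ranges 10.128.0.0/9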
I am pretty new to RethinkDB, but any help is appreciated. Thanks.

Related

How to see MongoDB hosted via AWS DocumentDB when using mongosqld on AWS EC2

Goal
I am trying to use MongoDB's BI Connector for Tableau, aka mongosqld. I have version 2.10, so here are the docs.
My long-term goal is to host mongosqld as a service on an AWS EC2 instance, and host MongoDB on AWS DocumentDB.
Background
A successful set of baby steps was:
Host MongoDB in a Docker container on my local machine via the mongo image
Manually run mongosqld on my local machine, without a schema
Connect to it via mysql from my local machine
This worked fine; I could see all of my databases via show databases;
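For reference, those local steps amounted to roughly the following (defaults used where possible):
docker run -d --name mongo -p 27017:27017 mongo
mongosqld                                  # by default samples localhost:27017 and listens on 127.0.0.1:3307
mysql --host 127.0.0.1 --port 3307 --protocol tcp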
My next set of steps was:
Host MongoDB in AWS DocumentDB
Host mongosqld on my EC2 instance at address 0.0.0.0:3307, without a schema
Enable TCP traffic on ports 3307 and 27017
Connect to it via mysql from my local machine
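The commands corresponding to these steps were along these lines (the cluster endpoint, credentials, and CA file are placeholders, and the exact TLS flag names may differ between mongosqld versions):
mongosqld --addr 0.0.0.0:3307 --mongo-uri "mongodb://DOCDB_CLUSTER_ENDPOINT:27017" --mongo-username DOCDB_USER --mongo-password DOCDB_PASSWORD --mongo-ssl --mongo-sslCAFile rds-combined-ca-bundle.pem
mysql --host EC2_PUBLIC_IP --port 3307 --protocol tcp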
When I use the mysql shell's show databases; command, I cannot see my databases, only information_schema and mysql.
Question
Given all of this information, does anyone here know what might have gone wrong? I am currently at a loss for what to try next.

Cannot set tcp_keepalive_time on google cloud compute engine instance

We are running a Node.js server that needs to connect to a MySQL database. We hosted our database on Amazon RDS, but now we've moved it over to Google Cloud SQL, and we're having trouble with the server randomly dropping the connection after 10 minutes.
Apparently that's a feature, not a bug, and the workaround is setting a low TCP keepalive on the machine we're connecting from, as described here: https://cloud.google.com/sql/docs/diagnose-issues
The code should be:
echo 'net.ipv4.tcp_keepalive_time = 60' | sudo tee -a /etc/sysctl.conf
sudo /sbin/sysctl --load=/etc/sysctl.conf
Unfortunately, when running the code I get:
sysctl: cannot stat /proc/sys/net/ipv4/tcp_keepalive_time: No such file or directory
We have root access to this machine, but we can't even manually create a file named tcp_keepalive_time in this folder.
We're extremely puzzled, as the solution comes from the official Google Cloud SQL docs and should therefore work as described.
Has anyone got any insights to share? Thanks in advance :)
Auto-answer:
You can't access the filesystem as admin (apparently) from the web/cloud console.
We used gcloud auth (from the gcloud SDK) to log in from the terminal, PuTTYgen to create an SSH key, and then PuTTY to SSH into the machine from a proper SSH client (instead of the cloud SSH console), and sure enough it worked.
Weird, hope this helps someone else with the same issue!
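For anyone who prefers the gcloud SDK's built-in SSH over PuTTY, an equivalent route looks roughly like this (instance name and zone are placeholders):
gcloud auth login
gcloud compute ssh INSTANCE_NAME --zone=ZONE
# then run the two sysctl commands above inside the VM, and verify with:
cat /proc/sys/net/ipv4/tcp_keepalive_time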

Connecting Wordpress on Google Cloud Compute to CloudSQL DB

I've tried and tried to get this to work, to no avail.
I have WordPress running on Google Compute Engine, and I have my database on Google CloudSQL. Both are in the same project, and I have managed to connect to MySQL via the CloudSQL Proxy with:
./cloud_sql_proxy -dir=/cloudsql -instances=[CLOUDSQL INSTANCE CONNECTION] & mysql -u [CLOUDSQL USER] -S /cloudsql/[CLOUDSQL INSTANCE CONNECTION]
This brings up the mysql prompt, where I can show my databases over that remote connection.
I am not sure if I need to put something in my wp-config.php file to pick up on the CloudSQL Database or what.
I already have the scope set to allow CloudSQL access, and I am able to actually connect from GCE over to the CloudSQL DB, but I am not sure how to get WordPress to access the DB.
I saw this here: Connecting Google Cloud SQL with Wordpress on Google Compute Engine, but it didn't help me because I wasn't sure exactly what needed to be done.
I would be EXTREMELY grateful for any help.
Although you use Google Compute Engine instead of Google App Engine to host your WordPress, the wp-config.php configuration should be very similar to the code in https://github.com/GoogleCloudPlatform/appengine-php-wordpress-starter-project/blob/master/wp-config.php, as described in http://googlecloudplatform.github.io/appengine-php-wordpress-starter-project/. You should set DB_HOST to ":/cloudsql/[CLOUDSQL INSTANCE CONNECTION]".
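As a rough sketch of the Compute Engine side (the instance connection name is a placeholder, and in production you would run the proxy under a service manager such as systemd), the proxy has to stay running so that the socket referenced by DB_HOST actually exists:
sudo mkdir -p /cloudsql
sudo nohup ./cloud_sql_proxy -dir=/cloudsql -instances=[CLOUDSQL INSTANCE CONNECTION] > cloud_sql_proxy.log 2>&1 &
ls /cloudsql/    # a socket named after the instance connection should appear here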

I get an EHOSTUNREACH error when trying to connect one OpenShift NodeJS application to the MySQL of another OpenShift application

I have 2 applications on OpenShift: one with MySQL, and one with NodeJS that is going to connect to the MySQL of the other app.
I've seen examples, but none of them seem to work; these are the steps I'm taking:
rhc ssh -a mydbappname
Then I get the environment variables with:
env | grep MYSQL
I get something like:
OPENSHIFT_MYSQL_DB_HOST=127.XX.XXX.X
OPENSHIFT_MYSQL_DB_PORT=3306
After that I try to use those on the other app, but it always throws EHOSTUNREACH, no matter whether I create the OPENSHIFT_MYSQL_DB environment variables on the Node app and use them, or put the values directly in the code.
I have seen in other places that OPENSHIFT_MYSQL_DB_HOST is something like 54d10be2503e36378e0002db-mydbappname.apps.rhcloud.com.
If I use port forwarding, connect to 127.0.0.1 on the local port assigned to MySQL, and start the NodeJS application locally, it works; it only fails when I upload the changes to OpenShift.
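For context, the local test that works relies on port forwarding, roughly:
rhc port-forward -a mydbappname
# mysql then becomes reachable from the local machine at 127.0.0.1 on the forwarded port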
In this article:
https://blog.openshift.com/sharing-database-across-applications/
You can read:
"Step 1: Create an application with a database
We will create a scalable PHP application using a MySQL database cartridge. In non-scalable applications, the database will be installed in the same gear as the application. In this case we want the database to be accessible from other gears. So creating a scalable application ensures that the database runs in its own gear that can be accessed from other gears."
So maybe this is the problem; check whether the application you created is scalable.
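If it turns out the database application is not scalable, recreating it as a scalable app looks something like this (cartridge names and versions are placeholders for whatever your OpenShift instance offers):
rhc app create mydbappname php-5.4 mysql-5.5 -s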

MySQL Cluster + Manager and NDB/J

I've been trying to set up a MySQL Cluster for a few days using the MySQL Cluster Manager on 3 Ubuntu nodes (3 identical VM instances with 1GB RAM each).
I've followed the video on MySQL Cluster Manager on the MySQL site. There isn't much other documentation or many tutorials for it (probably because it's a commercial product).
I start the cluster and show the status, but the mysqld nodes never start; they just remain as "added". If I install mysql-server using "sudo apt-get install mysql-server", then I get the normal local server running and the nodes register as "started", but I can't see how to connect to the cluster rather than to the individual MySQL servers running on the mysqld nodes.
I'm also at a loss as to how the Java connector for MySQL Cluster is organised; it appears that there are multiple libraries, so I don't even know which library I need or how to get them (some are created when compiling MySQL Cluster?). Could someone please explain how the connectors work to interact with NDB from Java, and how to get them?
Thanks for any answers.
First of all, the official documentation for MySQL Cluster Manager can be found by navigating to the Cluster documentation on dev.mysql.com (called "MySQL Cluster Manager"). You are correct that MySQL Cluster Manager is commercial software, although MySQL Cluster itself is available under either a commercial or a GPL license.
It sounds as though you've already configured the agents and have them running, so if you want to get a Cluster up and running quickly, refer to this simple worked example of using MySQL Cluster Manager.
In terms of understanding why the MySQL Servers (mysqlds) are not starting up, there aren't many clues in your question, so we need to narrow it down (one reason could be that you have multiple mysqlds defined on the same host that are all trying to use the default port, 3306).
To check what the manager has been doing, take a look in the file called mysql-cluster-manager.log. You can adjust the level of logging using the cluster manager configuration file.
To see what MySQL Cluster itself thinks has happened, check the directories storing the cluster data files (if you haven't overridden the defaults, this would be under /clusters/, where you'll see a directory for each node in the cluster). The first one to check is ndb__cluster.log, along with the other logs that you'll find in the "data" sub-directory of the id associated with the ndb_mgmd node. There are also per-node log files, so also check the mysqld_out.err and mysqld_out.log files stored in the data directory associated with the mysqld node-ids.
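A rough sketch of commands for digging through those logs, with /path/to/mcm_data standing in for wherever your manager and cluster data directories actually live:
less /path/to/mcm_data/mysql-cluster-manager.log
find /path/to/mcm_data/clusters -name '*cluster.log'
tail -n 50 /path/to/mcm_data/clusters/MYCLUSTER/NODE_ID/data/mysqld_out.err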
Most important point: do not use the mysqld that gets installed with "sudo apt-get install mysql-server", as this version will not be compatible with MySQL Cluster; always use the binaries that come with the MySQL Cluster tarball (if you are using Cluster Manager, that should be transparent to you anyway).
Note that if you want to get MySQL Cluster up and running on a single host without MySQL Cluster Manager then refer to the quick-start guide located on the MySQL Cluster download site (on mysql.com rather than e-Delivery).
For Java access, try out this MySQL Cluster ClusterJ tutorial.