Cannot connect to Google MySQL from deployed Kubernetes NodeJS app

I have been trying for the past couple of days to get my deployed NodeJS Kubernetes LoadBalancer app to connect to a Google Cloud MySQL instance. Both the SQL database and the Kubernetes deployment exist in the same Google project. The ORM of choice for this project is Sequelize. Here is a snippet of my connection configuration:
"deployConfigs": {
"username": DB_USERNAME,
"password": DB_PASSWORD,
"database": DB_DATABASE,
"host": DB_HOST,
"port": 3306,
"dialect": "mysql",
"socketPath": "/cloudsql/INSTANCE_NAME"
}
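As an aside, with the mysql dialect Sequelize forwards socket settings to the underlying driver through dialectOptions, so a socket-based config is usually shaped like the sketch below. The INSTANCE_NAME placeholder and environment variable names are carried over from the snippet above, not verified values:

```javascript
// Sketch of a socket-based Sequelize config. With the mysql dialect,
// the Unix socket path belongs under dialectOptions.socketPath;
// host/port are not used when connecting over a socket.
const deployConfig = {
  username: process.env.DB_USERNAME,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_DATABASE,
  dialect: 'mysql',
  dialectOptions: {
    // Instance connection name placeholder, as in the snippet above.
    socketPath: '/cloudsql/INSTANCE_NAME',
  },
};

module.exports = deployConfig;
```

Note that a pod can only reach that socket if something (for example the Cloud SQL proxy) actually creates it inside the container's filesystem; over plain TCP the socketPath entry is dropped and host/port are used instead.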
When I run the application locally with the same configurations, I am able to query from the database. I can also hit the NodeJS LoadBalancer URL to get a valid API response as long as the API does not hit the database.
I have whitelisted my IP as well as the IP for the NodeJS LoadBalancer API but I still get the following response:
{
  "name": "SequelizeConnectionError",
  "parent": {
    "errorno": "ETIMEDOUT",
    "code": "ETIMEDOUT",
    "syscall": "connect",
    "fatal": true
  },
  "original": {
    "errorno": "ETIMEDOUT",
    "code": "ETIMEDOUT",
    "syscall": "connect",
    "fatal": true
  }
}
I followed the instructions for creating a Proxy through a Kubernetes deployment but I don't think that will necessarily solve my issue because I simply want to connect from my Kubernetes app to a persistent database.
Again, I have been able to successfully hit the remote DB when running the container locally and when running the node app locally. I am really unsure as to why this will not connect when deployed.
Thanks!

Kubernetes does a lot of source NATing, so I had to add a firewall rule on my network allowing outgoing traffic everywhere from my cluster in GCE. That rule is very permissive, so you might want to add it for testing purposes only at first. You can also check connectivity to MySQL by shelling into a running pod:
$ kubectl exec -it <running-pod> -- sh
/home/user # telnet $DB_HOST 3306

It sounds like you might be attempting to connect to your Cloud SQL instance via its public IP? If that's the case, then be careful, as that is not supported. Take a look at this documentation page to figure out the best way to go about it.
You mentioned you're already using a proxy, but didn't mention which one. If it's the Cloud SQL Proxy, then it should allow you to perform any kind of operation you want against your database; all it does is establish a connection between a client (i.e. a pod) and the Cloud SQL instance. This proxy should work without any issues.
Don't forget to set up the appropriate grants and all of that stuff on the Cloud SQL side of things.

I have figured it out.
When creating a MySQL instance you need to do two things (go to the section titled "Authorized networks"):
Add a network named "Application" with the value "127.0.0.1". The environment variable DB_INSTANCE_HOST in your Kubernetes secret should also be set to "127.0.0.1" to prevent ETIMEDOUT or ECONNREFUSED errors when connecting to MySQL from Node.js.
Create another network named "Local computer". Search for "my IP address" in a new browser tab, then enter that IP in the value field. (The goal of this second step is to let MySQL Workbench on your computer connect to the instance so that you can start building and managing databases.)
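To make the first step concrete, here is a sketch of what the connection options look like when the app talks to 127.0.0.1 (i.e. to a local proxy or sidecar) rather than the instance's public IP; the environment variable names are assumptions:

```javascript
// Sketch: connect over TCP to 127.0.0.1, where a Cloud SQL proxy
// (or similar sidecar) is assumed to be listening on port 3306.
const localConfig = {
  username: process.env.DB_USERNAME,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_DATABASE,
  host: process.env.DB_INSTANCE_HOST || '127.0.0.1',
  port: 3306,
  dialect: 'mysql',
};

module.exports = localConfig;
```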
That's it!

Related

Not able to connect Google Cloud MySQL Instance from App Engine Node JS standard environment

I am not able to connect to a Google Cloud MySQL instance database from a NodeJS/Express App Engine app.
I am able to connect to the database locally, as I have added my local machine's IP to the "Authorized networks" of the SQL instance settings.
I tried the configuration below in the NodeJS App Engine instance, but I get an error and cannot connect to the database.
{
  "username": "abcd",
  "password": "abcd123",
  "database": "dummy",
  "host": "localhost",
  "socketPath": "/cloudsql/{project-name}:{instance-zone}:{instance-name}",
  "port": 3306,
  "dialect": "mysql",
  "dialectOptions": {
    "socketPath": "/cloudsql/{project-name}:{instance-zone}:{instance-name}"
  }
}
How can I solve this issue?
Having the Cloud SQL MySQL instance in a different project than the GAE app caused the issue.
Just moving the instance solved the issue.
I think you need vpc_access_connector in app.yaml.
Or you can follow the sample code for GCP App Engine with Node.js:
https://github.com/GoogleCloudPlatform/nodejs-docs-samples/tree/master/appengine/cloudsql

Connect to snappy data with aws external IP address

I am using Tibco ComputeDB, which is new to me. It uses Spark and SnappyData. I can start both Spark and SnappyData and connect to SnappyData using the command connect client '127.0.0.1:1527' or with the internal IP of the AWS server. But when I try to connect with the AWS external IP using the same command, it does not work. I am also not able to connect to SnappyData from a client like SQL Workbench/J. I have all the required drivers installed on my local machine and the server, and all ports are open on the AWS server. I can access the dashboard using http://externalip:5050.
I also edited the conf/locators and conf/servers files as explained in the link below, and the hosts file entries seem fine.
=> https://snappydatainc.github.io/snappydata/howto/connect_to_the_cluster_from_external_clients/
Lines were as below
=> "Private IP" -client-bind-address="Private IP" -hostname-for-clients="Public IP"
=> "Private IP" -client-bind-address="Private IP" -client-port=1555 -hostname-for-clients="Public IP"
I followed the document below to connect with JDBC.
=> https://snappydatainc.github.io/snappydata/howto/connect_using_jdbc_driver/
But I am still not able to connect with the external IP.
=> Should connect client 'externalIP:1527'; work before I can connect to SnappyData from any client using the external IP?
Can someone advise what settings are needed to connect to SnappyData from the AWS external IP with any SQL client?
Are the ports open for the public IP of the AWS instance itself? Right now that is needed if you connect using the public IP, even if you are trying to connect from the same AWS instance via the connect client ... command.
The rule of thumb is that the ports (e.g. 1527-1528) must be open for the client's IP in the security group. So if the client is in the same AWS instance, the ports must be open for its public IP.
If this doesn't help, can you paste the content of files locators, servers and leads which are present under conf/ directory? You can remove/strike sensitive information in them, if any.
Also please paste the error messages you see.
We have refined the steps to set up the cluster on AWS here that could clear a few things: https://snappydatainc.github.io/snappydata/install/setting_up_cluster_on_amazon_web_services/#usingawsmgmtconsole

IPSec tunnel on Google Compute Virtual Machine

I am trying to set up an IPSec tunnel on my virtual machine on Google Compute Engine, and it seems all my traffic is blocked, even though I have opened the necessary ports on both the Windows Server 2016 server and Google's firewall. My question: is it possible to set up the VPN tunnel on the server itself, or should I make use of Hybrid Connectivity VPN or something else? I have the same setup on a dedicated server but just can't get Main Mode and/or Quick Mode functioning at all.
PS: I have set up many IPSec tunnels on stand-alone servers, just not on a virtual server using Google Compute Engine.
Thanks in advance for your help on this one.
I was able to set up IPSec VPN server with Debian 10 virtual machine, on Google Compute Engine.
Here's what I did:
While creating virtual machine instance (Debian 10 for example), in "Network interface" window set option "IP forwarding" to "ON";
On "VPC network" page create firewall rule with open ports: "udp: 500, 4500";
Use this script to set up the VPN software:
wget https://git.io/vpnsetup -O vpnsetup.sh && sudo sh vpnsetup.sh
It will generate credentials needed for next step. They look like this: "Server IP: ****", "IPsec PSK: ****", "Username: ****", "Password: ****".
For client configuration use credentials generated from above step and IPsec/XAuth protocol while setting vpn connection.
Look here if you encounter problems: https://github.com/hwdsl2/setup-ipsec-vpn/blob/master/docs/clients-xauth.md
Check this guide "IPsec VPN Server Auto Setup Scripts" for more information:
https://github.com/hwdsl2/setup-ipsec-vpn

CloudSql with Autoscaler access

I am stuck on one thing regarding Cloud SQL.
I have my WordPress app running on GCE, and I created an instance group so I can utilise the autoscaler.
For the DB, I am using Cloud SQL.
The point where I am stuck is the "Authorized networks" setting in Cloud SQL, as it accepts only public IPv4 addresses.
When autoscaling happens, how do I know what IP will be attached to a new instance so that it can be authorized to reach the DB?
I can hard-code the Cloud SQL IP as a CNAME, but from the Cloud SQL side I am not able to figure out how to grant access, short of making my DB open to all.
Please let me know if there is a point I am missing.
I also used the Cloud SQL Proxy, but that doesn't come with a service on Linux... I hope you can understand my situation. Let me know if you have any ideas to share.
Thank you
The recommended way is to use the second generation instances and Cloud SQL Proxy, you’ll need to configure the Proxy on Linux and start it by using service account credentials as outlined at the provided link.
Another way is to use a startup script in your GCE instance template, so you can get your new instance's external IP address and add it to a Cloud SQL instance's authorized networks by using the gcloud sql instances patch command. The IP can be removed from the authorized networks in the same way by using a shutdown script. The external IP address of a GCE VM instance can be retrieved from metadata by running:
$ curl "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip" -H "Metadata-Flavor: Google"

Connect to host postgres db from minishift

I'm trying to connect to a Postgres database from a Spring Boot application deployed in Minishift.
The Postgres server is running on the same host that Minishift is running on.
I've tried setting the Postgres server to listen on a specific IP address and using that same address in the Spring Boot JDBC connection URL, but I still get org.postgresql.util.PSQLException: Connection to 172.99.0.1:5432 refused
I've also tried using 10.0.2.2
Also tried, in /etc/postgresql/9.5/main/postgresql.conf, setting:
listen_addresses = '*'
How can I connect to a database external to minishift, running on same host?
Besides the answer referenced in my comment, which suggests making your database listen on the IP address of the Docker bridge, you could make your pod use the network stack of your host. This way you could reach Postgres on the loopback. This works only if you can guarantee that the pod will always run on the same host as the database.
The Kubernetes documentation discourages using hostNetwork. If you understand the consequences you can enable it as in this example.
If a pod inside Kubernetes can't see the IP address from the host then I guess it's an underlying firewall or networking issue. Try opening a shell inside the pod...
kubectl exec -it mypodname -- bash
Then trying to ping, telnet, curl, wget or whatever to see if you can see the IP address.
It sounds like something's wrong with the networking setup of your minishift. It might be worth raising an issue with minishift: https://github.com/minishift/minishift/issues/new
If you can find an IP address on the host which is accessible from a Docker pod, you can create a Kubernetes Service and then an Endpoints object for that service with the IP address of the database on your host; then you can use the usual DNS discovery of Kubernetes services (i.e. using the service name as the DNS name), which will resolve to that IP address. Over time you could have multiple IP addresses for failover etc.
See: https://kubernetes.io/docs/user-guide/services/#without-selectors
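Since kubectl accepts JSON manifests as well as YAML, the selector-less Service and its manually managed Endpoints can be sketched as plain objects. The name here is illustrative, and the IP reuses the 172.99.0.1 address from the question:

```javascript
// Selector-less Service plus a manual Endpoints object pointing at
// the database on the host. Name and IP are illustrative only.
const service = {
  apiVersion: 'v1',
  kind: 'Service',
  metadata: { name: 'external-postgres' },
  // No selector: Kubernetes will not manage endpoints for this Service.
  spec: { ports: [{ port: 5432 }] },
};

const endpoints = {
  apiVersion: 'v1',
  kind: 'Endpoints',
  metadata: { name: 'external-postgres' }, // must match the Service name
  subsets: [{
    addresses: [{ ip: '172.99.0.1' }],     // host IP reachable from pods
    ports: [{ port: 5432 }],
  }],
};

module.exports = { service, endpoints };
```

Writing each object out with JSON.stringify and feeding it to kubectl apply -f should behave the same as the equivalent YAML.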
Then you can use Services to talk to all your actual network endpoints, with your application code completely decoupled from whether the endpoints are implemented inside or outside Kubernetes, and with load balancing baked in!