Connecting to an external MySQL DB from a Node.js GKE pod

I have a Node.js Express app that connects to a MySQL DB using:
const dbconfig = {
  client: 'mysql',
  connection: {
    host: config.db.host,
    user: config.db.user,
    password: config.db.password,
    database: config.db.database,
    port: config.db.port,
    charset: 'utf8',
    ssl: {
      ca: fs.readFileSync(__dirname + '/root_ca.pem')
    }
  }
}
In my local Docker environment this connection succeeds; however, when I deploy onto the Kubernetes cluster I am unable to connect to host:port.
The VPC is set up to allow Ingress/Egress traffic on that host/port.
And a service and endpoint were setup as well:
kind: "Service"
apiVersion: "v1"
metadata:
  name: "mysql"
spec:
  ports:
    - name: "mysql"
      protocol: "TCP"
      port: 13306
      nodePort: 0
  selector: {}
---
kind: "Endpoints"
apiVersion: "v1"
metadata:
  name: "mysql"
subsets:
  - addresses:
      - ip: "34.201.17.84"
    ports:
      - port: 13306
        name: "mysql"
Update: still no luck, but further digging shows that neither the pod nor the node can reach the host.

With the help of Google support I was able to find a solution to my problem. The issue was that the IP address whitelisted to connect to the database was not the load balancer's IP address, since load balancers handle ingress traffic, not egress traffic.
The workaround is to create a private cluster and then route the cluster's egress traffic through a single IP (or IP range) using the Google Cloud NAT service. Once that was done I was able to connect to the DB successfully, without needing the extra Endpoints/Service objects.
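For reference, the Cloud NAT piece of that setup can be sketched roughly as follows. Router/NAT names, network, and region are placeholders, so check the current gcloud documentation for your environment:

```shell
# Create a Cloud Router in the cluster's VPC and region (names are placeholders).
gcloud compute routers create nat-router \
    --network=my-vpc --region=us-central1

# Attach a NAT config so egress from the private cluster leaves through a
# stable external IP that can then be whitelisted on the database side.
gcloud compute routers nats create nat-config \
    --router=nat-router --region=us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
```

With `--auto-allocate-nat-external-ips` you can afterwards look up the allocated address and whitelist it; alternatively, reserve a static IP and pass it explicitly so the whitelisted address never changes.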

Related

Can't connect OKE Kubernetes cluster to Oracle MySQL DB instance that is outside of the cluster

Currently I am using Oracle Cloud to host an Oracle Kubernetes Cluster managed by Rancher. I also have an Oracle MySQL DB that is outside of the cluster.
The kubernetes cluster and db instance are on the same VCN, subnet, and in the same compartment.
The db instance does not have an external IP but has an internal IP.
I have deployed an Endpoints object and a ClusterIP Service in an effort to expose the db instance to the application.
apiVersion: v1
kind: Service
metadata:
  name: mysql-dev
  namespace: development
spec:
  type: ClusterIP
  ports:
    - port: 3306
      targetPort: 3306
---
kind: Endpoints
apiVersion: v1
metadata:
  name: mysql-dev
  namespace: development
subsets:
  - addresses:
      - ip: <DB INTERNAL IP>
    ports:
      - port: 3306
In my application properties file I referenced the service...
datasource.dev.db=dev
datasource.dev.host=mysql-dev
datasource.dev.username=<USERNAME>
datasource.dev.password=<PASSWORD>
I can't seem to get my application to communicate with the db.
Any help would be much appreciated!
It turned out the MySQL version referenced was not compatible with this version of OKE.
After updating the MySQL version it is working well.

How to connect rethinkdb in Openshift using rethinkdbdash

Could someone help me connect to RethinkDB in OpenShift using rethinkdbdash?
I have deployed RethinkDB in OpenShift and created 3 ClusterIP services:
1. 8080 - admin
2. 29015 - intracluster communication
3. 28015 - client connections
I have created a Route which targets the client-connection ClusterIP service (port 28015).
I tried to use it from the client side as below:
const r = require('rethinkdbdash')({
  cursor: true,
  silent: true,
  host: 'rethink-client.test.exchange.com',
  port: 80
})
I am getting the error below:
data: Timeout during operation
(node:5739) UnhandledPromiseRejectionWarning: Error: Cannot wrap non-Error object
You should use a NodePort or LoadBalancer type Service instead of a Route to expose your DB connection externally, because Routes do not support plain TCP. Refer here for supported protocols.
For an instance of a MySQL db, further details are provided in Using a NodePort to Get Traffic into the Cluster.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  type: NodePort
  ports:
    - port: 3306
      nodePort: 30036
      name: http
  selector:
    name: mysql

Can't connect to mysql on my local machine from kubernetes

Hi, I just started using Kubernetes. I have deployed my Flask application on Kubernetes with minikube. The MySQL server is running on my local machine. When I try to access the DB I get this error:
InternalError: (pymysql.err.InternalError) (1130, u"Host '157.37.85.26'
is not allowed to connect to this MySQL server")
(Background on this error at: http://sqlalche.me/e/2j85)
The IP here is dynamic; every time I try to access the DB it connects from a different IP.
Here is my deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-deployment
spec:
  selector:
    matchLabels:
      app: flask-crud-app
  replicas: 3
  template:
    metadata:
      labels:
        app: flask-crud-app
    spec:
      containers:
        - name: flask-crud-app
          image: flask-crud-app:latest
          ports:
            - containerPort: 80
And service.yaml
apiVersion: v1
kind: Service
metadata:
  name: flask-service
spec:
  selector:
    app: flask-crud-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 31000
  type: NodePort
That's because your current MySQL configuration doesn't allow requests coming from that IP address. Say you're connecting as the root user; a workaround (not recommended) is then to give the root user the privilege to connect from that IP.
Connect to your MySQL server:
$ mysql -u root -p
and run:
GRANT ALL PRIVILEGES ON *.* TO 'root'@'my_ip' IDENTIFIED BY 'root_password' WITH GRANT OPTION;
FLUSH PRIVILEGES;
Recommendation: Set up a new user with limited privileges and allow requests from the given IP for that user.
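Following that recommendation, a limited-privilege user could be sketched like this. The user name, source IP, password, and schema name are all placeholders, and the grants should be narrowed to what the app actually needs:

```sql
-- Create an app user restricted to one source IP and one schema.
CREATE USER 'flask_app'@'157.37.85.26' IDENTIFIED BY 'a_strong_password';
GRANT SELECT, INSERT, UPDATE, DELETE ON flask_db.* TO 'flask_app'@'157.37.85.26';
FLUSH PRIVILEGES;
```

Note that since minikube NAT can change the source IP, you may need a CIDR-style wildcard host such as '157.37.85.%' instead of a single address.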

How to connect mysql kubernetes container internally with nodejs k8s container?

I have created a MySQL k8s container and a Node.js k8s container under the same namespace, and I can't connect to the MySQL DB (Sequelize).
I have tried to connect using http://mysql.e-commerce.svc.cluster.local:3306, but I got a "SequelizeHostNotFoundError" error.
Here is my service and deployment yaml files.
kind: Service
apiVersion: v1
metadata:
  labels:
    app: mysql
  name: mysql
  namespace: e-commerce
spec:
  type: NodePort
  ports:
    - port: 3306
      targetPort: 3306
      nodePort: 30306
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: e-commerce
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql-container
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
Using the ClusterIP worked for me, or better yet, use the host name of the cluster-local service, e.g. db-mysql.default.svc.cluster.local. That way, if your cluster restarts and the IP changes, you are still covered.
You are trying to access the database with the http protocol; drop the scheme, or change it to mysql://ip:3306. Some clients won't accept a DNS name for databases, so you can also look up the ClusterIP of the service and try that IP.
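Since the error is a host-resolution failure, the http:// scheme in the configured host is the likely culprit. A hypothetical helper (plain Node, using the built-in URL class; the function name is my own) to normalize such a value into the bare host and port that Sequelize and mysql clients expect:

```javascript
// Strip any scheme (http://, mysql://, ...) from a host value and pull out
// an embedded port, falling back to MySQL's default 3306.
function toDbHost(raw, defaultPort = 3306) {
  // Prepend a dummy scheme so URL can also parse bare "host:port" strings.
  const hasScheme = /^[a-z+]+:\/\//i.test(raw);
  const url = new URL(hasScheme ? raw : `tcp://${raw}`);
  return {
    host: url.hostname,
    port: url.port ? Number(url.port) : defaultPort,
  };
}

console.log(toDbHost('http://mysql.e-commerce.svc.cluster.local:3306'));
// { host: 'mysql.e-commerce.svc.cluster.local', port: 3306 }
```

The resulting object can be spread straight into a Sequelize or mysql connection config, so the scheme mistake cannot recur.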
As mentioned by community member FL3SH, you can change your spec.type to ClusterIP.
You can reproduce this task using stable helm chart wordpress/mysql.
For newly created pods:
mysql-mariadb-0
mysql-wordpress
and services:
mysql-mariadb
mysql-wordpress
After a successful deployment you can verify that your service is working from the mysql-wordpress pod by running:
kubectl exec -it mysql-wordpress-7cb4958654-tqxm6 -- /bin/bash
In addition, you can install extra tools like nslookup and telnet:
apt-get update && apt-get install dnsutils telnet
Services and connectivity with the db can then be tested by running, for example, these commands:
nslookup mysql-mariadb
telnet mysql-mariadb 3306
mysql -uroot -hmysql-mariadb -p<your_db_password>
example output:
nslookup mysql-mariadb
Server: 10.125.0.10
Address: 10.125.0.10#53
Non-authoritative answer:
Name: mysql-mariadb.default.svc.cluster.local
Address: 10.125.0.76
mysql -u root -hmysql-mariadb -p<your_db_password>
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 2068
Server version: 10.1.40-MariaDB Source distribution
You should be able to connect using service name or using ip address.
Inside this helm chart you can find also template for statefulset in order to create mysql pods.
Update
From a second pod, e.g. ubuntu, follow this example - Node.js MySQL: install Node.js and create the connection to the database in demo_db_connection.js.
example:
var mysql = require('mysql');

var con = mysql.createConnection({
  host: "mysql-mariadb",
  user: "root",
  password: "yourpassword"
});

con.connect(function(err) {
  if (err) throw err;
  console.log("Connected!");
});
run it:
root@ubuntu:~/test# node demo_db_connection.js
Connected!
Try with:
kind: Service
apiVersion: v1
metadata:
  labels:
    app: mysql
  name: mysql
  namespace: e-commerce
spec:
  clusterIP: None
  type: ClusterIP
  ports:
    - port: 3306
      targetPort: 3306
  selector:
    app: mysql
with the same connection string.

Connection to MySQL (AWS RDS) in Istio

We have an issue where connecting to AWS RDS from inside an Istio service mesh results in upstream connect error or disconnect/reset before headers.
Our Egress rule is as below
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  namespace: <our-namespace>
  name: rds-egress-rule-with
spec:
  destination:
    service: <RDS End point>
  ports:
    - port: 80
      protocol: http
    - port: 443
      protocol: https
    - port: 3306
      protocol: https
The connection to MySQL works fine with a standalone MySQL instance on EC2. The connection to AWS RDS works fine without Istio. The problem only occurs inside the Istio service mesh.
We are using Istio with mutual TLS disabled.
The protocol in your EgressRule definition should be tcp, and the service should contain the IP address, or a range of IP addresses in CIDR notation.
Alternatively, you can use the --includeIPRanges flag of istioctl kube-inject to specify which IP ranges are handled by Istio. Istio will not interfere with the not-included IP addresses and will just allow the traffic to pass through.
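A corrected rule along those lines might look roughly like this, for the same legacy EgressRule API the question uses; the CIDR is a placeholder for the RDS instance's actual address range:

```yaml
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  namespace: <our-namespace>
  name: rds-egress-rule-with
spec:
  destination:
    # An IP range in CIDR notation, not the RDS hostname.
    service: 172.31.0.0/16
  ports:
    - port: 3306
      protocol: tcp
```

The key change is protocol: tcp on port 3306, since MySQL's wire protocol is not HTTP and Envoy cannot parse it as such, which is what produces the "upstream connect error" response.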
References:
https://istio.io/latest/blog/2018/egress-tcp/
https://istio.io/latest/docs/tasks/traffic-management/egress/egress-control/#direct-access-to-external-services