Could someone help me connect to RethinkDB in OpenShift using rethinkdbdash?
I have deployed RethinkDB in OpenShift and created 3 ClusterIP Services:
1. 8080 - admin
2. 29015 - intracluster communication
3. 28015 - client connections
I have created a Route which targets the client connection ClusterIP Service (port 28015).
I tried to use that from client side as below
const r = require('rethinkdbdash')({
  cursor: true,
  silent: true,
  host: 'rethink-client.test.exchange.com',
  port: 80
})
I am getting the below error:
data: Timeout during operation
(node:5739) UnhandledPromiseRejectionWarning: Error: Cannot wrap non-Error object
You should use a NodePort or LoadBalancer type Service instead of a Route to expose your DB connection externally, because Routes do not support plain TCP. Refer here for supported protocols.
For an example with a MySQL database, further details are provided in Using a NodePort to Get Traffic into the Cluster:
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  type: NodePort
  ports:
    - port: 3306
      nodePort: 30036
      name: http
  selector:
    name: mysql
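Applying the same approach to the RethinkDB setup above, a NodePort Service for the client port might look like the following sketch (the Service name, selector labels, and nodePort value are assumptions; adjust them to your deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rethinkdb-client   # hypothetical name
spec:
  type: NodePort
  ports:
    - port: 28015          # RethinkDB client connection port
      targetPort: 28015
      nodePort: 30015      # assumed free port in the 30000-32767 range
  selector:
    app: rethinkdb         # adjust to match your RethinkDB pod labels
```

The rethinkdbdash client would then connect to <NodeIP>:30015 instead of going through the Route on port 80.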
Related
Currently I am using Oracle Cloud to host an Oracle Kubernetes Cluster managed by Rancher. I also have an Oracle MySQL DB that is outside of the cluster.
The kubernetes cluster and db instance are on the same VCN, subnet, and in the same compartment.
The db instance does not have an external IP but has an internal IP.
I have deployed an Endpoints object and a ClusterIP Service in an effort to expose the db instance to the application:
apiVersion: v1
kind: Service
metadata:
  name: mysql-dev
  namespace: development
spec:
  type: ClusterIP
  ports:
    - port: 3306
      targetPort: 3306
---
kind: Endpoints
apiVersion: v1
metadata:
  name: mysql-dev
  namespace: development
subsets:
  - addresses:
      - ip: <DB INTERNAL IP>
    ports:
      - port: 3306
In my application properties file I referenced the service:
datasource.dev.db=dev
datasource.dev.host=mysql-dev
datasource.dev.username=<USERNAME>
datasource.dev.password=<PASSWORD>
I can't seem to get my application to communicate with the db.
Any help would be much appreciated!
It looks like the MySQL version referenced is not compatible with this version of OKE.
I updated the MySQL version and it is working well.
I deployed a MySQL pod with the example from the kubernetes website: https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/
I can access the pod from the pod network, but not from outside the cluster. How can I achieve this? I would like to access the service via MySQL Workbench for easier editing of the database.
I already tried to set up a NodePort Service like this:
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
      targetPort: 3006
      nodePort: 30003
  selector:
    app: mysql
  type: NodePort
with the goal of accessing the service at <NodeIP>:30003, but that does not work.
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
      targetPort: 3006
      nodePort: 30003
  selector:
    app: mysql
  type: NodePort
The targetPort is 3006 instead of 3306; it was a typo.
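With the typo fixed, the corrected Service from the question would read:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
      targetPort: 3306   # fixed: must match the port the MySQL container listens on
      nodePort: 30003
  selector:
    app: mysql
  type: NodePort
```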
Unable to connect to a Jenkins master hosted on an OpenShift cluster. The connection terminates with the below error after handshaking:
may 23, 2020 2:05:55 PM hudson.remoting.jnlp.Main$CuiListener error
GRAVE: Failed to connect to jenkins-jnlp-poc:50000
java.io.IOException: Failed to connect to jenkins-jnlp-poc:50000
at org.jenkinsci.remoting.engine.JnlpAgentEndpoint.open(JnlpAgentEndpoint.java:246)
at hudson.remoting.Engine.connectTcp(Engine.java:678)
at hudson.remoting.Engine.innerRun(Engine.java:556)
at hudson.remoting.Engine.run(Engine.java:488)
Caused by: java.net.ConnectException: Connection timed out: connect
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:454)
at sun.nio.ch.Net.connect(Net.java:446)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
at java.nio.channels.SocketChannel.open(SocketChannel.java:189)
at org.jenkinsci.remoting.engine.JnlpAgentEndpoint.open(JnlpAgentEndpoint.java:204)
... 3 more
I added a Route to the jenkins-jnlp Service, but I'm not able to expose the port. I've been trying to configure a nodePort, but I couldn't achieve it yet. Any help will be welcome!
Thanks.
A Route will only work with HTTP/HTTPS traffic and will not work in this case; as you correctly noted, a NodePort is most likely what you want. Here is an example of a Service of type NodePort using port 32000:
apiVersion: v1
kind: Service
metadata:
  name: jenkins-jnlp-poc-service
spec:
  selector:
    app: jenkins-jnlp-poc
  type: NodePort
  ports:
    - name: jnlp
      port: 50000
      targetPort: 50000
      nodePort: 32000
      protocol: TCP
Note that you may need to change multiple parts of the Service:
- The port and targetPort, which specify the port the Service "listens" on and where traffic is forwarded to (typically the port your container exposes)
- The selector, which determines which Pods are targeted (check your Pods for the labels used and adjust accordingly)
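To illustrate the selector point: the Service's spec.selector must match the labels on the target Pods. A minimal sketch, assuming the agent Pod carries the label app: jenkins-jnlp-poc (the Pod name and image are also assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-jnlp-agent          # hypothetical name
  labels:
    app: jenkins-jnlp-poc           # must match the Service's spec.selector
spec:
  containers:
    - name: agent
      image: jenkins/inbound-agent  # assumed agent image
      ports:
        - containerPort: 50000      # the port the Service's targetPort forwards to
```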
How can a service that does not use HTTP/S be exposed in OpenShift 3.11 or 4.x?
I think Routes only support HTTP/S traffic.
I have read about using ExternalIP configuration for Services, but that makes operating the cluster complicated and static compared to Routes/Ingress.
For example, the Nginx ingress controller allows it with special configuration: https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
What are the options in Openshift 3.11 or 4.x?
Thank you.
There is a section in the official OpenShift documentation for this called Getting Traffic Into the Cluster.
The recommendation, in order or preference, is:
- If you have HTTP/HTTPS, use a router.
- If you have a TLS-encrypted protocol other than HTTPS (for example, TLS with the SNI header), use a router.
- Otherwise, use a Load Balancer, an External IP, or a NodePort.
NodePort exposes the Service on each Node’s IP at a static port (30000-32767) [0].
You’ll be able to contact the NodePort Service from outside the cluster by requesting it in <NodeIP>:<NodePort> format.
apiVersion: v1
kind: Service
metadata:
  name: nodeport
spec:
  type: NodePort
  ports:
    - name: "8080"
      protocol: "TCP"
      port: 8080
      targetPort: 80
      nodePort: 30000
  selector:
    labelName: targetname
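For comparison, the External IP option from the list above can be expressed as a plain Service with spec.externalIPs; a sketch, assuming 192.0.2.10 is an address routed to one of the cluster nodes (the Service name is hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: externalip-example   # hypothetical name
spec:
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
  externalIPs:
    - 192.0.2.10             # assumed IP routed to a cluster node
  selector:
    labelName: targetname
```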
I have a Node.js Express app that connects to a MySQL DB using:
const dbconfig = {
  client: 'mysql',
  connection: {
    host: config.db.host,
    user: config.db.user,
    password: config.db.password,
    database: config.db.database,
    port: config.db.port,
    charset: 'utf8',
    ssl: {
      ca: fs.readFileSync(__dirname + '/root_ca.pem')
    }
  }
}
In my local Docker environment this connection succeeds; however, when deploying onto a Kubernetes cluster I am unable to connect to host:port.
The VPC is set up to allow ingress/egress traffic on that host/port.
A Service and an Endpoints object were set up as well:
kind: "Service"
apiVersion: "v1"
metadata:
  name: "mysql"
spec:
  ports:
    - name: "mysql"
      protocol: "TCP"
      port: 13306
      nodePort: 0
  selector: {}
---
kind: "Endpoints"
apiVersion: "v1"
metadata:
  name: "mysql"
subsets:
  - addresses:
      - ip: "34.201.17.84"
    ports:
      - port: 13306
        name: "mysql"
Update: still no luck, but more info shows that neither the pod nor the node is able to reach the host.
So with the help of Google support I was able to find a solution to my problem. The issue was that the IP address whitelisted to connect to the database was not the IP address of the load balancer, because load balancers handle ingress traffic, not egress traffic.
The workaround is to create a private cluster and then route the cluster's egress traffic through a single IP (or IP range) using the Google Cloud NAT service. Once that was done, I was able to successfully connect to the DB without needing the extra Endpoints/mysql Service.