Unable to connect: Communications link failure - mysql

I am trying to follow the tutorial Deploying Debezium using the new KafkaConnector resource.
Based on the tutorial, I am also using minikube, but with the docker driver. Basically, I just followed it exactly, step by step.
However, for the step "Create the connector", after creating the connector with
cat <<EOF | kubectl -n kafka apply -f -
apiVersion: "kafka.strimzi.io/v1alpha1"
kind: "KafkaConnector"
metadata:
  name: "inventory-connector"
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  tasksMax: 1
  config:
    database.hostname: 192.168.99.1
    database.port: "3306"
    database.user: "${file:/opt/kafka/external-configuration/connector-config/debezium-mysql-credentials.properties:mysql_username}"
    database.password: "${file:/opt/kafka/external-configuration/connector-config/debezium-mysql-credentials.properties:mysql_password}"
    database.server.id: "184054"
    database.server.name: "dbserver1"
    database.whitelist: "inventory"
    database.history.kafka.bootstrap.servers: "my-cluster-kafka-bootstrap:9092"
    database.history.kafka.topic: "schema-changes.inventory"
    include.schema.changes: "true"
EOF
and checking it with
kubectl -n kafka get kctr inventory-connector -o yaml
I got this error:
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"kafka.strimzi.io/v1alpha1","kind":"KafkaConnector","metadata":{"annotations":{},"labels":{"strimzi.io/cluster":"my-connect-cluster"},"name":"inventory-connector","namespace":"kafka"},"spec":{"class":"io.debezium.connector.mysql.MySqlConnector","config":{"database.history.kafka.bootstrap.servers":"my-cluster-kafka-bootstrap:9092","database.history.kafka.topic":"schema-changes.inventory","database.hostname":"192.168.49.2","database.password":"","database.port":"3306","database.server.id":"184054","database.server.name":"dbserver1","database.user":"","database.whitelist":"inventory","include.schema.changes":"true"},"tasksMax":1}}
  creationTimestamp: "2021-09-29T18:20:11Z"
  generation: 1
  labels:
    strimzi.io/cluster: my-connect-cluster
  name: inventory-connector
  namespace: kafka
  resourceVersion: "12777"
  uid: 083df9a3-83ce-4170-a9bc-9573dafdb286
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  config:
    database.history.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092
    database.history.kafka.topic: schema-changes.inventory
    database.hostname: 192.168.49.2
    database.password: ""
    database.port: "3306"
    database.server.id: "184054"
    database.server.name: dbserver1
    database.user: ""
    database.whitelist: inventory
    include.schema.changes: "true"
  tasksMax: 1
status:
  conditions:
  - lastTransitionTime: "2021-09-29T18:20:11.548Z"
    message: |-
      PUT /connectors/inventory-connector/config returned 400 (Bad Request): Connector configuration is invalid and contains the following 1 error(s):
      A value is required
      You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
    reason: ConnectRestException
    status: "True"
    type: NotReady
  observedGeneration: 1
I tried to change
database.user: "${file:/opt/kafka/external-configuration/connector-config/debezium-mysql-credentials.properties:mysql_username}"
database.password: "${file:/opt/kafka/external-configuration/connector-config/debezium-mysql-credentials.properties:mysql_password}"
to
database.user: "debezium"
database.password: "dbz"
directly and re-applied, based on the user and password info in the "Secure the database credentials" step.
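For reference, as far as I understand, those ${file:...} placeholders only resolve if the Kafka Connect cluster itself has the FileConfigProvider enabled and the credentials Secret mounted, as in the tutorial's "Secure the database credentials" step. A rough sketch of what that looks like (the Secret name and the apiVersion are assumptions on my part, not values taken from my setup):

apiVersion: kafka.strimzi.io/v1beta1    # may differ depending on the Strimzi version
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  # ...other Connect settings omitted...
  config:
    config.providers: file
    config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider
  externalConfiguration:
    volumes:
    - name: connector-config              # must match the path in the ${file:...} placeholder
      secret:
        secretName: mysql-credentials     # placeholder name; the Secret must contain
                                          # a key named debezium-mysql-credentials.properties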
Also, based on this description in the tutorial:
I’m using database.hostname: 192.168.99.1 as IP address for connecting to MySQL because I’m using minikube with the virtualbox VM driver. If you’re using a different VM driver with minikube you might need a different IP address.
I am actually a little confused by this description. MySQL in the demo is deployed in Docker, while the rest of the components, like Kafka, are deployed in minikube. Why does the description about database.hostname mention minikube instead of Docker?
Anyway, when I run minikube ip, I get 192.168.49.2. However, after I changed database.hostname to 192.168.49.2 and ran kubectl get kctr inventory-connector -o yaml -n kafka, I got
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"kafka.strimzi.io/v1alpha1","kind":"KafkaConnector","metadata":{"annotations":{},"labels":{"strimzi.io/cluster":"my-connect-cluster"},"name":"inventory-connector","namespace":"kafka"},"spec":{"class":"io.debezium.connector.mysql.MySqlConnector","config":{"database.history.kafka.bootstrap.servers":"my-cluster-kafka-bootstrap:9092","database.history.kafka.topic":"schema-changes.inventory","database.hostname":"192.168.49.2","database.password":"","database.port":"3306","database.server.id":"184054","database.server.name":"dbserver1","database.user":"","database.whitelist":"inventory","include.schema.changes":"true"},"tasksMax":1}}
  creationTimestamp: "2021-09-29T18:20:11Z"
  generation: 1
  labels:
    strimzi.io/cluster: my-connect-cluster
  name: inventory-connector
  namespace: kafka
  resourceVersion: "12777"
  uid: 083df9a3-83ce-4170-a9bc-9573dafdb286
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  config:
    database.history.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092
    database.history.kafka.topic: schema-changes.inventory
    database.hostname: 192.168.49.2
    database.password: ""
    database.port: "3306"
    database.server.id: "184054"
    database.server.name: dbserver1
    database.user: ""
    database.whitelist: inventory
    include.schema.changes: "true"
  tasksMax: 1
status:
  conditions:
  - lastTransitionTime: "2021-09-29T18:20:11.548Z"
    message: |-
      PUT /connectors/inventory-connector/config returned 400 (Bad Request): Connector configuration is invalid and contains the following 1 error(s):
      A value is required
      You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
    reason: ConnectRestException
    status: "True"
    type: NotReady
  observedGeneration: 1
I can access MySQL via localhost on my machine, as it is hosted in Docker.
However, I still got the same error when I changed database.hostname to localhost.
Any ideas? Thanks!

The issue is that the service in minikube failed to communicate with the MySQL instance running in Docker.
Regarding how to access the host's localhost from inside the Kubernetes cluster, I found How to access host's localhost from inside kubernetes cluster.
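For completeness, with the docker driver one way to test whether a pod can reach MySQL on the host at all is something like the command below. host.minikube.internal is added by minikube to the node's /etc/hosts, so if it does not resolve from inside a pod, substitute the address shown by minikube ssh "grep host.minikube.internal /etc/hosts" (the credentials are the debezium/dbz pair from the tutorial):

kubectl -n kafka run mysql-test --rm -it --image=mysql:8.0 --restart=Never -- \
  mysql -h host.minikube.internal -P 3306 -u debezium -pdbz -e "SELECT 1"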
However, I ended up deploying MySQL in Kubernetes directly by running
kubectl apply -f https://k8s.io/examples/application/mysql/mysql-pv.yaml
kubectl apply -f https://k8s.io/examples/application/mysql/mysql-deployment.yaml
(Copied from https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/)
with
database.hostname: "mysql.default" # service `mysql` in namespace `default`
database.port: "3306"
database.user: "root"
database.password: "password"
Now when I run
kubectl -n kafka get kctr inventory-connector -o yaml
I got a new error saying MySQL does not have row-level binlog enabled; however, this means the connector can now reach MySQL.
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"kafka.strimzi.io/v1alpha1","kind":"KafkaConnector","metadata":{"annotations":{},"labels":{"strimzi.io/cluster":"my-connect-cluster"},"name":"inventory-connector","namespace":"kafka"},"spec":{"class":"io.debezium.connector.mysql.MySqlConnector","config":{"database.history.kafka.bootstrap.servers":"my-cluster-kafka-bootstrap:9092","database.history.kafka.topic":"schema-changes.inventory","database.hostname":"mysql.default","database.password":"password","database.port":"3306","database.server.id":"184054","database.server.name":"dbserver1","database.user":"root","database.whitelist":"inventory","include.schema.changes":"true"},"tasksMax":1}}
  creationTimestamp: "2021-09-29T19:36:52Z"
  generation: 1
  labels:
    strimzi.io/cluster: my-connect-cluster
  name: inventory-connector
  namespace: kafka
  resourceVersion: "2918"
  uid: 48bb46e1-42bb-4574-a3dc-221ae7d6a803
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  config:
    database.history.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092
    database.history.kafka.topic: schema-changes.inventory
    database.hostname: mysql.default
    database.password: password
    database.port: "3306"
    database.server.id: "184054"
    database.server.name: dbserver1
    database.user: root
    database.whitelist: inventory
    include.schema.changes: "true"
  tasksMax: 1
status:
  conditions:
  - lastTransitionTime: "2021-09-29T19:36:53.605Z"
    status: "True"
    type: Ready
  connectorStatus:
    connector:
      state: UNASSIGNED
      worker_id: 172.17.0.8:8083
    name: inventory-connector
    tasks:
    - id: 0
      state: FAILED
      trace: "org.apache.kafka.connect.errors.ConnectException: The MySQL server is
        not configured to use a row-level binlog, which is required for this connector
        to work properly. Change the MySQL configuration to use a row-level binlog
        and restart the connector.\n\tat io.debezium.connector.mysql.MySqlConnectorTask.start(MySqlConnectorTask.java:207)\n\tat
        io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:49)\n\tat
        org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:208)\n\tat
        org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)\n\tat
        org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)\n\tat
        java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat
        java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat
        java.lang.Thread.run(Thread.java:748)\n"
      worker_id: 172.17.0.8:8083
      type: source
  observedGeneration: 1
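To clear that last error, the mysql:5.6 image from the example manifest does not enable the binary log, which Debezium requires in row format. One way, sketched here under the assumption that the single-instance Deployment from the linked example is used otherwise unchanged, is to pass the options as container arguments and let the pod be recreated:

# excerpt of the mysql Deployment's pod spec (not a full manifest)
containers:
- image: mysql:5.6
  name: mysql
  args:
  - --server-id=223344        # any unique, non-zero server ID
  - --log-bin=mysql-bin       # enable the binary log
  - --binlog-format=ROW       # row-level binlog, as the error demands
  - --binlog-row-image=FULL
  - --expire-logs-days=10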

Related

Why does the K3s system upgrade controller fail with "not found, requeuing"?

I installed the system upgrade controller and applied this plan manifest:
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: master-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
    - key: k3s-master-upgrade
      operator: In
      values:
      - "true"
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  channel: https://update.k3s.io/v1-release/channels/stable
---
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: worker-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
    - key: k3s-worker-upgrade
      operator: In
      values:
      - "true"
  prepare:
    args:
    - prepare
    - master-plan
    image: rancher/k3s-upgrade
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  channel: https://update.k3s.io/v1-release/channels/stable
I applied and checked the labels:
$ kubectl label node crux k3s-worker-upgrade=true
$ kubectl describe nodes crux | grep k3s-worker-upgrade
k3s-worker-upgrade=true
$ kubectl label node nemo k3s-master-upgrade=true
$ kubectl describe nodes nemo | grep k3s-master-upgrade
k3s-master-upgrade=true
According to kubectl get nodes I'm still on v1.23.6+k3s1, but the stable channel is on v1.24.4+k3s1.
I get the following errors:
$ kubectl -n system-upgrade logs deployment.apps/system-upgrade-controller
time="2022-09-12T11:29:31Z" level=error msg="error syncing 'system-upgrade/apply-worker-plan-on-crux-with-4190e4adda3866e909fc7735c1-f0dff': handler system-upgrade-controller: jobs.batch \"apply-worker-plan-on-crux-with-4190e4adda3866e909fc7735c1-f0dff\" not found, requeuing"
time="2022-09-12T11:30:35Z" level=error msg="error syncing 'system-upgrade/apply-master-plan-on-nemo-with-4190e4adda3866e909fc7735c1-9cf4f': handler system-upgrade-controller: jobs.batch \"apply-master-plan-on-nemo-with-4190e4adda3866e909fc7735c1-9cf4f\" not found, requeuing"
$ kubectl -n system-upgrade get jobs -o yaml
- apiVersion: batch/v1
kind: Job
metadata:
labels:
upgrade.cattle.io/controller: system-upgrade-controller
upgrade.cattle.io/node: crux
upgrade.cattle.io/plan: worker-plan
upgrade.cattle.io/version: v1.24.4-k3s1
status:
conditions:
- lastProbeTime: "2022-09-12T12:14:31Z"
lastTransitionTime: "2022-09-12T12:14:31Z"
message: Job was active longer than specified deadline
reason: DeadlineExceeded
status: "True"
type: Failed
failed: 1
startTime: "2022-09-12T11:59:31Z"
uncountedTerminatedPods: {}
Same here.
I have only managed to upgrade k3s by doing a manual upgrade with the binaries.
On all nodes:
wget https://github.com/k3s-io/k3s/releases/download/v1.24.4%2Bk3s1/k3s
On the server (master):
systemctl stop k3s
cp ./k3s /usr/local/bin/
systemctl start k3s
On the agents (workers):
systemctl stop k3s-agent
cp ./k3s /usr/local/bin/
systemctl start k3s-agent
It seems much faster and easier than struggling with the automated upgrade: no drains, no cordons...
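One observation on the DeadlineExceeded condition in the output above, offered as a guess rather than a confirmed fix: the job failed after roughly 15 minutes, which matches a 900-second job deadline, the default value of SYSTEM_UPGRADE_JOB_ACTIVE_DEADLINE_SECONDS in the controller's env ConfigMap (named default-controller-env in the stock manifest, if I remember correctly). If a node is simply slow, raising that deadline and restarting the controller might let the automated upgrade finish:

kubectl -n system-upgrade patch configmap default-controller-env --type merge \
  -p '{"data":{"SYSTEM_UPGRADE_JOB_ACTIVE_DEADLINE_SECONDS":"3600"}}'
kubectl -n system-upgrade rollout restart deployment system-upgrade-controller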

How to set up kubernetes for Spring and MySql

I am following this tutorial: https://medium.com/better-programming/kubernetes-a-detailed-example-of-deployment-of-a-stateful-application-de3de33c8632
I created the mysql pod and the backend pod, but the application gets the error com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
pod mysql: running
pod backend: CrashLoopBackOff
Dockerfile
FROM openjdk:14-ea-8-jdk-alpine3.10
ADD target/credit-0.0.1-SNAPSHOT.jar .
EXPOSE 8200
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom", "-Dspring.profiles.active=container","-jar","/credit-0.0.1-SNAPSHOT.jar"]
credit-deployment.yml
# Define 'Service' to expose backend application deployment
apiVersion: v1
kind: Service
metadata:
name: to-do-app-backend
spec:
selector: # backend application pod lables should match these
app: to-do-app
tier: backend
ports:
- protocol: "TCP"
port: 80
targetPort: 8080
type: LoadBalancer # use NodePort, if you are not running Kubernetes on cloud
---
# Configure 'Deployment' of backend application
apiVersion: apps/v1
kind: Deployment
metadata:
name: to-do-app-backend
labels:
app: to-do-app
tier: backend
spec:
replicas: 2 # Number of replicas of back-end application to be deployed
selector:
matchLabels: # backend application pod labels should match these
app: to-do-app
tier: backend
template:
metadata:
labels: # Must macth 'Service' and 'Deployment' labels
app: to-do-app
tier: backend
spec:
containers:
- name: to-do-app-backend
image: gitim21/credit_repo:1.0 # docker image of backend application
env: # Setting Enviornmental Variables
- name: DB_HOST # Setting Database host address from configMap
valueFrom:
configMapKeyRef:
name: db-conf # name of configMap
key: host
- name: DB_NAME # Setting Database name from configMap
valueFrom:
configMapKeyRef:
name: db-conf
key: name
- name: DB_USERNAME # Setting Database username from Secret
valueFrom:
secretKeyRef:
name: db-credentials # Secret Name
key: username
- name: DB_PASSWORD # Setting Database password from Secret
valueFrom:
secretKeyRef:
name: db-credentials
key: password
ports:
- containerPort: 8080
application.yml
spring:
  datasource:
    type: com.zaxxer.hikari.HikariDataSource
    hikari:
      idle-timeout: 10000
    platform: mysql
    username: ${DB_USERNAME}
    password: ${DB_PASSWORD}
    url: jdbc:mysql://${DB_HOST}/${DB_NAME}
  jpa:
    hibernate:
      naming:
        physical-strategy: org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl
I placed the application.yml file in the application's "resources" folder.
EDIT
Name:               mysql-64c7df597c-s4gbt
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               minikube/192.168.8.160
Start Time:         Thu, 12 Sep 2019 17:50:18 +0200
Labels:             app=mysql
                    pod-template-hash=64c7df597c
                    tier=database
Annotations:        <none>
Status:             Running
IP:                 172.17.0.5
Controlled By:      ReplicaSet/mysql-64c7df597c
Containers:
  mysql:
    Container ID:   docker://514d3f5af76f5e7ac11f6bf6e36b44ee4012819dc1cef581829a6b5b2ce7c09e
    Image:          mysql:5.7
    Image ID:       docker-pullable://mysql@sha256:1a121f2e7590f949b9ede7809395f209dd9910e331e8372e6682ba4bebcc020b
    Port:           3306/TCP
    Host Port:      0/TCP
    Args:
      --ignore-db-dir=lost+found
    State:          Running
      Started:      Thu, 12 Sep 2019 17:50:19 +0200
    Ready:          True
    Restart Count:  0
    Environment:
      MYSQL_ROOT_PASSWORD:  <set to the key 'password' in secret 'db-root-credentials'>  Optional: false
      MYSQL_USER:           <set to the key 'username' in secret 'db-credentials'>       Optional: false
      MYSQL_PASSWORD:       <set to the key 'password' in secret 'db-credentials'>       Optional: false
      MYSQL_DATABASE:       <set to the key 'name' of config map 'db-conf'>              Optional: false
    Mounts:
      /var/lib/mysql from mysql-persistent-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rgsmp (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  mysql-persistent-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mysql-pv-claim
    ReadOnly:   false
  default-token-rgsmp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-rgsmp
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  49m   default-scheduler  Successfully assigned default/mysql-64c7df597c-s4gbt to minikube
  Normal  Pulled     49m   kubelet, minikube  Container image "mysql:5.7" already present on machine
  Normal  Created    49m   kubelet, minikube  Created container mysql
  Normal  Started    49m   kubelet, minikube  Started container mysql
Name:               to-do-app-backend-8669b5467-hrr9q
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               minikube/192.168.8.160
Start Time:         Thu, 12 Sep 2019 18:27:45 +0200
Labels:             app=to-do-app
                    pod-template-hash=8669b5467
                    tier=backend
Annotations:        <none>
Status:             Running
IP:                 172.17.0.7
Controlled By:      ReplicaSet/to-do-app-backend-8669b5467
Containers:
  to-do-app-backend:
    Container ID:   docker://1eb8453939710aed7a93cddbd5046f49be3382858aa17d5943195207eaeb3065
    Image:          gitim21/credit_repo:1.0
    Image ID:       docker-pullable://gitim21/credit_repo@sha256:1fb2991394fc59f37068164c72263749d64cb5c9fe741021f476a65589f40876
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 12 Sep 2019 18:51:25 +0200
      Finished:     Thu, 12 Sep 2019 18:51:36 +0200
    Ready:          False
    Restart Count:  9
    Environment:
      DB_HOST:      <set to the key 'host' of config map 'db-conf'>         Optional: false
      DB_NAME:      <set to the key 'name' of config map 'db-conf'>         Optional: false
      DB_USERNAME:  <set to the key 'username' in secret 'db-credentials'>  Optional: false
      DB_PASSWORD:  <set to the key 'password' in secret 'db-credentials'>  Optional: false
      DB_PORT:      <set to the key 'port' in secret 'db-credentials'>      Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rgsmp (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-rgsmp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-rgsmp
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  25m                  default-scheduler  Successfully assigned default/to-do-app-backend-8669b5467-hrr9q to minikube
  Normal   Pulled     23m (x5 over 25m)    kubelet, minikube  Container image "gitim21/credit_repo:1.0" already present on machine
  Normal   Created    23m (x5 over 25m)    kubelet, minikube  Created container to-do-app-backend
  Normal   Started    23m (x5 over 25m)    kubelet, minikube  Started container to-do-app-backend
  Warning  BackOff    50s (x104 over 25m)  kubelet, minikube  Back-off restarting failed container
First and foremost, make sure that you fulfill all the requirements described in the article.
When deployment objects (e.g. pods, services) are created, environment variables are injected from the configMaps and secrets created earlier. This deployment uses the image kubernetesdemo/to-do-app-backend, which is created in step one. Make sure you created the configMap and secrets beforehand; otherwise, delete the objects created during the deployment, create the configMap and secret, and then run the deployment config file once again.
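A minimal sketch of those two objects, using the names and keys that the Deployment's env section references (the values here are placeholders, not taken from the tutorial):

kubectl create configmap db-conf \
  --from-literal=host=mysql \
  --from-literal=name=todo_db
kubectl create secret generic db-credentials \
  --from-literal=username=app_user \
  --from-literal=password=app_password \
  --from-literal=port=3306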
Another possibility: if you get the
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
error, it means that the DB isn't reachable at all. This can have one or more of the following causes:
1. IP address or hostname in JDBC URL is wrong.
2. Hostname in JDBC URL is not recognized by the local DNS server.
3. Port number is missing or wrong in JDBC URL.
4. ~~DB server is down.~~
5. DB server doesn't accept TCP/IP connections.
6. DB server has run out of connections.
7. Something in between Java and DB is blocking connections, e.g. a firewall or proxy.
I assume that if your mysql pod is running, your DB server is running as well, so point 4 (~~DB server is down~~) does not apply.
To solve one or the other, follow this advice:
Verify and test them with ping. Refresh DNS or use the IP address in the JDBC URL instead.
Check if it is based on my.cnf of the MySQL DB.
Start the DB once again. Check if mysqld is started without the --skip-networking option.
Restart the DB and fix your code accordingly so that it closes connections in finally.
Disable the firewall and/or configure the firewall/proxy to allow/forward the port.
A similar error is discussed here: communication-error.
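Tying point 3 above to this particular application.yml: the JDBC URL there contains no port at all, so one low-risk change (a guess, not a confirmed fix) is to make it explicit, since the pod already receives DB_PORT:

spring:
  datasource:
    url: jdbc:mysql://${DB_HOST}:${DB_PORT}/${DB_NAME}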

Can't login mysql server deployed in k8s cluster

I am using k8s in Docker Desktop for Mac. I deployed a mysql pod with the config below.
I ran it with: kubectl apply -f mysql.yaml
# secret
apiVersion: v1
kind: Secret
metadata:
  name: mysql
type: Opaque
data:
  # root
  mysql-root-password: cm9vdAo=
---
# configMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-conf
data:
  database: app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      volumes:
      - name: mysql
        persistentVolumeClaim:
          claimName: mysql
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql
              key: mysql-root-password
        - name: MYSQL_DATABASE
          valueFrom:
            configMapKeyRef:
              name: mysql-conf
              key: database
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql
          mountPath: /var/lib/mysql
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# services
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  selector:
    app: mysql
  ports:
  - port: 3306
    targetPort: 3306
After that, it shows OK. Then I tried to connect to the mysql server with the node IP, but that failed.
I then exec'd into the pod and could not log in there either:
☁  gogs-k8s  kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
blog-59fb8cbd44-frmtx    1/1     Running   0          37m
blog-59fb8cbd44-gdskp    1/1     Running   0          37m
blog-59fb8cbd44-qrs8f    1/1     Running   0          37m
mysql-6c794ccb7b-dz9f4   1/1     Running   0          31s
☁  gogs-k8s  kubectl exec mysql-6c794ccb7b-dz9f4 -it bash
root@mysql-6c794ccb7b-dz9f4:/# ls
bin  boot  dev  docker-entrypoint-initdb.d  entrypoint.sh  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
root@mysql-6c794ccb7b-dz9f4:/# mysql -u root -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
root@mysql-6c794ccb7b-dz9f4:/# echo $MYSQL_ROOT_PASSWORD
root
root@mysql-6c794ccb7b-dz9f4:/# mysql -u root -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
Is there any problem with my config file?
Probably you have an invalid base64-encoded password. Try this one:
data:
  mysql-root-password: cm9vdA==
As @Vasily Angapov pointed out, your base64 encoding is wrong.
When you do the following, you are encoding the value root followed by a newline (root\n):
echo "root" | base64
Output:
cm9vdAo=
If you want to remove the newline character you should use the option -n:
echo -n "root" | base64
Output:
cm9vdA==
Even better is to do the following:
echo -n "root" | base64 -w 0
That way base64 will not insert new lines in longer outputs.
Also you can verify if your encoding is right by decoding the encoded text:
echo "cm9vdA==" | base64 --decode
The output should not create a new line.
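Once the value is fixed, re-apply the Secret and recreate the pod, for example (newer kubectl syntax for --dry-run; older versions use plain --dry-run):

kubectl create secret generic mysql \
  --from-literal=mysql-root-password=root \
  --dry-run=client -o yaml | kubectl apply -f -
kubectl delete pod -l app=mysql   # the Deployment recreates it with the new env value

Note that MYSQL_ROOT_PASSWORD only takes effect when MySQL initializes an empty data directory, so if the PersistentVolumeClaim already contains an initialized database, the old password (root plus a trailing newline) remains in place until that volume is wiped.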

Openshift: Change in trigger behavior between 3.6.1 and 3.7.1

I have two Openshift environments that I publish to from a Gitlab pipeline. Let's call the first one DEV and it runs Openshift v3.7.1+c2ce2c0-1. The second one is INT and runs v3.6.1+008f2d5. Recently DEV got upgraded from 3.6.1 to 3.7.1 and after that I noticed a strange change in the behavior of redeployment triggers.
In short, what I see is that existing deployments in the DEV environment are triggered to redeploy with an "Image changed" message when an unchanged deployment config template is applied and while the Docker image also remains unchanged. This means that for example a MongoDB or Jenkins deployment recreates all containers and loses all data with every run of the CI pipeline.
Yes, there is theoretically the possibility of using persistent volumes, but the OpenShift installations are not under my control. The point here is that redeployments happen when neither the image nor the configuration changes.
The command that I am using from Gitlab is this:
oc process -f openshift-mongodb-ephemeral.yml -v MONGODB_DATABASE=mydb -v DATABASE_SERVICE_NAME=mongodb -l template=northbound-mongodb | oc apply -f -
Here is one of the deployment config templates:
apiVersion: v1
kind: Template
labels:
  template: mongodb-ephemeral-template
metadata:
  annotations:
    description: |-
      MongoDB database service, without persistent storage. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/mongodb-container/blob/master/3.2/README.md.
      WARNING: Any data stored will be lost upon pod destruction. Only use this template for testing
    iconClass: icon-mongodb
    openshift.io/display-name: MongoDB (Ephemeral)
    tags: database,mongodb
  creationTimestamp: 2017-03-14T11:25:13Z
  name: mongodb-ephemeral
  resourceVersion: "483"
  selfLink: /oapi/v1/namespaces/openshift/templates/mongodb-ephemeral
  uid: e41b7f8e-08a8-11e7-9120-000d3a266151
objects:
- apiVersion: v1
  kind: Service
  metadata:
    creationTimestamp: null
    name: ${DATABASE_SERVICE_NAME}
  spec:
    ports:
    - name: mongo
      nodePort: 0
      port: 27017
      protocol: TCP
      targetPort: 27017
    selector:
      name: ${DATABASE_SERVICE_NAME}
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    creationTimestamp: null
    name: ${DATABASE_SERVICE_NAME}
  spec:
    replicas: 1
    selector:
      name: ${DATABASE_SERVICE_NAME}
    strategy:
      type: Recreate
    template:
      metadata:
        creationTimestamp: null
        labels:
          name: ${DATABASE_SERVICE_NAME}
      spec:
        containers:
        - capabilities: {}
          env:
          - name: MONGODB_USER
            value: ${MONGODB_USER}
          - name: MONGODB_PASSWORD
            valueFrom:
              secretKeyRef:
                key: database-password
                name: myproject-secrets
          - name: MONGODB_ADMIN_PASSWORD
            valueFrom:
              secretKeyRef:
                key: database-admin-password
                name: myproject-secrets
          - name: MONGODB_DATABASE
            value: ${MONGODB_DATABASE}
          image: ' '
          imagePullPolicy: IfNotPresent
          livenessProbe:
            initialDelaySeconds: 30
            tcpSocket:
              port: 27017
            timeoutSeconds: 1
          name: mongodb
          ports:
          - containerPort: 27017
            protocol: TCP
          readinessProbe:
            exec:
              command:
              - /bin/sh
              - -i
              - -c
              - mongo 127.0.0.1:27017/$MONGODB_DATABASE -u $MONGODB_USER -p $MONGODB_PASSWORD
                --eval="quit()"
            initialDelaySeconds: 3
            timeoutSeconds: 1
          resources:
            limits:
              memory: ${MEMORY_LIMIT}
          securityContext:
            capabilities: {}
            privileged: false
          terminationMessagePath: /dev/termination-log
          volumeMounts:
          - mountPath: /var/lib/mongodb/data
            name: ${DATABASE_SERVICE_NAME}-data
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        volumes:
        - emptyDir:
            medium: ""
          name: ${DATABASE_SERVICE_NAME}-data
    triggers:
    - imageChangeParams:
        automatic: true
        containerNames:
        - mongodb
        from:
          kind: ImageStreamTag
          name: mongodb:${MONGODB_VERSION}
          namespace: ${NAMESPACE}
        lastTriggeredImage: ""
      type: ImageChange
    - type: ConfigChange
  status: {}
parameters:
- description: Maximum amount of memory the container can use.
  displayName: Memory Limit
  name: MEMORY_LIMIT
  required: true
  value: 512Mi
- description: The OpenShift Namespace where the ImageStream resides.
  displayName: Namespace
  name: NAMESPACE
  value: openshift
- description: The name of the OpenShift Service exposed for the database.
  displayName: Database Service Name
  name: DATABASE_SERVICE_NAME
  required: true
  value: mongodb
- description: Name of the MongoDB database accessed.
  displayName: MongoDB Database Name
  name: MONGODB_DATABASE
  required: true
  value: sampledb
- description: Version of MongoDB image to be used (2.4, 2.6, 3.2 or latest).
  displayName: Version of MongoDB Image
  name: MONGODB_VERSION
  required: true
  value: "3.2"
- description: Name of user to access MongoDB.
  displayName: MongoDB user
  name: MONGODB_USER
  required: true
  value: "mongouser"
The oc process and oc apply commands get executed with each run of the Gitlab CI pipeline. The pipeline triggers whenever someone merges into e.g. the develop branch. I would like to keep this behavior, because it guarantees that if someone changes the configuration of MongoDB, Jenkins, etc., those get updated and redeployed automatically (in which case a loss of data is acceptable).
Does anybody know what change in OpenShift could have prompted this change in behavior, and how to achieve the old behavior again?
Credits go to Graham Dumpleton who answered my question, albeit through a comment. Since I did not hear back from him I am leaving this answer myself for completeness' sake.
After removing the entries
creationTimestamp
lastTriggeredImage
status
image
from the deployment config template, the unwanted redeployments stopped. I do not know which entry specifically started triggering the behavior in Openshift 3.7.1 but in any case, these four entries are information generated during an Openshift deployment process and should not be in a template to begin with.
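For illustration, the trigger section of the template with the generated field stripped out looks like this (the rest of the DeploymentConfig stays as shown above):

    triggers:
    - type: ImageChange
      imageChangeParams:
        automatic: true
        containerNames:
        - mongodb
        from:
          kind: ImageStreamTag
          name: mongodb:${MONGODB_VERSION}
          namespace: ${NAMESPACE}
    - type: ConfigChange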

App deployed on Kubernetes cannot be accessed from Internet

I am new to Kubernetes & Docker. I created a simple nodejs application and deployed it on Bluemix Kubernetes, but I am unable to access the application from the internet. The IP & port shown in Kubernetes are not accessible. Can somebody help me?
I tried http://10.76.193.146:31972, but it did not go through. I am not sure if this is the public IP, as it is in the 10.x range.
I also tried the public IP (http://184.173.1.79:31972) shown in the Bluemix Kubernetes cluster (screenshot below), but that too failed.
These are the steps I followed.
Created a nodejs app locally. It ran as desired locally.
// Load the http module to create an http server.
var http = require('http');
// Configure our HTTP server to respond with Hello World to all requests.
var server = http.createServer(function (request, response) {
response.writeHead(200, {"Content-Type": "text/plain"});
response.end("Hello World\n");
});
// Listen on port 8000, IP defaults to 127.0.0.1
server.listen(8000);
// Put a friendly message on the terminal
console.log("Server running at http://127.0.0.1:8000/");
---------- package.json
{
  "name": "helloworld-nodejs",
  "version": "0.0.1",
  "description": "First Docker",
  "main": "app.js",
  "scripts": {
    "start": "PORT=8000 node ./app.js"
  },
  "author": "",
  "license": "ISC"
}
Created a docker container locally and ran the docker. It worked properly
Uploaded the docker container on Bluemix registry as
registry.ng.bluemix.net/testkubernetes/helloworld-nodejs:0.0.1
Created the Pod and Service in Kubernetes, using the following YAML files
---------- Pod YAML file
apiVersion: v1
kind: Pod
metadata:
  name: helloworld-nodejs
  labels:
    name: helloworld-nodejs
spec:
  containers:
  - name: helloworld-nodejs
    image: registry.ng.bluemix.net/testkubernetes/helloworld-nodejs:0.0.1
    ports:
    - containerPort: 8000
---------- Services YAML
apiVersion: v1
kind: Service
metadata:
  name: helloworld-nodejs
  labels:
    name: helloworld-nodejs
spec:
  type: NodePort
  selector:
    name: helloworld-nodejs
  ports:
  - port: 8080
The application gets deployed properly and is also running, which I can confirm from the logs.
Result of the kubectl get services & kubectl get nodes commands:
Since your service's port is different from your pod's containerPort, you will have to specify targetPort in your service.
spec:
  type: NodePort
  selector:
    name: helloworld-nodejs
  ports:
  - port: 8080
    targetPort: 8000
According to the Kubernetes documentation on targetPort, it is the:
Number or name of the port to access on the pods targeted by the
service. .... If this is not specified, the value of the 'port' field
is used (an identity map).
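With targetPort in place, the node port that Kubernetes allocated can be looked up and tested against a worker node's public IP, for example:

kubectl get service helloworld-nodejs -o jsonpath='{.spec.ports[0].nodePort}'
curl http://<worker-node-public-ip>:<node-port>/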