JDBC driver does not work in Kubernetes, fails with timeout - MySQL

I have a Java 11 app with the JDBC driver running together with MySQL 8.0. The app is able to connect to MySQL and execute one SQL statement, but it looks like it never gets a response back.

It looks like a connectivity issue.
First, it would be good to look at the Java program's output.
The first simple checks are at the Kubernetes level, to ensure that the key components are alive:
$ kubectl get deployments
$ kubectl get services
$ kubectl get pods
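If those look healthy, it is also worth confirming that the MySQL Service actually has backing endpoints. A hedged example, assuming the Service is named mysql (substitute your actual Service name):
$ kubectl get endpoints mysql
$ kubectl describe service mysql
An empty ENDPOINTS column usually means the Service selector does not match the labels of the MySQL pod.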
Additional checks could be done from within the container where your Java app is running.
A possible approach is below.
List deployments of your app and their labels:
$ kubectl get deployments --show-labels
NAME READY UP-TO-DATE AVAILABLE AGE LABELS
hello-node 2/2 2 2 1h app=hello-node
Having got the label, you can list the relevant pods and their containers:
$ LABEL=hello-node; kubectl get pods -l app=$LABEL -o custom-columns=POD:metadata.name,CONTAINER:spec.containers[*].name
POD CONTAINER
hello-node-55b49fb9f8-7tbh4 hello-node
hello-node-55b49fb9f8-p7wt6 hello-node
Now it's possible to run basic diagnostic commands from within the Java app container.
Ping might not reach the target, but it is almost always available inside a container and provides a primitive check of DNS resolution.
Services from the same namespace should be reachable via their short DNS name.
Services from other namespaces inside the same Kubernetes cluster should be reachable via their internal FQDN.
$ kubectl exec hello-node-55b49fb9f8-p7wt6 -c hello-node -- ping -c1 hello-node
$ kubectl exec hello-node-55b49fb9f8-p7wt6 -c hello-node -- ping -c1 hello-node.default.svc.cluster.local
$ kubectl exec hello-node-55b49fb9f8-p7wt6 -c hello-node -- mysql -u [username] -p [dbname] -e [query]
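If the mysql client is not installed in the application image, a raw TCP check of the database port serves a similar purpose. A hedged example, assuming the MySQL Service is named mysql and that nc exists in the container (it often does not, in which case install it or use a debug image):
$ kubectl exec hello-node-55b49fb9f8-p7wt6 -c hello-node -- nc -vz mysql 3306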
From here on, the connectivity diagnostics are pretty similar to a bare-metal server, except that you are limited by the tools available inside the container. You can install missing packages into the container as needed.
As soon as you obtain more diagnostic information, you'll get a clue about what to check next.
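It can also help to make the JDBC driver fail fast instead of hanging silently. A hedged sketch, assuming MySQL Connector/J and an illustrative service/database name; connectTimeout and socketTimeout are in milliseconds:
jdbc:mysql://mysql.default.svc.cluster.local:3306/mydb?connectTimeout=5000&socketTimeout=30000
With these set, a stalled connection surfaces as a timeout exception in the Java program output instead of an indefinite hang.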

Related

gcloud: command not found when starting Cloud SQL Auth proxy with docker and container optimised OS

I'm trying to set-up a Cloud SQL Auth proxy with a Cloud SQL for MySQL instance.
I'm following this guide but without success.
So I'm creating a new VM instance. Once it has been created, I'm running the following command in Cloud Shell:
gcloud beta compute ssh --zone "europe-west2-c" "nameinstance" --tunnel-through-iap --project "my_project"
From what I understand, this allows me to connect to my instance. Then I'm running the following command:
docker pull gcr.io/cloudsql-docker/gce-proxy:1.19.1
All good. Then I'm kind of lost: when entering gcloud sql instances describe Cloud_SQL_instance_name I get the following error: gcloud: command not found
And when entering docker run -d \\ -p 127.0.0.1:3306:3306 \\ gcr.io/cloudsql-docker/gce-proxy:1.19.1 /cloud_sql_proxy \\ -instances=sql_connection_name=tcp:0.0.0.0:3306 I get the following error: docker: invalid reference format.
Ultimately, if I'm right, I should be able to successfully execute the following command: mysql -u USERNAME -p --host 127.0.0.1
The target of Container-Optimized OS (COS) is simple: run containers. That's all. All the other capabilities of Linux have been deactivated, to keep the kernel small, to reduce the attack surface, and to limit the points of failure (from third-party binaries, like gcloud).
Thus, run your tooling in containers with docker (or containerd).
# interactive mode
docker run -ti google/cloud-sdk:latest gcloud version
# Script mode
docker run --entrypoint gcloud google/cloud-sdk:latest version
This works as-is in a startup script. If you log into the VM and want to run these commands yourself, add sudo in front to have permission to run the binaries.
So you will be able to run the Cloud SQL proxy in a container, gcloud in a container, and also the MySQL client in a container. Forget about running anything outside a container (i.e. without a docker run command), and remember to map the correct ports when you run your containers; see the sketch just below.
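For example, a hedged sketch of putting the pieces together on COS; the instance connection name, key file path, and MySQL image tag below are assumptions, not values from the question:
# Run the Cloud SQL proxy as a container, publishing port 3306 on the host loopback
sudo docker run -d -p 127.0.0.1:3306:3306 \
  -v /var/keys/key.json:/config/key.json \
  gcr.io/cloudsql-docker/gce-proxy:1.19.1 /cloud_sql_proxy \
  -instances=PROJECT:REGION:INSTANCE=tcp:0.0.0.0:3306 \
  -credential_file=/config/key.json
# Connect with a containerized MySQL client through the host network
sudo docker run --rm -it --network=host mysql:8.0 mysql -u USERNAME -p --host 127.0.0.1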

Creating a MySQL cluster, using mysql-server docker containers, on multiple servers

I'm trying to create a MySQL cluster of 3 nodes using mysql-server Docker containers.
I have 3 separate cloud instances with Docker set up on all 3 of them. Each server will run only 1 container, to achieve high availability once in the cluster.
I start the containers on all 3 servers, individually, with the command
docker run --name=db -p 3301:3306 -v db:/var/lib/mysql -d mysql/mysql-server
I'm mapping the container's port 3306 to the server's port 3301. I've also created a new user 'clusteradmin' for remote access.
Next, from MySQL Shell, I ran the following command for all 3 servers:
dba.configureInstance('clusteradmin@serverIp:3301')
I get a similar message for all of them.
Note that it says 'This instance reports its own address as 39xxxxxxxxxx:3306'.
Next, I create a cluster on one of the servers successfully. But when adding the other 2 servers to this cluster, I'm getting the following error
On checking the logs for that particular server, I see the following lines
It says 'peer address a9yyyyyyyyyy:33061 is not valid'. This is because the containers are running on different servers, so the container ID used as the hostname is not recognised by containers on the other servers.
I tried many options, but to no avail. One method was to use the report-host and report-port options when starting the container, like so:
docker run --name=db2 -p 3301:3306 -v db2:/var/lib/mysql -d mysql/mysql-server --report-host=139.59.11.215 --report-port=3301
But the issue with this approach is that, during dba.configureInstance(), it wants to update the port to the default value and throws an error like so
If anybody has managed to create such a cluster of mysql-server containers running on different servers, I would really appreciate pointers in this regard.
I have gone over the documentation and source code but have not found an explanation of why listening on and advertising different ports is problematic.
I have solved the problem by using --port 3301 when invoking mysql-server:
docker run --name=db2 -p 3301:3301 -v db2:/var/lib/mysql -d mysql/mysql-server --report-host=139.59.11.215 --port 3301
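With the server itself listening on 3301, the address each instance reports matches what the other nodes can actually reach. A hedged sketch of the remaining MySQL Shell steps, reusing the user and IP from the question (the second address is illustrative):
dba.configureInstance('clusteradmin@139.59.11.215:3301')
cluster = dba.createCluster('myCluster')
cluster.addInstance('clusteradmin@<other-server-ip>:3301')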

Monitor the log of a pod with a dynamic name

I need to automate monitoring the logs of an app's pods.
Monitoring a pod's log can be done using the oc CLI:
oc log -f my-app-5-43j
However, the pod's name changes with each deployment. If I want to automate the monitoring, like running a cron job that keeps tailing the log even after another deployment, how should I do it?
Will Gordon already commented the solution, so I'll provide more practical usage for your understanding.
If you deploy your pod using a DeploymentConfig, DaemonSet, and so on, you can see the pod's logs without specifying a pod name, as follows.
# oc logs -f dc/<your deploymentConfig name>
# oc logs -f ds/<your daemonset name>
Or you can get the first pod's name dynamically using the jsonpath output option to see its log.
# oc logs -f $(oc get pod -o jsonpath='{.items[0].metadata.name}')
If you can identify the pod with a specific label, you can use the -l option as well.
# oc logs -f $(oc get pod -l app=database -o jsonpath='{.items[0].metadata.name}')
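To keep following the log across redeployments, for example from a cron job, a hedged sketch of a wrapper loop that re-resolves the newest pod whenever the stream ends (it reuses the app=database label from the example above):
while true; do
  oc logs -f $(oc get pod -l app=database -o jsonpath='{.items[0].metadata.name}') || true
  sleep 5
done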

Use short lived token to push docker image to GCP

I have a service account and the key JSON file contents in my process. I'm trying to spawn "docker push gcr.io/my-project/my-image" to upload images to Container Registry.
I tried docker login -u _json_key -p "$(cat keyfile.json)" https://[HOSTNAME] from the Advanced Authentication tutorial, which reports success during login, but docker push still returns the error:
You do not currently have an active account selected.
Please run:
$ gcloud auth login
Ideally, I would like to trigger docker push without configuring the gcloud SDK. I also don't want to store the key JSON contents in a file; I'd like to keep them in process memory.
The correct command to run for docker clients 18.03 and newer is gcloud auth configure-docker.
If you read the fine print for your command docker login -u _json_key -p "$(cat keyfile.json)" https://[HOSTNAME], it is described as being for older Docker clients, e.g. several years old. It is not the correct command to run today.
With the constant improvements, new features, Kubernetes, etc., you do not want to be running old commands or configurations.
gcloud auth configure-docker
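For completeness, since the question asks to avoid writing the key to disk, a hedged sketch of the docker login variant that reads the password from stdin; the KEY_JSON variable and the gcr.io host are assumptions, substitute your registry hostname:
echo "$KEY_JSON" | docker login -u _json_key --password-stdin https://gcr.io
docker push gcr.io/my-project/my-image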

Kubernetes (kubectl) get running pods

I am trying to get the first pod from within a deployment (filtered by labels) with status Running. Currently I could only achieve the following, which just gives me the first pod within a deployment (filtered by labels), and not necessarily a running pod; it might, for example, also be a terminating one:
kubectl get pod -l "app=myapp" -l "tier=webserver" -l "namespace=test"
-o jsonpath="{.items[0].metadata.name}"
How is it possible to
a) get only a list of "Running" pods (couldn't find anything here or on Google), and
b) select the first one from that list? (That's what I'm currently doing.)
Regards
Update: I already tried the link posted in the comments earlier with the following:
kubectl get pod -l "app=ms-bp" -l "tier=webserver" -l "namespace=test"
-o json | jq -r '.items[] | select(.status.phase = "Running") | .items[0].metadata.name'
The result is 4x "null", although there are 4 running pods.
Edit2: Resolved - see comments
Since kubectl 1.9 you have the option to pass a --field-selector argument (see https://github.com/kubernetes/kubernetes/pull/50140). E.g.
kubectl get pod -l app=yourapp --field-selector=status.phase==Running -o jsonpath="{.items[0].metadata.name}"
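For completeness, the jq variant from the update can also be fixed; a hedged sketch (the original used jq's assignment operator = instead of the comparison == and re-indexed .items inside the per-pod pipeline, which is why it printed null):
kubectl get pod -l app=yourapp -o json | jq -r '[.items[] | select(.status.phase == "Running")][0].metadata.name'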
Note, however, that for reasonably recent kubectl versions many reasons to find a running pod are moot, because a lot of commands that expect a pod also accept a deployment or service and automatically select a corresponding pod. To quote from the documentation:
$ echo exec logs port-forward | xargs -n1 kubectl help | grep -C1 'service\|deploy\|job'
# Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default
kubectl exec deploy/mydeployment -- date
# Get output from running 'date' command from the first pod of the service myservice, using the first container by default
kubectl exec svc/myservice -- date
--
# Return snapshot logs from first container of a job named hello
kubectl logs job/hello
# Return snapshot logs from container nginx-1 of a deployment named nginx
kubectl logs deployment/nginx -c nginx-1
--
Use resource type/name such as deployment/mydeployment to select a pod. Resource type defaults to 'pod' if omitted.
--
# Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment
kubectl port-forward deployment/mydeployment 5000 6000
# Listen on port 8443 locally, forwarding to the targetPort of the service's port named "https" in a pod selected by the service
kubectl port-forward service/myservice 8443:https
(Note logs also accepts a service, even though an example is omitted in the help.)
The selection algorithm favors "active pods" for which a main criterion is having a status of "Running" (see https://github.com/kubernetes/kubectl/blob/2d67b5a3674c9c661bc03bb96cb2c0841ccee90b/pkg/polymorphichelpers/attachablepodforobject.go#L51).