I am trying to get the first pod with status Running from within a deployment (filtered by labels). Currently I could only achieve the following, which just gives me the first pod within a deployment (filtered by labels), and not necessarily a running one; it might, for example, be a terminating pod:
kubectl get pod -l "app=myapp" -l "tier=webserver" -l "namespace=test"
-o jsonpath="{.items[0].metadata.name}"
How is it possible to
a) get a list of only "Running" pods (I couldn't find anything here or on Google), and
b) select the first one from that list? (That's what I'm currently doing.)
Regards
Update: I already tried the link posted in the comments earlier with the following:
kubectl get pod -l "app=ms-bp" -l "tier=webserver" -l "namespace=test"
-o json | jq -r '.items[] | select(.status.phase = "Running") | .items[0].metadata.name'
the result is 4x "null" - there are 4 running pods.
Edit2: Resolved - see comments
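For reference, a jq filter that should work (a sketch, not necessarily the exact resolution from the comments): in jq, = is an assignment, so select(.status.phase = "Running") lets every item through, and .items[0] does not exist on the individual items, hence the nulls. Comparing with == and indexing the filtered array fixes both:
kubectl get pod -l "app=ms-bp" -l "tier=webserver" -l "namespace=test" -o json | jq -r '[.items[] | select(.status.phase == "Running")][0].metadata.name'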
Since kubectl 1.9 you have the option to pass a --field-selector argument (see https://github.com/kubernetes/kubernetes/pull/50140). E.g.
kubectl get pod -l app=yourapp --field-selector=status.phase==Running -o jsonpath="{.items[0].metadata.name}"
Note, however, that with reasonably recent kubectl versions many reasons for finding a running pod are moot, because a lot of commands that expect a pod also accept a deployment or service and automatically select a corresponding pod. To quote from the documentation:
$ echo exec logs port-forward | xargs -n1 kubectl help | grep -C1 'service\|deploy\|job'
# Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default
kubectl exec deploy/mydeployment -- date
# Get output from running 'date' command from the first pod of the service myservice, using the first container by default
kubectl exec svc/myservice -- date
--
# Return snapshot logs from first container of a job named hello
kubectl logs job/hello
# Return snapshot logs from container nginx-1 of a deployment named nginx
kubectl logs deployment/nginx -c nginx-1
--
Use resource type/name such as deployment/mydeployment to select a pod. Resource type defaults to 'pod' if omitted.
--
# Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment
kubectl port-forward deployment/mydeployment 5000 6000
# Listen on port 8443 locally, forwarding to the targetPort of the service's port named "https" in a pod selected by the service
kubectl port-forward service/myservice 8443:https
(Note that logs also accepts a service, even though an example is omitted from the help.)
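For instance (myservice being a placeholder name):
kubectl logs svc/myservice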
The selection algorithm favors "active pods" for which a main criterion is having a status of "Running" (see https://github.com/kubernetes/kubectl/blob/2d67b5a3674c9c661bc03bb96cb2c0841ccee90b/pkg/polymorphichelpers/attachablepodforobject.go#L51).
Related
I want to obtain the logs for all the pods that I currently have.
If I do:
oc logs $(oc get pods -o custom-columns=POD:.metadata.name --no-headers)
I get:
error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'.
POD or TYPE/NAME is a required argument for the logs command
Nevertheless, if I just run oc get pods -o custom-columns=POD:.metadata.name --no-headers I get a correct list with just the names of the pods.
One possibility would be to loop over the returned Pod names:
for p in $(oc get pods -o custom-columns=POD:.metadata.name --no-headers); do
  oc logs "$p"
done
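Alternatively, the same can be done without an explicit loop by piping the names into xargs (a sketch; -n1 invokes oc logs once per pod name):
oc get pods -o custom-columns=POD:.metadata.name --no-headers | xargs -n1 oc logs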
Note that when you want to get the logs for all Pods of a single DeploymentConfig you can directly use that as an argument for oc logs:
oc logs dc/myapplication
In bash I am trying to use a variable in the JSON string for an OpenShift patch CLI command:
OS_OBJECT='sample.k8s.io/element'
VALUE='5'
oc patch quota "my-object" -p '{"spec":{"hard":{"$OS_OBJECT":"$VALUE"}}}'
But that gives the error:
Error from server: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'
indicating that the variable is not substituted/expanded.
If I write it explicitly it works:
oc patch quota "my-object" -p '{"spec":{"hard":{"sample.k8s.io/element":"5"}}}'
Any suggestions on how to include a variable in the JSON string?
EDIT: Based on below answer I have also tried:
oc patch quota "my-object" -p "{'spec':{'hard':{'$OS_OBJECT':'$VALUE'}}}"
but that gives the error:
Error from server (BadRequest): invalid character '\'' looking for beginning of object key string
In single quotes everything is preserved literally by bash; you have to use double quotes for string interpolation to work (and the escape sequence \" for the inner double quotes, since JSON itself requires double quotes, which is also why the single-quoted variant from your edit is rejected by the server).
Try this out:
oc patch quota "my-object" -p "{\"spec\":{\"hard\":{\"$OS_OBJECT\":\"$VALUE\"}}}"
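If you would rather avoid escaping every quote, an alternative sketch (assuming $OS_OBJECT and $VALUE contain no characters that need JSON escaping) is to build the payload with printf first:
PATCH=$(printf '{"spec":{"hard":{"%s":"%s"}}}' "$OS_OBJECT" "$VALUE")
oc patch quota "my-object" -p "$PATCH"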
Instead of oc patch, I would prefer oc apply on your templates. Templates are the best way to configure OpenShift/Kubernetes objects; they can be stored in Git for version control, following the Infrastructure-as-Code approach.
I am not the admin for my OpenShift cluster, so I can't access the resource quotas there; hence I am suggesting a way in Kubernetes, but the same can be applied in OpenShift too, except that the CLI changes from kubectl to oc.
Let's take a simple resource quota template:
$ cat resourcequota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-quota
spec:
  hard:
    cpu: "1"
    memory: 2Gi
    pods: "10"
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: PriorityClass
      values: ["high"]
Now configure your quota using kubectl apply on your template. This creates the resource that is configured in the template, in our case a resourcequota:
$ kubectl apply -f resourcequota.yaml
resourcequota/demo-quota created
$ kubectl get quota
NAME         CREATED AT
demo-quota   2019-11-19T12:23:37Z
$ kubectl describe quota demo-quota
Name: demo-quota
Namespace: default
Resource  Used  Hard
--------  ----  ----
cpu       0     1
memory    0     2Gi
pods      0     10
As you're looking for an update to the resource quota using patch, I would suggest editing the template and executing kubectl apply again to update the object.
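For example, the edited template matching the output below would only change the values under hard:
$ cat resourcequota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-quota
spec:
  hard:
    cpu: "2"
    memory: 4Gi
    pods: "20"
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: PriorityClass
      values: ["high"]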
$ kubectl apply -f resourcequota.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
resourcequota/demo-quota configured
$ kubectl describe quota demo-quota
Name: demo-quota
Namespace: default
Resource  Used  Hard
--------  ----  ----
cpu       0     2
memory    0     4Gi
pods      0     20
Similarly, you can execute oc apply for your operations, as oc patch is not as user-friendly to configure.
I have a Java 11 app with a JDBC driver running together with MySQL 8.0. The app is able to connect to MySQL and execute one SQL statement, but it looks like it never gets a response back.
It looks like a connectivity issue.
First, it'd be good to look at the Java program's output.
First simple checks are at the Kubernetes level to ensure that key components are alive:
$ kubectl get deployments
$ kubectl get services
$ kubectl get pods
Additional checks could be done from within the container where your Java app is running.
A possible approach is below.
List deployments of your app and their labels:
$ kubectl get deployments --show-labels
NAME         READY   UP-TO-DATE   AVAILABLE   AGE   LABELS
hello-node   2/2     2            2           1h    app=hello-node
Having got the label, you can list the relevant pods and their containers:
$ LABEL=hello-node; kubectl get pods -l app=$LABEL -o custom-columns=POD:metadata.name,CONTAINER:spec.containers[*].name
POD                           CONTAINER
hello-node-55b49fb9f8-7tbh4   hello-node
hello-node-55b49fb9f8-p7wt6   hello-node
Now it's possible to run basic diagnostic commands from within the Java app container.
Ping might not reach the target, but it is almost always available in a container and performs a primitive check of DNS resolution.
Services in the same namespace should be reachable via their short DNS name.
Services in other namespaces inside the same Kubernetes cluster should be reachable via their internal FQDN.
$ kubectl exec hello-node-55b49fb9f8-p7wt6 -c hello-node -- ping -c1 hello-node
$ kubectl exec hello-node-55b49fb9f8-p7wt6 -c hello-node -- ping -c1 hello-node.default.svc.cluster.local
$ kubectl exec hello-node-55b49fb9f8-p7wt6 -c hello-node -- mysql -u [username] -p [dbname] -e [query]
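Since ClusterIP services usually don't answer ping, a TCP-level check is more conclusive if netcat is available in the image (a sketch; the mysql service name and port 3306 are assumptions):
$ kubectl exec hello-node-55b49fb9f8-p7wt6 -c hello-node -- nc -zv mysql 3306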
From here on, the connectivity diagnostics are pretty similar to those on a bare-metal server, except that you are limited to the tools available inside the container. You might install missing packages into the container as needed.
As soon as you obtain more diagnostic information, you'll get a clue about what to check next.
I need to automate monitoring the logs of an app's pods.
Monitoring a pod's log can be done using the oc CLI:
oc logs -f my-app-5-43j
However, the pod's name changes dynamically across deployments. If I want to automate the monitoring, like running a cron job that continually tails the log even after another deployment, how should I do it?
Will Gordon already commented the solution, so I'll provide more practical usage examples for your understanding.
If you deploy your pod using a deploymentConfig, daemonSet, and so on, you can see the pod's logs without specifying a pod name, as follows.
# oc logs -f dc/<your deploymentConfig name>
# oc logs -f ds/<your daemonset name>
Or you can get the first pod's name dynamically using the jsonpath output option to see its log.
# oc logs -f $(oc get pod -o jsonpath='{.items[0].metadata.name}')
If you can identify the pod by a specific label, you can use the -l option as well.
# oc logs -f $(oc get pod -l app=database -o jsonpath='{.items[0].metadata.name}')
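To make sure the first pod is actually running rather than, say, terminating, you can additionally filter by phase, assuming your client supports field selectors:
# oc logs -f $(oc get pod -l app=database --field-selector=status.phase==Running -o jsonpath='{.items[0].metadata.name}')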
I'm trying to create a secret on OpenShift v3.3.0 using:
oc create secret generic my-secret --from-file=application-cloud.properties=src/main/resources/application-cloud.properties -n my-project
Because I created the same secret earlier, I get this error message:
Error from server: secrets "my-secret" already exists
I looked at oc, oc create and oc create secret options and could not find an option to overwrite the secret when creating it.
I then tried to delete the existing secret with oc delete. All the commands listed below return either No resources found or a syntax error.
oc delete secrets -l my-secret -n my-project
oc delete secret -l my-secret -n my-project
oc delete secrets -l my-secret
oc delete secret -l my-secret
oc delete pods,secrets -l my-project
oc delete pods,secrets -l my-secret
oc delete secret generic -l my-secret
Do you know how to delete a secret or overwrite a secret upon creation using the OpenShift console or the command line?
"my-secret" is the name of the secret, not a label; the -l flag expects a label selector, which is why your commands returned No resources found. You should delete the secret by name:
oc delete secret my-secret
Add the -n option if you are not in the project where the secret was created:
oc delete secret my-secret -n <namespace>
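If you would rather overwrite the secret in a single step instead of deleting and recreating it, a common pattern (a sketch; requires a client that supports --dry-run and oc apply) is to render the YAML without submitting it and pipe it into apply:
oc create secret generic my-secret --from-file=application-cloud.properties=src/main/resources/application-cloud.properties --dry-run -o yaml | oc apply -n my-project -f -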
I hope by this time you have the answer ready; I'm just sharing this in case it helps others.
As of today, here are the details of the CLI and OpenShift versions I am working with:
$ oc version
oc v3.6.173.0.5
kubernetes v1.6.1+5115d708d7
features: Basic-Auth
Server <SERVER-URL>
openshift v3.11.0+ec8630f-265
kubernetes v1.11.0+d4cacc0
Let's take a simple secret with a key-value pair generated from a file; you'll see the advantage of generating it via a file shortly.
$ echo -n "password" | base64
cGFzc3dvcmQ=
We'll create a secret with this value:
$ cat clientSecret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  clienttoken: cGFzc3dvcmQ=
$ oc apply -f clientSecret.yaml
secret "test-secret" created
Let's change the password and update it in the YAML file.
$ echo -n "change-password" | base64
Y2hhbmdlLXBhc3N3b3Jk
$ cat clientSecret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  clienttoken: Y2hhbmdlLXBhc3N3b3Jk
By the definition of the oc create command, it creates a resource and throws an error if one is already found. So this command is not a fit for updating the configuration of a resource, in our case a secret.
$ oc create --help
Create a resource by filename or stdin
To make life easier, OpenShift provides the oc apply command to apply a configuration to a resource if there is a change. This command can also be used to create a resource, which helps a lot during automated deployments.
$ oc apply --help
Apply a configuration to a resource by filename or stdin.
$ oc apply -f clientSecret.yaml
secret "test-secret" configured
When you check the secret in the UI, the new/updated password appears on the console.
So, as you may have noticed, the first apply resulted in created (secret "test-secret" created), while the subsequent apply resulted in configured (secret "test-secret" configured).
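The same flow also works for secrets generated with --from-file, by letting oc create render the YAML without submitting it (a sketch; ./token.txt is a placeholder file and --dry-run must be supported by your client):
$ oc create secret generic test-secret --from-file=clienttoken=./token.txt --dry-run -o yaml | oc apply -f -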