List all PVCs of an OpenShift cluster

How do I list, from the command line, all PVCs of an OpenShift cluster?
From my understanding, a PVC is scoped to the namespace/project in which it was created.
Listing PVCs therefore implies being connected to (or at least specifying) a namespace.
The best I came up with is:
$ for i in $(oc get project -o name|cut -d"/" -f 2);do echo "Project: $i";oc get pvc -n $i;done
Is there a better/cleaner/quicker way?

As an admin, try running:
oc get pvc --all-namespaces
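If you also want sizes and status in one compact view, a custom-columns variant along these lines should work (the column names are arbitrary):
oc get pvc --all-namespaces -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,STORAGE:.spec.resources.requests.storage,STATUS:.status.phase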

Related

Logs for list of pods

I want to obtain the logs for all the pods that I currently have.
If I do:
oc logs $(oc get pods -o custom-columns=POD:.metadata.name --no-headers)
I get:
error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'.
POD or TYPE/NAME is a required argument for the logs command
Nevertheless, if I just run oc get pods -o custom-columns=POD:.metadata.name --no-headers I get a correct list with just the names of the pods.
One possibility would be to loop over the returned Pod names:
for p in $(oc get pods -o custom-columns=POD:.metadata.name --no-headers); do
  oc logs "$p"
done
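When tailing several pods in one go, it helps to print a header per pod so you can tell the outputs apart; a small variation on the loop above:
for p in $(oc get pods -o custom-columns=POD:.metadata.name --no-headers); do
  echo "=== $p ==="
  oc logs "$p"
done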
Note that when you want to get the logs for all Pods of a single DeploymentConfig you can directly use that as an argument for oc logs:
oc logs dc/myapplication

JDBC driver does not work in Kubernetes, fails with timeout

I have a Java 11 app with a JDBC driver running together with MySQL 8.0. The app is able to connect to MySQL and execute one SQL statement, but it looks like it never gets a response back?
It looks like a connectivity issue.
At first it'd be good to look at the Java program output.
First simple checks are at the Kubernetes level to ensure that key components are alive:
$ kubectl get deployments
$ kubectl get services
$ kubectl get pods
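If the database is reached through a Service (the name mysql below is a placeholder), it's also worth checking that the Service has endpoints at all; an empty list means no pod matches its selector:
$ kubectl get endpoints mysql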
Additional checks could be done from within the container where your Java app is running.
A possible approach is below.
List deployments of your app and their labels:
$ kubectl get deployments --show-labels
NAME         READY   UP-TO-DATE   AVAILABLE   AGE   LABELS
hello-node   2/2     2            2           1h    app=hello-node
Having got the label, you can list the relevant pods and their containers:
$ LABEL=hello-node; kubectl get pods -l app=$LABEL -o custom-columns=POD:metadata.name,CONTAINER:spec.containers[*].name
POD                           CONTAINER
hello-node-55b49fb9f8-7tbh4   hello-node
hello-node-55b49fb9f8-p7wt6   hello-node
Now it's possible to run basic diagnostic commands from within the Java app container.
Ping might not reach the target, but it is almost always available in a container and provides a primitive check of DNS resolution.
Services from the same namespace should be available via short DNS name.
Services from other namespaces inside of the same Kubernetes cluster should be available via internal FQDN.
$ kubectl exec hello-node-55b49fb9f8-p7wt6 -c hello-node -- ping -c1 hello-node
$ kubectl exec hello-node-55b49fb9f8-p7wt6 -c hello-node -- ping -c1 hello-node.default.svc.cluster.local
$ kubectl exec hello-node-55b49fb9f8-p7wt6 -c hello-node -- mysql -u [username] -p [dbname] -e [query]
From here on, connectivity diagnostics are much the same as on a bare-metal server, except that you are limited to the tools available inside the container. You can install missing packages into the container as needed.
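For example, if the mysql client is missing from the image, bash's built-in /dev/tcp gives a bare TCP reachability check. A sketch, assuming the MySQL Service is named mysql on port 3306 and that bash and timeout exist in the image:
$ kubectl exec hello-node-55b49fb9f8-p7wt6 -c hello-node -- bash -c 'timeout 5 bash -c "</dev/tcp/mysql/3306" && echo open || echo closed'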
As soon as you obtain more diagnostic information, you'll get a clue what to check next.

Monitor the log of a pod with a dynamic name

I need to automate monitoring the logs of an app's pods.
Monitoring a pod's log can be done using the oc CLI:
oc logs -f my-app-5-43j
However, the pod's name changes across deployments. If I want to automate the monitoring, like running a cron job that keeps tailing the log even after another deployment, how should I do this?
Will Gordon already commented with the solution, so I'll provide more practical usage for your understanding.
If you deploy your pod using a DeploymentConfig, DaemonSet, and so on, you can see the pod's logs without specifying a pod name, as follows.
# oc logs -f dc/<your deploymentConfig name>
# oc logs -f ds/<your daemonset name>
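On clusters where the app runs as a plain Deployment rather than a DeploymentConfig, the same form should work too:
# oc logs -f deployment/<your deployment name>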
Or you can get the first pod's name dynamically using the jsonpath output option to see its log.
# oc logs -f $(oc get pod -o jsonpath='{.items[0].metadata.name}')
If you can identify the pod with a specific label, you can use the -l option as well.
# oc logs -f $(oc get pod -l app=database -o jsonpath='{.items[0].metadata.name}')
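To keep following logs across redeployments (the cron-job case in the question), a small wrapper loop is one option: it re-resolves the pod name and re-attaches whenever the stream ends. A minimal sketch, assuming the app=database label from above:
while true; do
  POD=$(oc get pod -l app=database -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
  # Follow the current pod's logs; when the pod goes away, loop and re-resolve
  [ -n "$POD" ] && oc logs -f "$POD"
  sleep 5
done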

Kubernetes (kubectl) get running pods

I am trying to get the first pod from within a deployment (filtered by labels) with status Running. Currently I can only achieve the following, which just gives me the first pod within a deployment (filtered by labels), not necessarily a running one, e.g. it might also be a terminating one:
kubectl get pod -l "app=myapp" -l "tier=webserver" -l "namespace=test"
-o jsonpath="{.items[0].metadata.name}"
How is it possible to
a) get a list of only RUNNING pods (I couldn't find anything here or on Google), and
b) select the first one from that list? (That's what I'm currently doing.)
Update: I already tried the link posted in the comments earlier with the following:
kubectl get pod -l "app=ms-bp" -l "tier=webserver" -l "namespace=test"
-o json | jq -r '.items[] | select(.status.phase = "Running") | .items[0].metadata.name'
the result is 4x "null" - there are 4 running pods.
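For reference, the nulls come from two problems in that filter: = assigns instead of comparing (== compares), and .items[0] is applied again to each already-unwrapped pod. A corrected filter along these lines (same labels) would be:
kubectl get pod -l "app=ms-bp" -l "tier=webserver" -l "namespace=test" \
  -o json | jq -r '[.items[] | select(.status.phase == "Running")][0].metadata.name'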
Edit2: Resolved - see comments
Since kubectl 1.9 you have the option to pass a --field-selector argument (see https://github.com/kubernetes/kubernetes/pull/50140). E.g.
kubectl get pod -l app=yourapp --field-selector=status.phase==Running -o jsonpath="{.items[0].metadata.name}"
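In a script you can combine this with other commands, for instance to open a shell in the first running pod (app=yourapp is the placeholder label from above):
POD=$(kubectl get pod -l app=yourapp --field-selector=status.phase==Running -o jsonpath="{.items[0].metadata.name}")
kubectl exec -it "$POD" -- sh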
Note, however, that with reasonably recent kubectl versions many reasons to find a running pod are moot, because a lot of commands that expect a pod also accept a deployment or service and automatically select a corresponding pod. To quote from the documentation:
$ echo exec logs port-forward | xargs -n1 kubectl help | grep -C1 'service\|deploy\|job'
# Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default
kubectl exec deploy/mydeployment -- date
# Get output from running 'date' command from the first pod of the service myservice, using the first container by default
kubectl exec svc/myservice -- date
--
# Return snapshot logs from first container of a job named hello
kubectl logs job/hello
# Return snapshot logs from container nginx-1 of a deployment named nginx
kubectl logs deployment/nginx -c nginx-1
--
Use resource type/name such as deployment/mydeployment to select a pod. Resource type defaults to 'pod' if omitted.
--
# Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment
kubectl port-forward deployment/mydeployment 5000 6000
# Listen on port 8443 locally, forwarding to the targetPort of the service's port named "https" in a pod selected by the service
kubectl port-forward service/myservice 8443:https
(Note logs also accepts a service, even though an example is omitted in the help.)
The selection algorithm favors "active pods" for which a main criterion is having a status of "Running" (see https://github.com/kubernetes/kubectl/blob/2d67b5a3674c9c661bc03bb96cb2c0841ccee90b/pkg/polymorphichelpers/attachablepodforobject.go#L51).

OpenShift: how to edit scc non-interactively?

I am experimenting with OpenShift/Minishift, and I find myself having to run:
oc edit scc privileged
and add:
- system:serviceaccount:default:router
So I can expose the pods. Is there a way to do it in a script?
I know oc adm has some commands for policy manipulation, but I can't figure out how to add this line.
You can achieve this using the oc patch command with type json. The snippet below will add a new item to the array before the 0th element. You can try it out with a fake "bla" value first.
oc patch scc privileged --type=json -p '[{"op": "add", "path": "/users/0", "value":"system:serviceaccount:default:router"}]'
The --type=json flag interprets the provided patch as a JSON Patch operation. Unfortunately, oc patch --help doesn't provide any example for the json patch type. Luckily, example usage can be found in the Kubernetes docs: kubectl patch
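To confirm the entry landed, you can read the users array back, e.g.:
oc get scc privileged -o jsonpath='{.users}'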
I found an example piping to sed and adapted it to Ruby so I can easily edit the data structure:
oc get scc privileged -o json |\
ruby -rjson -e 'i = JSON.load(STDIN.read); i["users"].push "system:serviceaccount:default:router"; puts i.to_json ' |\
oc replace scc -f -
Here is a quick and dirty script to get started with Minishift.
The easiest way to add users to and remove them from SCCs on the command line is with the oc adm policy commands:
oc adm policy add-scc-to-user <scc_name> <user_name>
For more info, see this section.
So for your specific use-case, it would be:
oc adm policy add-scc-to-user privileged system:serviceaccount:default:router
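The inverse operation exists as well, should you need to undo it:
oc adm policy remove-scc-from-user privileged system:serviceaccount:default:router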
I'm surprised it's needed though. I use "oc cluster up" normally, but testing with a recent Minishift, it's already added out of the box:
$ minishift start
$ eval $(minishift oc-env)
$ oc login -u system:admin
$ oc get scc privileged -o yaml | grep system:serviceaccount:default:router
- system:serviceaccount:default:router
$ minishift version
minishift v1.14.0+1ec5877
$ oc version
openshift v3.7.1+a8deba5-34