OpenShift pod - Ready state shows False

When I run the following oc command:
oc describe pod mypod
I see this status under the Conditions section:
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   True
  PodScheduled      True
What could cause Ready to be shown as False?
The pod has been up and running for more than a month.
I can also see that in the service this pod is associated with, there is an X mark under the "receiving traffic" column (I think this means the pod is not taking traffic).
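One thing worth checking, given that ContainersReady is True while Ready is False: the Ready condition also requires every condition listed under the pod's readinessGates to be True, so a readiness gate stuck at False (often one managed by a load balancer or service-mesh controller) produces exactly this pattern; a pod that is not Ready is removed from its Service's endpoints, which would match the X under "receiving traffic". A way to inspect this (the pod name is assumed):

```
# List any readiness gates the pod declares, then the full condition set;
# a gate condition that is False or missing keeps Ready=False even though
# all containers are ready.
oc get pod mypod -o jsonpath='{.spec.readinessGates}{"\n"}'
oc get pod mypod -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
```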


How do I use the MySQL operator with minikube

I went through the MySQL operator documentation on GitHub and followed it, but I kept getting these errors:
Patching failed with inconsistencies: (('remove', ('status', 'kopf'), {'dummy': '2022-07-14T15:51:30.145945'}, None),)
0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
Back-off restarting failed container
Here are the Kubernetes files used:
kubectl apply -f https://raw.githubusercontent.com/mysql/mysql-operator/trunk/deploy/deploy-crds.yaml
kubectl apply -f https://raw.githubusercontent.com/mysql/mysql-operator/trunk/deploy/deploy-operator.yaml
apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mycluster
spec:
  secretName: mypwds
  tlsUseSelfSigned: true
  instances: 3
  router:
    instances: 1
I don't know what to do. Any help would be much appreciated.
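The "unbound immediate PersistentVolumeClaims" message means no PersistentVolume (and no default StorageClass) was available to satisfy the operator's claims. On minikube, one common cause is the storage-provisioner/default-storageclass addons being disabled; alternatively, a manually created PV along these lines can let a pending claim bind (the name, capacity, and path here are assumptions, not values from the operator docs):

```
# Hypothetical PersistentVolume for a cluster with no dynamic provisioning.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-0
spec:
  capacity:
    storage: 2Gi          # must be at least what the PVC requests
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/mysql-pv-0
```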

How to delete pending pods in Kubernetes?

I have two pending pods which I cannot delete by any means. Could you help?
OS: CentOS 7.8
Docker: 1.13.1
Kubernetes: v1.20.1
[root@master-node ~]# k get pods --all-namespaces (note: k = kubectl alias)
NAMESPACE   NAME                           READY   STATUS    RESTARTS   AGE
default     happy-panda-mariadb-master-0   0/1     Pending   0          11m
default     happy-panda-mariadb-slave-0    0/1     Pending   0          49m
default     whoami                         1/1     Running   0          5h13m
[root@master-node ~]# k describe pod/happy-panda-mariadb-master-0
Name:           happy-panda-mariadb-master-0
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app=mariadb
                chart=mariadb-7.3.14
                component=master
                controller-revision-hash=happy-panda-mariadb-master-7b55b457c9
                release=happy-panda
                statefulset.kubernetes.io/pod-name=happy-panda-mariadb-master-0
IPs:            <none>
Controlled By:  StatefulSet/happy-panda-mariadb-master
Containers:
  mariadb:
    Image:      docker.io/bitnami/mariadb:10.3.22-debian-10-r27
    Port:       3306/TCP
    Host Port:  0/TCP
    Liveness:   exec [sh -c password_aux="${MARIADB_ROOT_PASSWORD:-}"
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-happy-panda-mariadb-master-0
    ReadOnly:   false
  config:
    Type:       ConfigMap (a volume populated by a ConfigMap)
    Name:       happy-panda-mariadb-master
    Optional:   false
  default-token-wpvgf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-wpvgf
    Optional:    false
Events:
  Type     Reason            Age  From               Message
  ----     ------            ---  ----               -------
  Warning  FailedScheduling  15m  default-scheduler  0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling  15m  default-scheduler  0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
[root@master-node ~]# k get events
LAST SEEN   TYPE      REASON            OBJECT                                                    MESSAGE
105s        Normal    FailedBinding     persistentvolumeclaim/data-happy-panda-mariadb-master-0   no persistent volumes available for this claim and no storage class is set
105s        Normal    FailedBinding     persistentvolumeclaim/data-happy-panda-mariadb-slave-0    no persistent volumes available for this claim and no storage class is set
65m         Warning   FailedScheduling  pod/happy-panda-mariadb-master-0                          0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
I already tried deleting them in various ways, but nothing worked (I also tried deleting from the dashboard):
kubectl delete pod happy-panda-mariadb-master-0 --namespace="default"
k delete deployment mysql-1608901361
k delete pod/happy-panda-mariadb-master-0 -n default --grace-period 0 --force
Could you advise me on this?
Try deleting the owning controller, e.g. kubectl delete rc <replication-controller-name>.
Or you forgot to specify storageClassName: manual in the PersistentVolumeClaim.
You should delete the StatefulSet which controls the pods instead of deleting the pods directly. The pods are not staying deleted because the StatefulSet controller recreates them as soon as you delete them.
kubectl delete statefulset happy-panda-mariadb-master
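To expand on the storageClassName comment above: a PVC only binds to a PersistentVolume whose storageClassName matches its own, so if the claim says manual, a matching PV must exist. A sketch of such a PV (the name, capacity, and hostPath are assumptions; the claim name comes from the events above):

```
# PV that a claim with storageClassName: manual could bind to.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mariadb-pv
spec:
  storageClassName: manual   # must match the PVC's storageClassName
  capacity:
    storage: 8Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /mnt/data/mariadb
```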

Issue running helm command on a schedule

I am trying to delete temporary pods and other artifacts using helm delete, and I want this to run on a schedule. Here is my standalone command, which works:
helm delete --purge $(helm ls -a -q temppods.*)
However, if I try to run this on a schedule as below, I run into issues.
Here is what mycron.yaml looks like:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronbox
spec:
  serviceAccount: cron-z
  successfulJobsHistoryLimit: 1
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cronbox
            image: alpine/helm:2.9.1
            args:
            - delete
            - --purge
            - $(helm ls -a -q temppods.*)
          restartPolicy: OnFailure
I ran:
oc create -f ./mycron.yaml
This created the CronJob.
Every 5th minute a pod is created and the helm command that is part of the cron job runs.
I am expecting the artifacts/pods whose names begin with temppods to be deleted.
What I get is:
Error: pods is forbidden: User "system:serviceaccount:myproject:default" cannot list pods in the namespace "kube-system": no RBAC policy matched
I then created a service account cron-z and gave it edit access. I added this serviceAccount to my YAML, thinking that when my pod is created it will have the service account cron-z associated with it. Still no luck: cron-z is not getting associated with the pod that gets created every 5 minutes, and I still see default as the service account associated with the pod.
You'll need a service account for helm to use tiller with, as well as an actual tiller service account: github.com/helm/helm/blob/master/docs/rbac.md
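Besides RBAC, two things in the manifest above are worth checking. First, entries in args are passed to the container verbatim; Kubernetes only substitutes $(VAR) references to environment variables, so $(helm ls -a -q temppods.*) is never run as command substitution - that needs a shell. Second, the pod-level field is serviceAccountName inside the pod template; a serviceAccount key at the CronJob spec level does not reach the pod, which would explain why the pod still runs as default. A hedged sketch of the relevant part of the job template (image and release pattern taken from the question):

```
jobTemplate:
  spec:
    template:
      spec:
        serviceAccountName: cron-z   # must be on the pod template, not on the CronJob spec
        containers:
        - name: cronbox
          image: alpine/helm:2.9.1
          command:
          - /bin/sh
          - -c
          # run through a shell so $(...) is actually expanded
          - helm delete --purge $(helm ls -a -q 'temppods.*')
        restartPolicy: OnFailure
```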

Kubernetes (kubectl) get running pods

I am trying to get the first pod from within a deployment (filtered by labels) with status Running. Currently I can only achieve the following, which gives me the first pod within a deployment (filtered by labels) - but not necessarily a running pod; it might, for example, be a terminating one:
kubectl get pod -l "app=myapp" -l "tier=webserver" -l "namespace=test"
-o jsonpath="{.items[0].metadata.name}"
How is it possible to
a) get a list of only Running pods (I couldn't find anything here or on Google), and
b) select the first one from that list? (That's what I'm currently doing.)
Regards
Update: I already tried the link posted in the comments earlier with the following:
kubectl get pod -l "app=ms-bp" -l "tier=webserver" -l "namespace=test"
-o json | jq -r '.items[] | select(.status.phase = "Running") | .items[0].metadata.name'
The result is 4x "null" - and there are 4 running pods.
Edit2: Resolved - see comments
Since kubectl 1.9 you have the option to pass a --field-selector argument (see https://github.com/kubernetes/kubernetes/pull/50140). E.g.
kubectl get pod -l app=yourapp --field-selector=status.phase==Running -o jsonpath="{.items[0].metadata.name}"
Note however that with reasonably recent kubectl versions, many reasons to find a running pod are moot, because a lot of commands that expect a pod also accept a deployment or service and automatically select a corresponding pod. To quote from the documentation:
$ echo exec logs port-forward | xargs -n1 kubectl help | grep -C1 'service\|deploy\|job'
# Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default
kubectl exec deploy/mydeployment -- date
# Get output from running 'date' command from the first pod of the service myservice, using the first container by default
kubectl exec svc/myservice -- date
--
# Return snapshot logs from first container of a job named hello
kubectl logs job/hello
# Return snapshot logs from container nginx-1 of a deployment named nginx
kubectl logs deployment/nginx -c nginx-1
--
Use resource type/name such as deployment/mydeployment to select a pod. Resource type defaults to 'pod' if omitted.
--
# Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment
kubectl port-forward deployment/mydeployment 5000 6000
# Listen on port 8443 locally, forwarding to the targetPort of the service's port named "https" in a pod selected by the service
kubectl port-forward service/myservice 8443:https
(Note logs also accepts a service, even though an example is omitted in the help.)
The selection algorithm favors "active pods" for which a main criterion is having a status of "Running" (see https://github.com/kubernetes/kubectl/blob/2d67b5a3674c9c661bc03bb96cb2c0841ccee90b/pkg/polymorphichelpers/attachablepodforobject.go#L51).
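For completeness, the jq attempt in the question's update fails for two reasons: in jq, a single = is an assignment (so the select matched every item), and the trailing .items[0] is then applied to each already-unwrapped item, yielding null. A sketch of a corrected pipeline, run here against a hand-written sample file instead of a live cluster (the sample only mimics the shape of kubectl get pod -o json output):

```shell
# Build a small sample in the shape of `kubectl get pod -o json`.
cat <<'EOF' > pods.json
{"items": [
  {"metadata": {"name": "web-1"}, "status": {"phase": "Pending"}},
  {"metadata": {"name": "web-2"}, "status": {"phase": "Running"}},
  {"metadata": {"name": "web-3"}, "status": {"phase": "Running"}}
]}
EOF
# '==' compares; collect all Running items into an array, take the first.
jq -r '[.items[] | select(.status.phase == "Running")][0].metadata.name' pods.json
# -> web-2
```

Against a real cluster the same filter would be piped from kubectl, though --field-selector above is the simpler route.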

Kubernetes doesn't recover service after minion failure

I am testing Kubernetes redundancy features with a testbed made of one master and three minions.
Case: I am running a service with 3 replicas on minions 1 and 2, with minion3 stopped
[root@centos-master ajn]# kubectl get nodes
NAME             STATUS     AGE
centos-minion3   NotReady   14d
centos-minion1   Ready      14d
centos-minion2   Ready      14d
[root@centos-master ajn]# kubectl describe pods $MYPODS | grep Node:
Node: centos-minion2/192.168.0.107
Node: centos-minion1/192.168.0.155
Node: centos-minion2/192.168.0.107
Test: after starting minion3 and stopping minion2 (on which 2 of the pods are running):
[root@centos-master ajn]# kubectl get nodes
NAME             STATUS     AGE
centos-minion3   Ready      15d
centos-minion1   Ready      14d
centos-minion2   NotReady   14d
Result: the service doesn't recover from the minion failure, and Kubernetes continues to show pods on the failed minion.
[root@centos-master ajn]# kubectl describe pods $MYPODS | grep Node:
Node: centos-minion2/192.168.0.107
Node: centos-minion1/192.168.0.155
Node: centos-minion2/192.168.0.107
Expected result (at least in my understanding): the pods should have been rebuilt on the currently available minions 1 and 3.
As far as I understand, the role of the Service kind is to make the deployment "globally" available, so we can refer to it independently of where the deployments are in the cluster.
Am I doing something wrong?
I'm using the following YAML spec:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-www 
spec:
  replicas: 3
  selector:
    app:  nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
It looks like you're always reading the same pods that are referenced in $MYPODS. Pod names are created dynamically by the ReplicationController, so instead of kubectl describe pods $MYPODS, try this:
kubectl get pods -l app=nginx -o wide
This will always give you the currently scheduled pods for your app.