How to restart a pod in OpenShift?

I updated a file (for debug output) in a running pod, but it isn't getting recognized. I was going to restart the pod to make it take effect, but I only see oc stop and not oc start or oc restart. How would I force a refresh of files in the pod?
I am thinking maybe it is a Ruby thing (like opcache in PHP), but I figured a restart of the pod would handle it. I just can't figure out how to restart a pod.

You need to make your changes in the deployment config, not in the pod, because OpenShift treats pods as largely immutable; changes cannot be made to a pod definition while it is running. https://docs.openshift.com/enterprise/3.0/architecture/core_concepts/pods_and_services.html#pods
If you make some changes in the deployment config and save them, the pod will restart and your changes will take effect:
oc edit dc "deploy-config-example"
If you change something in volumes or config maps, you need to delete the pod to restart it:
oc delete pod "name-of-your-pod"
And the pod will restart. Better still, trigger a new deployment by running:
oc rollout latest dc/deploy-config-example
Using oc rollout is better because it re-deploys all pods if you have a scaled application, and you don't need to identify and delete each pod yourself.

You can scale deployments down (to zero) and then up again:
oc get deployments -n <your project> -o wide
oc get pods -n <your project> -o wide
oc scale --replicas=0 deployment/<your deployment> -n <your project>
oc scale --replicas=1 deployment/<your deployment> -n <your project>
watch oc get pods -n <your project> # wait until your deployment is up again

If you want to do it using the GUI:
Log in to the OCP web console.
Click Workloads -> Deployment Configs.
Find the deployment config whose pods you want to restart.
On the right side, click the 3 dots.
Click Start rollout.
If you delete your pod, or scale it to 0 and back to 1, you might lose some clients, because you are basically stopping and restarting your application. With a rollout, however, your existing pod waits for the new pod to become ready and then deletes itself. So I guess a rollout is safer than deleting or scaling 0/1.
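For reference, the rolling behavior described above is governed by the DeploymentConfig's strategy block; a minimal sketch, assuming the default Rolling strategy (the percentage values here are illustrative):

```yaml
spec:
  strategy:
    type: Rolling            # default strategy for DeploymentConfigs
    rollingParams:
      maxUnavailable: 25%    # how many old pods may be taken down at once
      maxSurge: 25%          # how many extra new pods may be created during the rollout
```

With maxUnavailable at 0% and maxSurge above 0%, a rollout never takes an old pod down before its replacement is ready, which is why it is gentler on clients than deleting pods or scaling to 0.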

Thanks Noam Manos for your solution.
I've used the "Application Console" in OpenShift. I navigated to Applications - Deployments - #3 (check for your active deployment) to see my pod with up and down arrows. Currently I have 1 pod running, so I clicked the down arrow to scale down to 0 pods, then the up arrow to scale back up to 1 pod.

Follow the steps below:
Log in to OpenShift.
Click on the Monitoring tab.
Select the component whose pod you want to restart.
Click the Actions drop-down (top right corner).
Delete the existing pod.
A new pod is generated automatically.

You can also go to the DeploymentConfig and choose "Start rollout" from the Actions menu.
And if nothing else helps, there is also:
Workloads -> ReplicationControllers
They control the replica numbers. If you delete such a controller, another one is created in its place, which creates your new pod.

Related

Monitoring the progress of specified pods with oc cli

I want to know is there a way to monitor the progress of a particular pod instead of seeing all pods?
For example, I scale consoleapp:
oc scale dc consoleapp --replicas=3
After that command I want to watch the progress of only consoleapp and ensure the pods are active.
I thought you'd be able to run oc get pods consoleapp --watch, but it does not work. Is there a way for me to monitor the progress of this? Very similar to oc rollout status deploymentconfig/consoleapp --watch, but without rolling out a new deployment.
When you run oc get pods consoleapp, you're asking for a pod named consoleapp, which of course doesn't exist. If you want to watch all the pods managed by the DeploymentConfig, you can use the --selector (-l) option to select pods that match the selector in your DeploymentConfig.
If you have:
spec:
  selector:
    name: consoleapp
Then you would run:
oc get pod -l name=consoleapp -w
If your selector has multiple labels, combine them with commas:
oc get pod -l app=consoleapp,component=frontend -w
NB: DeploymentConfigs are considered deprecated and can generally be replaced by standard Kubernetes Deployments:
Both Kubernetes Deployment objects and OpenShift Container Platform-provided DeploymentConfig objects are supported in OpenShift Container Platform; however, it is recommended to use Deployment objects unless you need a specific feature or behavior provided by DeploymentConfig objects.
(from "Understanding Deployment and DeploymentConfig objects")

OpenShift oc apply and rollout

I would like to run the following script in our test environment to deploy our application on OpenShift 4.8.
oc apply -f deployment-config.yaml
oc rollout latest dc/my-application
The trigger in the deployment config is ConfigChange. If e.g. an environment variable has changed in the deployment config, oc apply -f deployment-config.yaml will trigger a rollout.
The deployment config uses snapshot as the image tag. We don't have a version number for our snapshot, which means a new snapshot might need to be deployed even though the deployment config has not changed. That's why we use oc rollout latest dc/my-application.
image: "<repo-url>/my-application:snapshot"
imagePullPolicy: Always
The problem is that sometimes both oc apply -f deployment-config.yaml and oc rollout latest dc/my-application will trigger a rollout.
Is there a way to do oc apply -f deployment-config.yaml without triggering a rollout? Or do you see another solution?
As described in Deployment triggers, you need to define the triggers field as explicitly empty. If you just remove the trigger, a config change trigger will be added by default.
If no triggers are defined on a DeploymentConfig object, a config change trigger is added by default. If triggers are defined as an empty field, deployments must be started manually.
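A sketch of what that looks like in the DeploymentConfig (the name my-application is taken from the question; the rest of the spec is elided):

```yaml
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: my-application
spec:
  replicas: 1
  # Explicitly empty: disables the default ConfigChange trigger,
  # so `oc apply` alone no longer starts a rollout; deployments
  # must then be started manually with `oc rollout latest`.
  triggers: []
```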

OpenShift single node PersistentVolume with hostPath requires privileged pods, how to set as default?

I am fairly new to OpenShift and have been using CRC (Code Ready Containers) for a little while, and now decided to install the single server OpenShift on bare metal using the Assisted-Installer method from https://cloud.redhat.com/blog/deploy-openshift-at-the-edge-with-single-node-openshift and https://console.redhat.com/openshift/assisted-installer/clusters/. This has worked well and I have a functional single-server.
As a single server in a test environment (without NFS available) I need/want to create PersistentVolumes with hostPath (localhost storage) - these work flawlessly in CRC. However, on the full install, I run into an issue when mounting PVCs to pods, as the pods are not running privileged. I edited the deployment config and added the lines below (within the containers hash):
- resources: {}
  ...
  securityContext:
    privileged: true
...however I still had errors, as the restricted SCC has 'allowPrivilegedContainer: false'. I have done a horrible hack of changing this to true, so adding the lines above to the deployment yaml works. However, there must be an easier way, as none of these hacks seem necessary in CRC. I checked: CRC pods run restricted, the restricted SCC has privileged set to false, and the PersistentVolume also uses hostPath. I also do not have to edit the deployment yaml as above in CRC - it just works (tm).
Guidance here shows that the containers must run privileged, however the containers in CRC are running restricted and the SCC still has 'allowPrivilegedContainer: false'.
https://docs.openshift.com/container-platform/4.8/storage/persistent_storage/persistent-storage-hostpath.html
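For context, a hostPath PersistentVolume of the kind that page describes might look like this (name, size, and path are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/data    # directory on the node's filesystem
```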
An example app creation as below (from the RedHat DO280 course) works without any massaging of privileges or deployment config in CRC, but on a real OS server requires the massaging above. As my server is purely for testing, I would like to make it easier without doing the hackjob and deployment changes above.
oc new-app --name mysql --docker-image registry.access.redhat.com/rhscl/mysql-57-rhel7:5.7
oc create secret generic mysql --from-literal password=r3dh4t123
oc set env deployment mysql --prefix MYSQL_ROOT_ --from secret/mysql
oc set volumes deployment/mysql --name mysql-storage --add --type pvc --claim-size 2Gi --claim-mode rwo --mount-path /var/lib/mysql/data
oc get pods -l deployment=mysql
oc get pvc
Any help appreciated.
EDIT: I have overcome this now by enabling nfs-server and adding entries to /etc/exports. However, I'm still interested in understanding how CRC manages the above issue when using hostPath.
The short answer to this is: don't use hostPath.
You are using hostPath to make use of arbitrary disk space available on the underlying host's volume. hostPath can also be used to read/write any directory path on the underlying host's volume -- which, as you can imagine, should be used with great care.
Have a look at this as an alternative -- https://docs.openshift.com/container-platform/4.8/storage/persistent_storage/persistent-storage-local.html
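As a rough sketch of what that alternative looks like in core Kubernetes terms, a local PersistentVolume ties a path to one node via nodeAffinity (names, path, and hostname here are illustrative; the linked page manages these via the Local Storage Operator):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/local-storage/disk1   # must already exist on the node
  nodeAffinity:                      # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - my-single-node     # illustrative node name
```

Unlike hostPath, local PVs are consumed through an ordinary PVC and typically don't require privileged pods, which matches the "don't use hostPath" advice above.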

How to enable mutual SSL verification mode in Redhat-SSO image for OpenShift

I am using the template sso72-x509-postgresql-persistent, which is based on Redhat-SSO and Keycloak, to create an application in OpenShift.
I am going to enable its mutual SSL mode, so that a user only has to provide a certificate instead of a username and password in requests. The documentation (https://access.redhat.com/documentation/en-us/red_hat_single_sign-on/7.2/html-single/server_administration_guide/index#x509) told me to edit the standalone.xml file to add configuration sections. It worked fine.
But the template image sso72-x509-postgresql-persistent has a problem with this procedure: after it is deployed on OpenShift, any changes to files within the container are lost when the container restarts.
Is there any way to enable mutual SSL mode through another mechanism, such as the command line or an API, instead of editing a configuration file - short of building my own Docker image?
Ok, I'm including this anyway. I wasn't able to get this working due to permissions issues (the mounted files didn't keep the same permissions as before, so the container continued to fail). But a lot of work went into this answer, so hopefully it points you in the right direction!
You can add a Persistent Volume (PV) to ensure your configuration changes survive a restart. You can add a PV to your deployment via:
DON'T DO THIS
oc set volume deploymentconfig sso --add -t pvc --name=sso-config --mount-path=/opt/eap/standalone/configuration --claim-mode=ReadWriteOnce --claim-size=1Gi
This will bring up your RH-SSO image with a blank configuration directory, causing the pod to get stuck in Back-off restarting failed container. What you should do instead is:
Backup the existing configuration files
oc rsync <rhsso_pod_name>:/opt/eap/standalone/configuration ~/
Create a temporary busybox deployment that can act as an intermediary for uploading the configuration files. Wait for the deployment to complete:
oc run busybox --image=busybox --wait --command -- /bin/sh -c "while true; do sleep 10; done"
Mount a new PV to the busybox deployment. Wait for deployment to complete
oc set volume deploymentconfig busybox --add -t pvc --name=sso-volume --claim-name=sso-config --mount-path=/configuration --claim-mode=ReadWriteOnce --claim-size=1Gi
Edit your configuration files now
Upload the configuration files to your new PV via the busybox pod
oc rsync ~/configuration/ <busybox_pod_name>:/configuration/
Destroy the busybox deployment
oc delete all -l run=busybox --force --grace-period=0
Finally, you attach your already created and ready-to-go persistent configuration to the RH SSO deployment
oc set volume deploymentconfig sso --add -t pvc --name=sso-volume --claim-name=sso-config --mount-path=/opt/eap/standalone/configuration
Once your new deployment is...still failing because of permission issues :/

Openshift temporarily knock-out a container

I have the EFK stack deployed for logging on an openshift 3.6 cluster with the standard Ansible playbook provided by openshift.
So there is one fluentd pod running on every node of the cluster and two elasticsearch containers in total.
I would like to temporarily disable a fluentd container. When I delete the pod, a new one is started in its place after a few seconds because of the DaemonSet. How could I prolong the time that the fluentd pod is down?
You can change the node selector label on the fluentd DaemonSet:
oc edit ds logging-fluentd
  nodeSelector:
    logging-infra-fluentd: "true"
Change the value "true" to "false", save, and delete the fluentd pod; it will not be created again.
Another approach would be to mark the node as "unschedulable", which blocks new pods from being assigned there, and then delete the fluentd pod. The downside is that applications will not be able to put their pods on this node either; however, existing ones will remain.
# disable a node from taking pods
oadm manage-node node1.example.com --schedulable=false
# a blanket bombing of fluentd
oc -n logging delete pods --all
# enable it back
oadm manage-node node1.example.com --schedulable=true
# delete the pods again so the DaemonSet recreates fluentd on the node
oc -n logging delete pods --all