Cannot add hostPath under volumes in custom SCC - OpenShift

I have an OpenShift custom SCC which I have deployed.
I want to add hostPath to the volumes section.
I use the following command:
$ oc edit scc custom-scc
Add line
- hostPath
Save and exit
Upon returning to edit, I see that my addition was removed.
I don't have this problem with other edits such as capabilities.

I also needed to add:
allowHostDirVolumePlugin: true
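OpenShift appears to keep the volumes list and allowHostDirVolumePlugin in sync, which would explain why the hostPath entry kept being stripped on save. As a rough sketch, the relevant parts of the SCC should end up looking something like this (the rest of the volume list is only an example, mirroring the restricted SCC defaults):
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: custom-scc
allowHostDirVolumePlugin: true
volumes:
- configMap
- downwardAPI
- emptyDir
- hostPath
- persistentVolumeClaim
- secret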

Related

OpenShift oc apply and rollout

I would like to run the following script in our test environment to deploy our application on OpenShift 4.8.
oc apply -f deployment-config.yaml
oc rollout latest dc/my-application
The trigger in the deployment config is ConfigChange. If, for example, an environment variable has changed in the deployment config, oc apply -f deployment-config.yaml will trigger a rollout.
The deployment config uses a snapshot as the image. We don't have a version number for our snapshot, which means a new snapshot might need to be deployed even though the deployment config has not changed. That's why we use oc rollout latest dc/my-application.
image: "<repo-url>/my-application:snapshot"
imagePullPolicy: Always
The problem is that sometimes both oc apply -f deployment-config.yaml and oc rollout latest dc/my-application will trigger a rollout.
Is there a way to do oc apply -f deployment-config.yaml without triggering a rollout? Or do you see another solution?
As described in Deployment triggers, you need to define the triggers as an empty field. If you just remove the trigger, a config change trigger will be added by default.
If no triggers are defined on a DeploymentConfig object, a config change trigger is added by default. If triggers are defined as an empty field, deployments must be started manually.
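A rough sketch of what that looks like in the DeploymentConfig (the names are taken from the question; the rest of the spec is illustrative):
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: my-application
spec:
  replicas: 1
  selector:
    app: my-application
  triggers: []
  template:
    metadata:
      labels:
        app: my-application
    spec:
      containers:
      - name: my-application
        image: "<repo-url>/my-application:snapshot"
        imagePullPolicy: Always
With triggers set to an empty list, oc apply -f deployment-config.yaml only updates the configuration, and a rollout happens only when you run oc rollout latest dc/my-application.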

OpenShift single node PersistentVolume with hostPath requires privileged pods, how to set as default?

I am fairly new to OpenShift and have been using CRC (Code Ready Containers) for a little while, and now decided to install the single server OpenShift on bare metal using the Assisted-Installer method from https://cloud.redhat.com/blog/deploy-openshift-at-the-edge-with-single-node-openshift and https://console.redhat.com/openshift/assisted-installer/clusters/. This has worked well and I have a functional single-server.
As a single server in a test environment (without NFS available), I need/want to create PersistentVolumes with hostPath (localhost storage) - these work flawlessly in CRC. However, on the full install, I run into an issue when mounting PVCs to pods because the pods are not running privileged. I edited the deployment config and added the lines below (within the containers hash):
- resources: {}
  ...
  securityContext:
    privileged: true
... however I still had errors because the restricted SCC has 'allowPrivilegedContainer: false'. I have done a horrible hack of changing this to true, so adding the lines above to the deployment YAML works. However, there must be an easier way, as none of these hacks seem to be needed in CRC. I checked, and CRC pods run restricted, the restricted SCC has privileged set to false, and the PersistentVolume is also using hostPath. I also do not have to edit the deployment YAML as above in CRC - it just works (tm).
The guidance here shows that the containers must run privileged; however, the containers in CRC are running restricted and the SCC still has 'allowPrivilegedContainer: false'.
https://docs.openshift.com/container-platform/4.8/storage/persistent_storage/persistent-storage-hostpath.html
An example app creation as below (from the Red Hat DO280 course) works without any massaging of privileges or deployment config in CRC, but on a real OpenShift server it requires the massaging above. As my server is purely for testing, I would like to make this easier without the hack job and deployment changes above.
oc new-app --name mysql --docker-image registry.access.redhat.com/rhscl/mysql-57-rhel7:5.7
oc create secret generic mysql --from-literal password=r3dh4t123
oc set env deployment mysql --prefix MYSQL_ROOT_ --from secret/mysql
oc set volumes deployment/mysql --name mysql-storage --add --type pvc --claim-size 2Gi --claim-mode rwo --mount-path /var/lib/mysql/data
oc get pods -l deployment=mysql
oc get pvc
Any help appreciated.
EDIT: I have overcome this now by enabling nfs-server and adding entries to /etc/exports. However, I'm still interested to understand how CRC handles the above issue when using hostPath.
The short answer to this is: don't use hostPath.
You are using hostPath to make use of arbitrary disk space available on the underlying host's volume. hostPath can also be used to read/write any directory path on the underlying host's volume -- which, as you can imagine, should be used with great care.
Have a look at this as an alternative -- https://docs.openshift.com/container-platform/4.8/storage/persistent_storage/persistent-storage-local.html
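For example, a locally backed PersistentVolume could look roughly like the sketch below (the names, path, size, and node are placeholders; the directory has to exist on the node, and in practice the Local Storage Operator from the linked docs can manage this for you):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/local-storage/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <node-name>
A PVC that requests the local-storage storage class should then bind to this volume without the pod needing to run privileged.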

How can I disable the automatic build triggered from a build configuration in OpenShift?

I am trying to create a CI/CD pipeline with OpenShift. Initially, when creating the application using the 'oc new-app' command, it automatically triggers a build. How do I disable the initial build other than deleting or cancelling it?
How do I disable the initial build other than deleting or cancelling it?
oc new-app cannot prevent the initial build.
It has been discussed here: https://github.com/openshift/origin/issues/15429
Unfortunately, it has not been implemented yet.
But you can prevent the initial build by removing all triggers from the BuildConfig, modifying its YAML manually.
First, export the oc new-app output in YAML format.
# oc new-app --name=test \
centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git -o yaml --dry-run > test.yml
Remove all triggers by changing the configuration to triggers: [].
strategy:
  sourceStrategy:
    from:
      kind: ImageStreamTag
      name: ruby-25-centos7:latest
  type: Source
triggers: []
After modifying, create resources using oc create -f.
# oc create -f test.yml
imagestream.image.openshift.io/ruby-25-centos7 created
imagestream.image.openshift.io/ruby-ex created
buildconfig.build.openshift.io/ruby-ex created
deploymentconfig.apps.openshift.io/ruby-ex created
service/ruby-ex created
The build does not run until you run oc start-build <bc name> and oc rollout latest dc/<dc name>.
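If the BuildConfig has already been created with its default triggers, it should also be possible to strip them afterwards instead of re-creating everything, for example:
# oc set triggers bc/ruby-ex --remove-all
(ruby-ex is the BuildConfig name from the example above.)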
I hope this use case is helpful for you.

How to restart pod in OpenShift?

I updated a file (for debug output) in a running pod, but it isn't getting recognized. I was going to restart the pod to get the change to take effect, but I only see oc stop and not oc start or oc restart. How would I force a refresh of the files in the pod?
I am thinking maybe it is a Ruby thing (like opcache in PHP). But figured a restart of the pod would handle it. Just can't figure out how to restart a pod.
You need to make your changes in the deployment config, not in the pod, because OpenShift treats pods as largely immutable; changes cannot be made to a pod definition while it is running. https://docs.openshift.com/enterprise/3.0/architecture/core_concepts/pods_and_services.html#pods
If you make changes in the deployment config and save them, the pod will restart and your changes will take effect:
oc edit dc "deploy-config-example"
If you change something in volumes or configmaps, you need to delete the pod to restart it:
oc delete pod "name-of-your-pod"
And the pod will restart. Or, better still, trigger a new deployment by running:
oc rollout latest "deploy-config-example"
Using oc rollout is better because it will re-deploy all pods if you have a scaled application, and you don't need to identify each pod and delete it.
You can scale deployments down (to zero) and then up again:
oc get deployments -n <your project> -o wide
oc get pods -n <your project> -o wide
oc scale --replicas=0 deployment/<your deployment> -n <your project>
oc scale --replicas=1 deployment/<your deployment> -n <your project>
watch oc get pods -n <your project> # wait until your deployment is up again
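On recent clusters, where the workload is a Deployment rather than a DeploymentConfig, a single command should achieve the same restart (assuming your oc version supports it):
oc rollout restart deployment/<your deployment> -n <your project>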
If you want to do it using the GUI:
Log in to OCP.
Click Workloads -> Deployment Configs.
Find the deployment config whose pod you want to restart.
On the right side, click on the 3 dots.
Click Start Rollout.
If you delete your pod, or scale it to 0 and back to 1, you might lose some clients, because you are basically stopping and restarting your application. But in a rollout, your existing pod waits for the new pod to get ready and then deletes itself. So I guess a rollout is safer than deleting or scaling to 0/1.
Thanks Noam Manos for your solution.
I've used the "Application Console" in OpenShift. I navigated to Applications - Deployment - #3 (check for your active deployment) to see my pod with up and down arrows. Currently, I have 1 pod running, so I clicked the down arrow to scale down to 0 pods, then clicked the up arrow to scale back up to 1 pod.
Follow the steps below:
Log in to OpenShift.
Click on the Monitor tab.
Select the component for which you want to restart the pod.
Click the action drop-down (top right corner).
Delete the existing pod.
A new pod is automatically generated.
You can also go to the DeploymentConfig and choose the "Start Rollout" option from Actions.
And if nothing helps, there is also such a thing as
Workloads -> ReplicationControllers
which control the replica counts.
If you delete such a controller, another one is created, which creates your new pod.

How to edit an existing rolebinding in OpenShift using YAML files?

I have a predefined rolebinding in place for an OpenShift project that I want to edit/update using a .yml file.
I have already tried the below:
oc create -f -> failed, obviously because it already exists; the error is:
Error from server: rolebinding "edit" already exists
oc patch -f -> failed; it looks like patch only accepts the -p argument. The error is:
Error: Must specify -p to patch
See 'oc patch -h' for help and examples.
oc replace -f -> failed; the error is:
Error: error when replacing "sample.yml": resource name may not be empty
If I run the create command against the file in a new project, it works.
Please do respond if anyone has thoughts on this.
Thanks much,
Aneesh
Perhaps oc patch -p $(cat file.patch) ... would work for you?
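For example, a JSON-type patch can append a subject to the existing binding rather than replacing it (a sketch only, assuming an RBAC-style RoleBinding; the group name shown here is illustrative):
oc patch rolebinding edit --type=json \
  -p '[{"op": "add", "path": "/subjects/-", "value": {"kind": "Group", "apiGroup": "rbac.authorization.k8s.io", "name": "group-name-c"}}]'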
If I understand what you are trying to do, I would use separate oc adm policy commands to do it.
oc adm policy add-role-to-group edit group-name-a
oc adm policy add-role-to-group edit group-name-b
oc adm policy add-role-to-group edit group-name-x
oc adm policy add-role-to-group edit group-name-y
oc adm policy add-role-to-group edit group-name-z
Using oc patch is only going to work if a resource object for the roleRef name already exists; it isn't going to create the whole role binding object if it doesn't.
Modifying roles is one of the times where you can't make additive changes by simply loading a new resource object definition.
You can see this by enabling logging for the oc command and looking at what it does. In the case of adding additional role bindings, it will first query the existing role binding, add the change to it, and then load the modified entry with all role bindings in it.
Run:
oc --loglevel=9 adm policy add-role-to-group edit group-name-x
when only group-name-a and group-name-b have already been set up to see what I mean.