OpenShift single node PersistentVolume with hostPath requires privileged pods, how to set as default? - openshift

I am fairly new to OpenShift and have been using CRC (CodeReady Containers) for a little while, and I've now decided to install single-node OpenShift on bare metal using the Assisted Installer method from https://cloud.redhat.com/blog/deploy-openshift-at-the-edge-with-single-node-openshift and https://console.redhat.com/openshift/assisted-installer/clusters/. This has worked well and I have a functional single-node cluster.
As a single server in a test environment (without NFS available) I need/want to create PersistentVolumes with hostPath (localhost storage) - these work flawlessly in CRC. However, on the full install I run into an issue when mounting PVCs to pods, as the pods are not running privileged. I edited the deployment config and added the lines below (within the containers hash):
- resources: {}
  ...
  securityContext:
    privileged: true
... however I still had errors, as the restricted SCC has 'allowPrivilegedContainer: false'. I have done a horrible hack of changing this to true, so adding the lines above to the deployment YAML now works. However, there must be an easier way, as none of these hacks seem necessary in CRC. I checked: CRC pods run under the restricted SCC, that SCC has privileged set to false, and the PersistentVolume is also using hostPath. I also do not have to edit the deployment YAML as above in CRC - it just works (tm).
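(For reference, a somewhat less invasive variant of the same hack is to grant the privileged SCC to the deployment's service account rather than editing the restricted SCC itself; the project name below is only a placeholder:)
oc adm policy add-scc-to-user privileged -z default -n my-test-project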
Guidance here shows that the containers must run privileged; however, the containers in CRC run under the restricted SCC and that SCC still has 'allowPrivilegedContainer: false'.
https://docs.openshift.com/container-platform/4.8/storage/persistent_storage/persistent-storage-hostpath.html
An example app creation as below (from the Red Hat DO280 course) works without any massaging of privileges or the deployment config in CRC, but on a real OpenShift server it requires the massaging above. As my server is purely for testing, I would like to make this easier without the hack job and deployment changes above.
oc new-app --name mysql --docker-image registry.access.redhat.com/rhscl/mysql-57-rhel7:5.7
oc create secret generic mysql --from-literal password=r3dh4t123
oc set env deployment mysql --prefix MYSQL_ROOT_ --from secret/mysql
oc set volumes deployment/mysql --name mysql-storage --add --type pvc --claim-size 2Gi --claim-mode rwo --mount-path /var/lib/mysql/data
oc get pods -l deployment=mysql
oc get pvc
Any help appreciated.
EDIT: I have overcome this now by enabling nfs-server and adding entries to /etc/exports. However I'm still interested to understand how CRC manages the above issue when using hostPath
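(For anyone following the same workaround, the NFS side boils down to something like the following; the export path and subnet are only examples:)
# /etc/exports - example entry, path and subnet are illustrative
/srv/nfs/pv001 192.168.1.0/24(rw,no_root_squash)
# reload the exports and make sure the NFS server is running
sudo exportfs -r
sudo systemctl enable --now nfs-server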

The short answer to this is: don't use hostPath.
You are using hostPath to make use of arbitrary disk space available on the underlying host's volume. hostPath can also be used to read/write any directory path on the underlying host's volume -- which, as you can imagine, should be used with great care.
Have a look at this as an alternative -- https://docs.openshift.com/container-platform/4.8/storage/persistent_storage/persistent-storage-local.html
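As a rough sketch of what that alternative looks like (the storage class name, path, capacity and node name below are assumptions, not values from your cluster), a "local" PersistentVolume is pinned to a specific node via nodeAffinity:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/local-storage/pv1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - my-sno-node
A PVC that requests storageClassName: local-storage (typically paired with a no-provisioner StorageClass, as described in the linked docs) should then bind to it without the privileged hacks above.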

Related

Compute Engine Deploy Container

I am using golang to programmatically create and destroy one-off Compute Engine instances using the Compute Engine API.
I can create an instance just fine, but what I'm really having trouble with is launching a container on startup.
You can do it from the Console UI:
But as far as I can tell it's extremely hard to do it programmatically, especially with Container-Optimized OS (COOS) as the base image. I tried a startup script that does a docker pull us-central1-docker.pkg.dev/project/repo/image:tag, but it fails because you need to run gcloud auth configure-docker us-central1-docker.pkg.dev first for that to work, and COOS doesn't have gcloud nor a package manager to install it.
All my workarounds seem hacky:
Manually create a VM template that has the desired container and create instances of the template
Put the container in an external registry like Docker Hub (not acceptable)
Use Ubuntu instead of COOS with a package manager so I can programmatically install gcloud, docker, and the container on startup
Use COOS to pull down an image from Docker Hub containing gcloud, then do some sort of docker-in-docker mount to pull the real image down
Am I missing something or is it just really cumbersome to deploy a container to a compute engine instance without using gcloud or the Console UI?
To have a Compute Engine instance start a container when the instance itself starts, one has to define metadata describing the container. When COOS starts, it appears to run an agent called konlet, which can be found here:
https://github.com/GoogleCloudPlatform/konlet
If we look at the documentation for this, it says:
The agent parses container declaration that is stored in VM instance metadata under gce-container-declaration key and starts the container with the declared configuration options.
Unfortunately, I haven't found any formal documentation for the structure of this metadata, but I did find two possible approaches:
Decipher the source code of konlet and break it apart to find out how the metadata maps to what is passed when the docker container is started
or
Create a Compute Engine instance by hand with the desired container definition and start it. SSH into the instance and retrieve the current metadata. You can read about retrieving metadata here:
https://cloud.google.com/compute/docs/metadata/overview
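For example, from inside the instance the container declaration can be read back from the metadata server like this:
curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/gce-container-declaration"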
It turns out, it's not too hard to pull down a container from Artifact Registry in Container Optimized OS:
Run docker-credential-gcr configure-docker --registries [region]-docker.pkg.dev
See: https://cloud.google.com/container-optimized-os/docs/how-to/run-container-instance#accessing_private_images_in_or
So what you can do is put the above line along with docker pull [image] and docker run ... into a startup script. You can specify a startup script when creating an instance using the metadata field: https://cloud.google.com/compute/docs/instances/startup-scripts/linux#api
This seems the least hacky way of provisioning an instance with a container programmatically.
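A minimal sketch of such a startup script (the region, project, repository and image names are placeholders):
#!/bin/bash
# authenticate Docker against Artifact Registry using the instance's service account
docker-credential-gcr configure-docker --registries us-central1-docker.pkg.dev
# pull and run the container (image path is a placeholder)
docker pull us-central1-docker.pkg.dev/my-project/my-repo/my-image:latest
docker run -d --name my-app us-central1-docker.pkg.dev/my-project/my-repo/my-image:latest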
You mentioned you used docker-credential-gcr to solve your problem. I tried the same in my startup script:
docker-credential-gcr configure-docker --registries us-east1-docker.pkg.dev
But it returns:
ERROR: Unable to save docker config: mkdir /root/.docker: read-only file system
Is there some other step needed? Thanks.
I recently ran into the other side of these limitations (and asked a question on the topic).
Basically, I wanted to provision a COOS instance without launching a container. I was unable to, so I just launched a container from a base image and then later in my CI/CD pipeline, Dockerized my app, uploaded it to Artifact Registry and replaced the base image on the COOS instance with my newly built app.
The metadata I provided to launch the initial base image as a container:
spec:
  containers:
    - image: blairnangle/python3-numpy-ta-lib:latest
      name: containervm
      securityContext:
        privileged: false
      stdin: false
      tty: false
      volumeMounts: []
  restartPolicy: Always
  volumes: []
I'm a Terraform fanboi, so the metadata exists within some Terraform configuration. I have a public project with the code that achieves this if you want to take a proper look: blairnangle/dockerized-flask-on-gce.

Openshift on AWS is automatically allowing container to be deployed as root user

I tried to deploy a container from the library/cassandra image in a Sandbox OpenShift cluster, but it threw this error in the pod logs:
"Running Cassandra as root user or group is not recommended - please start Cassandra using a different system user.
If you really want to force running Cassandra as root, use -R command line option."
When I checked the container description, I could see that the SCC is set to Restricted. So it looks like in Sandbox OpenShift, the "restricted" SCC is applied to the "default" service account by default.
But when I installed OpenShift on AWS using the installer option, I didn't face this error with the same library/cassandra image.
It looks like the default service account there is not associated with the "restricted" SCC by default.
Could someone clarify what is different in the Sandbox environment that causes this error, and how can I set the same config in AWS OpenShift so that the default service account is associated with the restricted SCC?
I can't see your specific environment, but from the error message I suspect it's being triggered by the GROUP=0, not user=0.
To confirm:
$ oc get pods (whatever) -o yaml | grep openshift.io/scc
This will show you which SCC admitted the pod into the cluster. It should be "restricted" based on what you said. If so, then we've got some good evidence that it's just the group.
Next, you can look for something like this:
$ oc rsh (podname) id -a
uid=1000640000(1000640000) gid=0(root) groups=0(root),1000640000
UID (user) is in the expected billion+ range defined in the namespace annotation. GID (group) is zero.
With that in place, you can either ignore the error, knowing it's only group=0 that's in effect, or you can set a securityContext for your pod (or container) to specify a different GID.
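A minimal sketch of the latter (the GID value is purely illustrative and must fall within whatever ranges your namespace annotations and SCC allow):
spec:
  securityContext:
    runAsGroup: 1000640000   # illustrative GID; must be permitted by the SCC
    fsGroup: 1000640000      # applied to mounted volumes
  containers:
    - name: cassandra
      image: library/cassandra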
I came to know that the "default" project has a different set of permissions, so even a container with user ID 0 can be deployed in the default namespace.
In the Sandbox cluster, the project is dev or stage, so it works with the correct security level.

Openshift - API to get ARTIFACT_URL parameter of a pod or the version of its deployed app

What I want to do is build a web app that lists, in one single view, the version of every application deployed in our OpenShift (a quick overview of versions). At the moment, the only way I have found to locate the version of an app deployed in a pod is the ARTIFACT_URL parameter in the environment view; that's why I ask for that parameter, but if there's another way to get a pod and the version of its currently deployed app, I'm also open to that option as long as I can get it through an API. I may eventually also need an endpoint that retrieves the list of current pods.
I've looked into the OpenShift API and the only thing I've found that may help me is this GET, but if the parameter :id is what I think it is, it changes with every deploy, so I would need to update it constantly, which isn't practical. Obviously, I'd also need an endpoint to get the list of IDs, or whatever lets me identify the pod when I ask for its ARTIFACT_URL.
Thanks!
There is a way to do that. See https://docs.openshift.com/enterprise/3.0/dev_guide/environment_variables.html
List Environment Variables
To list environment variables in pods or pod templates:
$ oc env <object-selection> --list [<common-options>]
This example lists all environment variables for pod p1:
$ oc env pod/p1 --list
I suggest redesigning builds and deployments if you don't have persistent app versioning information outside of Openshift.
If app versions need to be obtained from running pods (e.g. with oc rsh or oc env as suggested elsewhere), then you have a serious reproducibility problem. Git should be used for app versioning, and all app builds and deployments, even in dev and test environments should be fully automated.
Within Openshift you can achieve full automation with Webhook Triggers in your Build Configs and Image Change Triggers in your Deployment Configs.
Outside of Openshift, this can be done at no extra cost using Jenkins (which can even be run in a container if you have persistent storage available to preserve its settings).
As a quick workaround you may also consider:
oc describe pods | grep ARTIFACT_URL
to get the list of values of your environment variable (here: ARTIFACT_URL) from all pods.
The corresponding list of pod names can be obtained either simply using 'oc get pods' or a second call to oc describe:
oc describe pods | grep "Name:        "
(notice the 8 spaces after the colon, needed to filter out other Name: fields)
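Alternatively (a sketch that assumes ARTIFACT_URL is defined directly on the first container rather than injected from a ConfigMap or Secret), oc can print each pod name together with the value in a single call using a jsonpath template:
oc get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].env[?(@.name=="ARTIFACT_URL")].value}{"\n"}{end}'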

How to enable mutual SSL verification mode in Redhat-SSO image for OpenShift

I am using the template sso72-x509-postgresql-persistent, which is based on Redhat-SSO and Keycloak, to create an application in OpenShift.
I am going to enable its mutual SSL mode, so that a user has to only provide his certificate instead of user name and password in his request. The document (https://access.redhat.com/documentation/en-us/red_hat_single_sign-on/7.2/html-single/server_administration_guide/index#x509) told me to edit the standalone.xml file to add configuration sections. It worked fine.
But the template image sso72-x509-postgresql-persistent has a problem with this procedure: after it is deployed on OpenShift, any changes to files inside the container are lost when the container restarts.
Is there any way to enable mutual SSL mode through some other mechanism, such as the command line or an API, instead of editing a configuration file - short of building my own Docker image?
OK, I'm including this anyway. I wasn't able to get this working due to permissions issues (the mounted files didn't keep the same permissions as before, so the container continued to fail). But a lot of work went into this answer, so hopefully it points you in the right direction!
You can add a Persistent Volume (PV) to ensure your configuration changes survive a restart. You can add a PV to your deployment via:
DON'T DO THIS
oc set volume deploymentconfig sso --add -t pvc --name=sso-config --mount-path=/opt/eap/standalone/configuration --claim-mode=ReadWriteOnce --claim-size=1Gi
This will bring up your RH-SSO image with a blank configuration directory, causing the pod to get stuck in Back-off restarting failed container. What you should do instead is:
Backup the existing configuration files
oc rsync <rhsso_pod_name>:/opt/eap/standalone/configuration ~/
Create a temporary, busybox deployment that can act as an intermediary for uploading the configuration files. Wait for deployment to complete
oc run busybox --image=busybox --wait --command -- /bin/sh -c "while true; do sleep 10; done"
Mount a new PV to the busybox deployment. Wait for deployment to complete
oc set volume deploymentconfig busybox --add -t pvc --name=sso-volume --claim-name=sso-config --mount-path=/configuration --claim-mode=ReadWriteOnce --claim-size=1Gi
Edit your configuration files now
Upload the configuration files to your new PV via the busybox pod
oc rsync ~/configuration/ <busybox_pod_name>:/configuration/
Destroy the busybox deployment
oc delete all -l run=busybox --force --grace-period=0
Finally, you attach your already created and ready-to-go persistent configuration to the RH SSO deployment
oc set volume deploymentconfig sso --add -t pvc --name=sso-volume --claim-name=sso-config --mount-path=/opt/eap/standalone/configuration
Once your new deployment is...still failing because of permission issues :/
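One thing that might be worth experimenting with (untested, and it assumes the busybox pod runs with enough privileges to change ownership) is fixing group ownership and permissions on the uploaded files before deleting the busybox deployment, so they match what the SSO image expects:
oc rsh <busybox_pod_name> chgrp -R 0 /configuration
oc rsh <busybox_pod_name> chmod -R g+rw /configuration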

Hide/obfuscate environmental parameters in docker

I'm using the mysql image as an example, but the question is generic.
The password used to launch mysqld in Docker is not visible in docker ps; however, it is visible in docker inspect:
sudo docker run --name mysql-5.7.7 -e MYSQL_ROOT_PASSWORD=12345 -d mysql:5.7.7
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS               NAMES
b98afde2fab7        mysql:5.7.7         "/entrypoint.sh mysq   6 seconds ago       Up 5 seconds        3306/tcp            mysql-5.7.7
sudo docker inspect b98afde2fab75ca433c46ba504759c4826fa7ffcbe09c44307c0538007499e2a
"Env": [
"MYSQL_ROOT_PASSWORD=12345",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"MYSQL_MAJOR=5.7",
"MYSQL_VERSION=5.7.7-rc"
]
Is there a way to hide/obfuscate environment parameters passed when launching containers. Alternatively, is it possible to pass sensitive parameters by reference to a file?
Weirdly, I'm just writing an article on this.
I would advise against using environment variables to store secrets, mainly for the reasons Diogo Monica outlines here; they are visible in too many places (linked containers, docker inspect, child processes) and are likely to end up in debug info and issue reports. I don't think using an environment variable file will help mitigate any of these issues, although it would stop values getting saved to your shell history.
Instead, you can pass in your secret in a volume e.g:
$ docker run -v $(pwd)/my-secret-file:/secret-file ....
If you really want to use an environment variable, you could pass it in as a script to be sourced, which would at least hide it from inspect and linked containers (e.g. CMD source /secret-file && /run-my-app).
The main drawback with using a volume is that you run the risk of accidentally checking the file into version control.
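Putting the volume and source ideas together, a rough sketch (the file name and password are only illustrative; the entrypoint path is the one shown in the mysql image above):
# create the secret file locally (illustrative value)
echo "export MYSQL_ROOT_PASSWORD=12345" > my-secret-file
# mount it read-only and source it before starting mysqld, so the value
# never appears in the container's Env list in docker inspect
docker run -d --name mysql-5.7.7 \
  -v $(pwd)/my-secret-file:/secret-file:ro \
  --entrypoint /bin/sh \
  mysql:5.7.7 \
  -c ". /secret-file && exec /entrypoint.sh mysqld"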
A better, but more complicated solution is to get it from a key-value store such as etcd (with crypt), keywhiz or vault.
You ask, "Alternatively, is it possible to pass sensitive parameters by reference to a file?" From the docs at http://docs.docker.com/reference/commandline/run/: --env-file=[] Read in a file of environment variables.
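For example (the file name and value are illustrative; note that values passed this way still end up in the container's environment, so they remain visible via docker inspect):
echo "MYSQL_ROOT_PASSWORD=12345" > mysql.env
docker run --name mysql-5.7.7 --env-file mysql.env -d mysql:5.7.7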