OpenShift 4.3 - Attach PVC to pod without privileged access

I'm trying to mount a PVC into a MongoDB deployment without privileged access.
I've tried to grant anyuid to the pods via:
oc adm policy add-scc-to-user anyuid -z default --as system:admin
In the deployment I'm using a securityContext. I've tried several combinations of fsGroup etc.:
spec:
  securityContext:
    runAsUser: 99
    runAsGroup: 99
    supplementalGroups:
      - 99
    fsGroup: 99
When I exec into the pod, the uid and gid are set correctly:
bash-4.2$ id
uid=99(nobody) gid=99(nobody) groups=99(nobody)
bash-4.2$ whoami
nobody
bash-4.2$ cd /var/lib/mongodb/data
bash-4.2$ touch test.txt
touch: cannot touch 'test.txt': Permission denied
But the pod can't write to the PVC directory:
ERROR: Couldn't write into /var/lib/mongodb/data
CAUSE: current user doesn't have permissions for writing to /var/lib/mongodb/data directory
DETAILS: current user id = 99, user groups: 99 0
DETAILS: directory permissions: drwxr-xr-x owned by 0:0, SELinux: system_u:object_r:container_file_t:s0:c234,c491
I've also tried instantiating the MySQL template with a PVC from the OpenShift catalog, without any configuration changes, and it hits the same issue.
Thanks for the help.

A temporary solution is to use an init container with root privileges to change the owner of the mounted path:
initContainers:
  - name: mongodb-init
    image: alpine
    command: ["sh", "-c", "chown -R 99 /var/lib/mongodb/data"]
    volumeMounts:
      - mountPath: /var/lib/mongodb/data
        name: mongodb-pvc
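For context, the snippet above assumes the main container mounts the same volume and that the volume references the PVC; a minimal sketch of the missing pieces (the claim name mongodb is an illustrative assumption, not taken from the original deployment):
containers:
  - name: mongodb
    volumeMounts:
      - mountPath: /var/lib/mongodb/data
        name: mongodb-pvc
volumes:
  - name: mongodb-pvc
    persistentVolumeClaim:
      claimName: mongodb   # assumption: the PVC is named "mongodb"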
I'm also looking at a tool named Udica, which can generate SELinux security policies for containers: https://github.com/containers/udica
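For reference, the Udica workflow from its README looks roughly like this (run on a host with podman; the container image and policy names here are illustrative, and udica prints the exact semodule/run commands to use when it finishes):
# run the container and capture its inspect output
podman run -d --name mongodb-test -v mongodb-data:/var/lib/mongodb/data mongo:4.2
podman inspect mongodb-test > container.json

# generate a custom SELinux policy named "my_mongodb" from the container spec
udica -j container.json my_mongodb

# load the generated policy together with udica's base template,
# then re-run the container confined by the new type
semodule -i my_mongodb.cil /usr/share/udica/templates/base_container.cil
podman run -d --security-opt label=type:my_mongodb.process -v mongodb-data:/var/lib/mongodb/data mongo:4.2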

Related

How to permanently change sysctl settings on a GKE host node?

We have a Kubernetes cluster running in Google GKE. I want to permanently set another value for fs.aio-max-nr in sysctl, but it keeps reverting to the default after running sudo reboot.
This is what I've tried:
sysctl -w fs.aio-max-nr=1048576
echo 'fs.aio-max-nr = 1048576' | sudo tee --append /etc/sysctl.d/99-gke-defaults.conf
echo 'fs.aio-max-nr = 1048576' | sudo tee --append /etc/sysctl.d/00-sysctl.conf
Is it possible to change this permanently? And why is there no /etc/sysctl.conf, but two sysctl files in the /etc/sysctl.d/ folder?
I'd do this by deploying a DaemonSet on all the nodes that need this setting. The only drawback is that the DaemonSet pod must run with elevated privileges. The container has access to /proc on the host, so you just need to execute your sysctl commands in a script and then exit.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: sysctl
spec:
  selector:
    matchLabels:
      name: sysctl
  template:
    metadata:
      labels:
        name: sysctl
    spec:
      containers:
        - name: sysctl
          image: alpine
          command:
            - /bin/sh
            - -c
            # the container exits after applying the sysctl; append a long sleep
            # here if you don't want Kubernetes to keep restarting it
            - sysctl fs.aio-max-nr=1048576
          securityContext:
            privileged: true
There's also an example here.
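A quick way to check the result, assuming the manifest above is saved as sysctl-daemonset.yaml and keeps the name=sysctl label:
kubectl apply -f sysctl-daemonset.yaml
kubectl logs -l name=sysctl   # each pod should print: fs.aio-max-nr = 1048576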
I ended up switching the node image from Google's default cos_containerd to ubuntu_containerd. This made the sysctl changes permanent.
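If you go this route, the node image type can be switched per node pool with gcloud; a sketch, assuming a cluster named my-cluster and a node pool named default-pool:
gcloud container clusters upgrade my-cluster \
  --node-pool default-pool \
  --image-type UBUNTU_CONTAINERD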

Openshift volume mount permission errors

I am using a shared EFS volume in my OpenShift deployment.
The volume shows restricted permissions for the group:
drwxr-xr-x. 2 root root 6144 Oct 21 04:33 events.
I have set a supplemental group:
securityContext:
  supplementalGroups:
    - 5555
kubectl exec -it pod/app-599b696b46-hkv6b -n appr -- bash
bash-4.2$ id
uid=1000940000(1000940000) gid=0(root) groups=0(root),5555,1000940000
The EFS access point is also set with the right permissions.
Still, I get the same permission errors. Any help on this is much appreciated.

Openshift container with wrong openshift.io/scc

I'm seeing unexplained behavior in an OpenShift 4.4.17 cluster: the oauth-openshift Deployment (in the openshift-authentication namespace) has replicas=2. The first pod is Running with:
openshift.io/scc: anyuid
The second pod goes into a CrashLoopBackOff state, and the SCC assigned to it is the one below (a customized SCC for nginx purposes):
openshift.io/scc: nginx-ingress-scc
According to the documentation:
By default, the pods inside the openshift-authentication and openshift-authentication-operator namespaces run with the anyuid SCC.
I suppose something has been changed in the cluster, but I cannot figure out where the mistake is.
The oauth-openshift Deployment is in its default configuration:
serviceAccountName: oauth-openshift
namespace: openshift-authentication
$ oc get scc anyuid -o yaml
users:
system:serviceaccount:default:oauth-openshift
system:serviceaccount:openshift-authentication:oauth-openshift
system:serviceaccount:openshift-authentication:default
$ oc get pod -n openshift-authentication
NAME READY STATUS RESTARTS AGE
oauth-openshift-59f498986d-lmxdv 0/1 CrashLoopBackOff 158 13h
oauth-openshift-d4968bd74-ll7mn 1/1 Running 0 23d
$ oc logs oauth-openshift-59f498986d-lmxdv -n openshift-authentication
Copying system trust bundle
cp: cannot remove '/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem': Permission denied
$ oc get pod oauth-openshift-59f498986d-lmxdv -n openshift-authentication -o=yaml|grep serviceAccount
serviceAccount: oauth-openshift
serviceAccountName: oauth-openshift
$ oc get pod oauth-openshift-59f498986d-lmxdv -n openshift-authentication -o=yaml|grep scc
openshift.io/scc: nginx-ingress-scc
Auth Operator:
$ oc get pod -n openshift-authentication-operator
NAME READY STATUS RESTARTS AGE
authentication-operator-5498b9ddcb-rs9v8 1/1 Running 0 33d
$ oc get pod authentication-operator-5498b9ddcb-rs9v8 -n openshift-authentication-operator -o=yaml|grep scc
openshift.io/scc: anyuid
The managementState is set to Managed
First of all, you should check whether your SCC priorities have been customized. For example, the anyuid SCC has priority 10, which is the highest by default.
But if another SCC (in this case, nginx-ingress-scc) is configured with a priority higher than 10, then that SCC gets selected for the oauth pod unexpectedly. That may be what causes this issue.
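A quick way to inspect and fix this (the patch is only a sketch; clear or lower the priority according to what the custom SCC actually needs):
# list SCCs with their priorities
oc get scc -o custom-columns=NAME:.metadata.name,PRIORITY:.priority

# if nginx-ingress-scc has a priority above 10, clearing it restores
# the expected SCC selection for the oauth pods
oc patch scc nginx-ingress-scc --type=merge -p '{"priority": null}'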
The problem was that the customized SCC (nginx-ingress-scc) had a priority higher than 10, which is anyuid's priority.
Now solved.

Cannot externally access the OpenShift 4.2 built-in docker registry

I have a kubeadmin account for OpenShift 4.2 and am able to successfully login via oc login -u kubeadmin.
I exposed the built-in docker registry through DefaultRoute as documented in https://docs.openshift.com/container-platform/4.2/registry/securing-exposing-registry.html
My docker client runs on macOS and is configured to trust the default self-signed certificate of the registry
openssl s_client -showcerts -connect $(oc registry info) </dev/null 2>/dev/null|openssl x509 -outform PEM > tls.pem
security add-trusted-cert -d -r trustRoot -k ~/Library/Keychains/login.keychain tls.pem
Now when I try logging into the built-in registry, I get the following error
docker login $(oc registry info) -u $(oc whoami) -p $(oc whoami -t)
Error response from daemon: Get https://my-openshift-registry.com/v2/: unauthorized: authentication required
The registry logs report the following errors
error authorizing context: authorization header required
invalid token: Unauthorized
And more specifically
oc logs -f -n openshift-image-registry deployments/image-registry
time="2019-11-29T18:03:25.581914855Z" level=warning msg="error authorizing context: authorization header required" go.version=go1.11.13 http.request.host=my-openshift-registry.com http.request.id=aa41909a-4aa0-42a5-9568-91aa77c7f7ab http.request.method=GET http.request.remoteaddr=10.16.7.10 http.request.uri=/v2/ http.request.useragent="docker/19.03.5 go/go1.12.12 git-commit/633a0ea kernel/4.9.184-linuxkit os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.5 \\(darwin\\))"
time="2019-11-29T18:03:25.581958296Z" level=info msg=response go.version=go1.11.13 http.request.host=my-openshift-registry.com http.request.id=d2216e3a-0e12-4e77-b3cb-fd47b6f9a804 http.request.method=GET http.request.remoteaddr=10.16.7.10 http.request.uri=/v2/ http.request.useragent="docker/19.03.5 go/go1.12.12 git-commit/633a0ea kernel/4.9.184-linuxkit os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.5 \\(darwin\\))" http.response.contenttype="application/json; charset=utf-8" http.response.duration="923.654µs" http.response.status=401 http.response.written=87
time="2019-11-29T18:03:26.187770058Z" level=error msg="invalid token: Unauthorized" go.version=go1.11.13 http.request.host=my-openshift-registry.com http.request.id=638fc003-1d4a-433c-950e-f9eb9d5328c4 http.request.method=GET http.request.remoteaddr=10.16.7.10 http.request.uri="/openshift/token?account=kube%3Aadmin&client_id=docker&offline_token=true" http.request.useragent="docker/19.03.5 go/go1.12.12 git-commit/633a0ea kernel/4.9.184-linuxkit os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.5 \\(darwin\\))"
time="2019-11-29T18:03:26.187818779Z" level=info msg=response go.version=go1.11.13 http.request.host=my-openshift-registry.com http.request.id=5486d94a-f756-401b-859d-0676e2a28465 http.request.method=GET http.request.remoteaddr=10.16.7.10 http.request.uri="/openshift/token?account=kube%3Aadmin&client_id=docker&offline_token=true" http.request.useragent="docker/19.03.5 go/go1.12.12 git-commit/633a0ea kernel/4.9.184-linuxkit os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.5 \\(darwin\\))" http.response.contenttype=application/json http.response.duration=6.97799ms http.response.status=401 http.response.written=0
My oc client is
oc version
Client Version: version.Info{Major:"4", Minor:"1+", GitVersion:"v4.1.0+b4261e0", GitCommit:"b4261e07ed", GitTreeState:"clean", BuildDate:"2019-07-06T03:16:01Z", GoVersion:"go1.12.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.6+2e5ed54", GitCommit:"2e5ed54", GitTreeState:"clean", BuildDate:"2019-10-10T22:04:13Z", GoVersion:"go1.12.8", Compiler:"gc", Platform:"linux/amd64"}
My docker info is
docker info
Client:
Debug Mode: false
Server:
Containers: 7
Running: 0
Paused: 0
Stopped: 7
Images: 179
Server Version: 19.03.5
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: b34a5c8af56e510852c35414db4c1f4fa6172339
runc version: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.184-linuxkit
Operating System: Docker Desktop
OSType: linux
Architecture: x86_64
CPUs: 6
Total Memory: 5.818GiB
Name: docker-desktop
ID: JRNE:4IBW:MUMK:CGKT:SMWT:27MW:D6OO:YFE5:3KVX:AEWI:QC7M:IBN4
Docker Root Dir: /var/lib/docker
Debug Mode: true
File Descriptors: 29
Goroutines: 44
System Time: 2019-11-29T21:12:21.3565037Z
EventsListeners: 2
HTTP Proxy: gateway.docker.internal:3128
HTTPS Proxy: gateway.docker.internal:3129
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
I have tried adding the registry-viewer role to kubeadmin, but this did not make any difference
oc policy add-role-to-user registry-viewer kubeadmin
oc policy add-role-to-user registry-viewer kube:admin
Is there any suggestion as to what I could try or how to diagnose the problem further? I am able to access the registry from within the cluster, however, I need to access it externally through docker login.
As silly as it sounds, the problem was that $(oc whoami) evaluated to kube:admin instead of kubeadmin, and only the latter works. For example, in order to log in successfully I had to replace
docker login $(oc registry info) -u $(oc whoami) -p $(oc whoami -t)
with
docker login $(oc registry info) -u kubeadmin -p $(oc whoami -t)
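If you want to keep a single one-liner, one option (purely a convenience sketch) is to rewrite the sanitized username on the fly:
docker login $(oc registry info) -u "$(oc whoami | sed 's/^kube:admin$/kubeadmin/')" -p "$(oc whoami -t)"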
The relevant role is registry-viewer; however, I think the user kubeadmin already has it pre-configured:
oc policy add-role-to-user registry-viewer kubeadmin
oc adm policy add-cluster-role-to-user registry-viewer kubeadmin
To add the registry-viewer role, the command is:
oc adm policy add-cluster-role-to-user registry-viewer kubeadmin
You can refer to their documentation to work with the internal registry.
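As an aside, newer oc clients also ship an oc registry login subcommand that writes the current session token into your Docker/Podman credentials file, which sidesteps the username issue entirely; I'm not sure it's available in a 4.1 client, so treat this as an option to check rather than a guaranteed fix:
oc registry login   # logs in to the integrated registry using the current session token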

How to give a container root permission (serviceaccount) before starting the build

OpenShift does not allow running containers as root by default, but you can do this by creating a service account:
oc adm policy add-scc-to-user anyuid -z useroot
and then patching the deployment configuration, which consequently deploys a new replication controller revision with the changes. Is it possible to create the service account and include it in the following command:
oc new-app --name=test --docker-image=myregistry.com/test:latest
so that the service account name is included in the above command, avoiding a new revision of the app? Or is there any other way to anticipate this root-permission error and relax the pod's security so it can run as root, without patching or redeploying the app?
Will and Graham have already provided great comments for you,
so I'll add some practical details to them as follows.
If you grant the anyuid SCC to the default ServiceAccount before oc new-app, the test pods will run as root without a version change.
# oc adm policy add-scc-to-user anyuid -z default
# oc new-app --name=test --docker-image=myregistry.com/test:latest
# oc rollout history dc/test
deploymentconfigs "test"
REVISION STATUS CAUSE
1 Complete config change
# oc rsh dc/test id
uid=0(root) gid=0(root) groups=0(root)
OR
If you need to specify a custom ServiceAccount name, you can export the oc new-app YAML, add a serviceAccountName: useroot element to it, and then create the resources. These steps also do not change the deployment version.
# oc create sa useroot
# oc adm policy add-scc-to-user anyuid -z useroot
# oc new-app --name=test --docker-image=myregistry.com/test:latest -o yaml --dry-run > test.yml
# vim test.yml
apiVersion: v1
items:
  - apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig
    ...
    spec:
      ...
      template:
        spec:
          serviceAccountName: useroot
          ...
# oc create -f ./test.yml
imagestream.image.openshift.io/test created
deploymentconfig.apps.openshift.io/test created
service/test created
# oc rollout history dc/test
deploymentconfigs "test"
REVISION STATUS CAUSE
1 Complete config change
# oc rsh dc/test id
uid=0(root) gid=0(root) groups=0(root)
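To double-check which SCC the pods actually ended up with (a quick sanity check; the deploymentconfig=test label is added automatically to pods created by the DeploymentConfig, and the annotation should show anyuid):
# oc get pod -l deploymentconfig=test -o yaml | grep 'openshift.io/scc'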