I am using a shared EFS volume in my OpenShift deployment. The volume shows restricted permissions for the group (no group write bit):
drwxr-xr-x. 2 root root 6144 Oct 21 04:33 events
I have set a supplemental group:
securityContext:
supplementalGroups:
- 5555
kubectl exec -it pod/app-599b696b46-hkv6b -n appr -- bash
bash-4.2$ id
uid=1000940000(1000940000) gid=0(root) groups=0(root),5555,1000940000
The EFS access point is also set with the right permissions.
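For reference, the access point is configured roughly like this (a sketch using the AWS CLI; the file system ID and the exact IDs/path are illustrative):
aws efs create-access-point \
  --file-system-id fs-12345678 \
  --posix-user Uid=5555,Gid=5555 \
  --root-directory 'Path=/events,CreationInfo={OwnerUid=5555,OwnerGid=5555,Permissions=0775}'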
Still I get the same permission errors. Any help on this is much appreciated.
I'm struggling to import a dump via kubectl into a MySQL database running in Kubernetes. There is no error output, but also no data is imported.
Here is proof that the pod exists, along with the dump file at /database.sql on the node and the command I'm running.
root@node-1:~# kubectl get pods -n esopa-test | grep mariadb
esopa-test-mariadb-0 1/1 Running 0 14d
root@node-1:~# ll /database.sql
-rw-r--r-- 1 root root 4418347 Oct 14 08:50 /database.sql
root@node-1:~# kubectl exec esopa-test-mariadb-0 -n esopa-test -- mysql -u root -proot database < /database.sql
root@node-1:~#
Thank you for any advice
You can copy files between a pod and your node by using the kubectl cp command.
To copy files from a pod to your node, the syntax is very simple:
kubectl cp <some-namespace>/<some-pod>:<directory-inside-pod> <directory_on_your_node>
So in your use case you can use the following command:
kubectl cp esopa-test/esopa-test-mariadb-0:/database.sql <directory_on_your_node>
And to copy files from node to pod you can use:
kubectl cp <directory_on_your_node> esopa-test/esopa-test-mariadb-0:/database.sql
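For the import itself, one option (a sketch, assuming the same credentials and database name as in your command) is to copy the dump into the pod first and then run mysql inside it:
kubectl cp /database.sql esopa-test/esopa-test-mariadb-0:/tmp/database.sql
kubectl exec esopa-test-mariadb-0 -n esopa-test -- bash -c "mysql -u root -proot database < /tmp/database.sql"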
I'm new to OpenShift and Kubernetes. I have installed an OpenShift v3.11.0+bf985b1-463 cluster on my CentOS 7 machines.
Running prerequisites.yml and deploy_cluster.yml completed successfully, and I have updated htpasswd and granted the cluster-admin role to my user.
htpasswd -b ${HTPASSWD_PATH}/htpasswd $OKD_USERNAME ${OKD_PASSWORD}
oc adm policy add-cluster-role-to-user cluster-admin $OKD_USERNAME
and I have also created the user and identity with the commands below.
oc create user bob
oc create identity ldap_provider:bob
oc create useridentitymapping ldap_provider:bob bob
When I try to log in with oc login -u bob -p password, it says:
Login failed (401 Unauthorized)
Verify you have provided correct credentials.
But I am able to log in with oc login -u system:admin.
For your information: the OKD deploy_cluster.yml ran successfully, but the pod below is in an error state (output of oc get pods).
Could that be causing the problem?
Please suggest how I can fix this issue. Thank you.
UPDATE:
I ran deploy_cluster.yml once again and the login issue is solved; I am able to log in. But the playbook now fails with the error below.
This phase can be restarted by running: playbooks/openshift-logging/config.yml
Node logging-es-data-master-ioblern6 in cluster logging-es was unable to rollout. Please see documentation regarding recovering during a rolling cluster restart
In the OpenShift console the logging pod has the event below, but all the servers have plenty of memory (more than 65% is free).
And the Ansible version is 2.6.5
1 master node: 4 CPU, 16 GB RAM, 50 GB HDD
2 slave nodes and 1 infra node: 4 CPU, 16 GB RAM, 20 GB HDD
To create a new user, try to follow these steps:
1. On each master node, create the password entry in the htpasswd file with:
$ htpasswd -b </path/to/htpasswd> <user_name> <password>
$ htpasswd -b /etc/origin/master/htpasswd myUser myPassword
2. On each master node, restart the master API and master controllers:
$ master-restart controllers && master-restart api
or
$ /usr/local/bin/master-restart api && /usr/local/bin/master-restart controllers
3. Apply the needed roles:
$ oc adm policy add-cluster-role-to-user cluster-admin myUser
4. Log in as myUser:
$ oc login -u myUser -p myPassword
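Once logged in, a quick sanity check (a sketch) to confirm who you are and that the user and identity objects exist:
$ oc whoami
$ oc get users
$ oc get identities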
Running deploy_cluster.yml again after configuring the htpasswd file forces a restart of the master controllers and API, which is why you were then able to log in as your new user.
About the other problem: the registry-console and logging-es-data-master pods are not running because you cannot run deploy_cluster.yml again while your cluster is already up and running, so you have to uninstall OKD and then run the playbook again. This happens because the SDN is already working and all your nodes already have the needed certificates.
$ ansible-playbook -i path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml
and then again
$ ansible-playbook -i path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
More detailed information is available here.
If, after all of this, the logging-es-data-master pod still does not run, uninstall the logging component with:
$ ansible-playbook -i /path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=False -e openshift_logging_purge_logging=true
and then uninstall the whole OKD cluster and install it again.
If your cluster is already working and you cannot perform the installation again, try to uninstall and reinstall only the logging component:
$ ansible-playbook -i /path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=False -e openshift_logging_purge_logging=true
$ ansible-playbook -i /path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=True
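After the reinstall finishes, you can watch the logging pods come back up (a sketch; in 3.11 aggregated logging runs in the openshift-logging project by default):
$ oc get pods -n openshift-logging -w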
Detailed Red Hat instructions are here.
I'm trying to mount a PVC into a MongoDB deployment without privileged access.
I've tried to set anyuid for the pods via:
oc adm policy add-scc-to-user anyuid -z default --as system:admin
In the deployment I'm using a securityContext config. I've tried several combinations of fsGroup etc.:
spec:
securityContext:
runAsUser: 99
runAsGroup: 99
supplementalGroups:
- 99
fsGroup: 99
When I exec into the pod, the uid and gid are set correctly:
bash-4.2$ id
uid=99(nobody) gid=99(nobody) groups=99(nobody)
bash-4.2$ whoami
nobody
bash-4.2$ cd /var/lib/mongodb/data
bash-4.2$ touch test.txt
touch: cannot touch 'test.txt': Permission denied
But the pod can't write to the PVC directory:
ERROR: Couldn't write into /var/lib/mongodb/data
CAUSE: current user doesn't have permissions for writing to /var/lib/mongodb/data directory
DETAILS: current user id = 99, user groups: 99 0
DETAILS: directory permissions: drwxr-xr-x owned by 0:0, SELinux: system_u:object_r:container_file_t:s0:c234,c491
I've also tried to instantiate the MySQL template with a PVC from the OpenShift catalog, without any configuration changes, and it's the same issue.
Thanks for the help.
A temporary solution is to use an init container with root privileges to change the owner of the mounted path:
initContainers:
- name: mongodb-init
image: alpine
command: ["sh", "-c", "chown -R 99 /var/lib/mongodb/data"]
volumeMounts:
- mountPath: /var/lib/mongodb/data
name: mongodb-pvc
I'm also looking at a tool named Udica. It can generate SELinux security policies for containers: https://github.com/containers/udica
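The Udica workflow looks roughly like this (a sketch based on the project README; the policy name my_container is arbitrary and udica itself prints the exact load commands for your system):
podman inspect <container-id> > container.json
udica -j container.json my_container
# load the generated policy (udica prints the exact command, roughly):
semodule -i my_container.cil /usr/share/udica/templates/base_container.cil
# then start the container with the generated SELinux type:
podman run --security-opt label=type:my_container.process ...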
I'm trying to launch Postgres in IBM Containers. I have just created a volume with:
$ cf ic volume create pgdata
Then mounted it:
$ cf ic run --volume pgdata:/var/pgsql -p 22 registry.ng.bluemix.net/ruimo/pgsql944-cli
After logging into the container through SSH, I found the mounted directory is owned by root:
drwxr-xr-x 3 root root 4096 Jul 8 08:20 pgsql
Since Postgres does not permit running as root, I want to change the owner of this directory, but I cannot:
# chown postgres:postgres pgsql
chown: changing ownership of 'pgsql': Permission denied
Is it possible to change the owner of a mounted directory?
In IBM Containers, user namespaces are enabled for the Docker engine. When user namespaces are enabled, the effective root inside the container is a non-root user outside the container process, and NFS does not allow that mapped non-root user to perform the chown operation on the volume inside the container. Please note that the volume pgdata is NFS; this can be verified by executing mount -t nfs4 from the container, as shown below.
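Both facts can be checked from inside the container (a quick sketch; exact output depends on the host configuration):
cat /proc/self/uid_map   # a non-identity mapping means a user namespace is in effect
mount -t nfs4            # lists the NFS-backed mounts, including the pgdata volume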
You can try the workaround suggested in "How can I fix the permissions using docker on a bluemix volume?"
In this scenario it would be:
1. Mount the Volume to `/mnt/pgdata` inside the container
cf ic run --volume pgdata:/mnt/pgdata -p 22 registry.ng.bluemix.net/ruimo/pgsql944-cli
2. Inside the container
2.1 Create "postgres" group and user
groupadd --gid 1010 postgres
useradd --uid 1010 --gid 1010 -m --shell /bin/bash postgres
2.2 Add the user to group "root"
adduser postgres root
chmod 775 /mnt/pgdata
2.3 Create the pgsql directory under the bind-mounted volume
su -c "mkdir -p /mnt/pgdata/pgsql" postgres
ln -sf /mnt/pgdata/pgsql /var/pgsql
2.4 Remove the user from group "root"
deluser postgres root
chmod 755 /mnt/pgdata
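After these steps, a quick write test (a sketch) confirms that the postgres user can write into the new directory:
su -c "touch /mnt/pgdata/pgsql/write_test && rm /mnt/pgdata/pgsql/write_test" postgres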
In your Dockerfile you can modify the ownership of the directory:
RUN chown postgres:postgres pgsql
Additionally, when you SSH in you can modify the ownership of the directory by using sudo:
sudo chown postgres:postgres pgsql
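Put together, a minimal Dockerfile sketch (assuming the base image from your run command, and that the postgres user exists in it; the /var/pgsql path is taken from your mount):
FROM registry.ng.bluemix.net/ruimo/pgsql944-cli
USER root
# create the data directory if missing and hand it to the postgres user
RUN mkdir -p /var/pgsql && chown -R postgres:postgres /var/pgsql
USER postgres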
Here are three different possible solutions:
1. Use a Dockerfile and do the chown before mounting the volume.
2. Add a USER root instruction in the Dockerfile before you do the chown.
3. Use the --cap-add flag (see the sketch below).
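For the third option, the idea (sketched with plain docker run syntax; whether cf ic run accepts this flag on IBM Containers may vary) is to grant the container the CHOWN capability so chown inside it is allowed:
docker run --cap-add CHOWN --volume pgdata:/var/pgsql -p 22 registry.ng.bluemix.net/ruimo/pgsql944-cli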
I'm setting up a MySQL container like so:
docker run -v /srv/information-db:/var/lib/mysql tutum/mysql /bin/bash -c "/usr/bin/mysql_install_db"
Now, this works when nothing is mounted on /srv on the host, but when I mount my drive, Docker seems to write to the underlying filesystem (/), e.g.:
/]# ls -l /srv
total 0
/]# mount /dev/xvdc1 /srv
/]# mount
...
/dev/xvdc1 on /srv type ext4 (rw,relatime,seclabel,data=ordered)
/]# docker run -v /srv/information-db:/var/lib/mysql tutum/mysql /bin/bash -c "/usr/bin/mysql_install_db"
/]# ls -l /srv
total 16
drwx------. 2 root root 16384 Apr 22 18:05 lost+found
/]# umount /dev/xvdc1
/]# ls -l /srv
total 4
drwxr-xr-x. 4 102 root 4096 Apr 22 18:24 information-db
Has anyone seen this behaviour / found a solution?
Cheers
I've seen something like that. Try performing stat -c %i checks both on the host and inside the container, before and after the mount event (to get the inode numbers of the target directories). I suspect they get mismatched for some reason when you mount the external device.
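For example (a sketch; it assumes stat is available inside the tutum/mysql image):
# on the host, repeat before and after mounting /dev/xvdc1 on /srv
stat -c %i /srv/information-db
# inside a container that mounts the same path
docker run --rm -v /srv/information-db:/var/lib/mysql tutum/mysql stat -c %i /var/lib/mysql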