Share mount namespace with a privileged pod - namespaces

I am trying to get the mount point details of the host from a Kubernetes pod. It is a privileged container.
Even if I mount the root file system, I am not able to see the mounts of a particular type, say s3fs, maybe because they belong to a different namespace.
What is the best way to share the mount namespace?

If you really just want details of the host's mount points, rather than access to them, you can run your Pod with hostPID: true and then inspect the mounts of a process that you know is running in the host's mount namespace (for example PID 1) via the proc filesystem, like so:
cat /proc/1/mounts
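A minimal Pod manifest sketching this approach (the name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mount-inspector        # illustrative name
spec:
  hostPID: true                # share the host's PID namespace
  restartPolicy: Never
  containers:
    - name: inspector
      image: busybox
      # PID 1 is the host's init, so its mount table is the host's
      command: ["sh", "-c", "cat /proc/1/mounts"]
```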

I did some research and found that Kubernetes provides an option called mount propagation, which achieves what I need.
I tested this feature in my local setup and it gave me the result I wanted.
A few links that I found useful:
https://medium.com/kokster/kubernetes-mount-propagation-5306c36a4a2d
https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation
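As a sketch, a hostPath volume mounted with HostToContainer propagation looks like this (the name and image are illustrative; the question's scenario already uses a privileged container):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mount-viewer           # illustrative name
spec:
  containers:
    - name: viewer
      image: busybox
      command: ["sleep", "3600"]
      securityContext:
        privileged: true
      volumeMounts:
        - name: host-root
          mountPath: /host
          # mounts created on the host after startup become visible in the container
          mountPropagation: HostToContainer
  volumes:
    - name: host-root
      hostPath:
        path: /
```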

Related

Openshift 4.6 Node and Master Config Files

Where are the OpenShift master and node host files in v4.6?
Previously, in v3, they were hosted as below:
Master host files at /etc/origin/master/master-config.yaml
Node host files at /etc/origin/node/node-config.yaml
You can check your current kubelet configuration using the following procedures instead of a configuration file on the node hosts as in OCPv3, because as of OCPv4 the kubelet configuration is managed dynamically.
Further information is here: Generating a file that contains the current configuration.
You can check it using the above reference procedure (generate the configuration file) or the oc CLI as follows:
$ oc get --raw /api/v1/nodes/${NODE_NAME}/proxy/configz | \
jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'
These files no longer exist in the same form as in OCP 3. To change anything on the machines themselves, you'll need to create MachineConfigs, as CoreOS is an immutable operating system. If you change anything manually on the filesystem and reboot the machine, your changes will typically be reset.
To modify worker nodes, the setting you are looking for can often be configured via a kubeletConfig: Managing nodes - Modifying Nodes. Note that only certain settings can be changed this way; others cannot be changed at all.
For the master config, it depends on what you want to do: you might change a setting via a machineConfigPool, or, for example, edit API server settings via oc edit apiserver cluster. So it depends on what you actually want to change.
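For example, a KubeletConfig custom resource targeting a labelled machine config pool might look like this (the name, label, and maxPods value are illustrative; the target pool must carry the matching label):

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods           # illustrative name
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: set-max-pods   # label the worker MCP with this
  kubeletConfig:
    maxPods: 500
```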

Cannot map agent.conf using Cygnus docker installation

I have a problem installing Cygnus using Docker; simply put, I cannot understand where I should map my specific agent.conf.
The image I am using is from here.
When I try to map an agent.conf which has my specific setup into the container, it starts and runs but the file is not taken over. Moreover, any change I make to the file inside the container won't persist; it reverts to the previous default state.
I have no such issue with grouping_rules.conf using the same approach.
I used docker and docker compose, both with the same results.
The path to which I try to map it is /opt/apache-flume/conf/agent.conf:
docker run -v /home/igor/Documents/cygnus/agent.conf:/opt/apache-flume/conf/agent.conf fiware/cygnus-ngsi
Can someone who managed to run it with their own config tell me whether I misunderstood the location of agent.conf? This is strange; I have used many Docker images and never had an issue where I was unable to copy from my machine into a container.
Thanks in advance.
** EDIT **
Link of agent.conf
Did you copy the agent.conf file to your directory before starting the container?
As you can see here, when you define a volume with the -v option, Docker mounts the host file or directory into the container at the given mount point. Therefore, you must first provide the agent.conf file on your host.
The reason is that when using a "bind mounted" directory from the host, you're telling Docker that you want to take a file or directory from your host and use it in your container. Docker should not modify those files/directories, unless you explicitly do so. For example, you don't want -v /home/user/:/var/lib/mysql to result in your home directory being replaced with a MySQL database.
If you do not have access to the agent.conf file, you can download the template from the source code in the official Cygnus GitHub repo here. You can also copy it out of a running container using docker cp:
docker cp <containerId>:/file/path/within/container /host/path/target
Keep in mind that you will have to edit the agent.conf file to configure it according to the database you are using. You can find in the official docs how to configure Cygnus to use different sinks such as MongoDB, MySQL, etc.
I hope I have been helpful.
Best regards!

Have `oc` follow a cluster depending on directory

I use the oc tool for several different clusters.
Since I usually keep local YAML files for any OpenShift objects I view/modify, either ad hoc or due to some config management scheme of the individual cluster, I have a separate directory on my machine for each cluster (which, in turn, is of course versioned in git). Let's call them ~/clusters/a/, ~/clusters/b/, etc.
Now, when I cd around on my local machine, the oc command uses the global ~/.kube/config to find the cluster I last logged in to. In other words, oc does not care at all about which directory I am in.
Is there a way to have oc store a "local" configuration (i.e. in ~/clusters/a/.kube_config or something like that), so that when I enter the ~/clusters/a/ directory, I am automatically working with that cluster without having to explicitly switch clusters with oc login?
You could set the KUBECONFIG environment variable to point at a different configuration file for each cluster. You would need to set the variable to the respective file in each separate terminal session or window.
https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable
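For example (the path is illustrative):

```shell
# In a terminal dedicated to cluster "a", point oc at that cluster's config
export KUBECONFIG="$HOME/clusters/a/.kube_config"
# Every oc invocation in this shell session now uses that file, e.g.:
# oc project
```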
To expand on Graham's answer, KUBECONFIG can specify a list of config files which will be merged if more than one exist. The first to set a particular value wins, as described in the merging rules.
So you can add a local config with just the current-context, e.g. ~/clusters/a/.kube_config could be
current-context: projecta/192-168-99-100:8443/developer
and ~/clusters/b/.kube_config:
current-context: projectb/192-168-99-101:8443/developer
Obviously need to adjust this for your particular cluster using the format
current-context: <namespace>/<cluster>/<user>
Then set KUBECONFIG with a relative path and the global config
export KUBECONFIG=./.kube_config:~/.kube/config
Note that if ./.kube_config does not exist it will be ignored.
The current-context will then be overridden by the one defined in the local .kube_config, if one exists.
I tested this locally with two minishift clusters and it seemed to work OK. I have not tested the behaviour when writing config, though.
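To make the switch fully automatic when you cd into a cluster directory, one option is a small wrapper function in your shell profile. This is only a sketch under the assumptions above (a per-directory .kube_config plus the global config):

```shell
# Wrapper for ~/.bashrc: prepend a per-directory .kube_config
# (if present) to the lookup chain before calling the real oc
oc() {
  if [ -f ./.kube_config ]; then
    KUBECONFIG="./.kube_config:$HOME/.kube/config" command oc "$@"
  else
    command oc "$@"
  fi
}
```

With this in place, running oc inside ~/clusters/a/ picks up that directory's context automatically, and behaves as before elsewhere.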

OpenShift: how to create a host alias

I have an external TLS-enabled service that I want my pods to access
https://abc.myservice.acme
abc.myservice.acme resolves to 1.2.3.4. I wish to override this IP address with another (say 5.6.7.8) for the pods to use.
I would add an entry to each pod's /etc/hosts to override the IP address, but I have a feeling that is an anti-pattern and there's probably a better way of doing this.
I investigated/tried:
creating a service + endpoint. This works, but the problem is that the service name is not present in the SSL certificate's SAN entry, so I'm getting an "SSL: no alternative certificate subject name matches target host name 'svc-external-acme'" error. Sure, I could add it to the certificate SAN, but that's probably not the correct solution.
installing DNSmasq (https://developers.redhat.com/blog/2015/11/19/dns-your-openshift-v3-cluster/) on the worker nodes but again it feels like a complicated hack. There must be a simpler one.
hostAliases. Unfortunately, this is only available in Kubernetes 1.7+, but I'm on OpenShift 3.5 (Kubernetes 1.6). This would have been perfect.
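For reference, on Kubernetes 1.7+ the hostAliases approach would look roughly like this (the pod name and image are illustrative; the IP and hostname are taken from above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: acme-client            # illustrative name
spec:
  hostAliases:
    - ip: "5.6.7.8"
      hostnames:
        - "abc.myservice.acme"
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
```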
Is there any way I can accomplish #3 in openshift?
I can edit the image to echo my desired entry to /etc/hosts, but I'm saving it as last resort.
-M
Maybe I'm a bit late answering this question.
I had a similar issue with our dev environment, and the way we managed to resolve it was:
We created a config map with the desired content of the /etc/hosts file. I'm using hosts-delta as the name of the config map entry.
We defined a mount point for that config map inside the container (/app/hosts/). The directory /app/hosts should exist within the container filesystem, so you should add a RUN mkdir -p /app/hosts to your Dockerfile.
We modified the deployment config YAML, adding a post-start hook like this:
lifecycle:
  postStart:
    exec:
      command:
        - /bin/sh
        - '-c'
        - |
          cat /app/hosts/hosts-delta >> /etc/hosts
The previous snippet should be placed inside the spec > template > spec > containers element
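The config map from the first step could be sketched like this (the name and entry key match the steps above; the override line itself is illustrative, using the IP from the question):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: hosts-delta            # illustrative name
data:
  hosts-delta: |
    5.6.7.8 abc.myservice.acme
```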
Hope this helps somebody

I want to verify the existence of a file in a linux container from linux virtual machine

I am on my virtual machine and I need a way to connect to the container and verify that a specific file exists. How can I do that?
If you have enabled SSH in your container, then you should be able to log in to it from anywhere (even from the VM):
ssh username@lxc-hostname
Once logged in, you can search for the file. There are various tools, but I like to use the locate command line tool:
locate <filename>
Note that locate relies on an index built by updatedb, so a recently created file may not show up until the index is refreshed; find searches the filesystem directly. Hope this was useful.
Without SSH you can still view the files and directories of an LXC container. For this we need to find the PID (process identifier) of the container's init process:
$ lxc-info -pHn C1
The above command returns the PID of the container launched with the name C1.
Now go to /proc/<pid>/root/. From there you can view all the files of the LXC container C1. The beauty of LXC :)
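Putting the two steps together (the container name C1 and the checked path are illustrative):

```shell
# Look up the container's init PID (C1 is the container name)
pid=$(lxc-info -pHn C1 2>/dev/null)
# The container's root filesystem is visible under /proc/<pid>/root,
# so you can check for a file without entering the container
ls "/proc/${pid}/root/etc" 2>/dev/null || echo "container C1 not running"
```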