I have an external TLS-enabled service that I want my pods to access
https://abc.myservice.acme
abc.myservice.acme resolves to 1.2.3.4. I wish to override this IP address with another (say 5.6.7.8) for the pods to use.
I could add an entry to each pod's /etc/hosts to override the IP address, but I have a feeling that this is an anti-pattern and that there's probably a better way of doing this.
I investigated/tried:
1. Creating a service + endpoint. This works, but the problem is that the service name is not present in the SSL certificate's SAN entries, so I'm getting an "SSL: no alternative certificate subject name matches target host name 'svc-external-acme'" message. Sure, I could add it to the certificate SAN, but that's probably not the correct solution. (A rough sketch of what I tried is below this list.)
2. Installing dnsmasq (https://developers.redhat.com/blog/2015/11/19/dns-your-openshift-v3-cluster/) on the worker nodes, but again it feels like a complicated hack. There must be a simpler way.
3. hostAliases. Unfortunately, this is only available for kube 1.7+ but I'm on OpenShift 3.5 (kube 1.6). This would have been perfect.
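For reference, the service + endpoint from option 1 looked roughly like this (the name svc-external-acme is from the error above; the port is just my example):

apiVersion: v1
kind: Service
metadata:
  name: svc-external-acme
spec:
  ports:
  - port: 443
    targetPort: 443
---
apiVersion: v1
kind: Endpoints
metadata:
  name: svc-external-acme   # must match the service name
subsets:
- addresses:
  - ip: 5.6.7.8
  ports:
  - port: 443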
Is there any way I can accomplish #3 in openshift?
I can edit the image to echo my desired entry to /etc/hosts, but I'm saving it as last resort.
-M
Maybe I'm a bit late answering this question.
I had a similar issue with our dev environment, and the way we managed to resolve it was:
1. We created a config map with the desired content of the /etc/hosts file. I'm using hosts-delta as the name of the config map entry.
2. We defined a mount point for that config map inside the container (/app/hosts/). I think the directory /app/hosts should exist within the container filesystem, so you should add a RUN mkdir -p /app/hosts to your Dockerfile.
3. We modified the deployment config YAML, adding a postStart hook like this:
lifecycle:
  postStart:
    exec:
      command:
        - /bin/sh
        - '-c'
        - |
          cat /app/hosts/hosts-delta >> /etc/hosts
The previous snippet should be placed inside the spec > template > spec > containers element
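For completeness, a rough sketch of the config map and the volume mount, assuming the names used above (hosts-delta, /app/hosts); the container name and the host entry are just placeholders:

apiVersion: v1
kind: ConfigMap
metadata:
  name: hosts-delta
data:
  hosts-delta: |
    5.6.7.8 abc.myservice.acme

And in the deployment config, under spec > template > spec:

containers:
- name: myapp              # your container
  volumeMounts:
  - name: hosts-volume
    mountPath: /app/hosts
volumes:
- name: hosts-volume
  configMap:
    name: hosts-delta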
Hope this helps somebody
I tried to deploy the library/cassandra image as a Cassandra container in a Sandbox OpenShift cluster, but it threw this error in the pod logs:
"Running Cassandra as root user or group is not recommended - please start Cassandra using a different system user.
If you really want to force running Cassandra as root, use -R command line option."
When I checked the container description, I could see that the SCC is set to Restricted. So it looks like in Sandbox OpenShift, the "Restricted" SCC is assigned to the "default" service account by default.
But in AWS, when I installed OpenShift with the installer option, I didn't face this error with the same library/cassandra image.
It looks like the default service account there is not associated with the "Restricted" SCC by default.
Could someone clarify what the difference is in the Sandbox environment that triggers this error? And how can I set the same config in AWS OpenShift so that the default service account is associated with the restricted SCC?
I can't see your specific environment, but from the error message I suspect it's being triggered by the GROUP=0, not user=0.
To confirm:
$ oc get pods (whatever) -o yaml | grep openshift.io/scc
This will show you which SCC admitted the pod into the cluster. It should be "restricted" based on what you said. If so, then we've got some good evidence that it's just the group.
Next, you can look for something like this:
$ oc rsh (podname) id -a
uid=1000640000(1000640000) gid=0(root) groups=0(root),1000640000
UID (user) is in the expected billion+ range defined in the namespace annotation. GID (group) is zero.
With that in place, you can either ignore the error, knowing it's only group=0 that's in effect, or you can set a securityContext for your pod (or container) to specify a different gid.
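For the securityContext route, a minimal sketch (the gid here is just an example taken from the allocated range shown above; it must be one your SCC allows):

spec:
  containers:
  - name: cassandra
    image: library/cassandra
    securityContext:
      runAsGroup: 1000640000   # example gid from the namespace's range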
I came to know that the "default" project has a different set of permissions, so even a container with user id 0 can be deployed in the default namespace.
In the Sandbox cluster the project is dev or stage, so it works with the correct security level.
Where are the OpenShift Master and Node Host Files in v4.6?
Previously, in v3, these were located at:
Master host files at /etc/origin/master/master-config.yaml
Node host files at /etc/origin/node/node-config.yaml
You can check your current kubelet configuration using the following procedures instead of a configuration file on the node hosts as in OCP v3, because as of OCP v4 the kubelet configuration is managed dynamically.
Further information is here: Generating a file that contains the current configuration.
You can check it using the above referenced procedure (generate the configuration file) or with the oc CLI as follows.
$ oc get --raw /api/v1/nodes/${NODE_NAME}/proxy/configz | \
jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'
These files no longer exist in the same form as in OCP 3. To change anything on the machines themselves, you'll need to create MachineConfigs, as CoreOS is an immutable operating system. If you change anything manually on the filesystem and reboot the machine, your changes will typically be reset.
To modify worker nodes, the setting you are looking for can often be configured via a kubeletConfig: Managing nodes - Modifying Nodes. Note that only certain settings can be changed; others cannot be changed at all.
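For example, a minimal KubeletConfig sketch along those lines (the label and the maxPods value are just placeholders; the targeted MachineConfigPool must carry a matching label):

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: worker-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: enabled   # label you add to the worker MachineConfigPool
  kubeletConfig:
    maxPods: 250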
For the master config, it depends on what you want to do: you may change the setting via a MachineConfigPool or, for example, edit API server settings via oc edit apiserver cluster. So it depends on what you actually want to change.
I have a problem installing CYGNUS using Docker as the source; I simply cannot understand where I should map my specific agent.conf.
The image I am using is from here.
When I try to map an agent.conf which has my specific setup into the container, it starts and runs but fails to pick it up, and on top of that any change I make to the file inside the container won't stay; it returns to its previous default state.
Meanwhile, I have no issues with grouping_rules.conf using the same approach.
I used docker and docker compose, both with the same results.
The path to which I try to copy: /opt/apache-flume/conf/agent.conf
docker run -v /home/igor/Documents/cygnus/agent.conf:/opt/apache-flume/conf/agent.conf fiware/cygnus-ngsi
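The docker compose variant I tried looks roughly like this (compose file shortened to the relevant part):

version: "3"
services:
  cygnus:
    image: fiware/cygnus-ngsi
    volumes:
      - /home/igor/Documents/cygnus/agent.conf:/opt/apache-flume/conf/agent.conf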
Can someone who managed to run it with their own config tell me if I misunderstood the location of agent.conf or something? This is weird, because I have used many Docker images and never had an issue where I was not able to copy a file from my machine into a Docker container.
Thanks in advance.
** EDIT **
Link of agent.conf
Did you copy the agent.conf file to your host directory before starting the container?
As you can see here, when you define a volume with the "-v" option, Docker mounts the content of the host path inside the container at the mount point. Therefore, you must first provide the agent.conf file on your host.
The reason is that when using a "bind mounted" directory from the host, you're telling Docker that you want to take a file or directory from your host and use it in your container. Docker should not modify those files/directories, unless you explicitly do so. For example, you don't want -v /home/user/:/var/lib/mysql to result in your home directory being replaced with a MySQL database.
If you do not have access to the agent.conf file, you can download the template from the source code in the official Cygnus GitHub repo here. You can also copy it out of a running Docker container using docker cp:
docker cp <containerId>:/file/path/within/container /host/path/target
Keep in mind that you will have to edit the agent.conf file to configure it according to the database you are using. You can find in the official docs how to configure Cygnus to use different sinks like MongoDB, MySQL, etc.
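Putting it together, a possible workflow (assuming the default file lives at the path from your question):

# start a temporary container and copy the default agent.conf out of it
docker run -d --name cygnus-tmp fiware/cygnus-ngsi
docker cp cygnus-tmp:/opt/apache-flume/conf/agent.conf ./agent.conf
docker rm -f cygnus-tmp

# edit ./agent.conf with your sink settings, then mount it back in
docker run -v "$(pwd)/agent.conf":/opt/apache-flume/conf/agent.conf fiware/cygnus-ngsi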
I hope I have been helpful.
Best regards!
I am trying to get the mount point details of the host from a Kubernetes pod. It is a privileged container.
Even if I mount the root file system, I am not able to check the mount details of a particular type, say s3fs, maybe because it belongs to a different namespace.
What is the best way to share the host's mount namespace?
If you really just want details of the host's mount points rather than access to them, you can run your Pod with hostPID: true and then, via the proc filesystem, inspect the mounts of a process that you know is running in the host's mount namespace (for example PID 1), like so: cat /proc/1/mounts
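A minimal sketch of such a Pod (the name and the busybox image are just examples):

apiVersion: v1
kind: Pod
metadata:
  name: host-mount-inspector
spec:
  hostPID: true
  containers:
  - name: inspector
    image: busybox
    command: ["sh", "-c", "cat /proc/1/mounts && sleep 3600"]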
I did some research and found that Kubernetes provides an option called mount propagation, which achieves my requirement.
I tested this feature in my local setup and it did give me the result I wanted (a rough sketch follows the links below).
A few links that I found useful:
https://medium.com/kokster/kubernetes-mount-propagation-5306c36a4a2d
https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation
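A rough sketch of what worked for me (the pod name, image and paths are just examples):

apiVersion: v1
kind: Pod
metadata:
  name: mount-watcher
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    securityContext:
      privileged: true
    volumeMounts:
    - name: host-root
      mountPath: /host
      mountPropagation: HostToContainer   # host mounts become visible in the container
  volumes:
  - name: host-root
    hostPath:
      path: /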
I'm running RabbitMQ v2.0.0 on a Linux machine. The Mnesia base is currently the default, but within that directory Rabbit creates subdirectories, e.g. rabbit@ip-123.1.1.123.
The IP in the directory name is based on the inet addr of the machine. These directories hold information about users, exchanges, vhosts (I think).
My question is: how can I fix/configure these directory names so that they are not based on the IP?
To change the Mnesia directory, just set MNESIA_DIR in /etc/rabbitmq/rabbitmq.conf.
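For example (as far as I remember this file is sourced as a shell script by the rabbitmq startup scripts, so a plain variable assignment works; the path is just an example):

# /etc/rabbitmq/rabbitmq.conf
MNESIA_DIR=/var/lib/rabbitmq/mnesia/fixed_name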
Also, a great place to ask RabbitMQ related questions is on the rabbitmq-discuss mailing list.
It seems you can edit the script files (rabbitmq-server, rabbitmq-multi and rabbitmqctl). At the top of these scripts is a hostname variable.
I set the hostname to localhost and restarted.
This is not the best solution, but it's good enough for my requirements. The hostname must be a proper address; it cannot be something arbitrary.
The main problem is that your new machine has a new hostname, and the directory is named after it (just renaming the directory, as mentioned before, does not help), so we need to change the machine's hostname and make RabbitMQ work with the old files.
Let "ip-0-0-0-0" be the old machine name (so there should be a mnesia folder /var/lib/rabbitmq/mnesia/ip-0-0-0-0), and the new machine hostname is something like "ip-1-1-1-1", but the new name does not matter as we will overwrite it. Execute the following commands:
sudo -s
echo "127.0.0.1 ip-0-0-0-0" >> /etc/hosts
echo "ip-0-0-0-0" > /etc/hostname
reboot
After the reboot your machine will have the hostname you just set (the old name), and RabbitMQ should work with the old files.