How to use a unique value in a Kubernetes ConfigMap - configuration

Problem
I have a monitoring application that I want to deploy as a DaemonSet. In the application's configuration, a unique user agent is specified to separate this node from the other nodes. I created a ConfigMap for the application, but that only works for synchronizing the settings that are common across the environment.
Ideal solution?
I want to specify a unique value, like the node's hostname or another locally-inferred value, to use as the user agent string. Is there a way I can pull this information from the system so that Kubernetes populates the desired key with that value (like the hostname)?
Does this make sense, or is there a better way to do it? I was looking through the documentation, but I could not find an answer anywhere for this specific question.
As an example, here's the string in the app config that I have now, versus what I want to use.
user_agent = "app-k8s-test"
But I'd prefer…
user_agent = $HOSTNAME
Is something like this possible?

You can use an init container to preprocess a config template from a config map. The preprocessing step can inject local variables into the config files. The expanded config is written to an emptyDir shared between the init container and the main application container. Here is an example of how to do it.
First, make a ConfigMap with a placeholder for whatever fields you want to expand. I used sed and an ad-hoc name to replace. You can also get fancy and use jinja2 or whatever you like; just put whatever preprocessor you want into the init container image. You can use whatever file format you want for the config file(s). I used TOML here to show it doesn't have to be YAML. I called it ".tpl" because it is not ready to use: it contains a string, _HOSTNAME_, that needs to be expanded.
$ cat config.toml.tpl
[blah]
blah=_HOSTNAME_
otherkey=othervalue
$ kubectl create configmap cm --from-file=config.toml.tpl
configmap "cm" created
Now write a pod with an init container that mounts the config map in a volume, and expands it and writes to another volume, shared with the main container:
$ cat personalized-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod-5
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo The app is running and my config-map is && cat /etc/config/config.toml && sleep 3600']
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  initContainers:
  - name: expander
    image: busybox
    command: ['sh', '-c', 'cat /etc/config-templates/config.toml.tpl | sed "s/_HOSTNAME_/$MY_NODE_NAME/" > /etc/config/config.toml']
    volumeMounts:
    - name: config-tpl-volume
      mountPath: /etc/config-templates
    - name: config-volume
      mountPath: /etc/config
    env:
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
  volumes:
  - name: config-tpl-volume
    configMap:
      name: cm
  - name: config-volume
    emptyDir: {}
$ kubectl create -f personalized-pod.yaml
$ sleep 10
$ kubectl logs myapp-pod-5
The app is running and my config-map is
[blah]
blah=gke-k0-default-pool-93916cec-p1p6
otherkey=othervalue
I made this a bare pod for an example. You can embed this type of pod in a DaemonSet's pod template.
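For example, a minimal sketch of the same spec wrapped in a DaemonSet (the apps/v1 API requires a selector matching the template labels; the DaemonSet name is illustrative):
# Same init-container pattern as the pod above, run on every node
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: myapp-ds
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      initContainers:
      - name: expander
        image: busybox
        command: ['sh', '-c', 'sed "s/_HOSTNAME_/$MY_NODE_NAME/" /etc/config-templates/config.toml.tpl > /etc/config/config.toml']
        env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        volumeMounts:
        - name: config-tpl-volume
          mountPath: /etc/config-templates
        - name: config-volume
          mountPath: /etc/config
      containers:
      - name: myapp-container
        image: busybox
        command: ['sh', '-c', 'cat /etc/config/config.toml && sleep 3600']
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config
      volumes:
      - name: config-tpl-volume
        configMap:
          name: cm
      - name: config-volume
        emptyDir: {}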
Here, the Downward API is used to set the MY_NODE_NAME environment variable, since the node name is not otherwise readily available from within a container.
Note that spec.nodeName cannot be projected into a file via a downwardAPI volume; it is only available as an env var.
If you just need the hostname in an Env Var, then you can skip the init container.
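In that case the Downward API field can go straight on the main container, e.g. (a minimal snippet, reusing the variable name from the example above):
env:
- name: MY_NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName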
Since the init container only runs once, you should not update the ConfigMap and expect the config to be re-expanded. If you need updates, you can do one of two things:
Instead of an init container, run a sidecar that watches the ConfigMap volume and re-expands the template when it changes (or simply does so periodically). This requires that the main container also knows how to watch for config file updates.
Alternatively, make a new ConfigMap each time the config template changes, and edit the DaemonSet to change the one line that points to the ConfigMap so it references the new one.
Then do a rolling update to use the new config.
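A hedged sketch of that second option (the DaemonSet name myapp-ds is illustrative, and the volume index assumes the ordering from the pod spec above):
$ kubectl create configmap cm-v2 --from-file=config.toml.tpl
$ kubectl patch daemonset myapp-ds --type=json \
    -p '[{"op": "replace", "path": "/spec/template/spec/volumes/0/configMap/name", "value": "cm-v2"}]'
Because the patch changes the pod template, the DaemonSet controller rolls the pods (with the default RollingUpdate strategy in apps/v1) and the init container re-expands the new template.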

Related

Use kubernetes with helm and provide my predefined user and password with bitnami correctly

I am using kubernetes with helm 3.
I need to create a Kubernetes pod running MySQL, with:
database name: my_database
user: root
password: 12345
port: 3306
The steps:
Create the chart with:
helm create test
After the chart is created, edit the Chart.yaml file in the test folder by adding a dependencies section:
apiVersion: v2
name: test3
description: A Helm chart for Kubernetes
version: 0.1.0
appVersion: "1.16.0"
dependencies:
- name: mysql
  version: 8.8.23
  repository: "https://charts.bitnami.com/bitnami"
run:
helm dependencies build test
After that there is a compressed .tgz file. I extracted it, and there is a tar file inside; I extracted that too and kept only the final extracted folder.
I presume this isn't the best approach for changing a parameter in the YAML of a Bitnami chart (I was also editing security.yaml); I would like to know the better approach too.
I need to change the user and password and point at the database, so I changed the values.yaml directly (any better approach?) for the values auth.rootPassword and the database my_database.
The next steps:
helm build dependencies test
helm install test --namespace test --create-namespace
After that, two pods are created. I can check it with:
kubectl get pods -n test
and I see two pods running (maybe replication).
One of the pods is test-mysql-0 (the other has a random suffix).
Running:
kubectl exec --stdin --tty test-mysql-0 --namespace test-mysql -- /bin/sh
entered the pod.
Then running:
mysql -uroot -p12345
and then:
show databases;
showed all the databases, including the created database my_database, successfully.
When I tried opening the MySQL database from MySQL Workbench and testing the connection (same user root, same password, port 3306, localhost), the 'Test Connection' button in the database properties returned 'failed to connect to database'.
Why can't I connect properly from MySQL Workbench, while inside the pod itself there is no problem at all?
Is there a better approach than extracting the tgz file as I described above, and can I pass the user and password in a better (more secure) way, e.g. through some secured YAML? (Right now it is only the root password.)
Thanks.
It sounds like you're trying to set the parameters in the dependent chart (please correct me if I'm wrong).
If this is right, all you need to do is add another section to your chart's values.yaml:
name-of-dependency:
  user-name: ABC
  password: abcdef
the "name-of-dependency" is specified in your Chart.yaml file when you declare your chart. For example, here's my redis dependency from one of my own charts
dependencies:
- name: redis
  repository: https://charts.bitnami.com/bitnami/
  version: x.x.x
Then when I install the chart, I can override the redis chart's settings by doing this in my own chart's values.yaml
redis:
  architecture: standalone
  auth:
    password: "secret-password-here"

Issue running helm command on a schedule

I am trying to delete temporary pods and other artifacts using helm delete, and I want to run this helm delete on a schedule. Here is my standalone command, which works:
helm delete --purge $(helm ls -a -q temppods.*)
However, if I try to run this on a schedule as below, I run into issues.
Here is what mycron.yaml looks like:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronbox
spec:
  serviceAccount: cron-z
  successfulJobsHistoryLimit: 1
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cronbox
            image: alpine/helm:2.9.1
            args:
            - delete
            - --purge
            - $(helm ls -a -q temppods.*)
          restartPolicy: OnFailure
I ran
oc create -f ./mycron.yaml
This created the cronjob
Every 5 minutes a pod is created and the helm command that is part of the cron job runs.
I am expecting the artifacts/pods with names beginning with temppods to be deleted.
What I get instead is:
Error: pods is forbidden: User "system:serviceacount:myproject:default" cannot list pods in the namespace "kube-system": no RBAC policy matched
I then created a service account cron-z and gave it edit access. I added this serviceAccount to my YAML, thinking that when my pod is created it will have the service account cron-z associated with it. Still no luck: I see that cron-z is not getting associated with the pod that gets created every 5 minutes, and I still see default as the service account associated with the pod.
You'll need a service account for Helm to use when talking to Tiller, as well as an actual Tiller service account: github.com/helm/helm/blob/master/docs/rbac.md
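A sketch of what that looks like, following the linked Helm v2 RBAC docs (the cluster-admin binding is the docs' broad example; scope it down for production):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
Tiller is then (re)installed with helm init --service-account tiller. Separately, for cron-z to actually be attached to the job's pods, it has to appear as serviceAccountName inside jobTemplate.spec.template.spec, not at the CronJob spec level.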

Openshift - Environment variable getting evaluated to hostname

I want to pass an environment variable that should get evaluated to the hostname of the running container. This is what I am trying to do
oc new-app -e DASHBOARD_PROTOCOL=http -e ADMIN_PASSWORD=abc#123 -e KEYCLOAK_URL=http://keycloak.openidp.svc:8080 -e KEYCLOAK_REALM=master -e DASHBOARD_HOSTNAME=$HOSTNAME -e GF_INSTALL_PLUGINS=grafana-simple-json-datasource,michaeldmoore-annunciator-panel,briangann-gauge-panel,savantly-heatmap-panel,briangann-datatable-panel grafana/grafana:5.2.1
How do I ensure that DASHBOARD_HOSTNAME gets evaluated to the hostname of the running container?
To get the hostname value inside a pod you can use the Downward API field metadata.name.
For example:
env:
- name: HOSTNAME
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: metadata.name
After creating the application, you could edit the deployment config (oc edit dc/<deployment_config>) or patch it to configure the DASHBOARD_HOSTNAME environment variable using the Downward API.
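For example, a hedged sketch with a JSON patch (the dc name grafana is assumed from the image; remove the statically set variable first, e.g. with oc set env dc/grafana DASHBOARD_HOSTNAME-):
$ oc patch dc/grafana --type=json \
    -p '[{"op": "add", "path": "/spec/template/spec/containers/0/env/-", "value": {"name": "DASHBOARD_HOSTNAME", "valueFrom": {"fieldRef": {"fieldPath": "metadata.name"}}}}]'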
This may be a personal preference but as much as oc new-app is convenient I'd rather work with (declarative) configuration files that are checked in and versioned in a code repo than with imperative commands.

Populate environment variables from OpenShift secret with Docker build strategy

I would like to use an opaque OpenShift secret inside a build pod as environment variables. The secret contains three key-value pairs, so they should become available as three environment variables. (This is for OpenShift 3.9.)
I have found a documented example for OpenShift's Source build strategy (sourceStrategy), but need this in the context of a build configuration with Docker build strategy (dockerStrategy). oc explain suggests that extraction of secrets into environment variables should work with both build strategies. So far, so good:
oc explain bc.spec.strategy.sourceStrategy.env.valueFrom.secretKeyRef
oc explain bc.spec.strategy.dockerStrategy.env.valueFrom.secretKeyRef
My build configuration is created from a template, so I have added a section like this as a sibling of dockerStrategy at the point where the template defines the build configuration:
env:
- name: SECRET_1
  valueFrom:
    secretKeyRef:
      name: my-secret
      key: role-1
- name: SECRET_2
  valueFrom:
    secretKeyRef:
      name: my-secret
      key: role-2
- name: SECRET_3
  valueFrom:
    secretKeyRef:
      name: my-secret
      key: role-3
The secret was created like this:
oc create secret generic my-secret \
--from-literal=role-1=... --from-literal=role-2=... --from-literal=role-3=...
After uploading the new template (with oc replace) and recreating the application and hence the build configuration from it (with oc new-app) I observe the following:
The template contains env as expected (checked with oc get template -o yaml).
The build configuration does not contain the desired env (checked with oc get bc -o yaml).
What could be the reason, and am I correct in assuming that secrets can be made available as environment variables for the Docker build strategy? For context: my Dockerfile sets up a relational database (in its ENTRYPOINT script) and needs to configure passwords for three roles, and these should stem from the secret.
This was my mistake: env should reside as a child (not sibling) of dockerStrategy inside the template (as had already been suggested by oc explain's cited path). I've now fixed this, and so the desired parts now show up both in the template and in the build configuration.
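For reference, a sketch of the corrected placement inside the BuildConfig's strategy section (abbreviated to one variable):
strategy:
  type: Docker
  dockerStrategy:
    env:
    - name: SECRET_1
      valueFrom:
        secretKeyRef:
          name: my-secret
          key: role-1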

OpenShift ConfigMap: create and update command

I am writing a sample program to deploy into OpenShift with a ConfigMap. I have the following ConfigMap YAML in the source code folder, so that once DevOps is set up, Jenkins should pick up this YAML and create/update the config.
apiVersion: v1
kind: ConfigMap
metadata:
  name: sampleapp
data:
  username: usernameTest
  password: passwordTest
However, I could not find the command that would create the resource or update it if it already exists (similar to the kubectl apply command). Can you help with the correct command, which should create the resource when the job runs for the first time and update it otherwise?
I also want to create/update the Services and Routes from the YAML files in the source repository.
Thanks.
you can use "oc apply" command to update the resources already exists.
For example:
#oc process -f openjdk-basic-template.yml -p APPLICATION_NAME=spring-rest -p SOURCE_REPOSITORY_URL=https://github.com/rest.git -p CONTEXT_DIR='' | oc apply -f-
service "spring-rest" configured
route "spring-rest" created
imagestream "spring-rest" configured
buildconfig "spring-rest" configured
deploymentconfig "spring-rest" configured
If you have the ConfigMap in a YAML file, or you keep it stored somewhere, you can also replace it:
oc replace --force -f config-map.yaml
This updates the existing ConfigMap (it actually deletes and creates a new one).
After this, I executed:
oc set env --from=configmap/example-cm dc/example-dc