Populate environment variables from OpenShift secret with Docker build strategy - openshift

I would like to use an opaque OpenShift secret inside a build pod as environment variables. The secret contains three key-value pairs, so they should become available as three environment variables. (This is for OpenShift 3.9.)
I have found a documented example for OpenShift's Source build strategy (sourceStrategy), but need this in the context of a build configuration with Docker build strategy (dockerStrategy). oc explain suggests that extraction of secrets into environment variables should work with both build strategies. So far, so good:
oc explain bc.spec.strategy.sourceStrategy.env.valueFrom.secretKeyRef
oc explain bc.spec.strategy.dockerStrategy.env.valueFrom.secretKeyRef
My build configuration is created from a template, so I have added a section like this as a sibling of dockerStrategy where the template defines the build configuration:
env:
- name: SECRET_1
  valueFrom:
    secretKeyRef:
      name: my-secret
      key: role-1
- name: SECRET_2
  valueFrom:
    secretKeyRef:
      name: my-secret
      key: role-2
- name: SECRET_3
  valueFrom:
    secretKeyRef:
      name: my-secret
      key: role-3
The secret was created like this:
oc create secret generic my-secret \
--from-literal=role-1=... --from-literal=role-2=... --from-literal=role-3=...
After uploading the new template (with oc replace) and recreating the application and hence the build configuration from it (with oc new-app) I observe the following:
The template contains env as expected (checked with oc get template -o yaml).
The build configuration does not contain the desired env (checked with oc get bc -o yaml).
What could be the reason, and am I correct in assuming that secrets can be made available as environment variables with the Docker build strategy? For context: my Dockerfile sets up a relational database (in its ENTRYPOINT script) and needs to configure passwords for three roles, and these should stem from the secret.

This was my mistake: env should reside as a child (not a sibling) of dockerStrategy inside the template (as the path cited by oc explain already suggested). I have now fixed this, and the desired env entries show up both in the template and in the build configuration.
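For reference, here is a minimal sketch of the corrected placement inside the build configuration's strategy section (only SECRET_1 shown; SECRET_2 and SECRET_3 follow the same pattern):
strategy:
  type: Docker
  dockerStrategy:
    env:
    - name: SECRET_1
      valueFrom:
        secretKeyRef:
          name: my-secret
          key: role-1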

Related

Use kubernetes with helm and provide my predefined user and password with bitnami correctly

I am using Kubernetes with Helm 3.
I need to create a Kubernetes pod with MySQL, configured with:
database name: my_database
user: root
password: 12345
port: 3306
The steps:
Create the chart:
helm create test
After the chart is created, change the Chart.yaml file in the test folder by adding a dependencies section:
apiVersion: v2
name: test3
description: A Helm chart for Kubernetes
version: 0.1.0
appVersion: "1.16.0"
dependencies:
- name: mysql
  version: 8.8.23
  repository: "https://charts.bitnami.com/bitnami"
run:
helm dependencies build test
After that there is a compressed .tgz file.
I extracted it, and inside there is a tar file; I extracted that too and kept only the final extracted folder.
I presume this isn't the best approach for changing a parameter in the Bitnami chart's yaml, or for using the security.yaml; I would like to know the better approach too.
I need to change the user and password, and link to the database, so I changed the values.yaml directly (any better approach?) for the values auth:rootPassword and auth:my_database.
The next steps:
helm build dependencies test
helm install test --namespace test --create-namespace
After that, two pods are created.
I could check it by:
kubectl get pods -n test
and I see two pods running (maybe replication).
One of the pods is test-mysql-0 (the other has a random suffix).
run:
kubectl exec --stdin --tty test-mysql-0 --namespace test-mysql -- /bin/sh
This entered the pod.
run:
mysql -uroot -p12345
and then:
show databases;
That showed all the databases, including the created database my_database, successfully.
When I tried opening the database from MySQL Workbench and testing the connection (same user root, same password, port 3306, and localhost), it failed (the Test Connection button in the database properties returns 'failed to connect to database').
Why can't I connect from MySQL Workbench, while inside the pod itself it works without any particular problem?
Is there any better approach than extracting the tgz file as I described above, and is there a better (more secure yaml) way to pass the user and password?
(Right now it is only the root password.)
Thanks.
It sounds like you're trying to set the parameters in the dependent chart (please correct me if I'm wrong).
If this is right, all you need to do is add another section to your own chart's values.yaml:
name-of-dependency:
  user-name: ABC
  password: abcdef
the "name-of-dependency" is specified in your Chart.yaml file when you declare your chart. For example, here's my redis dependency from one of my own charts
dependencies:
- name: redis
  repository: https://charts.bitnami.com/bitnami/
  version: x.x.x
Then when I install the chart, I can override the redis chart's settings by doing this in my own chart's values.yaml:
redis:
  architecture: standalone
  auth:
    password: "secret-password-here"

Openshift - Environment variable getting evaluated to hostname

I want to pass an environment variable that should get evaluated to the hostname of the running container. This is what I am trying to do
oc new-app -e DASHBOARD_PROTOCOL=http -e ADMIN_PASSWORD=abc#123 -e KEYCLOAK_URL=http://keycloak.openidp.svc:8080 -e KEYCLOAK_REALM=master -e DASHBOARD_HOSTNAME=$HOSTNAME -e GF_INSTALL_PLUGINS=grafana-simple-json-datasource,michaeldmoore-annunciator-panel,briangann-gauge-panel,savantly-heatmap-panel,briangann-datatable-panel grafana/grafana:5.2.1
How can I ensure that DASHBOARD_HOSTNAME gets evaluated to the hostname of the running container?
To take the hostname value from a pod you could use metadata.name via the Downward API.
For example:
env:
- name: HOSTNAME
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: metadata.name
After creating the application, you could edit the deployment config (oc edit dc/<deployment_config>) or patch it to configure the DASHBOARD_HOSTNAME environment variable using the Downward API.
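A sketch of that patch, assuming the deployment config created by oc new-app is named grafana and its first container already has an env list from the -e flags:
oc patch dc/grafana --type=json -p '[
  {"op": "add", "path": "/spec/template/spec/containers/0/env/-",
   "value": {"name": "DASHBOARD_HOSTNAME", "valueFrom": {"fieldRef": {"fieldPath": "metadata.name"}}}}
]'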
This may be a personal preference but as much as oc new-app is convenient I'd rather work with (declarative) configuration files that are checked in and versioned in a code repo than with imperative commands.

Install input secret into OpenShift build configuration

I have an OpenShift 3.9 build configuration my_bc and a secret my_secret of type kubernetes.io/ssh-auth. The secret was created like so:
oc create secret generic my_secret \
--type=kubernetes.io/ssh-auth \
--from-file=key
I have installed it as source secret into my_bc, and oc get bc/my_bc -o yaml reveals this spec:
source:
contextDir: ...
git:
uri: ...
sourceSecret:
name: my_secret
type: Git
As such, it is already effective in the sense that the OpenShift builder can pull from my private Git repository and produce an image with its Docker strategy.
I would now like to add my_secret also as an input secret to my_bc. My understanding is that this would not only allow the builder to make use of it (as source secret), but would allow other components inside the build to pick it up as well (as input secret). E.g. for the Docker strategy, it would exist in WORKDIR.
The documentation explains this with an example that adds the input secret when a build configuration is created:
oc new-build \
openshift/nodejs-010-centos7~https://github.com/openshift/nodejs-ex.git \
--build-secret secret-npmrc
Now the corresponding spec refers to the secret under secrets (not: sourceSecret), presumably because it is now an input secret (not: source secret).
source:
  git:
    uri: https://github.com/openshift/nodejs-ex.git
  secrets:
  - destinationDir: .
    secret:
      name: secret-npmrc
  type: Git
oc set build-secret apparently allows adding source secrets (as well as push and pull secrets, which are for interacting with container registries) to a build configuration with the command line argument --source (or --push/--pull), but what about input secrets? I have not found out yet.
So I have these questions:
How can I add my_secret as input secret to an existing build configuration such as my_bc?
Where would the input secret show up at build time, e.g. under which path could a Dockerfile pick up the private key that is stored in my_secret?
This procedure now works for me (thanks to @GrahamDumpleton for his guidance):
Leave the build configuration's source secret as is for now; oc get bc/my_bc -o jsonpath='{.spec.source.sourceSecret}' reports map[name:my_secret] (without a path).
Add the input secret to the build configuration at .spec.source.secrets with YAML corresponding to oc explain bc.spec.source.secrets: oc edit bc/my_bc
Sanity checks: oc get bc/my_bc -o jsonpath='{.spec.source.secrets}' reports [map[destinationDir:secret secret:map[name:my_secret]]]; oc describe bc/my_bc | grep 'Source Secret:' reports Source Secret: my_secret (no path), and oc describe bc/my_bc | grep 'Build Secrets:' reports Build Secrets: my_secret->secret.
Access the secret inside the Dockerfile in a preliminary way: COPY secret/ssh-privatekey secret/my_secret, then RUN chmod 0640 secret/my_secret; adjust ssh-privatekey if necessary (as suggested by oc get secret/my_secret -o jsonpath='{.data}' | sed -ne 's/^map\[\(.*\):.*$/\1/p').
Rebuild and redeploy the image.
Sanity check: oc exec -it <pod> -c my_db file /secret/my_secret reports /secret/my_secret: PEM RSA private key (the image's WORKDIR is /).
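As a minimal illustration of the Dockerfile step above (an excerpt only; the base image is omitted and destinationDir is assumed to be secret, as configured in the build configuration):
# the input secret is injected into <WORKDIR>/secret at build time
COPY secret/ssh-privatekey secret/my_secret
RUN chmod 0640 secret/my_secret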
The comments on the question mention patching the BuildConfig. Here is a patch that works on v3.11.0:
$ cat patch.json
{
  "spec": {
    "source": {
      "secrets": [
        {
          "secret": {
            "name": "secret-npmrc"
          },
          "destinationDir": "/etc"
        }
      ]
    }
  }
}
$ oc patch -n your-eng bc/tag-realworld -p "$(<patch.json)"
buildconfig "tag-realworld" patched

Openshift Configmap : create and update command

I am writing a sample program to deploy into OpenShift with a ConfigMap. I have the following ConfigMap yaml in the source code folder, so that when the DevOps pipeline is set up, Jenkins should pick up this yaml and create/update the config.
apiVersion: v1
kind: ConfigMap
metadata:
  name: sampleapp
data:
  username: usernameTest
  password: passwordTest
However, I could not find the command that would create the ConfigMap or update it if it already exists (similar to the kubectl apply command). Can you help with the correct command, which would create the resource when the job runs for the first time and update it otherwise?
I also want to create/update the Services and Routes from the yaml files in the src repository.
Thanks.
you can use "oc apply" command to update the resources already exists.
Like below Example:
#oc process -f openjdk-basic-template.yml -p APPLICATION_NAME=spring-rest -p SOURCE_REPOSITORY_URL=https://github.com/rest.git -p CONTEXT_DIR='' | oc apply -f-
service "spring-rest" configured
route "spring-rest" created
imagestream "spring-rest" configured
buildconfig "spring-rest" configured
deploymentconfig "spring-rest" configured
If you have the configmap in a yaml file or stored somewhere, you can also replace it:
oc replace --force -f config-map.yaml
This will update the existing configmap (it actually deletes and creates a new one).
After this, I executed:
oc set env --from=configmap/example-cm dc/example-dc

How to use a unique value in a Kubernetes ConfigMap

Problem
I have a monitoring application that I want to deploy inside of a DaemonSet. In the application's configuration, a unique user agent is specified to separate the node from other nodes. I created a ConfigMap for the application, but this only works for synchronizing the other settings in the environment.
Ideal solution?
I want to specify a unique value, like the node's hostname or another locally-inferred value, to use as the user agent string. Is there a way I can call this information from the system and Kubernetes will populate the desired key with a value (like the hostname)?
Does this make sense, or is there a better way to do it? I was looking through the documentation, but I could not find an answer anywhere for this specific question.
As an example, here's the string in the app config that I have now, versus what I want to use.
user_agent = "app-k8s-test"
But I'd prefer…
user_agent = $HOSTNAME
Is something like this possible?
You can use an init container to preprocess a config template from a config map. The preprocessing step can inject local variables into the config files. The expanded config is written to an emptyDir shared between the init container and the main application container. Here is an example of how to do it.
First, make a config map with a placeholder for whatever fields you want to expand. I used sed and an ad-hoc placeholder name to replace. You can also get fancy and use jinja2 or whatever you like. Just put whatever pre-processor you want into the init container image. You can use whatever file format for the config file(s) you want. I just used TOML here to show it doesn't have to be YAML. I called it ".tpl" because it is not ready to use: it has a string, _HOSTNAME_, that needs to be expanded.
$ cat config.toml.tpl
[blah]
blah=_HOSTNAME_
otherkey=othervalue
$ kubectl create configmap cm --from-file=config.toml.tpl
configmap "cm" created
Now write a pod with an init container that mounts the config map in a volume, expands it, and writes it to another volume shared with the main container:
$ cat personalized-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod-5
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo The app is running and my config-map is && cat /etc/config/config.toml && sleep 3600']
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  initContainers:
  - name: expander
    image: busybox
    command: ['sh', '-c', 'cat /etc/config-templates/config.toml.tpl | sed "s/_HOSTNAME_/$MY_NODE_NAME/" > /etc/config/config.toml']
    volumeMounts:
    - name: config-tpl-volume
      mountPath: /etc/config-templates
    - name: config-volume
      mountPath: /etc/config
    env:
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
  volumes:
  - name: config-tpl-volume
    configMap:
      name: cm
  - name: config-volume
    emptyDir: {}
$ kubectl create -f personalized-pod.yaml
$ sleep 10
$ kubectl logs myapp-pod-5
The app is running and my config-map is
[blah]
blah=gke-k0-default-pool-93916cec-p1p6
otherkey=othervalue
I made this a bare pod for an example. You can embed this type of pod in a DaemonSet's pod template.
Here, the Downward API is used to set the MY_NODE_NAME environment variable, since the node name is not otherwise readily available from within a container.
Note that for some reason, you can't get the spec.nodeName into a file, just an env var.
If you just need the hostname in an Env Var, then you can skip the init container.
Since the init container only runs once, you should not update the ConfigMap and expect it to be re-expanded. If you need updates, you can do one of two things:
Instead of an init container, run a sidecar that watches the config map volume and re-expands when it changes (or just does it periodically). This requires that the main container also knows how to watch for config file updates.
You can just make a new config map each time the config template changes, and edit the DaemonSet to change the one line to point to the new config map.
And then do a rolling update to use the new config.
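A sketch of that second option, assuming the DaemonSet is named myapp-ds (a hypothetical name) and its first volume is the config-tpl-volume from the pod example above:
$ kubectl create configmap cm-v2 --from-file=config.toml.tpl
$ kubectl patch daemonset myapp-ds --type=json \
    -p='[{"op": "replace", "path": "/spec/template/spec/volumes/0/configMap/name", "value": "cm-v2"}]'
Changing the pod template this way triggers the rolling update when the DaemonSet's updateStrategy is RollingUpdate.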