I am looking to pass environment variables in my OpenShift oc process command.
I see an option for a param file, but nothing for environment variables.
Question: Are params = environment variables? I mean, can I set env variables using this option? I tried this but did not get any env variables set after the deployment was done.
I am going through the document below.
https://docs.openshift.com/container-platform/3.11/dev_guide/templates.html
The only way I have achieved setting up my env variables is like below:
oc process -f helloworld.yaml | oc create -f -
curl http://servertofetchenvironmentvariables:5005/env/dev/helloworld | oc set env dc/helloworld -
This ends up creating two deployments. Any lead on resolving this and making it one command would be helpful. I have to use a template to create my application.
Question: Are params = environment variables?
No, parameters only apply to your template file. Your template file contains placeholders such as "${MY_PASSWORD}", which are then replaced when using oc process.
I mean can i set env variables using this option?
You can, but you would need to edit your template file to include all these environment variables and the relevant placeholder.
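For illustration, a minimal template fragment along those lines; the GREETING parameter and the image are made up for this sketch, not taken from your template:

apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: helloworld
parameters:
- name: GREETING
  value: "hello"
objects:
- apiVersion: apps.openshift.io/v1
  kind: DeploymentConfig
  metadata:
    name: helloworld
  spec:
    template:
      spec:
        containers:
        - name: helloworld
          image: helloworld:latest
          env:
          # oc process substitutes the placeholder below
          - name: GREETING
            value: "${GREETING}"

Processing and creating then stays one command: oc process -f helloworld.yaml -p GREETING=hi | oc create -f -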
Only way I am achieving setting up my env variables is like below
That should definitely work, as you would then update the created DeploymentConfig (dc/helloworld in your case) with your new environment variables.
A good alternative could be to populate your environment variables using a ConfigMap (so having them totally separate from your Deployment) using envFrom like so:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    envFrom:
    - configMapRef:
        name: special-config
  restartPolicy: Never
This would also decouple your configuration from your Deployment and you could store / change your environment variables in your ConfigMap.
Source: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables
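For completeness, the special-config ConfigMap referenced above could be created from literal key-value pairs like this (the key and value are illustrative):

oc create configmap special-config --from-literal=SPECIAL_LEVEL=very

Every key in the ConfigMap then shows up as an environment variable inside the container, which the env command in the example above would print.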
Related
I have a bash script that sets a series of environment variables.
My action includes the following two steps:
- name: Set env variables
  run: source ./setvars.sh
- name: dump env variables
  run: env
I notice setvars.sh runs successfully, but all of the variables defined inside it are missing in subsequent steps.
How can I use a bash .sh script to add environment variables to the context of the workflow?
I don't see environment variables being definable by sourcing a file in a GitHub Actions workflow: each run step gets its own shell, so anything setvars.sh exports is gone once that step finishes. I only see them defined as a map (key-value pairs) at the job or workflow level (since Oct. 2019).
See if you can cat your file and append its content to GITHUB_ENV.
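A sketch of that approach, assuming setvars.sh contains lines of the form export FOO=bar (GITHUB_ENV expects plain KEY=value lines, so the export prefix is stripped):

- name: Set env variables
  # strip "export " so each line becomes KEY=value, then append to GITHUB_ENV
  run: sed 's/^export //' setvars.sh >> "$GITHUB_ENV"
- name: Dump env variables
  run: env

Variables appended to GITHUB_ENV become available to all subsequent steps in the job, though not within the step that writes them.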
I am trying to create a Git source build of this Dockerfile: https://github.com/WASdev/ci.docker/blob/master/ga/latest/full/Dockerfile.ubi.ibmjava8
I have the following configuration in my BuildConfig:
source:
  git:
    uri: "https://github.com/WASdev/ci.docker"
    ref: "master"
  contextDir: "ga/latest/full"
However, the above assumes the use of the Dockerfile filename while I want to use Dockerfile.ubi.ibmjava8 as in docker build -f Dockerfile.ubi.ibmjava8 ..
How can I use Dockerfile.ubi.ibmjava8 instead of Dockerfile in OpenShift?
TL;DR: Yes, you can. By default the build only looks for a file named Dockerfile, but the dockerfilePath field lets you point it at a differently named one.
On Build Strategy Options, in the section Dockerfile Path, you will find the constraints OCP places on the Docker strategy:
By default, Docker builds use a Dockerfile (named Dockerfile) located at the root of the context specified in the BuildConfig.spec.source.contextDir field.
The dockerfilePath field allows the build to use a different path to locate your Dockerfile, relative to the BuildConfig.spec.source.contextDir field. It can be simply a different file name other than the default Dockerfile (for example, MyDockerfile), or a path to a Dockerfile in a subdirectory (for example, dockerfiles/app1/Dockerfile).
And they also give an example:
strategy:
  dockerStrategy:
    dockerfilePath: dockerfiles/app1/Dockerfile
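Applied to your BuildConfig, that would look something like this (the source section is taken from your question; only the strategy part is new):

source:
  git:
    uri: "https://github.com/WASdev/ci.docker"
    ref: "master"
  contextDir: "ga/latest/full"
strategy:
  dockerStrategy:
    # relative to contextDir, so this resolves to ga/latest/full/Dockerfile.ubi.ibmjava8
    dockerfilePath: Dockerfile.ubi.ibmjava8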
We have a Java web application that is supposed to be moved from a regular deployment model (install on a server) into an OpenShift environment (deployment as docker container). Currently this application consumes a set of Java key stores (.jks files) for client certificates for communicating with third party web interfaces. We have one key store per interface.
These jks files get manually deployed on production machines and are occasionally updated when third-party certificates need to be updated. Our application has a setting with a path to the key store files and on startup it will read certificates from them and then use them to communicate with the third-party systems.
Now when moving to an OpenShift deployment, we have one docker image with the application that is going to be used for all environments (development, test and production). All configuration is given as environment variables. However, we cannot pass jks files as environment variables; they need to be mounted into the docker container's file system.
As these certificates are a secret we don't want to bake them into the image. I scanned the OpenShift documentation for some clues on how to approach this and basically found two options: using Secrets or mounting a persistent volume claim (PVC).
Secrets don't seem to work for us as they are pretty much just key-value pairs that you can mount as a file or have handed in as environment variables. They also have a size limit. Using a PVC would theoretically work; however, we'd need some way to get the JKS files into that volume in the first place. A simple way would be to start a shell container mounting the PVC and copy the files into it manually using the OpenShift command-line tools, but I was hoping for a somewhat less manual solution.
Have you found a clever solution to this or a similar problem where you needed to get files into a container?
It turns out that I misunderstood how secrets work. They are indeed key-value pairs that you can mount as files. The value can, however, be any base64-encoded binary that will be mapped as the file contents. So the solution is to first encode the contents of the JKS file to base64:
cat keystore.jks | base64
Then you can put this into your secret definition:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: my-namespace
data:
  keystore.jks: "<base64 from previous command here>"
Finally you can mount this into your docker container by referencing it in the deployment configuration:
apiVersion: v1
kind: DeploymentConfig
spec:
  ...
  template:
    spec:
      ...
      containers:
      - name: "my-container"
        ...
        volumeMounts:
        - name: secrets
          mountPath: /mnt/secrets
          readOnly: true
      volumes:
      - name: secrets
        secret:
          secretName: "my-secret"
          items:
          - key: keystore.jks
            path: keystore.jks
This will mount the secret volume secrets at /mnt/secrets and make the entry named keystore.jks available as the file keystore.jks under /mnt/secrets.
I'm not sure if this is really a good way of doing this, but it is at least working here.
You can add and mount the secrets as stated by Jan Thomä, but it's easier like this, using the oc command-line tool:
./oc create secret generic crnews-keystore --from-file=keystore.jks=$HOME/git/crnews-service/src/main/resources/keystore.jks --from-file=truststore.jks=$HOME/git/crnews-service/src/main/resources/truststore.jks --type=opaque
This can then be added via the UI: Applications -> Deployments -> [your deployment] -> "Add config files", where you can choose which secret you want to mount and where.
Note that the name=value pairs (e.g. truststore.jks=) will be used like filename=base64-decoded-content.
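The same mount can also be done from the command line with oc set volume (the deployment name and mount path here are illustrative):

oc set volume dc/crnews --add --name=keystores --type=secret \
  --secret-name=crnews-keystore --mount-path=/mnt/keystores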
My generated base64 was multiline and I was getting the same error.
The trick is to use the -w0 argument to base64 so that the whole encoding ends up on one line!
base64 -w0 ssl_keystore.jks > test
The above will create a file named test containing the base64 on one line; copy-paste it into a secret like this:
apiVersion: v1
kind: Secret
metadata:
  name: staging-ssl-keystore-jks
  namespace: staging-space
type: Opaque
data:
  keystore.jks: your-base64-in-one-line
Building upon what both Frischling and Jan Thomä said, and in agreement with Frischling (his way was easier and took care of both the key and trust stores): after adding the keystores as a secret, under Applications -> Deployments -> [your deployment's name], click the Environment link and add the following system properties:
Name: JAVA_OPTS_APPEND
Value: -Djavax.net.ssl.keyStorePassword=changeme -Djavax.net.ssl.keyStore=/mnt/keystores/your_cert_key_store.jks -Djavax.net.ssl.trustStorePassword=changeme -Djavax.net.ssl.trustStore=/mnt/keystores/your_ca_key_store.jks
This effectively appends, as indicated, the keystore file paths and passwords to the Java options used by the application, for example JBoss/WildFly or Tomcat.
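The same can be set from the command line with oc set env (the deployment name is illustrative; paths and passwords are the ones from the answer above):

oc set env dc/my-app JAVA_OPTS_APPEND="-Djavax.net.ssl.keyStorePassword=changeme -Djavax.net.ssl.keyStore=/mnt/keystores/your_cert_key_store.jks -Djavax.net.ssl.trustStorePassword=changeme -Djavax.net.ssl.trustStore=/mnt/keystores/your_ca_key_store.jks"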
Is there an already-built-in j2 (Jinja2) template processor in Kubernetes or Docker? I am doing the configuration below and want to plug the values into the template.
Note that using hostPath is not an option since this is on OpenShift and no PV/PVC can be used.
containers:
- image: some-docker-image:latest
  name: some-docker-image
  volumeMounts:
  - mountPath: /etc/app/conf
    name: configuration-volume
# ... do some j2 template processing here if possible ...
volumes:
- name: configuration-volume
  gitRepo:
    repository: "https://gitrepo/repo/example.git"
There isn't any templating support built into Kubernetes. You can easily build a templating system on top of the yaml/json files that you pass into kubectl create -f, though. I know some folks that are using jsonnet to accomplish this.
The discussion around adding templates is happening in https://github.com/kubernetes/kubernetes/issues/23896 if you'd like to contribute.
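As a minimal sketch of that external-templating idea, using envsubst from gettext rather than jsonnet (the template file name and the IMAGE_TAG variable are made up):

# deployment.yaml.tpl contains placeholders such as image: some-docker-image:${IMAGE_TAG}
export IMAGE_TAG=latest
envsubst < deployment.yaml.tpl | kubectl create -f -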
I've dockerized a Meteor app with Meteord, and that works fine; my problem is that I want to pass some settings to the app.
Meteord does not start the app with a settings file as one would usually do to give settings to an app (meteor --settings file.json). This is also possible to do with an environment variable called METEOR_SETTINGS.
As I want the webapp to run with other services, I'm using Docker Compose.
I have my settings.json file that I want to be read in as an environment variable, so something like:
environment:
  - METEOR_SETTINGS=$cat(settings.json)
This doesn't work though.
How can I make Docker compose dynamically create this environment variable based on a JSON-file?
An easy way to do this is to load the JSON file into a local env var, then use that in your yaml file.
In docker-compose.yml
environment:
  METEOR_SETTINGS: ${METEOR_SETTINGS}
Load the settings file before invoking docker-compose:
❯ METEOR_SETTINGS=$(cat settings.json) docker-compose up
Not possible without some trickery, depending on the number of tweakable variables in settings.json:
If it's a lot of settings, it's fairly easy to template the docker-compose.yml with a simple shell script that replaces a token in the template with the contents of settings.json, much like in your example. You would also want to wrap the docker-compose call in that case. Simplified example:
docker-compose.yml.template:
environment:
  - METEOR_SETTINGS=##_METEOR_SETTINGS_##
dc.sh:
#!/bin/sh
# replace ##_METEOR_SETTINGS_## with the contents of settings.json
# and write the result to docker-compose.yml
sed -e 's|##_METEOR_SETTINGS_##|'"$(cat ./settings.json)"'|' \
  "./docker-compose.yml.template" > "./docker-compose.yml"
# wrap docker-compose, passing all arguments through
docker-compose "$@"
Put the 2 files into your project root, then chmod +x dc.sh to make the wrapper executable and call ./dc.sh -h.
If it's only a few settings, you could handle the templating inside the container when it's starting. Simply replace tokens placed in a prepared settings.json with ENV values passed to docker before starting Meteor. This allows you to just use the docker-compose ENV features to configure Meteor, as sketched below.
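A sketch of that entrypoint approach, assuming a hypothetical MONGO_URL setting and the token convention from above:

#!/bin/sh
# entrypoint.sh: fill the token in a prepared settings template with the
# ENV value passed by docker-compose, then expose the result to Meteor
sed -e "s|##_MONGO_URL_##|${MONGO_URL}|" /app/settings.json.template > /app/settings.json
export METEOR_SETTINGS="$(cat /app/settings.json)"
exec "$@"

docker-compose then only needs a plain variable:

environment:
  - MONGO_URL=mongodb://db:27017/meteor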