Configure settings.xml for OpenShift 3.10 S2I Maven Builds

I would like to customize settings.xml for S2I Maven builds in OpenShift 3.10. While this is easily done in version 3.11 using ConfigMaps:
https://docs.openshift.com/container-platform/3.11/dev_guide/builds/build_inputs.html#using-secrets-during-build
I have not found any solution for 3.10. Is there a workaround or solution for this?
Thank you!

In 3.11, you can create a ConfigMap for your settings.xml file
$ oc create configmap settings-mvn --from-file=settings.xml=<path/to/settings.xml>
And use that to override it in your build. (Source)
source:
  git:
    uri: https://github.com/wildfly/quickstart.git
  contextDir: helloworld
  configMaps:
    - configMap:
        name: settings-mvn
As you point out, in 3.10 there is no support for ConfigMaps in BuildConfigs. However, you can create a secret with the same content
$ oc create secret generic settings-mvn --from-file=settings.xml=<path/to/settings.xml>
And use that to override it in your build. (Source)
source:
  git:
    uri: https://github.com/wildfly/quickstart.git
  contextDir: helloworld
  secrets:
    - secret:
        name: settings-mvn
Alternatively, you can also include the settings.xml file in your git repo in order to override the default settings.xml. Simply placing your file at source_dir/configuration/settings.xml should be sufficient. (Source)
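For the in-repo approach, the layout would look roughly like this (assuming contextDir is helloworld as in the examples above; pom.xml and src/ are shown only as a typical Maven layout, not taken from the question):
helloworld/
├── configuration/
│   └── settings.xml
├── pom.xml
└── src/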

Related

OpenShift: How to update app based on ImageStream

I created a project on OpenShift 4.2 with an ImageStream that is pulling an Image from Quay.io:
oc new-project xxx-imagestream
oc import-image is:1.0 --from quay.io/xxx/myimage:latest --confirm --reference-policy local
Now I create a new project to host an app based on that ImageStream
oc new-project xxx-app
oc new-app --name myapp -i xxx-imagestream/is:1.0
The app is built and I can use it by exposing it. (But no Build or BuildConfig is created. Why???)
Now I update the image on Quay.io with a new version, and import the new version into the xxx-imagestream project:
oc import-image is:2.0 --from quay.io/xxx/myimage:latest --confirm --reference-policy local
The question is: how do I update my app (myapp)? In other words, how can I launch a new build of "myapp" based on the updated ImageStream?
(But no Build or BuildConfig is created. Why???)
A BuildConfig is only created when you use the "Source to Image" (S2I) functionality, and it is only needed when you want to create a container image from source. In your case, the image already exists, so there is no need to build anything. The only thing oc new-app will do is deploy your existing image; no build is necessary.
The question is: how do I update my app (myapp)? In other words, how can I launch a new build of "myapp" based on the updated ImageStream?
You are looking for "Deployment triggers", specifically the "ImageChange deployment trigger". The ImageChange trigger results in a new ReplicationController whenever the content of an imagestreamtag changes (when a new version of the image is pushed).
On a side-note, you can also periodically automate the importing of new image versions in your ImageStreams (see documentation).
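For example, one way to enable that periodic re-import, reusing the question's image (the --scheduled flag marks the tag for periodic import, subject to the cluster's image import settings):
oc tag quay.io/xxx/myimage:latest is:1.0 --scheduled=true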
The build starts automatically if your ImageStream was created with
--reference-policy source
In that case, it is correct to update the ImageStream using
oc import-image [...]
To update a "local" ImageStream, instead of
oc import-image is:2.0 --from quay.io/xxx/myimage:latest --confirm --reference-policy local
you should update the existing local ImageStream tag
oc tag quay.io/xxx/myimage:latest is:2.0 --reference-policy local
This command automatically triggers a new deployment of your app.
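To confirm that the tag was updated, you can inspect the ImageStream (names taken from the question):
oc describe is/is -n xxx-imagestream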
Add this to your DeploymentConfig
triggers:
  - type: ConfigChange
  - imageChangeParams:
      automatic: true
      containerNames:
        - <your-container-name>
      from:
        kind: ImageStreamTag
        name: '<image_name>:latest'
        namespace: <your-namespace>
    type: ImageChange
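If you would rather not edit the YAML by hand, the same trigger can be added with oc set triggers; a sketch using the placeholder names from the snippet above:
oc set triggers dc/<your-dc-name> --from-image=<your-namespace>/<image_name>:latest -c <your-container-name>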

How to use Dockerfile with different name in OCP Git source build

I am trying to create a Git source build of this Dockerfile: https://github.com/WASdev/ci.docker/blob/master/ga/latest/full/Dockerfile.ubi.ibmjava8
I have the following configuration in my BuildConfig:
source:
  git:
    uri: "https://github.com/WASdev/ci.docker"
    ref: "master"
  contextDir: "ga/latest/full"
However, the above assumes the use of the Dockerfile filename while I want to use Dockerfile.ubi.ibmjava8 as in docker build -f Dockerfile.ubi.ibmjava8 ..
How can I use Dockerfile.ubi.ibmjava8 instead of Dockerfile in OpenShift?
TL;DR: Yes, you can use a Dockerfile with a different name than Dockerfile, via the dockerfilePath field.
In Build Strategy Options, under the Dockerfile Path section, you will find the constraints of OCP regarding the Docker strategy:
By default, Docker builds use a Dockerfile (named Dockerfile) located at the root of the context specified in the BuildConfig.spec.source.contextDir field.
The dockerfilePath field allows the build to use a different path to locate your Dockerfile, relative to the BuildConfig.spec.source.contextDir field. It can be simply a different file name other than the default Dockerfile (for example, MyDockerfile), or a path to a Dockerfile in a subdirectory (for example, dockerfiles/app1/Dockerfile).
And they also use an example:
strategy:
  dockerStrategy:
    dockerfilePath: dockerfiles/app1/Dockerfile
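Applied to the BuildConfig from the question, a sketch (untested) would look like this:
strategy:
  dockerStrategy:
    dockerfilePath: Dockerfile.ubi.ibmjava8
source:
  git:
    uri: "https://github.com/WASdev/ci.docker"
    ref: "master"
  contextDir: "ga/latest/full"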

How can I disable the automatic build triggered from a build configuration in OpenShift?

I am trying to create a CI/CD pipeline with OpenShift. Initially, when creating the application using the 'oc new-app' command, it automatically triggers a build. How can I disable the initial build, other than deleting or cancelling it?
How can I disable the initial build, other than deleting or cancelling it?
oc new-app cannot prevent the initial build.
This has been discussed here: https://github.com/openshift/origin/issues/15429
Unfortunately, it has not been implemented yet.
However, you can prevent the initial build by removing all triggers from the BuildConfig, modifying its YAML manually.
First, export the oc new-app output in YAML format:
# oc new-app --name=test \
centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git -o yaml --dry-run > test.yml
Remove all triggers by changing the configuration to triggers: [].
strategy:
  sourceStrategy:
    from:
      kind: ImageStreamTag
      name: ruby-25-centos7:latest
  type: Source
triggers: []
After modifying, create resources using oc create -f.
# oc create -f test.yml
imagestream.image.openshift.io/ruby-25-centos7 created
imagestream.image.openshift.io/ruby-ex created
buildconfig.build.openshift.io/ruby-ex created
deploymentconfig.apps.openshift.io/ruby-ex created
service/ruby-ex created
The build does not run until you run oc start-build <bc name> and oc rollout latest dc/<dc name>.
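Using the resource names from the output above, that would be:
oc start-build ruby-ex
oc rollout latest dc/ruby-ex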
I hope this use case is helpful for you.

What is a good way to deploy secret Java key stores in an OpenShift environment?

We have a Java web application that is supposed to be moved from a regular deployment model (install on a server) into an OpenShift environment (deployment as docker container). Currently this application consumes a set of Java key stores (.jks files) for client certificates for communicating with third party web interfaces. We have one key store per interface.
These jks files get manually deployed on production machines and are occasionally updated when third-party certificates need to be updated. Our application has a setting with a path to the key store files and on startup it will read certificates from them and then use them to communicate with the third-party systems.
Now, when moving to an OpenShift deployment, we have one docker image with the application that is going to be used for all environments (development, test and production). All configuration is given as environment variables. However, we cannot pass JKS files as environment variables; they need to be mounted into the docker container's file system.
As these certificates are a secret we don't want to bake them into the image. I scanned the OpenShift documentation for some clues on how to approach this and basically found two options: using Secrets or mounting a persistent volume claim (PVC).
Secrets don't seem to work for us as they are pretty much just key-value pairs that you can mount as a file or have handed in as environment variables. They also have a size limit. Using a PVC would theoretically work; however, we'd need some way to get the JKS files into that volume in the first place. A simple way would be to start a shell container mounting the PVC and copying the files into it manually using the OpenShift command line tools, but I was hoping for a somewhat less manual solution.
Do you have found a clever solution to this or a similar problem where you needed to get files into a container?
It turns out that I misunderstood how secrets work. They are indeed key-value pairs that you can mount as files. The value can, however, be any base64-encoded binary that will be mapped as the file contents. So the solution is to first encode the contents of the JKS file to base64:
cat keystore.jks | base64
Then you can put this into your secret definition:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: my-namespace
data:
  keystore.jks: "<base 64 from previous command here>"
Finally you can mount this into your docker container by referencing it in the deployment configuration:
apiVersion: v1
kind: DeploymentConfig
spec:
  ...
  template:
    spec:
      ...
      containers:
        - name: "my-container"
          ...
          volumeMounts:
            - name: secrets
              mountPath: /mnt/secrets
              readOnly: true
      volumes:
        - name: secrets
          secret:
            secretName: "my-secret"
            items:
              - key: keystore.jks
                path: keystore.jks
This will mount the secret volume secrets at /mnt/secrets and make the entry named keystore.jks available as the file keystore.jks under /mnt/secrets.
I'm not sure if this is really a good way of doing this, but it is at least working here.
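To double-check that the keystore actually shows up in the container, a quick look inside a running pod helps (the pod name is a placeholder):
oc rsh <pod-name> ls -l /mnt/secrets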
You can add and mount the secrets as stated by Jan Thomä, but it's easier like this, using the oc command line tool:
./oc create secret generic crnews-keystore --from-file=keystore.jks=$HOME/git/crnews-service/src/main/resources/keystore.jks --from-file=truststore.jks=$HOME/git/crnews-service/src/main/resources/truststore.jks --type=opaque
This can then be added via the UI: Applications -> Deployments -> [your deployment] -> "Add config files", where you can choose which secret you want to mount where.
Note that the name=value pairs (e.g. truststore.jks=) will be used like filename=base64decoded-content.
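If you prefer the command line to the UI step above, the same mount can be added with oc set volume; a sketch with an illustrative volume name and the mount path used elsewhere in this thread:
oc set volume dc/<your-dc-name> --add --name=keystore-volume --type=secret --secret-name=crnews-keystore --mount-path=/mnt/keystores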
My generated base64 was multiline and I was getting the same error.
The trick is to use the -w0 argument with base64 so that the whole encoded output is on one line!
base64 -w0 ssl_keystore.jks > test
The above will create a file named test containing the base64 on one line; copy and paste it into a secret like this:
apiVersion: v1
kind: Secret
metadata:
  name: staging-ssl-keystore-jks
  namespace: staging-space
type: Opaque
data:
  keystore.jks: your-base64-in-one-line
Building upon what both @Frischling and @Jan-Thomä said, and in agreement with Frischling, as his way was easier and took care of both the key and trust stores: after adding the keystores as a secret, under Applications -> Deployments -> [your deployment's name], click the Environment link and add the following system properties:
Name: JAVA_OPTS_APPEND
Value: -Djavax.net.ssl.keyStorePassword=changeme -Djavax.net.ssl.keyStore=/mnt/keystores/your_cert_key_store.jks -Djavax.net.ssl.trustStorePassword=changeme -Djavax.net.ssl.trustStore=/mnt/keystores/your_ca_key_store.jks
This will, as indicated, append the keystore file paths and passwords to the Java options used by the application, for example JBoss/WildFly or Tomcat.
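The same variable can also be set from the command line instead of the web console (the dc name is a placeholder; values copied from above):
oc set env dc/<your-dc-name> JAVA_OPTS_APPEND='-Djavax.net.ssl.keyStorePassword=changeme -Djavax.net.ssl.keyStore=/mnt/keystores/your_cert_key_store.jks -Djavax.net.ssl.trustStorePassword=changeme -Djavax.net.ssl.trustStore=/mnt/keystores/your_ca_key_store.jks'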

Kubernetes: Dynamically create configuration json files from j2 templates

Is there a built-in j2 template processor in Kubernetes or Docker? I am doing the configuration below and want to plug the values into the template.
Note that using hostPath is not an option since this is running on OpenShift and no PV/PVC can be used.
containers:
  - image: some-docker-image:latest
    name: some-docker-image
    volumeMounts:
      - mountPath: /etc/app/conf
        name: configuration-volume
    # ... do some j2 template processing here if possible ...
volumes:
  - name: configuration-volume
    gitRepo:
      repository: "https://gitrepo/repo/example.git"
There isn't any templating support built into Kubernetes. You can easily build a templating system on top of the yaml/json files that you pass into kubectl create -f, though. I know some folks that are using jsonnet to accomplish this.
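For example, a minimal sketch of such a home-grown templating step, assuming envsubst (from GNU gettext) is available; the file name and variable are illustrative:
# pod.yml.tpl contains placeholders such as ${IMAGE_TAG}
export IMAGE_TAG=latest
envsubst < pod.yml.tpl | kubectl create -f -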
The discussion around adding templates is happening in https://github.com/kubernetes/kubernetes/issues/23896 if you'd like to contribute.