We were able to successfully build our app using the OpenShift Source-to-Image (S2I) pattern and create a Pod.
I have a few questions about S2I:
Where can I find the image that is built using S2I in OpenShift?
How can I push the image that is created using S2I to a private Artifactory registry?
If I want to roll back to a different build, it takes a lot of time to build again.
I could not find many resources on this.
I believe you used a BuildConfig resource to build your container image on OpenShift.
If you want to push your image to an external container registry, you need to edit your BuildConfig as follows:
Before editing
spec:
  output:
    to:
      kind: "ImageStreamTag"
      name: "sample-image:latest"
After editing
spec:
  output:
    to:
      kind: "DockerImage"
      name: "my-registry.mycompany.com:5000/myimages/myimage:tag"
https://docs.openshift.com/container-platform/4.11/cicd/builds/managing-build-output.html
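If the external registry requires authentication, the build also needs a push secret. A minimal sketch (the secret name, credentials, and BuildConfig name are placeholders):
oc create secret docker-registry my-push-secret \
  --docker-server=my-registry.mycompany.com:5000 \
  --docker-username=<user> \
  --docker-password=<password>
oc set build-secret --push bc/<your-buildconfig> my-push-secret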
I am using golang to programmatically create and destroy one-off Compute Engine instances using the Compute Engine API.
I can create an instance just fine, but what I'm really having trouble with is launching a container on startup.
You can do it from the Console UI.
But as far as I can tell it's extremely hard to do it programmatically, especially with Container-Optimized OS (COOS) as the base image. I tried a startup script that does a docker pull us-central1-docker.pkg.dev/project/repo/image:tag, but it fails because you need to run gcloud auth configure-docker us-central1-docker.pkg.dev first for that to work, and COOS has neither gcloud nor a package manager to install it.
All my workarounds seem hacky:
Manually create a VM template that has the desired container and create instances of the template
Put the container in an external registry like Docker Hub (not acceptable)
Use Ubuntu instead of COOS with a package manager so I can programmatically install gcloud, Docker, and the container on startup
Use COOS to pull down an image from Docker Hub containing gcloud, then do some sort of Docker-in-Docker mount to pull it down
Am I missing something or is it just really cumbersome to deploy a container to a compute engine instance without using gcloud or the Console UI?
To have a Compute Engine instance start a container when the instance starts, one has to define metadata describing the container. When COOS starts, it runs an agent called konlet, which can be found here:
https://github.com/GoogleCloudPlatform/konlet
If we look at the documentation for this, it says:
The agent parses container declaration that is stored in VM instance metadata under gce-container-declaration key and starts the container with the declared configuration options.
Unfortunately, I haven't found any formal documentation for the structure of this metadata, but I did find two possible approaches:
Decipher the source code of konlet to find out how the metadata maps to what is passed when the Docker container is started
or
Create a Compute Engine instance by hand with the desired container definition, start it, SSH into it, and retrieve the current metadata. We can read about retrieving metadata here:
https://cloud.google.com/compute/docs/metadata/overview
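For the second option, once the hand-built instance is running, the container declaration can be read back from the metadata server inside the instance, for example:
curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/gce-container-declaration"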
It turns out, it's not too hard to pull down a container from Artifact Registry in Container Optimized OS:
Run docker-credential-gcr configure-docker --registries [region]-docker.pkg.dev
See: https://cloud.google.com/container-optimized-os/docs/how-to/run-container-instance#accessing_private_images_in_or
So what you can do is put the above line along with docker pull [image] and docker run ... into a startup script. You can specify a startup script when creating an instance using the metadata field: https://cloud.google.com/compute/docs/instances/startup-scripts/linux#api
This seems the least hacky way of provisioning an instance with a container programmatically.
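A sketch of such a startup script, using the region and image path from the question as placeholders:
#! /bin/bash
# Authenticate Docker against Artifact Registry using the credential helper shipped with COOS
docker-credential-gcr configure-docker --registries us-central1-docker.pkg.dev
# Pull and start the container
docker pull us-central1-docker.pkg.dev/project/repo/image:tag
docker run -d us-central1-docker.pkg.dev/project/repo/image:tag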
You mentioned you used docker-credential-gcr to solve your problem. I tried the same in my startup script:
docker-credential-gcr configure-docker --registries us-east1-docker.pkg.dev
But it returns:
ERROR: Unable to save docker config: mkdir /root/.docker: read-only file system
Is there some other step needed? Thanks.
I recently ran into the other side of these limitations (and asked a question on the topic).
Basically, I wanted to provision a COOS instance without launching a container. I was unable to, so I just launched a container from a base image and then later in my CI/CD pipeline, Dockerized my app, uploaded it to Artifact Registry and replaced the base image on the COOS instance with my newly built app.
The metadata I provided to launch the initial base image as a container:
spec:
  containers:
    - image: blairnangle/python3-numpy-ta-lib:latest
      name: containervm
      securityContext:
        privileged: false
      stdin: false
      tty: false
      volumeMounts: []
  restartPolicy: Always
  volumes: []
I'm a Terraform fanboi, so the metadata exists within some Terraform configuration. I have a public project with the code that achieves this if you want to take a proper look: blairnangle/dockerized-flask-on-gce.
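If you are not using Terraform, the same container declaration can also be attached when creating the instance with gcloud; a sketch, assuming the spec above is saved as container-spec.yaml:
gcloud compute instances create my-coos-vm \
  --image-family=cos-stable \
  --image-project=cos-cloud \
  --metadata-from-file=gce-container-declaration=container-spec.yaml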
In my BuildConfig I have specified output to:
  kind: DockerImage
  name: my-artifactory-repo/image-name:latest
When I look inside my-artifactory-repo/image-name:latest there are a lot of different images named with sha256 hashes. Is there some way in OpenShift to get the sha256 hash of the image that is uploaded to Artifactory?
I've tried looking inside build details with no luck.
There are a variety of ways to get this into OpenShift.
Using pure OpenShift only, the easiest would be an ImageStream. You could then push the entire Docker image into the OpenShift ImageStream, not just one sha256 layer.
Go to Build -> Image Stream, then click New Image Stream.
Specify the name and namespace of the image.
Click Create.
Change your BuildConfig output to this name to push directly into the OpenShift ImageStream.
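A sketch of what the BuildConfig output might then look like (the ImageStream name and namespace are placeholders):
spec:
  output:
    to:
      kind: "ImageStreamTag"
      name: "image-name:latest"
      namespace: "<your-imagestream-namespace>"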
We also support deploying Artifactory into OpenShift itself through our Certified Operator here.
I created a project on OpenShift 4.2 with an ImageStream that is pulling an Image from Quay.io:
oc new-project xxx-imagestream
oc import-image is:1.0 --from quay.io/xxx/myimage:latest --confirm --reference-policy local
Now I create a new project to host an app based on that ImageStream
oc new-project xxx-app
oc new-app --name myapp -i xxx-imagestream/is:1.0
The app is built and I can use it by exposing it. (But no Build or BuildConfig is created. Why???)
Now I update the image on Quay.io with a new version, and import the new version into the xxx-imagestream project:
oc import-image is:2.0 --from quay.io/xxx/myimage:latest --confirm --reference-policy local
The question is: how do I update my app (myapp)? In other words, how can I launch a new build of "myapp" based on the updated ImageStream?
(But no Build or BuildConfig is created. Why???)
A BuildConfig is only created when you use the "Source to Image" (S2I) functionality, and it is only needed when you want to create a container image from source. In your case, the image already exists, so there is no need to build anything. The only thing oc new-app will do is deploy your existing image; there is no build necessary.
The question is: how do I update my app (myapp)? In other words, how can I launch a new build of "myapp" based on the updated ImageStream?
You are looking for "Deployment triggers", specifically the "ImageChange deployment trigger". The ImageChange trigger results in a new ReplicationController whenever the content of an imagestreamtag changes (when a new version of the image is pushed).
On a side-note, you can also periodically automate the importing of new image versions in your ImageStreams (see documentation).
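As a sketch of that periodic import, the tag can be marked for scheduled re-import from the CLI (reusing the names from the question):
oc tag quay.io/xxx/myimage:latest is:1.0 --scheduled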
The build starts automatically if your image stream has
--reference-policy source
In that case, it is correct to update the image stream using
oc import-image [...]
To update a "local" ImageStream, instead of
oc import-image is:2.0 --from quay.io/xxx/myimage:latest --confirm --reference-policy local
you should update the existing local ImageStream tag
oc tag quay.io/xxx/myimage:latest is:2.0 --reference-policy local
This command automatically triggers a new deployment of your app.
Add this to your DeploymentConfig:
triggers:
  - type: ConfigChange
  - imageChangeParams:
      automatic: true
      containerNames:
        - <your-container-name>
      from:
        kind: ImageStreamTag
        name: '<image_name>:latest'
        namespace: <your-namespace>
    type: ImageChange
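The same trigger can also be added from the CLI; a sketch, assuming a DeploymentConfig named myapp and the placeholder names used above:
oc set triggers dc/myapp \
  --from-image=<your-namespace>/<image_name>:latest \
  -c <your-container-name>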
I have implemented the following things in OpenShift:
Created a ConfigMap in OpenShift for environment configuration.
Reading those ConfigMaps as environment variables in OpenShift.
I have a requirement that whenever I change values in the ConfigMap, a new pod needs to be created.
Please suggest how I can achieve this.
Unfortunately, there is no out-of-the-box solution yet.
However, I solved this issue by generating a hash of my ConfigMap, CONFIG_HASH.
This hash is then passed to the container as an environment variable:
env:
  - name: CONFIG_HASH
    value: ${CONFIG_HASH}
Consequently, each time the config changes, a deployment is triggered (because the environment has changed).
You will, however, likely have to use a pipeline (Jenkins, GitLab CI, ...) to do this, as sketched below.
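A minimal sketch of such a pipeline step (assumes the oc CLI and sha256sum are available; my-config and my-app are placeholder names):
CONFIG_HASH=$(oc get configmap my-config -o yaml | sha256sum | cut -d' ' -f1)
oc set env dc/my-app CONFIG_HASH="${CONFIG_HASH}"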
We have a Java web application that is supposed to be moved from a regular deployment model (install on a server) into an OpenShift environment (deployment as docker container). Currently this application consumes a set of Java key stores (.jks files) for client certificates for communicating with third party web interfaces. We have one key store per interface.
These jks files get manually deployed on production machines and are occasionally updated when third-party certificates need to be updated. Our application has a setting with a path to the key store files and on startup it will read certificates from them and then use them to communicate with the third-party systems.
Now when moving to an OpenShift deployment, we have one Docker image with the application that is going to be used for all environments (development, test, and production). All configuration is given as environment variables. However, we cannot pass JKS files as environment variables; they need to be mounted into the Docker container's file system.
As these certificates are a secret we don't want to bake them into the image. I scanned the OpenShift documentation for some clues on how to approach this and basically found two options: using Secrets or mounting a persistent volume claim (PVC).
Secrets don't seem to work for us, as they are pretty much just key-value pairs that you can mount as a file or have handed in as environment variables. They also have a size limit. Using a PVC would theoretically work; however, we'd need some way to get the JKS files into that volume in the first place. A simple way would be to start a shell container mounting the PVC and copy the files into it manually using the OpenShift command-line tools, but I was hoping for a somewhat less manual solution.
Do you have found a clever solution to this or a similar problem where you needed to get files into a container?
It turns out that I misunderstood how secrets work. They are indeed key-value pairs that you can mount as files. The value can, however, be any base64-encoded binary that will be mapped as the file contents. So the solution is to first encode the contents of the JKS file to base64:
cat keystore.jks | base64
Then you can put this into your secret definition:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: my-namespace
data:
  keystore.jks: "<base 64 from previous command here>"
Finally you can mount this into your docker container by referencing it in the deployment configuration:
apiVersion: v1
kind: DeploymentConfig
spec:
  ...
  template:
    spec:
      ...
      containers:
        - name: "my-container"
          ...
          volumeMounts:
            - name: secrets
              mountPath: /mnt/secrets
              readOnly: true
      volumes:
        - name: secrets
          secret:
            secretName: "my-secret"
            items:
              - key: keystore.jks
                path: keystore.jks
This will mount the secret volume secrets at /mnt/secrets and make the entry named keystore.jks available as the file keystore.jks under /mnt/secrets.
I'm not sure if this is really a good way of doing this, but it is at least working here.
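One way to check that the keystore arrived intact is to compare checksums between the pod and the original file (the pod name is a placeholder):
oc rsh <my-pod> sha256sum /mnt/secrets/keystore.jks
sha256sum keystore.jks   # should print the same digest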
You can add and mount the secrets as stated by Jan Thomä, but it's easier like this, using the oc command-line tool:
./oc create secret generic crnews-keystore --from-file=keystore.jks=$HOME/git/crnews-service/src/main/resources/keystore.jks --from-file=truststore.jks=$HOME/git/crnews-service/src/main/resources/truststore.jks --type=opaque
This can then be added via the UI: Applications -> Deployments -> [your deployment] -> "Add config files", where you can choose which secret you want to mount where.
Note that the name=value pairs (e.g. truststore.jks=) will be used like filename=base64-decoded content.
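If you prefer the command line over the UI for the mount as well, here is a sketch, assuming the application runs as a DeploymentConfig named crnews-service:
oc set volume dc/crnews-service --add \
  --name=keystore-volume \
  --type=secret \
  --secret-name=crnews-keystore \
  --mount-path=/mnt/keystores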
My generated base64 was multi-line and I was getting the same error.
The trick is to use the -w0 argument to base64 so that the whole encoding is on one line!
base64 -w0 ssl_keystore.jks > test
The above will create a file named test containing the base64 on one line; copy and paste it into a secret like this:
apiVersion: v1
kind: Secret
metadata:
  name: staging-ssl-keystore-jks
  namespace: staging-space
type: Opaque
data:
  keystore.jks: your-base64-in-one-line
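To double-check that the one-line value is intact before pasting it, decode it again and compare it against the original file:
base64 -d test | sha256sum
sha256sum ssl_keystore.jks   # the two digests should match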
Building upon what both Frischling and Jan Thomä said, and in agreement with Frischling, as his way was easier and took care of both the key and trust stores: after adding the keystores as a secret, go to Applications -> Deployments -> [your deployment's name], click the Environment link, and add the following environment variable:
Name: JAVA_OPTS_APPEND
Value: -Djavax.net.ssl.keyStorePassword=changeme -Djavax.net.ssl.keyStore=/mnt/keystores/your_cert_key_store.jks -Djavax.net.ssl.trustStorePassword=changeme -Djavax.net.ssl.trustStore=/mnt/keystores/your_ca_key_store.jks
As indicated, this effectively appends the keystore file paths and passwords to the Java options used by the application, for example JBoss/WildFly or Tomcat.
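The same can also be expressed directly in the container spec of the deployment; a sketch using the paths from above (passwords are placeholders):
env:
  - name: JAVA_OPTS_APPEND
    value: >-
      -Djavax.net.ssl.keyStore=/mnt/keystores/your_cert_key_store.jks
      -Djavax.net.ssl.keyStorePassword=changeme
      -Djavax.net.ssl.trustStore=/mnt/keystores/your_ca_key_store.jks
      -Djavax.net.ssl.trustStorePassword=changeme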