Creating an OpenShift build from a Dockerfile

I need to create an OpenShift build from a Docker image (tomcat9) which is extended through a Dockerfile. The Dockerfile adds an application to the webapps folder.
Here's my attempt:
oc new-build --name=runtime --docker-image="docker.io/tomcat:9.0-jre8" \
--source-image=tomcat:9.0-jre8 \
--source-image-path=/home/carla/build/example.war:. \
--dockerfile=$'FROM tomcat:9.0-jre8\nCOPY example.war /usr/local/tomcat/webapps/example.war'
I have placed the web application (example.war) in the path /home/carla/build and I need to copy it into the /usr/local/tomcat/webapps folder of the Docker image.
The error I get is:
error: BuildConfig "runtime" is invalid: spec.triggers: Invalid value:
. . .
multiple ImageChange triggers refer to the same image stream tag
--> Failed
I think the problem is related to the --source-image parameter; however, I cannot omit it as I have specified --source-image-path.
Any idea how to fix it?
Thanks
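One way to sidestep the trigger conflict (not something confirmed in this thread, just a sketch assuming the Dockerfile and example.war both sit in /home/carla/build) is to drop --source-image/--source-image-path and use a binary Docker build instead, which uploads that local directory as the build context:
# Create a BuildConfig whose Docker build takes a locally uploaded directory as input
oc new-build --name=runtime --strategy=docker --binary
# Upload /home/carla/build (Dockerfile + example.war) and run the build
oc start-build runtime --from-dir=/home/carla/build --follow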

Related

mkdir /.gitlab-runner: permission denied running GitLab Runner in Kubernetes deployed via Helm

I'm trying to deploy the GitLab Runner (15.7.1) onto an on-premise Kubernetes cluster and getting the following error:
PANIC: loading system ID file: saving system ID state file: creating directory: mkdir /.gitlab-runner: permission denied
This is occurring with both the 15.7.1 image (Ubuntu?) and the alpine3.13-v15.7.1 image. Looking at the deployment, it looks like it should be trying to use /home/gitlab-runner, but for some reason it is trying to use root (/), which is a protected directory.
Anyone else experience this issue or have a suggestion as to what to look at?
I am using the Helm chart (0.48.0) using a copy of the images from dockerhub (simply moved into a local repository as internet access is not available from the cluster). Connectivity to GitLab appears to be working, but the error causes the overall startup to fail. Full logs are:
Registration attempt 4 of 30
Runtime platform arch=amd64 os=linux pid=33 revision=6d480948 version=15.7.1
WARNING: Running in user-mode.
WARNING: The user-mode requires you to manually start builds processing:
WARNING: $ gitlab-runner run
WARNING: Use sudo for system-mode:
WARNING: $ sudo gitlab-runner...
Created missing unique system ID system_id=r_Of5q3G0yFEVe
PANIC: loading system ID file: saving system ID state file: creating directory: mkdir /.gitlab-runner: permission denied
I have tried the 15.7.1 image, the alpine3.13-v15.7.1 image, and the gitlab-runner-ocp:amd64-v15.7.1 image and searched the values.yaml for anything relevant to the path. Looking at the deployment template, it appears that it ought to be using /home/gitlab-runner as the directory (instead of /) [though the docs suggested it was /home].
As for "what was I expecting", of course I was expecting that it would "just work" :)
So, resolved this (and other) issues with:
Updated helm deployment template to mount an empty volume at /.gitlab-runner
[separate issue] explicitly added builds_dir and environment [per gitlab-org/gitlab-runner#3511 (comment 114281106)].
These two steps appeared to be sufficient to get the Helm chart deployment working.
You can easily create and mount the emptyDir (in case you are creating the gitlab-runner with a Kubernetes manifest *.yml file):
volumes:
  - emptyDir: {}
    name: gitlab-runner
volumeMounts:
  - name: gitlab-runner
    mountPath: /.gitlab-runner
-------------------- OR --------------------
volumeMounts:
  - name: root-gitlab-runner
    mountPath: /.gitlab-runner
volumes:
  - name: root-gitlab-runner
    emptyDir:
      medium: "Memory"

Annotation Validation Error when trying to install Vault on OpenShift

Following this tutorial on installing Vault with Helm on OpenShift, I encountered the following error after executing the command:
helm install vault hashicorp/vault -n $VAULT_NAMESPACE -f my_values.yaml
For the config, my_values.yaml:
echo '# Custom values for the Vault chart
global:
  # If deploying to OpenShift
  openshift: true
server:
  dev:
    enabled: true
  serviceAccount:
    create: true
    name: vault-sa
injector:
  enabled: true
authDelegator:
  enabled: true' > my_values.yaml
The error:
$ helm install vault hashicorp/vault -n $VAULT_NAMESPACE -f values.yaml
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "vault-agent-injector-clusterrole" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "my-vault-1": current value is "my-vault-0"
What exactly is happening, or how can I reset this specific namespace to point to the right release namespace?
Have you by chance tried the exact same thing before? Because that is what the error is hinting at.
If we dissect the error, we get to the root of the problem:
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists.
So something on the cluster already exists that you were trying to deploy via the helm chart.
Unable to continue with install:
Helm is aborting due to this failure
ClusterRole "vault-agent-injector-clusterrole" in namespace "" exists
So the ClusterRole vault-agent-injector-clusterrole that the Helm chart is supposed to put onto the cluster already exists. ClusterRoles aren't namespace-specific, hence the namespace shown in the error is blank.
and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "my-vault-1": current value is "my-vault-0"
The default behavior is to try to import existing resources that this chart requires, but that is not possible here, because the release recorded as the owner of that ClusterRole (my-vault-0) is different from the release you are installing now (my-vault-1).
To fix this, you can remove the existing deployment of your chart and then give it another try, and it should work as expected.
Make sure all resources are gone. For this particular one, you can check with kubectl get clusterroles.
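Concretely, a cleanup along these lines should clear the leftover state before retrying (this assumes the earlier attempt also used the release name vault, in the namespace my-vault-0 that the annotation points at):
# Remove the old release that still owns the cluster-scoped resources
helm uninstall vault -n my-vault-0
# If the ClusterRole named in the error survives the uninstall, delete it explicitly
kubectl delete clusterrole vault-agent-injector-clusterrole
# Check nothing vault-related is left behind, then install into the new namespace
kubectl get clusterroles,clusterrolebindings,mutatingwebhookconfigurations | grep -i vault
helm install vault hashicorp/vault -n $VAULT_NAMESPACE -f my_values.yaml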

oc-command to forward local-ports to remote debug ports based on service-name instead of pod-name

To minimize the setup time for attaching a debug session to the remote pod (a microservice deployed on OpenShift) using IntelliJ,
I am trying to get the most out of the 'Before launch' setting of the Remote Debug configuration.
I use 2 steps before attaching the debugger to the JVM socket with the following command-line arguments (this setup works but needs editing after every new deploy):
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8000
step 1:
external tools: oc with arguments:
login
https://url.of.openshift.environment
--username=<login>
--password=<password>
step 2:
external tools: oc with arguments:
port-forward
microservice-name-65-6bhz8 -> this needs to be changed after every deploy
8000
3000
3001
background info:
this is the info in the service's YAML under spec > containers > env:
- name: JAVA_TOOL_OPTIONS
  value: >-
    -agentlib:jdwp=transport=dt_socket,server=y,address=8000,suspend=n
    -Dcom.sun.management.jmxremote=true
    -Dcom.sun.management.jmxremote.port=3000
    -Dcom.sun.management.jmxremote.rmi.port=3001
    -Djava.rmi.server.hostname=127.0.0.1
    -Dcom.sun.management.jmxremote.authenticate=false
    -Dcom.sun.management.jmxremote.ssl=false
As the name of the pod changes on every (re-)deploy, I am trying to find an oc command which can be used to port-forward without having to provide the pod name (e.g. based on the service name).
Or a completely different solution that allows me to hit 1 button to set up a debug session (preferably in IntelliJ).
> Screenshot IntelliJ settings
----------------------------- edit after tips -------------------------------
For now I made a small batch-script which does the trick:
Feel free to help with an even faster solution
(I'm checking https://openshiftdo.org/)
or other intelliJent solutions
set /p _username=Type your username:
set /p _password=Type your password:
oc login replace-with-openshift-console-url --username=%_username% --password=%_password%
oc project replace-with-project-name
oc get pods --selector app=replace-with-app-name -o jsonpath={.items[?(@.status.phase=='Running')].metadata.name} > temp.txt
set /p PODNAME= <temp.txt
del temp.txt
oc port-forward %PODNAME% 8000 3000 3001
You're going to need the pod name in order to port-forward, but of course you can fetch it programmatically and consistently so you don't need to update it in place every time.
There are a number of ways you can do this: via jsonpath, a Go template, bash, etc. An example would be to use the following, replacing your app name as required:
oc get pod -l app=replace-me -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
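As an aside, recent oc and kubectl releases can also port-forward to a service directly, which takes the pod name out of the command entirely (the service name is a placeholder; the forward still lands on one backing pod):
oc port-forward svc/replace-with-service-name 8000 3000 3001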

Setting up Microcks in OpenShift

I am trying to set up Microcks in OpenShift.
I am just using the free starter from OpenShift at https://console.starter-us-west-2.openshift.com/console/catalog
At http://microcks.github.io/installing/openshift/, the command is given as below:
oc new-app --template=microcks-persistent \
--param=APP_ROUTE_HOSTNAME=microcks-microcks.192.168.99.100.nip.io \
--param=KEYCLOAK_ROUTE_HOSTNAME=keycloak-microcks.192.168.99.100.nip.io \
--param=OPENSHIFT_MASTER=https://192.168.99.100:8443 \
--param=OPENSHIFT_OAUTH_CLIENT_NAME=microcks-client
In that, how can I find the route for my project? My project is called testcoolers.
So what will replace microcks-microcks.192.168.99.100.nip.io? I guess something will replace 192.168.99.100.nip.io.
Same with the Keycloak hostname? Also, what will be the public OpenShift master address? It's now https://192.168.99.100:8443.
Installing Microcks appears to assume some level of OpenShift familiarity. Also, there are several restrictions that make this not an ideal install for OpenShift Online Starter, but it can definitely still be made to work.
# Create the template within your namespace
oc create -f https://raw.githubusercontent.com/microcks/microcks/master/install/openshift/openshift-persistent-full-template-https.yml
# Deploy the application from the template, be sure to replace <NAMESPACE> with your proper namespace
oc new-app --template=microcks-persistent-https \
--param=APP_ROUTE_HOSTNAME=microcks-<NAMESPACE>.7e14.starter-us-west-2.openshiftapps.com \
--param=KEYCLOAK_ROUTE_HOSTNAME=keycloak-<NAMESPACE>.7e14.starter-us-west-2.openshiftapps.com \
--param=OPENSHIFT_MASTER=https://api.starter-us-west-2.openshift.com \
--param=OPENSHIFT_OAUTH_CLIENT_NAME=microcks-client \
--param=MONGODB_VOL_SIZE=1Gi \
--param=MEMORY_LIMIT=384Mi \
--param=MONGODB_MEMORY_LIMIT=384Mi
# The ROUTE params above are still necessary for the variables, but in Starter, you can't specify a hostname in a route, so you'll have to manually create the routes
oc create route edge microcks --service=microcks --insecure-policy=Redirect
oc create route edge keycloak --service=microcks-keycloak --insecure-policy=Redirect
You should also see an error about not being able to create the OAuthClient. This is expected because you don't have permissions to create this for the whole cluster. You will instead need to manually create a user in KeyCloak.
I was able to get this to successfully deploy and logged in on OpenShift Online Starter, so use the comments if you struggle at all.
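Once those routes exist, you can read back the hostnames Online Starter generated for them (for your browser, or to double-check the ROUTE params) with:
# List routes and their generated hosts in the current project
oc get routes
oc get routes -o jsonpath='{range .items[*]}{.metadata.name}{" -> "}{.spec.host}{"\n"}{end}'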

Graphhopper with hybrid mode enabled throws IllegalStateException

I'm trying to start Graphhopper in hybrid mode using the latest code from its git repo.
The config file, per its comments and documentation (and this answer) has:
prepare.ch.weightings: no
prepare.lm.weightings: fastest
I build it with docker build -t tgraphhopper:latest . and then I start a container with docker run --name tgraphhopper -v ./data:/data -p 8989:8989 tgraphhopper:latest
The error which appears in the logs is:
java.lang.IllegalStateException: Configured graph.ch.weightings: [] is not equal to loaded [fastest|car]
at com.graphhopper.storage.GraphHopperStorage.loadExisting(GraphHopperStorage.java:254)
at com.graphhopper.GraphHopper.load(GraphHopper.java:781)
at com.graphhopper.GraphHopper.importOrLoad(GraphHopper.java:637)
at com.graphhopper.http.GraphHopperManaged.start(GraphHopperManaged.java:71)
at io.dropwizard.lifecycle.JettyManaged.doStart(JettyManaged.java:27)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
at org.eclipse.jetty.server.Server.start(Server.java:419)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
at org.eclipse.jetty.server.Server.doStart(Server.java:386)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at io.dropwizard.cli.ServerCommand.run(ServerCommand.java:53)
at io.dropwizard.cli.EnvironmentCommand.run(EnvironmentCommand.java:44)
at io.dropwizard.cli.ConfiguredCommand.run(ConfiguredCommand.java:87)
at io.dropwizard.cli.Cli.run(Cli.java:78)
at io.dropwizard.Application.run(Application.java:93)
at com.graphhopper.http.GraphHopperApplication.main(GraphHopperApplication.java:33)
What am I missing in trying to start GraphHopper in hybrid mode?
You need to remove the (potentially) created graph cache folder in /data.
(So, if you have area-latest.osm.pbf, the folder is named area-latest.osm-gh.)
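A minimal sketch of that cleanup, assuming the layout from the question (the area-latest names are placeholders for whatever OSM extract you actually use):
# Stop the old container and drop the graph cache built with the previous CH settings
docker rm -f tgraphhopper
rm -rf ./data/area-latest.osm-gh
# Re-import and start again with the hybrid (landmarks) config
docker run --name tgraphhopper -v ./data:/data -p 8989:8989 tgraphhopper:latest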