Openshift Configmap : create and update command - openshift

I am writing sample program to deploy into Openshift with configmap. I have the following configmap yaml in the source code folder so when devops is setup, Jenkins should pick up this yaml and create/update the configs.
apiVersion: v1
kind: ConfigMap
metadata:
  name: sampleapp
data:
  username: usernameTest
  password: passwordTest
However, I could not find a command that creates the config if it does not exist and updates it if it already does (similar to the kubectl apply command). Can you help with the correct command, which would create the resource when the job runs for the first time and update it otherwise?
I also want to create/update the Services and Routes from the yaml files in the src repository.
Thanks.

You can use the "oc apply" command to update resources that already exist, as in the example below:
#oc process -f openjdk-basic-template.yml -p APPLICATION_NAME=spring-rest -p SOURCE_REPOSITORY_URL=https://github.com/rest.git -p CONTEXT_DIR='' | oc apply -f-
service "spring-rest" configured
route "spring-rest" created
imagestream "spring-rest" configured
buildconfig "spring-rest" configured
deploymentconfig "spring-rest" configured
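If the resources are already written as plain yaml files in your source repository (as in the question), you can also point oc apply at the files directly; it creates the resource on the first run and updates it on later runs. A minimal sketch, assuming the ConfigMap above is saved as sampleapp-configmap.yaml and the Service/Route yaml files live in an openshift/ folder (both names are just examples):
# create or update a single resource
oc apply -f sampleapp-configmap.yaml
# create or update everything in a folder (services, routes, configmaps, ...)
oc apply -f openshift/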

If you have the configmap in a yaml file (or stored somewhere else), you can replace it:
oc replace --force -f config-map.yaml
This updates the existing configmap (it actually deletes it and creates a new one).
After this, I executed:
oc set env --from=configmap/example-cm dc/example-dc

Related

Use kubernetes with helm and provide my predefined user and password with bitnami correctly

I am using kubernetes with helm 3.
I need to create a kubernetes pod with MySQL, with:
database name: my_database
user: root
password: 12345
port: 3306
The steps:
creating chart by:
helm create test
After the chart is created, change the Chart.yaml file in the test folder by adding a dependencies section.
apiVersion: v2
name: test3
description: A Helm chart for Kubernetes
version: 0.1.0
appVersion: "1.16.0"
dependencies:
  - name: mysql
    version: 8.8.23
    repository: "https://charts.bitnami.com/bitnami"
run:
helm dependencies build test
After that there is a compressed .tgz file. I extracted it, then extracted the tar file inside it, and kept only the final extracted folder.
I presume this isn't the best approach to changing parameters in the yaml of the Bitnami chart, and I would also like to know a better approach for using security.yaml.
I need to change the user + password and point to the database, so I changed values.yaml directly (any better approach?) for the values auth:rootPassword and auth:my_database.
The next steps:
helm dependencies build test
helm install test --namespace test --create-namespace
After that, two pods are created.
I can check this with:
kubectl get pods -n test
and I see two pods running (maybe replication).
One of the pods is test-mysql-0 (the other has a random suffix).
run:
kubectl exec --stdin --tty test-mysql-0 --namespace test-mysql -- /bin/sh
This entered the pod.
run:
mysql -uroot -p12345;
and then:
show databases;
That showed all the databases, including the created database my_database, successfully.
When I tried opening the mysql database from MySQL Workbench and testing the connection (same user root, same password, port 3306, and localhost), the test connection button in the database properties returned 'failed to connect to database'.
Why can't I connect properly from MySQL Workbench, while inside the pod itself there is no problem?
Is there any better approach than extracting the tgz file as I described above, and can I pass the user + password in a better (more secure) way, e.g. some secured yaml?
(Right now it is only the root password.)
Thanks.
It sounds like you're trying to set the parameters in the dependent chart (please correct me if I'm wrong)
If this is right, all you need to do is add another section in your chart's values.yaml
name-of-dependency:
  user-name: ABC
  password: abcdef
the "name-of-dependency" is specified in your Chart.yaml file when you declare your chart. For example, here's my redis dependency from one of my own charts
dependencies:
  - name: redis
    repository: https://charts.bitnami.com/bitnami/
    version: x.x.x
Then when I install the chart, I can override the redis chart's settings by doing this in my own chart's values.yaml
redis:
  architecture: standalone
  auth:
    password: "secret-password-here"

Use case of OpenShift + buildConfig + ConfigMaps

I am trying to create and run a buildconfig yml file.
C:\OpenShift>oc version
Client Version: 4.5.31
Kubernetes Version: v1.18.3+65bd32d
Background:-
I have multiple Spring Boot web UI applications which I need to deploy on OpenShift.
Having a separate set of config yml files (image stream, buildconfig, deployconfig, service, routes) for each and every application seems very inefficient.
Instead I would like to have a single set of parameterized yml files to which I can pass custom parameters to set up each individual application.
Solution so far:-
Version One
Dockerfile-
FROM org/rhelImage
USER root
# Install Yum Packages
RUN yum -y install \
    net-tools \
    && yum -y install nmap-ncat
RUN curl -s --create-dirs --insecure -L ${ARTIFACTURL} -o ${APPPATH}/${ARTIFACT}
# Add docker-entrypoint.sh to the image
ADD docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod -Rf 775 /app && chmod 775 /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
RUN chmod -R g+rx /app
# Expose port
EXPOSE $MY_PORT
# Set working directory when container starts
WORKDIR $APPPATH
# Starting the application using ENTRYPOINT
#ENTRYPOINT ["sh","/docker-entrypoint.sh"]
$ oc create configmap myapp-configmap --from-env-file=MyApp.properties
configmap/myapp-configmap created
$ oc describe cm myapp-configmap
Name: myapp-configmap
Namespace: 1234
Labels: <none>
Annotations: <none>
Data
====
APPPATH:
----
/app
ARTIFACT:
----
myapp.jar
ARTIFACTURL:
----
"https://myorg/1.2.3.4/myApp-1.2.3.4.jar"
MY_PORT:
----
12305
Events: <none>
buildconfig.yaml snippet
strategy:
  dockerStrategy:
    env:
      - name: GIT_SSL_NO_VERIFY
        value: "true"
      - name: ARTIFACTURL
        valueFrom:
          configMapKeyRef:
            name: "myapp-configmap"
            key: ARTIFACTURL
      - name: ARTIFACT
        valueFrom:
          configMapKeyRef:
            name: "myapp-configmap"
            key: ARTIFACT
This works fine. However, I somehow need to have those env: variables in a file.
I am doing this for greater flexibility, i.e. if a new variable is introduced in the Dockerfile, I need NOT change buildconfig.yml.
I just add the new key:value pair to the properties file, rebuild, and we are good to go.
This is what I do next;
Version Two
Dockerfile
FROM org/rhelImage
USER root
# Install Yum Packages
RUN yum -y install \
    net-tools \
    && yum -y install nmap-ncat
# Initializing the variables file
RUN ["sh", "-c", "source ./MyApp.properties"]
RUN curl -s --create-dirs --insecure -L ${ARTIFACTURL} -o ${APPPATH}/${ARTIFACT}
# Add docker-entrypoint.sh to the image
ADD docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod -Rf 775 /app && chmod 775 /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
RUN chmod -R g+rx /app
# Expose port
EXPOSE $MY_PORT
# Set working directory when container starts
WORKDIR $APPPATH
# Starting the application using ENTRYPOINT
#ENTRYPOINT ["sh","/docker-entrypoint.sh"]
$ oc create configmap myapp-configmap --from-env-file=MyApp.properties=C:\MyRepo\MyTemplates\MyApp.properties
configmap/myapp-configmap created
C:\OpenShift>oc describe configmaps myapp-configmap
Name: myapp-configmap
Namespace: 1234
Labels: <none>
Annotations: <none>
Data
====
MyApp.properties:
----
APPPATH=/app
ARTIFACTURL="https://myorg/1.2.3.4/myApp-1.2.3.4.jar"
ARTIFACT=myapp.jar
MY_PORT=12035
Events: <none>
buildconfig.yaml snippet
source:
contextDir: "${param_source_contextdir}"
configMaps:
- configMap:
name: "${param_app_name}-configmap"
However the build fails
STEP 9: RUN ls ./MyApp.properties
ls: cannot access ./MyApp.properties: No such file or directory
error: build error: error building at STEP "RUN ls ./MyApp.properties": error while running runtime: exit status 2
This means that the config map file didn't get copied into the folder.
Can you please suggest what to do next?
I think you are misunderstanding Openshift a bit.
The first thing you say is
To have separate set of config yml files ( image stream, buildconfig, deployconfig, service, routes), for each and every application seems to be very inefficient.
But that's how kubernetes/openshift works. If your resource files look the same, but only use a different git resource or image for example, then you probably are looking for Openshift Templates.
Instead i would like to have a single set of parameterized yml files to which i can pass on custom parameters to setup each individual application
Yep, I think Openshift Templates is what you are looking for. If you upload your template to the service catalog, whenever you have a new application to deploy, you can add some variables in a UI and click deploy.
An Openshift Template is just a parameterised file for all of your openshift resources (configmap, service, buildconfig, etc.).
If your application needs to be built from some git repo, using some credentials, you can parameterise those variables.
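As a rough sketch (the parameter and resource names below are only illustrative, reusing the ones from the question), a template wraps your existing yml files and declares the values you want to vary per application:
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: springboot-app-template
parameters:
  - name: param_app_name
    required: true
  - name: param_source_contextdir
    value: ""
objects:
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: "${param_app_name}-configmap"
    data:
      APPPATH: /app
  # ...the BuildConfig, DeploymentConfig, Service and Route go here, reusing ${param_app_name}
You would then instantiate it per application, e.g. oc process -f template.yml -p param_app_name=myapp | oc apply -f-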
But also take a look at Openshift's Source-to-Image solution (I'm not sure what version you are using, so you'll have to google some resources). It can build and deploy your application without you having to write your own Resource files.
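For example (a sketch only; the builder image and repository URL are placeholders), a single S2I command can build from source and create the deployment resources for you:
oc new-app registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift~https://github.com/myorg/myapp.git --name=myapp
oc expose svc/myapp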

Issue running helm command on a schedule

I am trying to delete temporary pods and other artifacts using helm delete, and I am trying to run this on a schedule. Here is my standalone command, which works:
helm delete --purge $(helm ls -a -q temppods.*)
However, if I try to run this on a schedule as below, I run into issues.
Here is what mycron.yaml looks like:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronbox
spec:
  serviceAccount: cron-z
  successfulJobsHistoryLimit: 1
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: cronbox
              image: alpine/helm:2.9.1
              args:
                - delete
                - --purge
                - $(helm ls -a -q temppods.*)
          restartPolicy: OnFailure
I ran
oc create -f ./mycron.yaml
This created the cronjob
Every 5 minutes a pod is created and the helm command that is part of the cron job runs.
I am expecting the artifacts/pods whose names begin with temppods* to be deleted.
What I get instead is:
Error: pods is forbidden: User "system:serviceacount:myproject:default" cannot list pods in the namespace "kube-system": no RBAC policy matched
I then created a service account cron-z and gave it edit access. I added this serviceAccount to my yaml, thinking that when my pod is created it will have the service account cron-z associated with it. Still no luck: cron-z is not getting associated with the pod that gets created every 5 minutes, and I still see default as the service account associated with the pod.
You'll need to have a service account for helm to use with tiller, as well as an actual tiller service account: https://github.com/helm/helm/blob/master/docs/rbac.md
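A rough sketch of that setup for Helm 2 (the namespace and role choices here are just one common example from the linked docs, not the only option):
# service account for tiller itself
oc create serviceaccount tiller -n kube-system
oc adm policy add-cluster-role-to-user cluster-admin -z tiller -n kube-system
helm init --service-account tiller --upgrade
# give the cronjob's service account rights to manage releases in its project
oc adm policy add-role-to-user edit -z cron-z -n myproject
Also note that for the pod to actually run as cron-z, the service account normally has to be set as serviceAccountName inside jobTemplate.spec.template.spec, not at the top level of the CronJob spec.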

Openshift - Environment variable getting evaluated to hostname

I want to pass an environment variable that should get evaluated to the hostname of the running container. This is what I am trying to do
oc new-app -e DASHBOARD_PROTOCOL=http -e ADMIN_PASSWORD=abc#123 -e KEYCLOAK_URL=http://keycloak.openidp.svc:8080 -e KEYCLOAK_REALM=master -e DASHBOARD_HOSTNAME=$HOSTNAME -e GF_INSTALL_PLUGINS=grafana-simple-json-datasource,michaeldmoore-annunciator-panel,briangann-gauge-panel,savantly-heatmap-panel,briangann-datatable-panel grafana/grafana:5.2.1
How do I ensure that DASHBOARD_HOSTNAME gets evaluated to the hostname of the running container?
To get the hostname value inside a pod you can use metadata.name through the Downward API.
For example:
env:
  - name: HOSTNAME
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.name
After creating the application, you could edit the deployment config (oc edit dc/<deployment_config>) or patch it to configure the DASHBOARD_HOSTNAME environment variable using the Downward API.
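For example, a sketch of patching the generated deployment config (assuming oc new-app named it grafana) to add the variable through the Downward API:
oc patch dc/grafana --type=json -p '[{"op":"add","path":"/spec/template/spec/containers/0/env/-","value":{"name":"DASHBOARD_HOSTNAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}}}]'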
This may be a personal preference but as much as oc new-app is convenient I'd rather work with (declarative) configuration files that are checked in and versioned in a code repo than with imperative commands.

How to delete or overwrite a secret in OpenShift?

I'm trying to create a secret on OpenShift v3.3.0 using:
oc create secret generic my-secret --from-file=application-cloud.properties=src/main/resources/application-cloud.properties -n my-project
Because I created the same secret earlier, I get this error message:
Error from server: secrets "my-secret" already exists
I looked at oc, oc create and oc create secret options and could not find an option to overwrite the secret when creating it.
I then tried to delete the existing secret with oc delete. All the commands listed below return either No resources found or a syntax error.
oc delete secrets -l my-secret -n my-project
oc delete secret -l my-secret -n my-project
oc delete secrets -l my-secret
oc delete secret -l my-secret
oc delete pods,secrets -l my-project
oc delete pods,secrets -l my-secret
oc delete secret generic -l my-secret
Do you know how to delete a secret or overwrite a secret upon creation using the OpenShift console or the command line?
"my-secret" is the name of the secret, so you should delete it like this:
oc delete secret my-secret
Add -n option if you are not using the project where the secret was created
oc delete secret my-secret -n <namespace>
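If you would rather overwrite the secret in one step instead of deleting it first, a common pattern is to render it with --dry-run and pipe it through oc apply. A sketch using the command from the question (on newer clients the flag is --dry-run=client):
oc create secret generic my-secret --from-file=application-cloud.properties=src/main/resources/application-cloud.properties -n my-project --dry-run -o yaml | oc apply -f -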
I hope by this time you may already have the answer; I am just sharing this in case it helps others.
As of today, here are the details of the CLI version and OpenShift version I am working with:
$ oc version
oc v3.6.173.0.5
kubernetes v1.6.1+5115d708d7
features: Basic-Auth
Server <SERVER-URL>
openshift v3.11.0+ec8630f-265
kubernetes v1.11.0+d4cacc0
Let's take a simple secret with a key-value pair, generated from a file; the advantage of generating it from a file will become clear below.
$ echo -n "password" | base64
cGFzc3dvcmQ=
Will create a secret with this value:
$ cat clientSecret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  clienttoken: cGFzc3dvcmQ=
$ oc apply -f clientSecret.yaml
secret "test-secret" created
Let's change the password and update it in the YAML file.
$ echo -n "change-password" | base64
Y2hhbmdlLXBhc3N3b3Jk
$ cat clientSecret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  clienttoken: Y2hhbmdlLXBhc3N3b3Jk
From the definition of the oc create command, it creates a resource and throws an error if the resource already exists. So this command does not fit for updating the configuration of a resource, in our case a secret.
$ oc create --help
Create a resource by filename or stdin
To make life easier, OpenShift provides the oc apply command to apply a configuration to a resource whenever there is a change. This command can also create the resource, which helps a lot during automated deployments.
$ oc apply --help
Apply a configuration to a resource by filename or stdin.
$ oc apply -f clientSecret.yaml
secret "test-secret" configured
When you check the secret in the UI, the new/updated password appears on the console.
Notice that the first apply resulted in created (secret "test-secret" created) and subsequent applies result in configured (secret "test-secret" configured).