I want to pass parameters dynamically while patching a deployment config:
oc patch dc/action-msa -p "$(cat msa-patch.yml)" --param service_account=msa-service-account
spec:
  template:
    spec:
      serviceAccountName: ${service_account}
      restartPolicy: "Always"
      initContainers:
      - name: vault-init
        image: ${init_container_image}
        imagePullPolicy: Always
      containers:
      - name: ${SERVICE_NAME}-java-service
        image: ${main_container_image}
Is there any option or way to pass service_account, init_container_image and service_name dynamically while patching with the OpenShift oc client?
You would need a templating layer for this, such as Kustomize or Helm.
Alternatively, you can use an environment file as a source before applying your YAML, something like the example below.
Your deployment.yml would look like this:
spec:
  template:
    spec:
      serviceAccountName: {{service_account}}
      restartPolicy: "Always"
      initContainers:
      - name: vault-init
        image: {{init_container_image}}
        imagePullPolicy: Always
      containers:
      - name: {{SERVICE_NAME}}
        image: {{main_container_image}}
Your env.file looks like this:
service_account="some_account"
init_container_image="some_image"
SERVICE_NAME="service_name"
Then run
oc patch dc/action-msa -p \
  "$(source env.file && cat deployment.yml | \
  sed "s|{{service_account}}|$service_account|g" | \
  sed "s|{{init_container_image}}|$init_container_image|g" | \
  sed "s|{{SERVICE_NAME}}|$SERVICE_NAME|g")"
Hope it helps
Basically, I have 3 CRs: global, db2 and mongo. I have to edit these 3 CRs manually to add/enable a service (oc edit global, db2 and mongo), then add the service details, and it works.
But I need to give users a single command instead, such as a patch command. Below is the normal, manual process:
oc edit global
spec:
  Global:
    backup:
      enabled: true
      name: backup-name
      storageClassName: RWX
      size: 500Gi
=====
oc edit mongo
spec:
  statefulSet:
    spec:
      template:
        spec:
          containers:
          - name: "mongod"
            volumeMounts:
            - name: backup
              mountPath: '/opt/data/backup'
          volumes:
          - name: backup
            persistentVolumeClaim:
              claimName: backup-pvc
=============
oc edit db2
spec:
  storage:
  - claimName: backup-pvc
    name: backup
    spec:
      resources: {}
    type: existing
As for writing patch commands, I managed to write JSON merge patch commands as below:
For Mongo:
oc patch MongoDBCommunity staging-mongodb --type=merge -p '{"spec":{"statefulSet":{"spec":{"template":{"spec":{"containers":[{"name":"mongod","volumeMounts":[{"name":"backup","mountPath":"/opt/data/backup"}]}],"volumes":[{"name":"backup-","persistentVolumeClaim":{"claimName":"backup-pvc"}}]}}}}}}'
For db2:
oc patch db2ucluster staging-db2 --type=merge -p '{"spec":{"storage":[{"claimName":"backup-pvc","name":"backup","spec":{"resources":{}},"type":"existing"}]}}'
But the JSON merge patch above replaces/overrides the existing data, so it is not the desired method. I see the desired way is JSON Patch, which supports operations like add, replace, etc. I need help writing JSON Patch commands for the manual steps on the above 3 CRs.
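A hedged sketch of what JSON Patch (--type=json) commands could look like here, assuming the target arrays (volumeMounts, volumes, storage) already exist on the CRs; a trailing /- in a path appends to an array and fails if the parent array is missing, and containers/0 assumes mongod is the first container:

# For mongo: append the mount and the volume instead of replacing the arrays
oc patch MongoDBCommunity staging-mongodb --type=json -p '[
  {"op":"add","path":"/spec/statefulSet/spec/template/spec/containers/0/volumeMounts/-","value":{"name":"backup","mountPath":"/opt/data/backup"}},
  {"op":"add","path":"/spec/statefulSet/spec/template/spec/volumes/-","value":{"name":"backup","persistentVolumeClaim":{"claimName":"backup-pvc"}}}
]'

# For db2: append the storage entry
oc patch db2ucluster staging-db2 --type=json -p '[
  {"op":"add","path":"/spec/storage/-","value":{"claimName":"backup-pvc","name":"backup","spec":{"resources":{}},"type":"existing"}}
]'

For the global CR, the merge patch approach should already be safe, since it only adds nested map keys rather than list entries.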
I am on OpenShift 4.6. I want to pass a parameter to a YAML file, so I tried the command below, but it threw an error:
oc apply -f "./ETCD Backup/etcd_backup_cronjob.yaml" --param master-node = oc get nodes -o name | grep "master-0" | cut -d'/' -f2
Error: unknown flag --param
What you most likely want to use is OpenShift Templates. Using Templates you can have variables in your YAML files and then change them using oc process.
So your YAML would look like so:
kind: Template
apiVersion: v1
metadata:
  name: my-template
objects:
- apiVersion: batch/v1beta1
  kind: CronJob
  metadata:
    name: pi
  spec:
    schedule: "*/1 * * * *"
    concurrencyPolicy: "Replace"
    startingDeadlineSeconds: 200
    suspend: true
    successfulJobsHistoryLimit: 3
    failedJobsHistoryLimit: 1
    jobTemplate:
      spec:
        template:
          metadata:
            labels:
              parent: "cronjobpi"
          spec:
            containers:
            - name: pi
              image: perl
              command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(${{DIGITS}})"]
            restartPolicy: OnFailure
parameters:
- name: DIGITS
  displayName: Number of digits
  description: Digits to compute
  value: "200"
  required: true
Then you can use oc process like so:
oc process -f my-template.yml --param=DIGITS=300 | oc apply -f -
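Applied to the question's cron job, the node name from the oc get nodes pipeline can then be passed as a parameter, once the CronJob file has been wrapped in a Template; MASTER_NODE here is a hypothetical parameter name that would have to be declared in the template's parameters section:

oc process -f "./ETCD Backup/etcd_backup_cronjob.yaml" \
  --param=MASTER_NODE="$(oc get nodes -o name | grep "master-0" | cut -d'/' -f2)" \
  | oc apply -f -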
I have a Kubernetes CronJob manifest file. In that file I've defined environment variables. I'm generating the YAML with a shell script, but when creating it with kubectl create -f I get the following validation error:
error validating "cron.yaml": error validating data: [ValidationError(CronJob.spec.jobTemplate.spec.template.spec.containers[0].envFrom[0].configMapRef): invalid type for io.k8s.api.core.v1.ConfigMapEnvSource: got "array", expected "map".
Can anyone suggest me how to resolve this?
You have a mistake in the syntax.
There are two approaches, using valueFrom for individual values or envFrom for multiple values.
valueFrom is used inside the env attribute. valueFrom will inject the value of a key from the referenced ConfigMap.
spec:
  template:
    spec:
      containers:
      - name: ad-sync
        image: foo.azurecr.io/foobar/ad-sync
        command: ["dotnet", "AdSyncService.dll"]
        args: []
        env:
        - name: AdSyncService
          valueFrom:
            configMapKeyRef:
              name: ad-sync-service-configmap
              key: log_level
envFrom is used directly inside the containers attribute. envFrom will inject all ConfigMap keys as environment variables.
spec:
  template:
    spec:
      containers:
      - name: ad-sync
        image: foo.azurecr.io/foobar/ad-sync
        command: ["dotnet", "AdSyncService.dll"]
        envFrom:
        - configMapRef:
            name: ad-sync-service-configmap
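For completeness, a minimal sketch of the ConfigMap both snippets reference; the key name log_level matches the valueFrom example above, and the value is only an assumption:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ad-sync-service-configmap
data:
  log_level: "Information"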
I have two Openshift environments that I publish to from a Gitlab pipeline. Let's call the first one DEV and it runs Openshift v3.7.1+c2ce2c0-1. The second one is INT and runs v3.6.1+008f2d5. Recently DEV got upgraded from 3.6.1 to 3.7.1 and after that I noticed a strange change in the behavior of redeployment triggers.
In short, what I see is that existing deployments in the DEV environment are triggered to redeploy with an "Image changed" message when an unchanged deployment config template is applied and while the Docker image also remains unchanged. This means that for example a MongoDB or Jenkins deployment recreates all containers and loses all data with every run of the CI pipeline.
Yes, theoretically there is the possibility to use persistent volumes, but the OpenShift installations are not under my control. The point here is that redeployments happen when neither the image nor the configuration changes.
The command that I am using from Gitlab is this:
oc process -f openshift-mongodb-ephemeral.yml -v MONGODB_DATABASE=mydb -v DATABASE_SERVICE_NAME=mongodb -l template=northbound-mongodb | oc apply -f -
Here is one of the deployment config templates:
apiVersion: v1
kind: Template
labels:
  template: mongodb-ephemeral-template
metadata:
  annotations:
    description: |-
      MongoDB database service, without persistent storage. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/mongodb-container/blob/master/3.2/README.md.
      WARNING: Any data stored will be lost upon pod destruction. Only use this template for testing
    iconClass: icon-mongodb
    openshift.io/display-name: MongoDB (Ephemeral)
    tags: database,mongodb
  creationTimestamp: 2017-03-14T11:25:13Z
  name: mongodb-ephemeral
  resourceVersion: "483"
  selfLink: /oapi/v1/namespaces/openshift/templates/mongodb-ephemeral
  uid: e41b7f8e-08a8-11e7-9120-000d3a266151
objects:
- apiVersion: v1
  kind: Service
  metadata:
    creationTimestamp: null
    name: ${DATABASE_SERVICE_NAME}
  spec:
    ports:
    - name: mongo
      nodePort: 0
      port: 27017
      protocol: TCP
      targetPort: 27017
    selector:
      name: ${DATABASE_SERVICE_NAME}
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    creationTimestamp: null
    name: ${DATABASE_SERVICE_NAME}
  spec:
    replicas: 1
    selector:
      name: ${DATABASE_SERVICE_NAME}
    strategy:
      type: Recreate
    template:
      metadata:
        creationTimestamp: null
        labels:
          name: ${DATABASE_SERVICE_NAME}
      spec:
        containers:
        - capabilities: {}
          env:
          - name: MONGODB_USER
            value: ${MONGODB_USER}
          - name: MONGODB_PASSWORD
            valueFrom:
              secretKeyRef:
                key: database-password
                name: myproject-secrets
          - name: MONGODB_ADMIN_PASSWORD
            valueFrom:
              secretKeyRef:
                key: database-admin-password
                name: myproject-secrets
          - name: MONGODB_DATABASE
            value: ${MONGODB_DATABASE}
          image: ' '
          imagePullPolicy: IfNotPresent
          livenessProbe:
            initialDelaySeconds: 30
            tcpSocket:
              port: 27017
            timeoutSeconds: 1
          name: mongodb
          ports:
          - containerPort: 27017
            protocol: TCP
          readinessProbe:
            exec:
              command:
              - /bin/sh
              - -i
              - -c
              - mongo 127.0.0.1:27017/$MONGODB_DATABASE -u $MONGODB_USER -p $MONGODB_PASSWORD
                --eval="quit()"
            initialDelaySeconds: 3
            timeoutSeconds: 1
          resources:
            limits:
              memory: ${MEMORY_LIMIT}
          securityContext:
            capabilities: {}
            privileged: false
          terminationMessagePath: /dev/termination-log
          volumeMounts:
          - mountPath: /var/lib/mongodb/data
            name: ${DATABASE_SERVICE_NAME}-data
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        volumes:
        - emptyDir:
            medium: ""
          name: ${DATABASE_SERVICE_NAME}-data
    triggers:
    - imageChangeParams:
        automatic: true
        containerNames:
        - mongodb
        from:
          kind: ImageStreamTag
          name: mongodb:${MONGODB_VERSION}
          namespace: ${NAMESPACE}
        lastTriggeredImage: ""
      type: ImageChange
    - type: ConfigChange
  status: {}
parameters:
- description: Maximum amount of memory the container can use.
  displayName: Memory Limit
  name: MEMORY_LIMIT
  required: true
  value: 512Mi
- description: The OpenShift Namespace where the ImageStream resides.
  displayName: Namespace
  name: NAMESPACE
  value: openshift
- description: The name of the OpenShift Service exposed for the database.
  displayName: Database Service Name
  name: DATABASE_SERVICE_NAME
  required: true
  value: mongodb
- description: Name of the MongoDB database accessed.
  displayName: MongoDB Database Name
  name: MONGODB_DATABASE
  required: true
  value: sampledb
- description: Version of MongoDB image to be used (2.4, 2.6, 3.2 or latest).
  displayName: Version of MongoDB Image
  name: MONGODB_VERSION
  required: true
  value: "3.2"
- description: Name of user to access MongoDB.
  displayName: MongoDB user
  name: MONGODB_USER
  required: true
  value: "mongouser"
The oc process and oc apply commands get executed with each execution of the Gitlab CI pipeline. The pipeline triggers whenever someone merges into e.g. the develop branch. I would like to keep this behavior because this guarantees if someone changes the configuration of MongoDB, Jenkins, etc. those get updated and redeployed automatically (in which case a loss of data is acceptable).
Does anybody know what change in OpenShift could have prompted this change in behavior and how to restore the old behavior?
Credits go to Graham Dumpleton who answered my question, albeit through a comment. Since I did not hear back from him I am leaving this answer myself for completeness' sake.
After removing the entries
creationTimestamp
lastTriggeredImage
status
image
from the deployment config template, the unwanted redeployments stopped. I do not know which entry specifically started triggering the behavior in Openshift 3.7.1 but in any case, these four entries are information generated during an Openshift deployment process and should not be in a template to begin with.
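For illustration, this is roughly what the trigger section of the template above looks like once lastTriggeredImage is removed; the creationTimestamp, status and placeholder image: ' ' entries are dropped from their respective places in the same way:

triggers:
- type: ImageChange
  imageChangeParams:
    automatic: true
    containerNames:
    - mongodb
    from:
      kind: ImageStreamTag
      name: mongodb:${MONGODB_VERSION}
      namespace: ${NAMESPACE}
- type: ConfigChange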
This is how to run a simple batch command in a Kubernetes YAML file (helloworld.yaml):
...
image: "ubuntu:14.04"
command: ["/bin/echo", "hello", "world"]
...
In Kubernetes I can deploy that like this:
$ kubectl create -f helloworld.yaml
Suppose I have a batch script like this (script.sh):
#!/bin/bash
echo "Please wait....";
sleep 5
Is there a way to include script.sh in kubectl create -f so it can run the script? Suppose helloworld.yaml is now edited like this:
...
image: "ubuntu:14.04"
command: ["/bin/bash", "./script.sh"]
...
I'm using this approach in OpenShift, so it should be applicable in Kubernetes as well.
Try to put your script into a configmap key/value, mount this configmap as a volume and run the script from the volume.
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-world-job
spec:
  parallelism: 1
  completions: 1
  template:
    metadata:
      name: hello-world-job
    spec:
      volumes:
      - name: hello-world-scripts-volume
        configMap:
          name: hello-world-scripts
      containers:
      - name: hello-world-job
        image: alpine
        volumeMounts:
        - mountPath: /hello-world-scripts
          name: hello-world-scripts-volume
        env:
        - name: HOME
          value: /tmp
        command:
        - /bin/sh
        - -c
        - |
          echo "scripts in /hello-world-scripts"
          ls -lh /hello-world-scripts
          echo "copy scripts to /tmp"
          cp /hello-world-scripts/*.sh /tmp
          echo "apply 'chmod +x' to /tmp/*.sh"
          chmod +x /tmp/*.sh
          echo "execute script-one.sh now"
          /tmp/script-one.sh
      restartPolicy: Never
---
apiVersion: v1
items:
- apiVersion: v1
  data:
    script-one.sh: |
      echo "script-one.sh"
      date
      sleep 1
      echo "run /tmp/script-2.sh now"
      /tmp/script-2.sh
    script-2.sh: |
      echo "script-2.sh"
      sleep 1
      date
  kind: ConfigMap
  metadata:
    creationTimestamp: null
    name: hello-world-scripts
kind: List
metadata: {}
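A quick usage sketch, assuming the two manifests above are saved together as hello-world-job.yaml:

kubectl create -f hello-world-job.yaml
kubectl logs job/hello-world-job    # shows the output of script-one.sh and script-2.sh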
As explained here, you could also use the defaultMode: 0777 property; an example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-script
data:
  test.sh: |
    echo "test1"
    ls
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      volumes:
      - name: test-script
        configMap:
          name: test-script
          defaultMode: 0777
      containers:
      - command:
        - sleep
        - infinity
        image: ubuntu
        name: locust
        volumeMounts:
        - mountPath: /test-script
          name: test-script
You can then open a shell in the container and execute the script /test-script/test.sh.
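For example (a sketch, assuming the Deployment above is named test and its pod is running):

kubectl exec -it deploy/test -- /test-script/test.sh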