Basically, I have 3 CRs: global, db2, and Mongo. To add/enable a service I currently edit all three manually ('oc edit global', 'oc edit db2', 'oc edit mongo'), add the service details, and it works.
But I need to give users a single command, such as a patch command. Below is the normal/manual process:
oc edit global
spec:
  Global:
    backup:
      enabled: true
      name: backup-name
      storageClassName: RWX
      size: 500Gi
=====
oc edit mongo
spec:
  statefulSet:
    spec:
      template:
        spec:
          containers:
          - name: "mongod"
            volumeMounts:
            - name: backup
              mountPath: '/opt/data/backup'
          volumes:
          - name: backup
            persistentVolumeClaim:
              claimName: backup-pvc
=============
oc edit db2
spec:
  storage:
  - claimName: backup-pvc
    name: backup
    spec:
      resources: {}
    type: existing
As for writing patch commands, I succeeded in writing JSON merge patch commands as below.
For Mongo:
oc patch MongoDBCommunity staging-mongodb --type=merge -p '{"spec":{"statefulSet":{"spec":{"template":{"spec":{"containers":[{"name":"mongod","volumeMounts":[{"name":"backup","mountPath":"/opt/data/backup"}]}],"volumes":[{"name":"backup","persistentVolumeClaim":{"claimName":"backup-pvc"}}]}}}}}}'
For db2:
oc patch db2ucluster staging-db2 --type=merge -p '{"spec":{"storage":[{"claimName":"backup-pvc","name":"backup","spec":{"resources":{}},"type":"existing"}]}}'
But the JSON merge patch above replaces/overrides the existing data, so it is not the desired method. I understand the better way is JSON Patch, which supports operations like add, replace, etc. I need help writing JSON Patch commands for the manual steps on the above 3 CRs.
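For reference, a minimal sketch of what JSON Patch (--type=json) commands could look like for these CRs. It assumes the target arrays (volumeMounts, volumes, storage) already exist and that mongod is the first container (index 0); the /- path appends an element to the existing array instead of replacing it, which is what the merge patch above does. The global CR's kind and name are not shown in the question, so they are placeholders here:

oc patch MongoDBCommunity staging-mongodb --type=json -p '[
  {"op": "add", "path": "/spec/statefulSet/spec/template/spec/containers/0/volumeMounts/-", "value": {"name": "backup", "mountPath": "/opt/data/backup"}},
  {"op": "add", "path": "/spec/statefulSet/spec/template/spec/volumes/-", "value": {"name": "backup", "persistentVolumeClaim": {"claimName": "backup-pvc"}}}
]'

oc patch db2ucluster staging-db2 --type=json -p '[
  {"op": "add", "path": "/spec/storage/-", "value": {"claimName": "backup-pvc", "name": "backup", "spec": {"resources": {}}, "type": "existing"}}
]'

oc patch <global-kind> <global-name> --type=json -p '[
  {"op": "add", "path": "/spec/Global/backup", "value": {"enabled": true, "name": "backup-name", "storageClassName": "RWX", "size": "500Gi"}}
]'

If a parent path does not exist yet, the corresponding add operation fails and the parent object has to be created first.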
Related
I am on OpenShift 4.6. I want to pass parameters to a YAML file, so I tried the command below, but it threw an error:
oc apply -f "./ETCD Backup/etcd_backup_cronjob.yaml" --param master-node = oc get nodes -o name | grep "master-0" | cut -d'/' -f2
Error: unknown flag --param
What you most likely want to use is OpenShift Templates. Using Templates you can have variables in your YAML files and then change them using oc process.
So your YAML would look like so:
kind: Template
apiVersion: v1
metadata:
  name: my-template
objects:
- apiVersion: batch/v1beta1
  kind: CronJob
  metadata:
    name: pi
  spec:
    schedule: "*/1 * * * *"
    concurrencyPolicy: "Replace"
    startingDeadlineSeconds: 200
    suspend: true
    successfulJobsHistoryLimit: 3
    failedJobsHistoryLimit: 1
    jobTemplate:
      spec:
        template:
          metadata:
            labels:
              parent: "cronjobpi"
          spec:
            containers:
            - name: pi
              image: perl
              command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(${{DIGITS}})"]
            restartPolicy: OnFailure
parameters:
- name: DIGITS
  displayName: Number of digits
  description: Digits to compute
  value: "200"
  required: true
Then you can use oc process like so:
oc process -f my-template.yml --param=DIGITS=300 | oc apply -f -
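Applied back to the original question, and assuming the etcd backup template declares a parameter for the node name (called MASTER_NODE here, a hypothetical name), the shell command substitution could be passed through oc process like this:
MASTER_NODE=$(oc get nodes -o name | grep "master-0" | cut -d'/' -f2)
oc process -f "./ETCD Backup/etcd_backup_cronjob.yaml" -p MASTER_NODE="$MASTER_NODE" | oc apply -f -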
I have a Kubernetes CronJob manifest file in which I've defined environment variables. I'm generating the YAML with a shell script, but when I apply it with kubectl create -f, I get the following validation error:
error validating "cron.yaml": error validating data: [ValidationError(CronJob.spec.jobTemplate.spec.template.spec.containers[0].envFrom[0].configMapRef): invalid type for io.k8s.api.core.v1.ConfigMapEnvSource: got "array", expected "map".
Can anyone suggest how to resolve this?
You have a mistake in the syntax.
There are two approaches: valueFrom for individual values, or envFrom for multiple values.
valueFrom is used inside the env attribute. It injects the value of a single key from the referenced ConfigMap.
spec:
  template:
    spec:
      containers:
      - name: ad-sync
        image: foo.azurecr.io/foobar/ad-sync
        command: ["dotnet", "AdSyncService.dll"]
        args: []
        env:
        - name: AdSyncService
          valueFrom:
            configMapKeyRef:
              name: ad-sync-service-configmap
              key: log_level
envFrom is used directly inside the container attribute. It injects all keys of the ConfigMap as environment variables.
spec:
  template:
    spec:
      containers:
      - name: ad-sync
        image: foo.azurecr.io/foobar/ad-sync
        command: ["dotnet", "AdSyncService.dll"]
        envFrom:
        - configMapRef:
            name: ad-sync-service-configmap
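For reference, the ConfigMap referenced in both snippets could look roughly like this (the log_level value is illustrative). Note that configMapRef itself must be a map with a name key, not a list item, which is what the "got array, expected map" error points at:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ad-sync-service-configmap
data:
  log_level: "Information"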
I want to pass parameters dynamically while patching a deployment config:
oc patch dc/action-msa -p "$(cat msa-patch.yml)" --param service_account=msa-service-account
spec:
  template:
    spec:
      serviceAccountName: ${service_account}
      restartPolicy: "Always"
      initContainers:
      - name: vault-init
        image: ${init_container_image}
        imagePullPolicy: Always
      containers:
      - name: ${SERVICE_NAME}-java-service
        image: ${main_container_image}
Is there any option or way to pass service_account, init_container_image and service_name dynamically while patching using OpenShift oc?
You would need a templating layer for this, such as Kustomize, Helm, etc.
Alternatively, you can use an environment file as a source before deploying your YAML files, something like below.
Your deployment.yaml looks like this:
spec:
  template:
    spec:
      serviceAccountName: {{service_account}}
      restartPolicy: "Always"
      initContainers:
      - name: vault-init
        image: {{init_container_image}}
        imagePullPolicy: Always
      containers:
      - name: {{SERVICE_NAME}}
        image: {{main_container_image}}
your env.file looks like this:
service_account="some_account"
init_container_image="some_image"
SERVICE_NAME="service_name"
Then run
oc patch dc/action-msa -p \
  "$(source env.file && cat deployment.yaml | \
  sed "s/{{service_account}}/$service_account/g" | \
  sed "s/{{init_container_image}}/$init_container_image/g" | \
  sed "s/{{SERVICE_NAME}}/$SERVICE_NAME/g")"
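A variant of the same idea, assuming the placeholders in deployment.yaml are written as ${service_account}-style variables instead of {{...}}, is to let envsubst (from gettext) do the substitution rather than chaining sed:
set -a; source env.file; set +a   # export the variables so envsubst can see them
oc patch dc/action-msa -p "$(envsubst < deployment.yaml)"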
Hope it helps
Unable to proceed with hawkular-metrics installation as hawkular_metrics_schema_job.yaml failed to find schema image.
Failed to pull image "docker.io/openshift/origin-metrics-schema-installer:v3.11.0": rpc error: code = Unknown desc = repository docker.io/openshift/origin-metrics-schema-installer not found: does not exist or no pull access
cat /tmp/openshift-metrics-ansible-ABoWRf/templates/hawkular_metrics_schema_job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: hawkular-metrics-schema
  labels:
    metrics-infra: hawkular-metrics
    name: hawkular-metrics-schema
spec:
  template:
    spec:
      version: v1
      metadata:
        labels:
          metrics-infra: hawkular-metrics
          #name: hawkular-metrics
      containers:
      - name: hawkular-metrics-schema
        image: docker.io/openshift/origin-metrics-schema-installer:v3.11.0
        imagePullPolicy: IfNotPresent
        env:
        - name: TRUSTSTORE_AUTHORITIES
          value: "/hawkular-metrics-certs/tls.truststore.crt"
        volumeMounts:
        - mountPath: /hawkular-metrics-certs
          name: hawkular-metrics-certs
        - mountPath: /hawkular-account
          name: hawkular-metrics-account
      volumes:
      - name: hawkular-metrics-certs
        secret:
          secretName: hawkular-metrics-certs
      - name: hawkular-metrics-account
        secret:
          secretName: hawkular-metrics-account
      restartPolicy: OnFailure
docker pull origin-metrics-schema-installer
Using default tag: latest
Trying to pull repository docker.io/library/origin-metrics-schema-installer ...
repository docker.io/origin-metrics-schema-installer not found: does not exist or no pull access
On one hand, if you are using OKD v3.10, the official Docker metrics images for 3.10 are tagged "v3.10.0-rc.0" (not "v3.10").
If you are using 3.11 they are tagged correctly: https://hub.docker.com/r/openshift/origin-metrics-hawkular-metrics/tags/
On the other hand, openshift/origin-metrics-schema-installer does not exist; someone built the image and published it at
https://hub.docker.com/r/alv91/origin-metrics-schema-installer/ (tagged as "v3.10").
So in your inventory file you should have for OKD v3.10:
openshift_metrics_install_metrics=True
openshift_metrics_cassandra_image=docker.io/openshift/origin-metrics-cassandra:v3.10.0-rc.0
openshift_metrics_hawkular_metrics_image=docker.io/openshift/origin-metrics-hawkular-metrics:v3.10.0-rc.0
openshift_metrics_heapster_image=docker.io/openshift/origin-metrics-heapster:v3.10.0-rc.0
openshift_metrics_schema_installer_image=docker.io/alv91/origin-metrics-schema-installer:v3.10
And for OKD 3.11:
openshift_metrics_install_metrics=True
openshift_metrics_schema_installer_image=docker.io/alv91/origin-metrics-schema-installer:v3.10
https://github.com/openshift/openshift-ansible/issues/9948
https://github.com/openshift/origin-metrics/issues/429
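After updating the inventory, the metrics playbook has to be re-run to pick up the new image variables; the path below assumes a standard openshift-ansible checkout:
ansible-playbook -i /path/to/inventory playbooks/openshift-metrics/config.yml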
I have two OpenShift environments that I publish to from a GitLab pipeline. Let's call the first one DEV; it runs OpenShift v3.7.1+c2ce2c0-1. The second one is INT and runs v3.6.1+008f2d5. Recently DEV got upgraded from 3.6.1 to 3.7.1, and after that I noticed a strange change in the behavior of redeployment triggers.
In short, what I see is that existing deployments in the DEV environment are triggered to redeploy with an "Image changed" message when an unchanged deployment config template is applied and while the Docker image also remains unchanged. This means that for example a MongoDB or Jenkins deployment recreates all containers and loses all data with every run of the CI pipeline.
Yes, there is theoretically the possibility to use persistent volumes, but the OpenShift installations are not under my control. The point here is that redeployments happen when neither the image nor the configuration changes.
The command that I am using from Gitlab is this:
oc process -f openshift-mongodb-ephemeral.yml -v MONGODB_DATABASE=mydb -v DATABASE_SERVICE_NAME=mongodb -l template=northbound-mongodb | oc apply -f -
Here is one of the deployment config templates:
apiVersion: v1
kind: Template
labels:
  template: mongodb-ephemeral-template
metadata:
  annotations:
    description: |-
      MongoDB database service, without persistent storage. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/mongodb-container/blob/master/3.2/README.md.
      WARNING: Any data stored will be lost upon pod destruction. Only use this template for testing
    iconClass: icon-mongodb
    openshift.io/display-name: MongoDB (Ephemeral)
    tags: database,mongodb
  creationTimestamp: 2017-03-14T11:25:13Z
  name: mongodb-ephemeral
  resourceVersion: "483"
  selfLink: /oapi/v1/namespaces/openshift/templates/mongodb-ephemeral
  uid: e41b7f8e-08a8-11e7-9120-000d3a266151
objects:
- apiVersion: v1
  kind: Service
  metadata:
    creationTimestamp: null
    name: ${DATABASE_SERVICE_NAME}
  spec:
    ports:
    - name: mongo
      nodePort: 0
      port: 27017
      protocol: TCP
      targetPort: 27017
    selector:
      name: ${DATABASE_SERVICE_NAME}
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    creationTimestamp: null
    name: ${DATABASE_SERVICE_NAME}
  spec:
    replicas: 1
    selector:
      name: ${DATABASE_SERVICE_NAME}
    strategy:
      type: Recreate
    template:
      metadata:
        creationTimestamp: null
        labels:
          name: ${DATABASE_SERVICE_NAME}
      spec:
        containers:
        - capabilities: {}
          env:
          - name: MONGODB_USER
            value: ${MONGODB_USER}
          - name: MONGODB_PASSWORD
            valueFrom:
              secretKeyRef:
                key: database-password
                name: myproject-secrets
          - name: MONGODB_ADMIN_PASSWORD
            valueFrom:
              secretKeyRef:
                key: database-admin-password
                name: myproject-secrets
          - name: MONGODB_DATABASE
            value: ${MONGODB_DATABASE}
          image: ' '
          imagePullPolicy: IfNotPresent
          livenessProbe:
            initialDelaySeconds: 30
            tcpSocket:
              port: 27017
            timeoutSeconds: 1
          name: mongodb
          ports:
          - containerPort: 27017
            protocol: TCP
          readinessProbe:
            exec:
              command:
              - /bin/sh
              - -i
              - -c
              - mongo 127.0.0.1:27017/$MONGODB_DATABASE -u $MONGODB_USER -p $MONGODB_PASSWORD
                --eval="quit()"
            initialDelaySeconds: 3
            timeoutSeconds: 1
          resources:
            limits:
              memory: ${MEMORY_LIMIT}
          securityContext:
            capabilities: {}
            privileged: false
          terminationMessagePath: /dev/termination-log
          volumeMounts:
          - mountPath: /var/lib/mongodb/data
            name: ${DATABASE_SERVICE_NAME}-data
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        volumes:
        - emptyDir:
            medium: ""
          name: ${DATABASE_SERVICE_NAME}-data
    triggers:
    - imageChangeParams:
        automatic: true
        containerNames:
        - mongodb
        from:
          kind: ImageStreamTag
          name: mongodb:${MONGODB_VERSION}
          namespace: ${NAMESPACE}
        lastTriggeredImage: ""
      type: ImageChange
    - type: ConfigChange
  status: {}
parameters:
- description: Maximum amount of memory the container can use.
  displayName: Memory Limit
  name: MEMORY_LIMIT
  required: true
  value: 512Mi
- description: The OpenShift Namespace where the ImageStream resides.
  displayName: Namespace
  name: NAMESPACE
  value: openshift
- description: The name of the OpenShift Service exposed for the database.
  displayName: Database Service Name
  name: DATABASE_SERVICE_NAME
  required: true
  value: mongodb
- description: Name of the MongoDB database accessed.
  displayName: MongoDB Database Name
  name: MONGODB_DATABASE
  required: true
  value: sampledb
- description: Version of MongoDB image to be used (2.4, 2.6, 3.2 or latest).
  displayName: Version of MongoDB Image
  name: MONGODB_VERSION
  required: true
  value: "3.2"
- description: Name of user to access MongoDB.
  displayName: MongoDB user
  name: MONGODB_USER
  required: true
  value: "mongouser"
The oc process and oc apply commands get executed with each execution of the Gitlab CI pipeline. The pipeline triggers whenever someone merges into e.g. the develop branch. I would like to keep this behavior because this guarantees if someone changes the configuration of MongoDB, Jenkins, etc. those get updated and redeployed automatically (in which case a loss of data is acceptable).
Does anybody know what change in OpenShift could have prompted this change in behavior, and how to get the old behavior back?
Credit goes to Graham Dumpleton, who answered my question, albeit through a comment. Since I did not hear back from him, I am leaving this answer myself for completeness' sake.
After removing the entries
creationTimestamp
lastTriggeredImage
status
image
from the deployment config template, the unwanted redeployments stopped. I do not know which entry specifically started triggering the behavior in OpenShift 3.7.1, but in any case, these four entries are information generated during an OpenShift deployment process and should not be in a template to begin with.
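For illustration, after that cleanup the ImageChange trigger from the template above ends up looking roughly like this (creationTimestamp, status and the container's image: ' ' line are likewise simply dropped from their respective places):
triggers:
- imageChangeParams:
    automatic: true
    containerNames:
    - mongodb
    from:
      kind: ImageStreamTag
      name: mongodb:${MONGODB_VERSION}
      namespace: ${NAMESPACE}
  type: ImageChange
- type: ConfigChange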