I am on OpenShift 4.6. I want to pass parameters to a YAML file, so I tried the command below, but it threw an error:
oc apply -f "./ETCD Backup/etcd_backup_cronjob.yaml" --param master-node = oc get nodes -o name | grep "master-0" | cut -d'/' -f2
Error: unknown flag --param
What you most likely want to use is OpenShift Templates. Using Templates, you can have parameters in your YAML files and then set them with oc process.
So your YAML would look like this:
kind: Template
apiVersion: template.openshift.io/v1
metadata:
name: my-template
objects:
- apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: pi
spec:
schedule: "*/1 * * * *"
concurrencyPolicy: "Replace"
startingDeadlineSeconds: 200
suspend: true
successfulJobsHistoryLimit: 3
failedJobsHistoryLimit: 1
jobTemplate:
spec:
template:
metadata:
labels:
parent: "cronjobpi"
spec:
containers:
- name: pi
image: perl
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(${{DIGITS}})"]
restartPolicy: OnFailure
parameters:
- name: DIGITS
displayName: Number of digits
description: Digits to compute
value: "200"
required: true
Then you can use oc process like so:
oc process -f my-template.yml --param=DIGITS=300 | oc apply -f -
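Applying the same idea to the original question: resolve the master node name first, then pass it as a template parameter. This is only a sketch; it assumes your etcd_backup_cronjob.yaml has been converted into a Template that exposes a MASTER_NODE parameter (a made-up name):
# MASTER_NODE is an assumed parameter name in your converted template.
MASTER_NODE=$(oc get nodes -o name | grep "master-0" | cut -d'/' -f2)
oc process -f "./ETCD Backup/etcd_backup_cronjob.yaml" --param=MASTER_NODE="$MASTER_NODE" | oc apply -f -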
I installed the system upgrade controller and applied this plan manifest:
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
name: master-plan
namespace: system-upgrade
spec:
concurrency: 1
cordon: true
nodeSelector:
matchExpressions:
- key: k3s-master-upgrade
operator: In
values:
- "true"
serviceAccountName: system-upgrade
upgrade:
image: rancher/k3s-upgrade
channel: https://update.k3s.io/v1-release/channels/stable
---
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
name: worker-plan
namespace: system-upgrade
spec:
concurrency: 1
cordon: true
nodeSelector:
matchExpressions:
- key: k3s-worker-upgrade
operator: In
values:
- "true"
prepare:
args:
- prepare
- master-plan
image: rancher/k3s-upgrade
serviceAccountName: system-upgrade
upgrade:
image: rancher/k3s-upgrade
channel: https://update.k3s.io/v1-release/channels/stable
I applied the labels and checked them:
$ kubectl label node crux k3s-worker-upgrade=true
$ kubectl describe nodes crux | grep k3s-worker-upgrade
k3s-worker-upgrade=true
$ kubectl label node nemo k3s-master-upgrade=true
$ kubectl describe nodes nemo | grep k3s-master-upgrade
k3s-master-upgrade=true
According to kubectl get nodes I'm still on v1.23.6+k3s1, but the stable channel is on v1.24.4+k3s1.
I get the following errors:
$ kubectl -n system-upgrade logs deployment.apps/system-upgrade-controller
time="2022-09-12T11:29:31Z" level=error msg="error syncing 'system-upgrade/apply-worker-plan-on-crux-with-4190e4adda3866e909fc7735c1-f0dff': handler system-upgrade-controller: jobs.batch \"apply-worker-plan-on-crux-with-4190e4adda3866e909fc7735c1-f0dff\" not found, requeuing"
time="2022-09-12T11:30:35Z" level=error msg="error syncing 'system-upgrade/apply-master-plan-on-nemo-with-4190e4adda3866e909fc7735c1-9cf4f': handler system-upgrade-controller: jobs.batch \"apply-master-plan-on-nemo-with-4190e4adda3866e909fc7735c1-9cf4f\" not found, requeuing"
$ kubectl -n system-upgrade get jobs -o yaml
- apiVersion: batch/v1
kind: Job
metadata:
labels:
upgrade.cattle.io/controller: system-upgrade-controller
upgrade.cattle.io/node: crux
upgrade.cattle.io/plan: worker-plan
upgrade.cattle.io/version: v1.24.4-k3s1
status:
conditions:
- lastProbeTime: "2022-09-12T12:14:31Z"
lastTransitionTime: "2022-09-12T12:14:31Z"
message: Job was active longer than specified deadline
reason: DeadlineExceeded
status: "True"
type: Failed
failed: 1
startTime: "2022-09-12T11:59:31Z"
uncountedTerminatedPods: {}
I had the same problem.
I have only managed to upgrade k3s by doing a manual upgrade with the binaries.
On all nodes:
wget https://github.com/k3s-io/k3s/releases/download/v1.24.4%2Bk3s1/k3s
On the server (master):
systemctl stop k3s
cp ./k3s /usr/local/bin/
systemctl start k3s
On the agents (workers):
systemctl stop k3s-agent
cp ./k3s /usr/local/bin/
systemctl start k3s-agent
It seems much faster and easier than struggling with the automated upgrade: no drains, no cordons...
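For reference, the same steps as a single per-node script. This is a minimal sketch; it assumes the default unit names (k3s on the server, k3s-agent on the agents) and that the binary lives in /usr/local/bin:
#!/bin/bash
# Usage: ./upgrade-k3s.sh [k3s|k3s-agent]   (defaults to k3s, i.e. the server)
SERVICE="${1:-k3s}"
# %2B is the URL-encoded '+' in the release tag v1.24.4+k3s1.
wget -O k3s https://github.com/k3s-io/k3s/releases/download/v1.24.4%2Bk3s1/k3s
chmod +x k3s
systemctl stop "$SERVICE"
cp ./k3s /usr/local/bin/
systemctl start "$SERVICE"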
I am trying to substitute all occurrences of a placeholder in a YAML file with yq.
File:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: <SOME_NAME>
name: <SOME_NAME>
namespace: default
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: <SOME_NAME>
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
labels:
app: <SOME_NAME>
spec:
serviceAccountName: api
containers:
- image: some-docker-repo/<SOME_NAME>:latest
Right now I am using a command like this:
yq e '
.metadata.labels.app = "the-name-to-use" |
.metadata.name = "the-name-to-use" |
.spec.selector.matchLabels.app = "the-name-to-use" |
.spec.template.metadata.labels.app = "the-name-to-use" |
.spec.template.spec.containers[0].image |= sub("<SOME_NAME>", "the-name-to-use")
' template.yaml > result.yaml
But I am sure it can be done as a one-liner. I tried using different variations of
yq e '.[] |= sub("<SOME_NAME>", "the-name-to-use")' template.yaml > result.yaml
but I am getting an error like:
Error: cannot substitute with !!map, can only substitute strings. Hint: Most often you'll want to use '|=' over '=' for this operation.
Can you please suggest where I might have missed the point?
As an extra request, what would it look like if there were two substitutions in the template file?
e.g. <SOME_NAME_1> and <SOME_NAME_2> that need to be substituted with some_var_1 and some_var_2, respectively.
The trick is to use '..' to match all the nodes, then filter to include only strings:
yq e ' (.. | select(tag == "!!str")) |= sub("<SOME_NAME>", "the-name-to-use")' template.yaml
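For the two-placeholder follow-up in the question, you can chain two sub calls inside the same string-only update; a sketch using the placeholder and variable names from the question:
# Chain two sub() calls; only string nodes are touched.
yq e '(.. | select(tag == "!!str")) |= (sub("<SOME_NAME_1>", "some_var_1") | sub("<SOME_NAME_2>", "some_var_2"))' template.yaml > result.yaml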
Disclosure: I wrote yq
I want to pass parameters dynamically while patching a DeploymentConfig:
oc patch dc/action-msa -p "$(cat msa-patch.yml)" --param service_account=msa-service-account
spec:
template:
spec:
serviceAccountName: ${service_account}
restartPolicy: "Always"
initContainers:
- name: vault-init
image: ${init_container_image}
imagePullPolicy: Always
containers:
- name: ${SERVICE_NAME}-java-service
image: ${main_container_image}
Is there any option or way to pass service_account, init_container_image and SERVICE_NAME dynamically while patching with the OpenShift oc client?
You would need a templating layer for this, such as Kustomize, Helm, etc.
Alternatively, you can use an environment file as a source before deploying your YAML files, something like below.
Your deployment.yaml looks like this:
spec:
template:
spec:
serviceAccountName: {{service_account}}
restartPolicy: "Always"
initContainers:
- name: vault-init
image: {{init_container_image}}
imagePullPolicy: Always
containers:
- name: {{SERVICE_NAME}}
image: {{main_container_image}}
Your env.file looks like this:
service_account="some_account"
init_container_image="some_image"
SERVICE_NAME="service_name"
Then run
oc patch dc/action-msa -p \
"$(source env.file && cat deployment.yaml | \
sed "s/{{service_account}}/$service_account/g" | \
sed "s/{{init_container_image}}/$init_container_image/g" | \
sed "s/{{SERVICE_NAME}}/$SERVICE_NAME/g")"
Hope it helps
I have a relatively simple BuildConfig for a service.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
labels:
app: identity-server
name: identity-server
namespace: dev
spec:
failedBuildsHistoryLimit: 5
nodeSelector: {}
output:
to:
kind: ImageStreamTag
name: 'identity-server:latest'
postCommit: {}
resources: {}
runPolicy: Serial
source:
git:
ref: master
uri: 'ssh://git@bitbucket.example.com:7999/my-project/my-repo.git'
sourceSecret:
name: bitbucket
type: Git
strategy:
dockerStrategy:
dockerfilePath: Dockerfile.openshift
from:
kind: ImageStreamTag
name: 'dotnet:2.1-sdk'
type: Docker
successfulBuildsHistoryLimit: 5
triggers:
- type: ConfigChange
- imageChange:
type: ImageChange
I can use it to build an image from a tag in git like so:
$ oc start-build identity-server -n dev --commit 0.0.1
build "identity-server-22" started
This works well enough and the image is built from the commit as tagged.
However, in the last 2 steps of the build I see
Step 14/15 : ENV "OPENSHIFT_BUILD_NAME" "identity-server-25" "OPENSHIFT_BUILD_NAMESPACE" "dev" "OPENSHIFT_BUILD_SOURCE" "ssh://git@bitbucket.example.com:7999/my-project/my-repo.git" "OPENSHIFT_BUILD_REFERENCE" "master" "OPENSHIFT_BUILD_COMMIT" "6fa1c07b2e4f70bfff5ed1614907a1d00597fb51"
---> Running in 6d1e8a67976b
---> b7a869b8c133
Removing intermediate container 6d1e8a67976b
Step 15/15 : LABEL "io.openshift.build.name" "identity-server-25" "io.openshift.build.namespace" "dev" "io.openshift.build.commit.author" "Everett Toews \u003cEverett.Toews@example.com\u003e" "io.openshift.build.commit.date" "Sat Aug 4 16:46:48 2018 +1200" "io.openshift.build.commit.id" "6fa1c07b2e4f70bfff5ed1614907a1d00597fb51" "io.openshift.build.commit.ref" "master" "io.openshift.build.commit.message" "Try to build master branch" "io.openshift.build.source-location" "ssh://git@bitbucket.example.com:7999/my-project/my-repo.git"
---> Running in 3e8d8200f2cf
---> 1a591207a29f
Removing intermediate container 3e8d8200f2cf
OPENSHIFT_BUILD_REFERENCE and io.openshift.build.commit.ref are set to master, as per spec.source.git.ref in the BuildConfig. I was hoping those would be set to 0.0.1, as overridden by the --commit 0.0.1 flag passed to oc start-build.
Is there a way to dynamically set OPENSHIFT_BUILD_REFERENCE and io.openshift.build.commit.ref in OpenShift?
Or would I need to template the BuildConfig and replace the spec.source.git.ref with a parameter to get the desired behaviour?
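For what it's worth, the templating route from the last sentence would look roughly like this; a sketch where the template file name and the GIT_REF parameter are assumptions, with the Template wrapping the BuildConfig above and setting spec.source.git.ref to ${GIT_REF}:
# Render the BuildConfig with the desired ref, then start a build from it.
oc process -f identity-server-template.yml -p GIT_REF=0.0.1 -n dev | oc apply -f -
oc start-build identity-server -n dev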
This is how to run a simple batch command in a Kubernetes YAML file (helloworld.yaml):
...
image: "ubuntu:14.04"
command: ["/bin/echo", "hello", "world"]
...
In Kubernetes I can deploy that like this:
$ kubectl create -f helloworld.yaml
Suppose I have a batch script like this (script.sh):
#!/bin/bash
echo "Please wait....";
sleep 5
Is there a way to include script.sh in kubectl create -f so that it can run the script? Suppose helloworld.yaml is now edited like this:
...
image: "ubuntu:14.04"
command: ["/bin/bash", "./script.sh"]
...
I'm using this approach in OpenShift, so it should be applicable in Kubernetes as well.
Put your script into a ConfigMap as a key/value pair, mount that ConfigMap as a volume, and run the script from the volume:
apiVersion: batch/v1
kind: Job
metadata:
name: hello-world-job
spec:
parallelism: 1
completions: 1
template:
metadata:
name: hello-world-job
spec:
volumes:
- name: hello-world-scripts-volume
configMap:
name: hello-world-scripts
containers:
- name: hello-world-job
image: alpine
volumeMounts:
- mountPath: /hello-world-scripts
name: hello-world-scripts-volume
env:
- name: HOME
value: /tmp
command:
- /bin/sh
- -c
- |
echo "scripts in /hello-world-scripts"
ls -lh /hello-world-scripts
echo "copy scripts to /tmp"
cp /hello-world-scripts/*.sh /tmp
echo "apply 'chmod +x' to /tmp/*.sh"
chmod +x /tmp/*.sh
echo "execute script-one.sh now"
/tmp/script-one.sh
restartPolicy: Never
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: hello-world-scripts
data:
  script-one.sh: |
    echo "script-one.sh"
    date
    sleep 1
    echo "run /tmp/script-2.sh now"
    /tmp/script-2.sh
  script-2.sh: |
    echo "script-2.sh"
    sleep 1
    date
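To try it out (the manifest file name here is an assumption; it should contain the Job and ConfigMap above):
kubectl create -f hello-world-job.yaml
kubectl logs -f job/hello-world-job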
You can also set the defaultMode: 0777 property on the ConfigMap volume so the mounted scripts are directly executable; an example:
apiVersion: v1
kind: ConfigMap
metadata:
name: test-script
data:
test.sh: |
echo "test1"
ls
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: test
spec:
selector:
matchLabels:
app: test
template:
metadata:
labels:
app: test
spec:
volumes:
- name: test-script
configMap:
name: test-script
defaultMode: 0777
containers:
- command:
- sleep
- infinity
image: ubuntu
name: locust
volumeMounts:
- mountPath: /test-script
name: test-script
You can then open a shell in the container and execute the script /test-script/test.sh.
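For example, you can also run it non-interactively with kubectl exec (assuming the Deployment above is running in the current namespace):
kubectl exec deploy/test -- sh /test-script/test.sh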