Why does the K3s system upgrade controller fail with "not found, requeuing"? - k3s

I installed the system upgrade controller and applied this plan manifest:
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: master-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
    - key: k3s-master-upgrade
      operator: In
      values:
      - "true"
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  channel: https://update.k3s.io/v1-release/channels/stable
---
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: worker-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
    - key: k3s-worker-upgrade
      operator: In
      values:
      - "true"
  prepare:
    args:
    - prepare
    - master-plan
    image: rancher/k3s-upgrade
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  channel: https://update.k3s.io/v1-release/channels/stable
I applied the labels and verified them:
$ kubectl label node crux k3s-worker-upgrade=true
$ kubectl describe nodes crux | grep k3s-worker-upgrade
k3s-worker-upgrade=true
$ kubectl label node nemo k3s-master-upgrade=true
$ kubectl describe nodes nemo | grep k3s-master-upgrade
k3s-master-upgrade=true
According to kubectl get nodes I'm still on v1.23.6+k3s1, but the stable channel is on v1.24.4+k3s1.
I get the following errors:
$ kubectl -n system-upgrade logs deployment.apps/system-upgrade-controller
time="2022-09-12T11:29:31Z" level=error msg="error syncing 'system-upgrade/apply-worker-plan-on-crux-with-4190e4adda3866e909fc7735c1-f0dff': handler system-upgrade-controller: jobs.batch \"apply-worker-plan-on-crux-with-4190e4adda3866e909fc7735c1-f0dff\" not found, requeuing"
time="2022-09-12T11:30:35Z" level=error msg="error syncing 'system-upgrade/apply-master-plan-on-nemo-with-4190e4adda3866e909fc7735c1-9cf4f': handler system-upgrade-controller: jobs.batch \"apply-master-plan-on-nemo-with-4190e4adda3866e909fc7735c1-9cf4f\" not found, requeuing"
$ kubectl -n system-upgrade get jobs -o yaml
- apiVersion: batch/v1
  kind: Job
  metadata:
    labels:
      upgrade.cattle.io/controller: system-upgrade-controller
      upgrade.cattle.io/node: crux
      upgrade.cattle.io/plan: worker-plan
      upgrade.cattle.io/version: v1.24.4-k3s1
  status:
    conditions:
    - lastProbeTime: "2022-09-12T12:14:31Z"
      lastTransitionTime: "2022-09-12T12:14:31Z"
      message: Job was active longer than specified deadline
      reason: DeadlineExceeded
      status: "True"
      type: Failed
    failed: 1
    startTime: "2022-09-12T11:59:31Z"
    uncountedTerminatedPods: {}

Same here.
I have only managed to upgrade k3s by doing a manual upgrade with the binaries.
On all nodes:
wget https://github.com/k3s-io/k3s/releases/download/v1.24.4%2Bk3s1/k3s
On the server (master):
systemctl stop k3s
cp ./k3s /usr/local/bin/
systemctl start k3s
On the agents (workers):
systemctl stop k3s-agent
cp ./k3s /usr/local/bin/
systemctl start k3s-agent
It seems much faster and easier than struggling with the automated upgrade: no drains, no cordons.
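That said, the DeadlineExceeded condition in the job status above suggests the upgrade jobs were created but simply ran out of time, rather than never existing. The following is only a rough sketch of how I would poke at it, assuming the stock system-upgrade-controller install; the default-controller-env ConfigMap and its SYSTEM_UPGRADE_JOB_ACTIVE_DEADLINE_SECONDS setting are assumptions from the stock manifest, so verify they exist in your deployment:
# inspect the pods the controller created for the plan jobs and read their logs
kubectl -n system-upgrade get pods -o wide
kubectl -n system-upgrade logs -l upgrade.cattle.io/plan=worker-plan --tail=100
# if the jobs are genuinely just slow (e.g. slow disks or a slow pull of rancher/k3s-upgrade),
# raising the job deadline in the controller's env ConfigMap and restarting the controller may help
kubectl -n system-upgrade edit configmap default-controller-env
kubectl -n system-upgrade rollout restart deployment system-upgrade-controller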

Related

Pre-populated mysql docker image doesn't start when setup pod template security context

I have a mysql image with a prepopulated schema; below I share the setup files.
My dockerfile:
FROM mysql:8.0.31 as builder
# That script does the DB initialization but also starts the mysql daemon; neutralizing the exec line makes it only initialize
RUN ["sed", "-i", "s/exec \"$@\"/echo \"not running $@\"/", "/usr/local/bin/docker-entrypoint.sh"]
# needed for initialization
ENV MYSQL_ROOT_PASSWORD=test
COPY ./sql-scripts /docker-entrypoint-initdb.d/
# Need to change the datadir to something other than /var/lib/mysql because the parent Dockerfile defines it as a volume.
# https://docs.docker.com/engine/reference/builder/#volume :
# Changing the volume from within the Dockerfile: If any build steps change the data within the volume after
# it has been declared, those changes will be discarded.
RUN ["/usr/local/bin/docker-entrypoint.sh", "mysqld", "--datadir", "/initialized-db"]
FROM mysql:8.0.31
COPY --from=builder /initialized-db /var/lib/mysql
My pod template yaml:
apiVersion: v1
kind: Pod
metadata:
  labels:
    label: 'backend'
spec:
  shareProcessNamespace: true
  containers:
  - name: "maven"
    image: maven:3.6.3-openjdk-11
    resources:
      requests:
        memory: "2Gi"
        cpu: "2"
      limits:
        memory: "10Gi"
        cpu: "10"
    command: [ sleep ]
    args: [ 1h ]
    securityContext:
      capabilities:
        add:
        - SYS_PTRACE
  - name: mysql
    image: myDockerRegistry/mysql8-integration-test:v5
    env:
    - name: MYSQL_USER
      value: test
    - name: MYSQL_PASSWORD
      value: test
    - name: MYSQL_ROOT_PASSWORD
      value: test
    securityContext:
      capabilities:
        add:
        - SYS_PTRACE
My pipeline:
pipeline {
  agent {
    kubernetes {
      yaml libraryResource('pod-templates/backend.yaml')
    }
  }
  stages { ... }
}
The above setup works fine, but I want to use a dynamic PVC for the workspace, so I add the following line to my pipeline after the pod template.
workspaceVolume dynamicPVC(accessModes: 'ReadWriteOnce',requestsSize: "10Gi", storageClassName: 'premium-rwo')
But I have to add a securityContext to my pod template so Jenkins will be able to mount the PVC in the agent:
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
With those changes the pod starts and the volume is mounted correctly, but the mysql container doesn't work. This is the error log:
2022-11-03 09:33:25+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.31-1.el8 started.
'/var/lib/mysql/mysql.sock' -> '/var/run/mysqld/mysqld.sock'
2022-11-03T09:33:25.839933Z 0 [Warning] [MY-011068] [Server] The syntax '--skip-host-cache' is deprecated and will be removed in a future release. Please use SET GLOBAL host_cache_size=0 instead.
2022-11-03T09:33:25.842508Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.31) starting as process 13
2022-11-03T09:33:25.845263Z 0 [Warning] [MY-010122] [Server] One can only use the --user switch if running as root
mysqld: File './binlog.index' not found (OS errno 13 - Permission denied)
2022-11-03T09:33:25.845867Z 0 [ERROR] [MY-010119] [Server] Aborting
2022-11-03T09:33:25.846078Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.31) MySQL Community Server - GPL.
I assume it is something related to root permissions in the mysql container, but it is weird because the official image runs perfectly.
Finally, this is the raw yaml generated after injecting the jenkins agent:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    buildUrl: >-
      http://jenkins.jenkins.svc.cluster.local:8080/job/LegacyProjects/job/my-project/job/k8s-test/79/
    runUrl: job/LegacyProjects/job/my-project/job/k8s-test/79/
  labels:
    label: backend
    jenkins/jenkins-jenkins-agent: 'true'
    jenkins/label-digest: 4581eadfdfcb3d0141b8e8727b53b2ff9a3575ec
    jenkins/label: LegacyProjects_my-project_k8s-test_79-xgtxd
  name: my-project-k8s-test-79-xgtxd-2xw2r-8wj64
  namespace: jenkins
spec:
  containers:
  - args:
    - 1h
    command:
    - sleep
    image: 'maven:3.6.3-openjdk-11'
    name: maven
    resources:
      limits:
        memory: 10Gi
        cpu: '10'
      requests:
        memory: 2Gi
        cpu: '2'
    securityContext:
      capabilities:
        add:
        - SYS_PTRACE
    volumeMounts:
    - mountPath: /home/jenkins/agent
      name: workspace-volume
      readOnly: false
  - env:
    - name: MYSQL_USER
      value: test
    - name: MYSQL_PASSWORD
      value: test
    - name: MYSQL_ROOT_PASSWORD
      value: test
    image: 'myDockerRegistry/mysql8-integration-test:v5'
    name: mysql
    securityContext:
      capabilities:
        add:
        - SYS_PTRACE
    volumeMounts:
    - mountPath: /home/jenkins/agent
      name: workspace-volume
      readOnly: false
  - env:
    - name: JENKINS_SECRET
      value: '********'
    - name: JENKINS_TUNNEL
      value: 'jenkins-agent.jenkins.svc.cluster.local:50000'
    - name: JENKINS_AGENT_NAME
      value: my-project-k8s-test-79-xgtxd-2xw2r-8wj64
    - name: JENKINS_NAME
      value: my-project-k8s-test-79-xgtxd-2xw2r-8wj64
    - name: JENKINS_AGENT_WORKDIR
      value: /home/jenkins/agent
    - name: JENKINS_URL
      value: 'http://jenkins.jenkins.svc.cluster.local:8080/'
    image: 'jenkins/inbound-agent:4.11-1-jdk11'
    name: jnlp
    resources:
      limits: {}
      requests:
        memory: 256Mi
        cpu: 100m
    volumeMounts:
    - mountPath: /home/jenkins/agent
      name: workspace-volume
      readOnly: false
  nodeSelector:
    kubernetes.io/os: linux
  restartPolicy: Never
  securityContext:
    fsGroup: 1000
    runAsGroup: 1000
    runAsUser: 1000
  shareProcessNamespace: true
  volumes:
  - name: workspace-volume
    persistentVolumeClaim:
      claimName: pvc-workspace-my-project-test-79-xgtxd-2xw2r-8wj64
      readOnly: false
Any help will be appreciated
The COPY command by default copies files as the root user. You should add the --chown=1000:1000 flag to that command to set the correct owner and group (in your case the user with uid and gid 1000, as specified in the securityContext); see https://stackoverflow.com/a/44766666 and https://docs.docker.com/engine/reference/builder/#copy for more details.
While your use case might require pre-building an image with the database, consider running the official mysql image as the DB and initializing it from your application with an init container running liquibase/flyway or another (even built-in) database migration toolkit, which might be a more portable solution in the long run.
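For illustration, a minimal sketch of the suggested --chown fix applied to the Dockerfile above; the filename "Dockerfile", the sed pattern, and the new v6 tag are my assumptions, adjust to your setup:
# hand the baked-in datadir to uid/gid 1000 so mysqld can write binlog.index etc.
# when the pod-level securityContext forces runAsUser/runAsGroup/fsGroup 1000
sed -i 's|^COPY --from=builder /initialized-db /var/lib/mysql$|COPY --chown=1000:1000 --from=builder /initialized-db /var/lib/mysql|' Dockerfile
docker build -t myDockerRegistry/mysql8-integration-test:v6 .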

Unable to connect: Communications link failure

I am trying to follow the tutorial Deploying Debezium using the new KafkaConnector resource.
Based on the tutorial, I am also using minikube but with the docker driver. Basically I just follow it exactly, step by step.
However, for the step "Create the connector", after creating the connector by
cat <<EOF | kubectl -n kafka apply -f -
apiVersion: "kafka.strimzi.io/v1alpha1"
kind: "KafkaConnector"
metadata:
  name: "inventory-connector"
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  tasksMax: 1
  config:
    database.hostname: 192.168.99.1
    database.port: "3306"
    database.user: "${file:/opt/kafka/external-configuration/connector-config/debezium-mysql-credentials.properties:mysql_username}"
    database.password: "${file:/opt/kafka/external-configuration/connector-config/debezium-mysql-credentials.properties:mysql_password}"
    database.server.id: "184054"
    database.server.name: "dbserver1"
    database.whitelist: "inventory"
    database.history.kafka.bootstrap.servers: "my-cluster-kafka-bootstrap:9092"
    database.history.kafka.topic: "schema-changes.inventory"
    include.schema.changes: "true"
EOF
and check by
kubectl -n kafka get kctr inventory-connector -o yaml
I got this error:
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"kafka.strimzi.io/v1alpha1","kind":"KafkaConnector","metadata":{"annotations":{},"labels":{"strimzi.io/cluster":"my-connect-cluster"},"name":"inventory-connector","namespace":"kafka"},"spec":{"class":"io.debezium.connector.mysql.MySqlConnector","config":{"database.history.kafka.bootstrap.servers":"my-cluster-kafka-bootstrap:9092","database.history.kafka.topic":"schema-changes.inventory","database.hostname":"192.168.49.2","database.password":"","database.port":"3306","database.server.id":"184054","database.server.name":"dbserver1","database.user":"","database.whitelist":"inventory","include.schema.changes":"true"},"tasksMax":1}}
  creationTimestamp: "2021-09-29T18:20:11Z"
  generation: 1
  labels:
    strimzi.io/cluster: my-connect-cluster
  name: inventory-connector
  namespace: kafka
  resourceVersion: "12777"
  uid: 083df9a3-83ce-4170-a9bc-9573dafdb286
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  config:
    database.history.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092
    database.history.kafka.topic: schema-changes.inventory
    database.hostname: 192.168.49.2
    database.password: ""
    database.port: "3306"
    database.server.id: "184054"
    database.server.name: dbserver1
    database.user: ""
    database.whitelist: inventory
    include.schema.changes: "true"
  tasksMax: 1
status:
  conditions:
  - lastTransitionTime: "2021-09-29T18:20:11.548Z"
    message: |-
      PUT /connectors/inventory-connector/config returned 400 (Bad Request): Connector configuration is invalid and contains the following 1 error(s):
      A value is required
      You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
    reason: ConnectRestException
    status: "True"
    type: NotReady
  observedGeneration: 1
I tried to change
database.user: "${file:/opt/kafka/external-configuration/connector-config/debezium-mysql-credentials.properties:mysql_username}"
database.password: "${file:/opt/kafka/external-configuration/connector-config/debezium-mysql-credentials.properties:mysql_password}"
to
database.user: "debezium"
database.password: "dbz"
directly and re-applied, based on the user and password info in the "Secure the database credentials" step.
Also, based on the description in the tutorial
I’m using database.hostname: 192.168.99.1 as IP address for connecting to MySQL because I’m using minikube with the virtualbox VM driver. If you’re using a different VM driver with minikube you might need a different IP address.
I am actually a little confused by the above description. MySQL in the demo is deployed in Docker, while the rest of the parts, like Kafka, are deployed in minikube. Why does the description about database.hostname mention minikube instead of Docker?
Anyway, when I run minikube ip, I get 192.168.49.2. However, after I change database.hostname to 192.168.49.2 and run kubectl get kctr inventory-connector -o yaml -n kafka, I get
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"kafka.strimzi.io/v1alpha1","kind":"KafkaConnector","metadata":{"annotations":{},"labels":{"strimzi.io/cluster":"my-connect-cluster"},"name":"inventory-connector","namespace":"kafka"},"spec":{"class":"io.debezium.connector.mysql.MySqlConnector","config":{"database.history.kafka.bootstrap.servers":"my-cluster-kafka-bootstrap:9092","database.history.kafka.topic":"schema-changes.inventory","database.hostname":"192.168.49.2","database.password":"","database.port":"3306","database.server.id":"184054","database.server.name":"dbserver1","database.user":"","database.whitelist":"inventory","include.schema.changes":"true"},"tasksMax":1}}
  creationTimestamp: "2021-09-29T18:20:11Z"
  generation: 1
  labels:
    strimzi.io/cluster: my-connect-cluster
  name: inventory-connector
  namespace: kafka
  resourceVersion: "12777"
  uid: 083df9a3-83ce-4170-a9bc-9573dafdb286
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  config:
    database.history.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092
    database.history.kafka.topic: schema-changes.inventory
    database.hostname: 192.168.49.2
    database.password: ""
    database.port: "3306"
    database.server.id: "184054"
    database.server.name: dbserver1
    database.user: ""
    database.whitelist: inventory
    include.schema.changes: "true"
  tasksMax: 1
status:
  conditions:
  - lastTransitionTime: "2021-09-29T18:20:11.548Z"
    message: |-
      PUT /connectors/inventory-connector/config returned 400 (Bad Request): Connector configuration is invalid and contains the following 1 error(s):
      A value is required
      You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
    reason: ConnectRestException
    status: "True"
    type: NotReady
  observedGeneration: 1
I can access MySQL via localhost as it is hosted in Docker.
However, I still got the same error when I changed database.hostname to localhost.
Any idea? Thanks!
The issue is that the service in minikube fails to communicate with the MySQL instance running in Docker.
Regarding how to access the host's localhost from inside the Kubernetes cluster, I found How to access host's localhost from inside kubernetes cluster.
However, I ended up going the route of deploying MySQL in Kubernetes directly, via
kubectl apply -f https://k8s.io/examples/application/mysql/mysql-pv.yaml
kubectl apply -f https://k8s.io/examples/application/mysql/mysql-deployment.yaml
(Copied from https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/)
with
database.hostname: "mysql.default" # service `mysql` in namespace `default`
database.port: "3306"
database.user: "root"
database.password: "password"
Now when I run
kubectl -n kafka get kctr inventory-connector -o yaml
I get a new error saying MySQL does not have a row-level binlog enabled; however, this means the connector can reach MySQL now.
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"kafka.strimzi.io/v1alpha1","kind":"KafkaConnector","metadata":{"annotations":{},"labels":{"strimzi.io/cluster":"my-connect-cluster"},"name":"inventory-connector","namespace":"kafka"},"spec":{"class":"io.debezium.connector.mysql.MySqlConnector","config":{"database.history.kafka.bootstrap.servers":"my-cluster-kafka-bootstrap:9092","database.history.kafka.topic":"schema-changes.inventory","database.hostname":"mysql.default","database.password":"password","database.port":"3306","database.server.id":"184054","database.server.name":"dbserver1","database.user":"root","database.whitelist":"inventory","include.schema.changes":"true"},"tasksMax":1}}
  creationTimestamp: "2021-09-29T19:36:52Z"
  generation: 1
  labels:
    strimzi.io/cluster: my-connect-cluster
  name: inventory-connector
  namespace: kafka
  resourceVersion: "2918"
  uid: 48bb46e1-42bb-4574-a3dc-221ae7d6a803
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  config:
    database.history.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092
    database.history.kafka.topic: schema-changes.inventory
    database.hostname: mysql.default
    database.password: password
    database.port: "3306"
    database.server.id: "184054"
    database.server.name: dbserver1
    database.user: root
    database.whitelist: inventory
    include.schema.changes: "true"
  tasksMax: 1
status:
  conditions:
  - lastTransitionTime: "2021-09-29T19:36:53.605Z"
    status: "True"
    type: Ready
  connectorStatus:
    connector:
      state: UNASSIGNED
      worker_id: 172.17.0.8:8083
    name: inventory-connector
    tasks:
    - id: 0
      state: FAILED
      trace: "org.apache.kafka.connect.errors.ConnectException: The MySQL server is
        not configured to use a row-level binlog, which is required for this connector
        to work properly. Change the MySQL configuration to use a row-level binlog
        and restart the connector.\n\tat io.debezium.connector.mysql.MySqlConnectorTask.start(MySqlConnectorTask.java:207)\n\tat
        io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:49)\n\tat
        org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:208)\n\tat
        org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)\n\tat
        org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)\n\tat
        java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat
        java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat
        java.lang.Thread.run(Thread.java:748)\n"
      worker_id: 172.17.0.8:8083
      type: source
  observedGeneration: 1
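For reference, a sketch of one way to get past that row-level binlog error, assuming the single-instance MySQL Deployment from the kubernetes.io example above (Deployment "mysql", single container at index 0, namespace "default"); the exact flags depend on the MySQL version you run:
# pass row-level binlog settings to mysqld via container args, then let the pod roll
kubectl -n default patch deployment mysql --type=json -p='[
  {"op": "add",
   "path": "/spec/template/spec/containers/0/args",
   "value": ["--server-id=1", "--log-bin=mysql-bin", "--binlog-format=ROW", "--binlog-row-image=FULL"]}
]'
kubectl -n default rollout status deployment mysql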

How to pass a param when applying a new yaml

I am on OpenShift 4.6. I want to pass parameters to a yaml file when applying it, so I tried the below command but it threw an error:
oc apply -f "./ETCD Backup/etcd_backup_cronjob.yaml" --param master-node = oc get nodes -o name | grep "master-0" | cut -d'/' -f2
Error: unknown flag --param
What you most likely want to use is OpenShift Templates. Using Templates you can have variables in your YAML files and then change them using oc process.
So your YAML would look like so:
kind: Template
apiVersion: v1
metadata:
  name: my-template
objects:
- apiVersion: batch/v1beta1
  kind: CronJob
  metadata:
    name: pi
  spec:
    schedule: "*/1 * * * *"
    concurrencyPolicy: "Replace"
    startingDeadlineSeconds: 200
    suspend: true
    successfulJobsHistoryLimit: 3
    failedJobsHistoryLimit: 1
    jobTemplate:
      spec:
        template:
          metadata:
            labels:
              parent: "cronjobpi"
          spec:
            containers:
            - name: pi
              image: perl
              command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(${{DIGITS}})"]
            restartPolicy: OnFailure
parameters:
- name: DIGITS
  displayName: Number of digits
  description: Digits to compute
  value: 200
  required: true
Then you can use oc process like so:
oc process -f my-template.yml --param=DIGITS=300 | oc apply -f -
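Applied to the original question, a sketch of feeding a computed value into the template could look like the following; MASTER_NODE is a hypothetical parameter name you would declare under parameters: just like DIGITS, and the template filename is assumed:
# compute the node name first, then hand it to the template as a parameter
MASTER_NODE=$(oc get nodes -o name | grep "master-0" | cut -d'/' -f2)
oc process -f "./ETCD Backup/etcd_backup_cronjob_template.yaml" --param=MASTER_NODE="$MASTER_NODE" | oc apply -f -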

Dynamically set OPENSHIFT_BUILD_REFERENCE and io.openshift.build.commit.ref in OpenShift?

I have a relatively simple BuildConfig for a service.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  labels:
    app: identity-server
  name: identity-server
  namespace: dev
spec:
  failedBuildsHistoryLimit: 5
  nodeSelector: {}
  output:
    to:
      kind: ImageStreamTag
      name: 'identity-server:latest'
  postCommit: {}
  resources: {}
  runPolicy: Serial
  source:
    git:
      ref: master
      uri: 'ssh://git@bitbucket.example.com:7999/my-project/my-repo.git'
    sourceSecret:
      name: bitbucket
    type: Git
  strategy:
    dockerStrategy:
      dockerfilePath: Dockerfile.openshift
      from:
        kind: ImageStreamTag
        name: 'dotnet:2.1-sdk'
    type: Docker
  successfulBuildsHistoryLimit: 5
  triggers:
  - type: ConfigChange
  - imageChange:
    type: ImageChange
I can use it to build an image from a tag in git like so.
$ oc start-build identity-server -n dev --commit 0.0.1
build "identity-server-22" started
This works well enough and the image is built from the commit as tagged.
However, in the last 2 steps of the build I see
Step 14/15 : ENV "OPENSHIFT_BUILD_NAME" "identity-server-25" "OPENSHIFT_BUILD_NAMESPACE" "dev" "OPENSHIFT_BUILD_SOURCE" "ssh://git@bitbucket.example.com:7999/my-project/my-repo.git" "OPENSHIFT_BUILD_REFERENCE" "master" "OPENSHIFT_BUILD_COMMIT" "6fa1c07b2e4f70bfff5ed1614907a1d00597fb51"
---> Running in 6d1e8a67976b
---> b7a869b8c133
Removing intermediate container 6d1e8a67976b
Step 15/15 : LABEL "io.openshift.build.name" "identity-server-25" "io.openshift.build.namespace" "dev" "io.openshift.build.commit.author" "Everett Toews \u003cEverett.Toews@example.com\u003e" "io.openshift.build.commit.date" "Sat Aug 4 16:46:48 2018 +1200" "io.openshift.build.commit.id" "6fa1c07b2e4f70bfff5ed1614907a1d00597fb51" "io.openshift.build.commit.ref" "master" "io.openshift.build.commit.message" "Try to build master branch" "io.openshift.build.source-location" "ssh://git@bitbucket.example.com:7999/my-project/my-repo.git"
---> Running in 3e8d8200f2cf
---> 1a591207a29f
Removing intermediate container 3e8d8200f2cf
OPENSHIFT_BUILD_REFERENCE and io.openshift.build.commit.ref are set to master as per spec.source.git.ref in the BuildConfig. I was hoping those would be set to 0.0.1, as overridden by the --commit 0.0.1 flag in oc start-build.
Is there a way to dynamically set OPENSHIFT_BUILD_REFERENCE and io.openshift.build.commit.ref in OpenShift?
Or would I need to template the BuildConfig and replace the spec.source.git.ref with a parameter to get the desired behaviour?
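For what it's worth, a sketch of that templating route; the template filename and the GIT_REF parameter are hypothetical, and the Template would substitute ${GIT_REF} into spec.source.git.ref of the BuildConfig above:
# re-apply the BuildConfig with the ref parameterised, then start the build
oc process -f identity-server-template.yaml --param=GIT_REF=0.0.1 | oc apply -n dev -f -
oc start-build identity-server -n dev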

Openshift: Change in trigger behavior between 3.6.1 and 3.7.1

I have two Openshift environments that I publish to from a Gitlab pipeline. Let's call the first one DEV and it runs Openshift v3.7.1+c2ce2c0-1. The second one is INT and runs v3.6.1+008f2d5. Recently DEV got upgraded from 3.6.1 to 3.7.1 and after that I noticed a strange change in the behavior of redeployment triggers.
In short, what I see is that existing deployments in the DEV environment are triggered to redeploy with an "Image changed" message when an unchanged deployment config template is applied and while the Docker image also remains unchanged. This means that for example a MongoDB or Jenkins deployment recreates all containers and loses all data with every run of the CI pipeline.
Yes, there is theoretically the possibility of using persistent volumes, but the Openshift installations are not under my control. The point here is that redeployments happen when neither image nor configuration changes.
The command that I am using from Gitlab is this:
oc process -f openshift-mongodb-ephemeral.yml -v MONGODB_DATABASE=mydb -v DATABASE_SERVICE_NAME=mongodb -l template=northbound-mongodb | oc apply -f -
Here is one of the deployment config templates:
apiVersion: v1
kind: Template
labels:
  template: mongodb-ephemeral-template
metadata:
  annotations:
    description: |-
      MongoDB database service, without persistent storage. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/mongodb-container/blob/master/3.2/README.md.
      WARNING: Any data stored will be lost upon pod destruction. Only use this template for testing
    iconClass: icon-mongodb
    openshift.io/display-name: MongoDB (Ephemeral)
    tags: database,mongodb
  creationTimestamp: 2017-03-14T11:25:13Z
  name: mongodb-ephemeral
  resourceVersion: "483"
  selfLink: /oapi/v1/namespaces/openshift/templates/mongodb-ephemeral
  uid: e41b7f8e-08a8-11e7-9120-000d3a266151
objects:
- apiVersion: v1
  kind: Service
  metadata:
    creationTimestamp: null
    name: ${DATABASE_SERVICE_NAME}
  spec:
    ports:
    - name: mongo
      nodePort: 0
      port: 27017
      protocol: TCP
      targetPort: 27017
    selector:
      name: ${DATABASE_SERVICE_NAME}
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    creationTimestamp: null
    name: ${DATABASE_SERVICE_NAME}
  spec:
    replicas: 1
    selector:
      name: ${DATABASE_SERVICE_NAME}
    strategy:
      type: Recreate
    template:
      metadata:
        creationTimestamp: null
        labels:
          name: ${DATABASE_SERVICE_NAME}
      spec:
        containers:
        - capabilities: {}
          env:
          - name: MONGODB_USER
            value: ${MONGODB_USER}
          - name: MONGODB_PASSWORD
            valueFrom:
              secretKeyRef:
                key: database-password
                name: myproject-secrets
          - name: MONGODB_ADMIN_PASSWORD
            valueFrom:
              secretKeyRef:
                key: database-admin-password
                name: myproject-secrets
          - name: MONGODB_DATABASE
            value: ${MONGODB_DATABASE}
          image: ' '
          imagePullPolicy: IfNotPresent
          livenessProbe:
            initialDelaySeconds: 30
            tcpSocket:
              port: 27017
            timeoutSeconds: 1
          name: mongodb
          ports:
          - containerPort: 27017
            protocol: TCP
          readinessProbe:
            exec:
              command:
              - /bin/sh
              - -i
              - -c
              - mongo 127.0.0.1:27017/$MONGODB_DATABASE -u $MONGODB_USER -p $MONGODB_PASSWORD
                --eval="quit()"
            initialDelaySeconds: 3
            timeoutSeconds: 1
          resources:
            limits:
              memory: ${MEMORY_LIMIT}
          securityContext:
            capabilities: {}
            privileged: false
          terminationMessagePath: /dev/termination-log
          volumeMounts:
          - mountPath: /var/lib/mongodb/data
            name: ${DATABASE_SERVICE_NAME}-data
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        volumes:
        - emptyDir:
            medium: ""
          name: ${DATABASE_SERVICE_NAME}-data
    triggers:
    - imageChangeParams:
        automatic: true
        containerNames:
        - mongodb
        from:
          kind: ImageStreamTag
          name: mongodb:${MONGODB_VERSION}
          namespace: ${NAMESPACE}
        lastTriggeredImage: ""
      type: ImageChange
    - type: ConfigChange
  status: {}
parameters:
- description: Maximum amount of memory the container can use.
  displayName: Memory Limit
  name: MEMORY_LIMIT
  required: true
  value: 512Mi
- description: The OpenShift Namespace where the ImageStream resides.
  displayName: Namespace
  name: NAMESPACE
  value: openshift
- description: The name of the OpenShift Service exposed for the database.
  displayName: Database Service Name
  name: DATABASE_SERVICE_NAME
  required: true
  value: mongodb
- description: Name of the MongoDB database accessed.
  displayName: MongoDB Database Name
  name: MONGODB_DATABASE
  required: true
  value: sampledb
- description: Version of MongoDB image to be used (2.4, 2.6, 3.2 or latest).
  displayName: Version of MongoDB Image
  name: MONGODB_VERSION
  required: true
  value: "3.2"
- description: Name of user to access MongoDB.
  displayName: MongoDB user
  name: MONGODB_USER
  required: true
  value: "mongouser"
The oc process and oc apply commands get executed with each execution of the Gitlab CI pipeline. The pipeline triggers whenever someone merges into e.g. the develop branch. I would like to keep this behavior because this guarantees if someone changes the configuration of MongoDB, Jenkins, etc. those get updated and redeployed automatically (in which case a loss of data is acceptable).
Does anybody know what change in OpenShift could have prompted this change in behavior and how to achieve the old behavior again?
Credits go to Graham Dumpleton who answered my question, albeit through a comment. Since I did not hear back from him I am leaving this answer myself for completeness' sake.
After removing the entries
creationTimestamp
lastTriggeredImage
status
image
from the deployment config template, the unwanted redeployments stopped. I do not know which entry specifically started triggering the behavior in Openshift 3.7.1 but in any case, these four entries are information generated during an Openshift deployment process and should not be in a template to begin with.
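If you want to confirm which field is responsible in your own setup, a rough way to see what one pipeline-style apply actually changes on the live object (names taken from the command and template above, otherwise just a sketch):
# capture the live DeploymentConfig before and after one apply, then compare
oc get dc mongodb -o yaml > dc-before.yaml
oc process -f openshift-mongodb-ephemeral.yml -v MONGODB_DATABASE=mydb -v DATABASE_SERVICE_NAME=mongodb -l template=northbound-mongodb | oc apply -f -
oc get dc mongodb -o yaml > dc-after.yaml
diff dc-before.yaml dc-after.yaml   # stray image/lastTriggeredImage/status diffs here are what re-trigger "Image changed" rollouts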