Unable to proceed with hawkular-metrics installation as hawkular_metrics_schema_job.yaml failed to find schema image.
Failed to pull image "docker.io/openshift/origin-metrics-schema-installer:v3.11.0": rpc error: code = Unknown desc = repository docker.io/openshift/origin-metrics-schema-installer not found: does not exist or no pull access
cat /tmp/openshift-metrics-ansible-ABoWRf/templates/hawkular_metrics_schema_job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: hawkular-metrics-schema
  labels:
    metrics-infra: hawkular-metrics
    name: hawkular-metrics-schema
spec:
  template:
    spec:
      version: v1
      metadata:
        labels:
          metrics-infra: hawkular-metrics
          #name: hawkular-metrics
      containers:
      - name: hawkular-metrics-schema
        image: docker.io/openshift/origin-metrics-schema-installer:v3.11.0
        imagePullPolicy: IfNotPresent
        env:
        - name: TRUSTSTORE_AUTHORITIES
          value: "/hawkular-metrics-certs/tls.truststore.crt"
        volumeMounts:
        - mountPath: /hawkular-metrics-certs
          name: hawkular-metrics-certs
        - mountPath: /hawkular-account
          name: hawkular-metrics-account
      volumes:
      - name: hawkular-metrics-certs
        secret:
          secretName: hawkular-metrics-certs
      - name: hawkular-metrics-account
        secret:
          secretName: hawkular-metrics-account
      restartPolicy: OnFailure
docker pull origin-metrics-schema-installer
Using default tag: latest
Trying to pull repository docker.io/library/origin-metrics-schema-installer ...
repository docker.io/origin-metrics-schema-installer not found: does not exist or no pull access
On one hand, if you are using OKD v3.10, the official Docker metrics images for 3.10 are tagged "v3.10.0-rc.0" (not "v3.10").
If you are using 3.11, they are tagged correctly: https://hub.docker.com/r/openshift/origin-metrics-hawkular-metrics/tags/
On the other hand, openshift/origin-metrics-schema-installer does not exist on Docker Hub; someone built the image and published it at https://hub.docker.com/r/alv91/origin-metrics-schema-installer/ (tagged as "v3.10").
So, for OKD v3.10, your inventory file should contain:
openshift_metrics_install_metrics=True
openshift_metrics_cassandra_image=docker.io/openshift/origin-metrics-cassandra:v3.10.0-rc.0
openshift_metrics_hawkular_metrics_image=docker.io/openshift/origin-metrics-hawkular-metrics:v3.10.0-rc.0
openshift_metrics_heapster_image=docker.io/openshift/origin-metrics-heapster:v3.10.0-rc.0
openshift_metrics_schema_installer_image=docker.io/alv91/origin-metrics-schema-installer:v3.10
And for OKD 3.11:
openshift_metrics_install_metrics=True
openshift_metrics_schema_installer_image=docker.io/alv91/origin-metrics-schema-installer:v3.10
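As a quick sanity check (my suggestion, not part of the original answer), you can confirm the replacement image is actually pullable from a node before re-running the metrics playbook:
docker pull docker.io/alv91/origin-metrics-schema-installer:v3.10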
https://github.com/openshift/openshift-ansible/issues/9948
https://github.com/openshift/origin-metrics/issues/429
My deployment consists of WordPress and MySQL. I have already defined a new IP pool and a new namespace, and I am trying to get my pods to receive an IP address from this new pool, but they never get one.
My namespace YAML file:
apiVersion: v1
kind: Namespace
metadata:
  name: produccion
  annotations:
    cni.projectcalico.org/ipv4pools: ippool
My IP pool definition:
calicoctl create -f - <<EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: ippool
spec:
  cidr: 192.169.0.0/24
  blockSize: 29
  ipipMode: Always
  natOutgoing: true
EOF
My MySQL deployment:
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  namespace: produccion
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
      targetPort: 3306
      nodePort: 31066
  selector:
    app: wordpress
    tier: mysql
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  namespace: produccion
  annotations:
    cni.projectcalico.org/ipv4pools: ippool
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: PASS
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: "/var/lib/mysql"
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
My WordPress deployment:
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  namespace: produccion
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
      nodePort: 30999
  selector:
    app: wordpress
    tier: frontend
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  namespace: produccion
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: wordpress
          name: wordpress
          env:
            - name: WORDPRESS_DB_NAME
              value: wordpress
            - name: WORDPRESS_DB_HOST
              value: IP_Address:31066
            - name: WORDPRESS_DB_USER
              value: root
            - name: WORDPRESS_DB_PASSWORD
              value: PASS
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: "/var/www/html"
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: wordpress-persistent-storage
Additionally, I have two PV YAML files, one for each service (MySQL and WordPress).
When I apply any of the deployments with kubectl, the pods stay in ContainerCreating and the IP column stays at <none>.
produccion wordpress-mysql-74578f5d6d-knzzh 0/1 ContainerCreating 0 70m <none> dockerc8.tecsinfo-ec.com
If I describe the pod, I get the following errors:
Normal Scheduled 88s default-scheduler Successfully assigned produccion/wordpress-mysql-74578f5d6d-65jvt to dockerc8.tecsinfo-ec.com
Warning FailedCreatePodSandBox <invalid> kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "cdb1460246562cac11a57073ab12489dc169cb72aa3371cb2e72489544812a9b" network for pod "wordpress-mysql-74578f5d6d-65jvt": networkPlugin cni failed to set up pod "wordpress-mysql-74578f5d6d-65jvt_produccion" network: invalid character 'i' looking for beginning of value
Warning FailedCreatePodSandBox <invalid> kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "672a2c35c2bb99ebd5b7d180d4426184613c87f9bc606c15526c9d472b54bd6f" network for pod "wordpress-mysql-74578f5d6d-65jvt": networkPlugin cni failed to set up pod "wordpress-mysql-74578f5d6d-65jvt_produccion" network: invalid character 'i' looking for beginning of value
Warning FailedCreatePodSandBox <invalid> kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "de4d7669206f568618a79098d564e76899779f94120bddcee080c75f81243a85" network for pod "wordpress-mysql-74578f5d6d-65jvt": networkPlugin cni failed to set up pod "wordpress-mysql-74578f5d6d-65jvt_produccion" network: invalid character 'i' looking for beginning of value
I was following some guides from the Internet, such as this one: https://www.projectcalico.org/calico-ipam-explained-and-enhanced/
but even that does not work in my lab.
I am pretty new to Kubernetes and I don't know what else to do or check.
Your error is due to an invalid value in the YAML; see the Project Calico documentation on using Kubernetes annotations.
You need to provide a list of IP pools as the annotation value instead of a single string. The following snippet should work for you:
cni.projectcalico.org/ipv4pools: "[\"ippool\"]"
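For context, a minimal sketch of the namespace manifest from the question with the annotation in list form applied, which is all the fix amounts to:
apiVersion: v1
kind: Namespace
metadata:
  name: produccion
  annotations:
    cni.projectcalico.org/ipv4pools: "[\"ippool\"]"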
I was using this image to run my application with docker-compose. However, when I run the same setup on a Kubernetes cluster, I get the error:
[ERROR] Could not open file '/opt/bitnami/mysql/logs/mysqld.log' for error logging: Permission denied
Here's my deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.21.0 ()
  creationTimestamp: null
  labels:
    io.kompose.service: common-db
  name: common-db
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: common-db
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.21.0 ()
      creationTimestamp: null
      labels:
        io.kompose.service: common-db
    spec:
      containers:
        - env:
            - name: ALLOW_EMPTY_PASSWORD
              value: "yes"
            - name: MYSQL_DATABASE
              value: "common-development"
            - name: MYSQL_REPLICATION_MODE
              value: "master"
            - name: MYSQL_REPLICATION_PASSWORD
              value: "repl_password"
            - name: MYSQL_REPLICATION_USER
              value: "repl_user"
          image: bitnami/mysql:5.7
          imagePullPolicy: ""
          name: common-db
          ports:
            - containerPort: 3306
          securityContext:
            runAsUser: 0
          resources:
            requests:
              memory: 512Mi
              cpu: 500m
            limits:
              memory: 512Mi
              cpu: 500m
          volumeMounts:
            - name: common-db-initdb
              mountPath: /opt/bitnami/mysql/conf/my_custom.cnf
      volumes:
        - name: common-db-initdb
          configMap:
            name: common-db-config
      serviceAccountName: ""
status: {}
The ConfigMap contains the my.cnf configuration data. Any pointers on where I could be going wrong, especially since the same image works with docker-compose?
Try changing the file permissions using an init container; the official Bitnami Helm chart also fixes file permissions with an init container and manages the security context.
Helm chart: https://github.com/bitnami/charts/blob/master/bitnami/mysql/templates/master-statefulset.yaml
UPDATE:
initContainers:
  - command:
      - /bin/bash
      - -ec
      - |
        chown -R 1001:1001 /bitnami/mysql
    image: docker.io/bitnami/minideb:buster
    imagePullPolicy: Always
    name: volume-permissions
    resources: {}
    securityContext:
      runAsUser: 0
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
      - mountPath: /bitnami/mysql
        name: data
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
  fsGroup: 1001
  runAsUser: 1001
serviceAccount: mysql
You may also need to use subPath; see the Kubernetes documentation on subPath for details.
volumeMounts:
  - name: common-db-initdb
    mountPath: /opt/bitnami/mysql/conf/my_custom.cnf
    subPath: my_custom.cnf
Also, you can easily install Bitnami MySQL using its Helm chart.
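For completeness, a hedged sketch of that Helm route (Helm 3 syntax; the release name "my-mysql" is just an example, not from the original answer):
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-mysql bitnami/mysql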
I'm having some difficulties deploying an Openshift template, specifically with attaching a persistent volume. The template is meant to deploy Jira and a MySQL database for persistence. I have the following persistent volume configuration deployed:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysqlpv0003
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /var/nfs/mysql
    server: 192.168.0.171
  persistentVolumeReclaimPolicy: Retain
Here 192.168.0.171 is a valid, working NFS server. My aim is to use this persistent volume as storage for the MySQL server. The template I'm trying to deploy is as follows:
---
apiVersion: v1
kind: Template
labels:
  app: jira-persistent
  template: jira-persistent
message: |-
  The following service(s) have been created in your project: ${NAME}, ${DATABASE_SERVICE_NAME}.
metadata:
  annotations:
    description: Deploys an instance of Jira, backed by a mysql database
    iconClass: icon-perl
    openshift.io/display-name: Jira + Mysql
    openshift.io/documentation-url: https://github.com/sclorg/dancer-ex
    openshift.io/long-description: Deploys an instance of Jira, backed by a mysql database
    openshift.io/provider-display-name: ABXY Games, Inc.
    openshift.io/support-url: abxygames.com
    tags: quickstart,JIRA
    template.openshift.io/bindable: 'false'
  name: jira-persistent
objects:
# Database secrets
- apiVersion: v1
  kind: Secret
  metadata:
    name: "${NAME}"
  stringData:
    database-password: "${DATABASE_PASSWORD}"
    database-user: "${DATABASE_USER}"
    keybase: "${SECRET_KEY_BASE}"
# application service
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      description: Exposes and load balances the application pods
      service.alpha.openshift.io/dependencies: '[{"name": "${DATABASE_SERVICE_NAME}",
        "kind": "Service"}]'
    name: "${NAME}"
  spec:
    ports:
    - name: web
      port: 8080
      targetPort: 8080
    selector:
      name: "${NAME}"
# application route
- apiVersion: v1
  kind: Route
  metadata:
    name: "${NAME}"
  spec:
    host: "${APPLICATION_DOMAIN}"
    to:
      kind: Service
      name: "${NAME}"
# application image
- apiVersion: v1
  kind: ImageStream
  metadata:
    annotations:
      description: Keeps track of changes in the application image
    name: "${NAME}"
# Application buildconfig
- apiVersion: v1
  kind: BuildConfig
  metadata:
    annotations:
      description: Defines how to build the application
      template.alpha.openshift.io/wait-for-ready: 'true'
    name: "${NAME}"
  spec:
    output:
      to:
        kind: ImageStreamTag
        name: "${NAME}:latest"
    source:
      contextDir: "${CONTEXT_DIR}"
      git:
        ref: "${SOURCE_REPOSITORY_REF}"
        uri: "${SOURCE_REPOSITORY_URL}"
      type: Git
    strategy:
      dockerStrategy:
        env:
        - name: CPAN_MIRROR
          value: "${CPAN_MIRROR}"
        dockerfilePath: Dockerfile
      type: Source
    triggers:
    - type: ImageChange
    - type: ConfigChange
    - github:
        secret: "${GITHUB_WEBHOOK_SECRET}"
      type: GitHub
# application deployConfig
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    annotations:
      description: Defines how to deploy the application server
      template.alpha.openshift.io/wait-for-ready: 'true'
    name: "${NAME}"
  spec:
    replicas: 1
    selector:
      name: "${NAME}"
    strategy:
      type: Recreate
    template:
      metadata:
        labels:
          name: "${NAME}"
        name: "${NAME}"
      spec:
        containers:
        - env:
          - name: DATABASE_SERVICE_NAME
            value: "${DATABASE_SERVICE_NAME}"
          - name: MYSQL_USER
            valueFrom:
              secretKeyRef:
                key: database-user
                name: "${NAME}"
          - name: MYSQL_PASSWORD
            valueFrom:
              secretKeyRef:
                key: database-password
                name: "${NAME}"
          - name: MYSQL_DATABASE
            value: "${DATABASE_NAME}"
          - name: SECRET_KEY_BASE
            valueFrom:
              secretKeyRef:
                key: keybase
                name: "${NAME}"
          - name: PERL_APACHE2_RELOAD
            value: "${PERL_APACHE2_RELOAD}"
          image: " "
          livenessProbe:
            httpGet:
              path: "/"
              port: 8080
            initialDelaySeconds: 30
            timeoutSeconds: 3
          name: jira-mysql-persistent
          ports:
          - containerPort: 8080
          readinessProbe:
            httpGet:
              path: "/"
              port: 8080
            initialDelaySeconds: 3
            timeoutSeconds: 3
          resources:
            limits:
              memory: "${MEMORY_LIMIT}"
    triggers:
    - imageChangeParams:
        automatic: true
        containerNames:
        - jira-mysql-persistent
        from:
          kind: ImageStreamTag
          name: "${NAME}:latest"
      type: ImageChange
    - type: ConfigChange
# database persistentvolumeclaim
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: "${DATABASE_SERVICE_NAME}"
  spec:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: "${VOLUME_CAPACITY}"
# database service
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      description: Exposes the database server
    name: "${DATABASE_SERVICE_NAME}"
  spec:
    ports:
    - name: mysql
      port: 3306
      targetPort: 3306
    selector:
      name: "${DATABASE_SERVICE_NAME}"
# database deployment config
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    annotations:
      description: Defines how to deploy the database
      template.alpha.openshift.io/wait-for-ready: 'true'
    name: "${DATABASE_SERVICE_NAME}"
  spec:
    replicas: 1
    selector:
      name: "${DATABASE_SERVICE_NAME}"
    strategy:
      type: Recreate
    template:
      metadata:
        labels:
          name: "${DATABASE_SERVICE_NAME}"
        name: "${DATABASE_SERVICE_NAME}"
      spec:
        containers:
        - env:
          - name: MYSQL_USER
            valueFrom:
              secretKeyRef:
                key: database-user
                name: "${NAME}"
          - name: MYSQL_PASSWORD
            valueFrom:
              secretKeyRef:
                key: database-password
                name: "${NAME}"
          - name: MYSQL_DATABASE
            value: "${DATABASE_NAME}"
          image: " "
          livenessProbe:
            initialDelaySeconds: 30
            tcpSocket:
              port: 3306
            timeoutSeconds: 1
          name: mysql
          ports:
          - containerPort: 3306
          readinessProbe:
            exec:
              command:
              - "/bin/sh"
              - "-i"
              - "-c"
              - MYSQL_PWD='${DATABASE_PASSWORD}' mysql -h 127.0.0.1 -u ${DATABASE_USER}
                -D ${DATABASE_NAME} -e 'SELECT 1'
            initialDelaySeconds: 5
            timeoutSeconds: 1
          resources:
            limits:
              memory: "${MEMORY_MYSQL_LIMIT}"
          volumeMounts:
          - mountPath: "/var/lib/mysql/data"
            name: "${DATABASE_SERVICE_NAME}-data"
        volumes:
        - name: "${DATABASE_SERVICE_NAME}-data"
          persistentVolumeClaim:
            claimName: "${DATABASE_SERVICE_NAME}"
    triggers:
    - imageChangeParams:
        automatic: true
        containerNames:
        - mysql
        from:
          kind: ImageStreamTag
          name: mysql:5.7
          namespace: "${NAMESPACE}"
      type: ImageChange
    - type: ConfigChange
parameters:
- description: The name assigned to all of the frontend objects defined in this template.
  displayName: Name
  name: NAME
  required: true
  value: jira-persistent
- description: The OpenShift Namespace where the ImageStream resides.
  displayName: Namespace
  name: NAMESPACE
  required: true
  value: openshift
- description: Maximum amount of memory the JIRA container can use.
  displayName: Memory Limit
  name: MEMORY_LIMIT
  required: true
  value: 512Mi
- description: Maximum amount of memory the MySQL container can use.
  displayName: Memory Limit (MySQL)
  name: MEMORY_MYSQL_LIMIT
  required: true
  value: 512Mi
- description: Volume space available for data, e.g. 512Mi, 2Gi
  displayName: Volume Capacity
  name: VOLUME_CAPACITY
  required: true
  value: 1Gi
- description: The URL of the repository with your application source code.
  displayName: Git Repository URL
  name: SOURCE_REPOSITORY_URL
  required: true
  value: https://github.com/stpork/jira.git
- description: Set this to a branch name, tag or other ref of your repository if you
    are not using the default branch.
  displayName: Git Reference
  name: SOURCE_REPOSITORY_REF
- description: Set this to the relative path to your project if it is not in the root
    of your repository.
  displayName: Context Directory
  name: CONTEXT_DIR
- description: The exposed hostname that will route to the jira service, if left
    blank a value will be defaulted.
  displayName: Application Hostname
  name: APPLICATION_DOMAIN
  value: ''
- description: Github trigger secret. A difficult to guess string encoded as part
    of the webhook URL. Not encrypted.
  displayName: GitHub Webhook Secret
  from: "[a-zA-Z0-9]{40}"
  generate: expression
  name: GITHUB_WEBHOOK_SECRET
- displayName: Database Service Name
  name: DATABASE_SERVICE_NAME
  required: true
  value: database
- displayName: Database Username
  from: user[A-Z0-9]{3}
  generate: expression
  name: DATABASE_USER
- displayName: Database Password
  from: "[a-zA-Z0-9]{8}"
  generate: expression
  name: DATABASE_PASSWORD
- displayName: Database Name
  name: DATABASE_NAME
  required: true
  value: sampledb
- description: Set this to "true" to enable automatic reloading of modified Perl modules.
  displayName: Perl Module Reload
  name: PERL_APACHE2_RELOAD
  value: ''
- description: Your secret key for verifying the integrity of signed cookies.
  displayName: Secret Key
  from: "[a-z0-9]{127}"
  generate: expression
  name: SECRET_KEY_BASE
- description: The custom CPAN mirror URL
  displayName: Custom CPAN Mirror URL
  name: CPAN_MIRROR
  value: ''
When run, the deployment for the MySQL server eventually fails with the following error:
Unable to mount volumes for pod
"database-1-qvv86_test3(54f01c55-6885-11e9-bc42-3a342852673a)":
timeout expired waiting for volumes to attach or mount for pod
"test3"/"database-1-qvv86". list of unmounted volumes=[database-data
default-token-8hjgv]. list of unattached volumes=[database-data
default-token-8hjgv]
The persistent volume claim binds to the persistent volume successfully, but as far as I can tell the pod is not mounting that volume. The template is being deployed in a fresh project, the PV is freshly created, and the NFS export is empty. I can't see any errors in how the pod references the persistent volume claim. I'm not sure why this error occurs; I'm just learning templates and am clearly missing something. Does anyone see what I'm missing?
The issue was in my NFS permissions. Here is the working content of my /etc/exports file:
/var/nfs *(rw,root_squash,no_wdelay)
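As a hedged follow-up (not part of the original answer): after editing /etc/exports on the NFS server, re-export the shares so the change takes effect:
sudo exportfs -ra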
I have two Openshift environments that I publish to from a Gitlab pipeline. Let's call the first one DEV; it runs Openshift v3.7.1+c2ce2c0-1. The second one, INT, runs v3.6.1+008f2d5. Recently DEV was upgraded from 3.6.1 to 3.7.1, and after that I noticed a strange change in the behavior of redeployment triggers.
In short, existing deployments in the DEV environment are triggered to redeploy with an "Image changed" message when an unchanged deployment config template is applied, even though the Docker image also remains unchanged. This means that, for example, a MongoDB or Jenkins deployment recreates all containers and loses all data with every run of the CI pipeline.
Yes, there is theoretically the possibility of using persistent volumes, but the Openshift installations are not under my control. The point here is that redeployments happen even though neither the image nor the configuration changes.
The command that I am using from Gitlab is this:
oc process -f openshift-mongodb-ephemeral.yml -v MONGODB_DATABASE=mydb -v DATABASE_SERVICE_NAME=mongodb -l template=northbound-mongodb | oc apply -f -
Here is one of the deployment config templates:
apiVersion: v1
kind: Template
labels:
  template: mongodb-ephemeral-template
metadata:
  annotations:
    description: |-
      MongoDB database service, without persistent storage. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/mongodb-container/blob/master/3.2/README.md.
      WARNING: Any data stored will be lost upon pod destruction. Only use this template for testing
    iconClass: icon-mongodb
    openshift.io/display-name: MongoDB (Ephemeral)
    tags: database,mongodb
  creationTimestamp: 2017-03-14T11:25:13Z
  name: mongodb-ephemeral
  resourceVersion: "483"
  selfLink: /oapi/v1/namespaces/openshift/templates/mongodb-ephemeral
  uid: e41b7f8e-08a8-11e7-9120-000d3a266151
objects:
- apiVersion: v1
  kind: Service
  metadata:
    creationTimestamp: null
    name: ${DATABASE_SERVICE_NAME}
  spec:
    ports:
    - name: mongo
      nodePort: 0
      port: 27017
      protocol: TCP
      targetPort: 27017
    selector:
      name: ${DATABASE_SERVICE_NAME}
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    creationTimestamp: null
    name: ${DATABASE_SERVICE_NAME}
  spec:
    replicas: 1
    selector:
      name: ${DATABASE_SERVICE_NAME}
    strategy:
      type: Recreate
    template:
      metadata:
        creationTimestamp: null
        labels:
          name: ${DATABASE_SERVICE_NAME}
      spec:
        containers:
        - capabilities: {}
          env:
          - name: MONGODB_USER
            value: ${MONGODB_USER}
          - name: MONGODB_PASSWORD
            valueFrom:
              secretKeyRef:
                key: database-password
                name: myproject-secrets
          - name: MONGODB_ADMIN_PASSWORD
            valueFrom:
              secretKeyRef:
                key: database-admin-password
                name: myproject-secrets
          - name: MONGODB_DATABASE
            value: ${MONGODB_DATABASE}
          image: ' '
          imagePullPolicy: IfNotPresent
          livenessProbe:
            initialDelaySeconds: 30
            tcpSocket:
              port: 27017
            timeoutSeconds: 1
          name: mongodb
          ports:
          - containerPort: 27017
            protocol: TCP
          readinessProbe:
            exec:
              command:
              - /bin/sh
              - -i
              - -c
              - mongo 127.0.0.1:27017/$MONGODB_DATABASE -u $MONGODB_USER -p $MONGODB_PASSWORD
                --eval="quit()"
            initialDelaySeconds: 3
            timeoutSeconds: 1
          resources:
            limits:
              memory: ${MEMORY_LIMIT}
          securityContext:
            capabilities: {}
            privileged: false
          terminationMessagePath: /dev/termination-log
          volumeMounts:
          - mountPath: /var/lib/mongodb/data
            name: ${DATABASE_SERVICE_NAME}-data
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        volumes:
        - emptyDir:
            medium: ""
          name: ${DATABASE_SERVICE_NAME}-data
    triggers:
    - imageChangeParams:
        automatic: true
        containerNames:
        - mongodb
        from:
          kind: ImageStreamTag
          name: mongodb:${MONGODB_VERSION}
          namespace: ${NAMESPACE}
        lastTriggeredImage: ""
      type: ImageChange
    - type: ConfigChange
  status: {}
parameters:
- description: Maximum amount of memory the container can use.
  displayName: Memory Limit
  name: MEMORY_LIMIT
  required: true
  value: 512Mi
- description: The OpenShift Namespace where the ImageStream resides.
  displayName: Namespace
  name: NAMESPACE
  value: openshift
- description: The name of the OpenShift Service exposed for the database.
  displayName: Database Service Name
  name: DATABASE_SERVICE_NAME
  required: true
  value: mongodb
- description: Name of the MongoDB database accessed.
  displayName: MongoDB Database Name
  name: MONGODB_DATABASE
  required: true
  value: sampledb
- description: Version of MongoDB image to be used (2.4, 2.6, 3.2 or latest).
  displayName: Version of MongoDB Image
  name: MONGODB_VERSION
  required: true
  value: "3.2"
- description: Name of user to access MongoDB.
  displayName: MongoDB user
  name: MONGODB_USER
  required: true
  value: "mongouser"
The oc process and oc apply commands are executed with each run of the Gitlab CI pipeline, which triggers whenever someone merges into, for example, the develop branch. I would like to keep this behavior because it guarantees that if someone changes the configuration of MongoDB, Jenkins, etc., those services get updated and redeployed automatically (in which case a loss of data is acceptable).
Does anybody know what change in Openshift could have prompted this change in behavior, and how to get the old behavior back?
Credit goes to Graham Dumpleton, who answered my question, albeit through a comment. Since I did not hear back from him, I am leaving this answer myself for completeness' sake.
After removing the entries
creationTimestamp
lastTriggeredImage
status
image
from the deployment config template, the unwanted redeployments stopped. I do not know which entry specifically started triggering the behavior in Openshift 3.7.1, but in any case these four entries are information generated during the Openshift deployment process and should not be in a template to begin with.
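To make that concrete, here is a sketch of the cleaned-up trigger section of the template above with the generated fields dropped; this illustrates the answer rather than reproducing the author's exact diff:
    triggers:
    - imageChangeParams:
        automatic: true
        containerNames:
        - mongodb
        from:
          kind: ImageStreamTag
          name: mongodb:${MONGODB_VERSION}
          namespace: ${NAMESPACE}
        # lastTriggeredImage: "" removed here; image: ' ' removed from the container spec
      type: ImageChange
    - type: ConfigChange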
Running under Kubernetes 1.2.0 / CoreOS 991.1.0 / Google Compute Engine, Heapster 0.18.2 fails because it does not recognize the source kubernetes.summary_api. How do I solve this?
The Log of the Failing Heapster Controller:
I0415 07:23:58.623481 1 heapster.go:55] /heapster --source=kubernetes.summary_api:'' --sink=gcm --sink=gcmautoscaling --sink=gcl --stats_resolution=30s --sink_frequency=1m
I0415 07:23:58.623616 1 heapster.go:56] Heapster version 0.18.2
F0415 07:23:58.623654 1 heapster.go:62] Unknown source: kubernetes.summary_api
The Heapster Kubernetes ReplicationController Spec:
apiVersion: v1
kind: ReplicationController
metadata:
  name: heapster-v10
  namespace: kube-system
  labels:
    k8s-app: heapster
    version: v10
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: heapster
    version: v10
  template:
    metadata:
      labels:
        k8s-app: heapster
        version: v10
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
        - image: gcr.io/google_containers/heapster:v0.18.2
          name: heapster
          resources:
            limits:
              cpu: 100m
              memory: 300Mi
          command:
            - /heapster
            - --source=kubernetes.summary_api:''
            - --sink=gcm
            - --sink=gcmautoscaling
            - --sink=gcl
            - --stats_resolution=30s
            - --sink_frequency=1m
          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs
              readOnly: true
            - name: usrsharecacerts
              mountPath: /usr/share/ca-certificates
              readOnly: true
      volumes:
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs
        - name: usrsharecacerts
          hostPath:
            path: /usr/share/ca-certificates
That's a bug in the manifest. Support for the kubelet summary API was not added until a later version (starting at v0.20.0-alpha8). You can either change to a more recent heapster version (the default manifest uses v1.0.2), or you can revert to the old (cAdvisor API) source: --source=kubernetes:''
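A minimal sketch of the second option, keeping Heapster v0.18.2 and only switching the source flag in the container command (the other flags stay as in the question's manifest):
          command:
            - /heapster
            - --source=kubernetes:''    # cAdvisor-based source, supported by v0.18.2
            - --sink=gcm
            - --sink=gcmautoscaling
            - --sink=gcl
            - --stats_resolution=30s
            - --sink_frequency=1m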