How to use an image stream in a deployment configuration for OpenShift

I want my deploy configuration to use an image that was the output of a build configuration.
I am currently using something like this:
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    annotations:
      openshift.io/generated-by: OpenShiftNewApp
    creationTimestamp: null
    labels:
      app: myapp
    name: myapp
  spec:
    replicas: 1
    selector:
      app: myapp
      deploymentconfig: myapp
    strategy:
      resources: {}
    template:
      metadata:
        annotations:
          openshift.io/container.myapp.image.entrypoint: '["python3"]'
          openshift.io/generated-by: OpenShiftNewApp
        creationTimestamp: null
        labels:
          app: myapp
          deploymentconfig: myapp
      spec:
        containers:
        - name: myapp
          image: 123.123.123.123/myproject/myapp-staging:latest
          resources: {}
          command:
          - scripts/start_server.sh
          ports:
          - containerPort: 8000
    test: false
    triggers: []
  status: {}
I had to hard-code the integrated docker registry's IP address; otherwise Kubernetes/OpenShift is not able to find the image to pull down. I would like to not hard-code the integrated docker registry's IP address, and instead use something like this:
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    annotations:
      openshift.io/generated-by: OpenShiftNewApp
    creationTimestamp: null
    labels:
      app: myapp
    name: myapp
  spec:
    replicas: 1
    selector:
      app: myapp
      deploymentconfig: myapp
    strategy:
      resources: {}
    template:
      metadata:
        annotations:
          openshift.io/container.myapp.image.entrypoint: '["python3"]'
          openshift.io/generated-by: OpenShiftNewApp
        creationTimestamp: null
        labels:
          app: myapp
          deploymentconfig: myapp
      spec:
        containers:
        - name: myapp
          from:
            kind: "ImageStreamTag"
            name: "myapp-staging:latest"
          resources: {}
          command:
          - scripts/start_server.sh
          ports:
          - containerPort: 8000
    test: false
    triggers: []
  status: {}
But this causes Kubernetes/OpenShift to complain with:
The DeploymentConfig "myapp" is invalid.
spec.template.spec.containers[0].image: required value
How can I specify the output of a build configuration as the image to use in a deploy configuration?
Thank you for your time!
Also, oddly enough, if I link the deploy configuration to the build configuration with a trigger, Kubernetes/OpenShift knows to look in the integrated Docker registry for the image:
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    annotations:
      openshift.io/generated-by: OpenShiftNewApp
    creationTimestamp: null
    labels:
      app: myapp-staging
    name: myapp-staging
  spec:
    replicas: 1
    selector:
      app: myapp-staging
      deploymentconfig: myapp-staging
    strategy:
      resources: {}
    template:
      metadata:
        annotations:
          openshift.io/container.myapp.image.entrypoint: '["python3"]'
          openshift.io/generated-by: OpenShiftNewApp
        creationTimestamp: null
        labels:
          app: myapp-staging
          deploymentconfig: myapp-staging
      spec:
        containers:
        - name: myapp-staging
          image: myapp-staging:latest
          resources: {}
          command:
          - scripts/start_server.sh
          ports:
          - containerPort: 8000
    test: false
    triggers:
    - type: "ImageChange"
      imageChangeParams:
        automatic: true
        containerNames:
        - myapp-staging
        from:
          kind: ImageStreamTag
          name: myapp-staging:latest
  status: {}
But I don't want the automated triggering...
Update 1 (11/21/2016):
Configuring the trigger but leaving it disabled (and manually triggering the deploy) still left the deployment unable to find the image:
$ oc describe pod myapp-1-oodr5
Name:            myapp-1-oodr5
Namespace:       myproject
Security Policy: restricted
Node:            node.url/123.123.123.123
Start Time:      Mon, 21 Nov 2016 09:20:26 -1000
Labels:          app=myapp
                 deployment=myapp-1
                 deploymentconfig=myapp
Status:          Pending
IP:              123.123.123.123
Controllers:     ReplicationController/myapp-1
Containers:
  myapp:
    Container ID:
    Image:          myapp-staging:latest
    Image ID:
    Port:           8000/TCP
    Command:
      scripts/start_server.sh
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-goe98 (ro)
    Environment Variables:
      ALLOWED_HOSTS: myapp-myproject.url
Conditions:
  Type   Status
  Ready  False
Volumes:
  default-token-goe98:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-goe98
QoS Tier: BestEffort
Events:
  FirstSeen LastSeen Count From SubobjectPath Type Reason Message
  --------- -------- ----- ---- ------------- -------- ------ -------
  42s 42s 1 {scheduler } Scheduled Successfully assigned myapp-1-oodr5 to node.url
  40s 40s 1 {kubelet node.url} implicitly required container POD Pulled Container image "openshift3/ose-pod:v3.1.1.7" already present on machine
  40s 40s 1 {kubelet node.url} implicitly required container POD Created Created with docker id d3318e880e4a
  40s 40s 1 {kubelet node.url} implicitly required container POD Started Started with docker id d3318e880e4a
  40s 24s 2 {kubelet node.url} spec.containers{myapp} Pulling pulling image "myapp-staging:latest"
  38s 23s 2 {kubelet node.url} spec.containers{myapp} Failed Failed to pull image "myapp-staging:latest": Error: image library/myapp-staging:latest not found
  35s 15s 2 {kubelet node.url} spec.containers{myapp} Back-off Back-off pulling image "myapp-staging:latest"
Update 2 (08/23/2017):
In case this helps others, here's a summary of the solution.
triggers:
- type: "ImageChange"
  imageChangeParams:
    automatic: true # this is required to link the build and deployment
    containerNames:
    - myapp-staging
    from:
      kind: ImageStreamTag
      name: myapp-staging:latest
With the trigger and automatic set to true, the deployment should use the build's image in the internal registry.
The other comments about keeping the build from triggering a deploy relate to a separate requirement: wanting to manually deploy images from the internal registry. Here's more information about that portion:
The build needs to trigger the deployment at least once before automatic is set to false. So for a while, I was:
setting automatic to true
initiating a build and deploy
after the deployment finished, manually changing automatic to false
manually triggering a deployment later (see the command sketch just after this list; I did not verify whether this deployed the older, out-of-date image or not)
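By "manually trigger a deployment" I mean something like the following (a sketch; oc deploy was the spelling on older 3.x clients such as the 3.1 cluster above, while newer oc clients use the rollout subcommand):
$ oc deploy myapp-staging --latest
$ oc rollout latest dc/myapp-staging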
I was initially trying to use this manual deployment as a way for a non-developer to go into the web console and make deployments. That requirement has since been removed, so having the build trigger a deployment each time works just fine for us now. Builds can build different branches and tag the images differently, and deployments can then use the appropriately tagged images.
Hope that helps!

Are you constructing the resource definitions by hand?
It would be easier to use oc new-build and then oc new-app if you really need to set this up as two steps for some reason. If you just want to set up the build and deployment in one go, just use oc new-app.
For example, to set up the build and deployment in one go use:
oc new-app --name myapp <repository-url>
To do it in two steps use:
oc new-build --name myapp <repository-url>
oc new-app myapp
If you would still rather use hand-created resources, at least use the single-step variant with the --dry-run -o yaml options to see what it would create for the image stream, plus the build and deployment configuration. That way you can learn from it how to do it. The bit you are currently missing is an image stream.
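For reference, the missing piece looks roughly like this (a minimal sketch; the name is assumed to match your myapp-staging build output, and oc new-build / oc new-app will generate an equivalent object for you):
apiVersion: v1
kind: ImageStream
metadata:
  name: myapp-staging
The deployment configuration then refers to it through the ImageChange trigger's from: ImageStreamTag block, as in your third example, rather than through a from field on the container, which is why your second attempt was rejected.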
BTW, it looks a bit suspicious that you have the entrypoint set to python3. That is highly unusual. What are you trying to do? Right now it looks like you may be approaching this in a way that doesn't match how OpenShift works. OpenShift is mainly about long-running processes, not one-off docker run invocations. You can do the latter, but not the way you are currently doing it.

Related

StatefulSet unable to roll back if the pods are not in Running state

I have deployed mongo stateful pods with an automatic rolling strategy, and below is the template for it. The deployment is successful and the pods are in the Running state.
- apiVersion: apps/v1beta1
  kind: StatefulSet
  metadata:
    name: mongo
  spec:
    serviceName: "mongo"
    podManagementPolicy: Parallel
    replicas: 3
    strategy:
      type: Rolling
    template:
      metadata:
        labels:
          role: mongo
          environment: test
      spec:
        terminationGracePeriodSeconds: 10
        containers:
        - name: mongo
          image: mongo:4.0
          imagePullPolicy: Always
          command:
          - mongod
          - "--replSet"
          - rs0
          - "--bind_ip"
          - 0.0.0.0
          - "--smallfiles"
          - "--noprealloc"
          ports:
          - containerPort: 27017
          volumeMounts:
          - name: mongo-persistent-storage
            mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
          - name: MONGO_SIDECAR_POD_LABELS
            value: "role=mongo,environment=test"
    updateStrategy:
      type: RollingUpdate
I am trying to update the mongo image using the following set command:
oc set image statefulset/mongo mongo=mongo:4.2 -n mongo-replica
While trying to update the image, the pods go into a "CrashLoopBackOff" error state. I am expecting the pods to be automatically rolled back to the previous running version.
But the pods are stuck in the "CrashLoopBackOff" error state. I want the pods to be rolled back to the previous running version. Any suggestions here would be appreciated.
StatefulSets unfortunately don't have a rollback, but you can safeguard your service using probes. With well-configured liveness and readiness probes, the changed version will only take the place of the running version once the probes report an OK status.
That way only one of your 3 replicas will crash on a failure, and you can work on it to solve the problem or manually roll back your changes without losing the availability of your service.
More detail about this is in the Kubernetes documentation:
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#forced-rollback
You can find a good explanation of the probes here:
https://www.openshift.com/blog/liveness-and-readiness-probes
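For example, a minimal sketch of such probes for the mongo container above (the exec command and the timings are assumptions you would tune for your replica set):
livenessProbe:
  exec:
    command: ["mongo", "--eval", "db.adminCommand('ping')"]
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  exec:
    command: ["mongo", "--eval", "db.adminCommand('ping')"]
  initialDelaySeconds: 5
  periodSeconds: 10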

kubectl cannot access pod application

I have this pod specification:
apiVersion: v1
kind: Pod
metadata:
  name: wp
spec:
  containers:
  - image: wordpress:4.9-apache
    name: wordpress
    env:
    - name: WORDPRESS_DB_PASSWORD
      value: mysqlpwd
    - name: WORDPRESS_DB_HOST
      value: 127.0.0.1
  - image: mysql:5.7
    name: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: mysqlpwd
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql
  volumes:
  - name: data
    emptyDir: {}
I deployed it using:
kubectl create -f wordpress-pod.yaml
Now it is correctly deployed:
kubectl get pods
wp 2/2 Running 3 35h
Then when I do:
kubectl describe po/wp
Name: wp
Namespace: default
Priority: 0
Node: node3/192.168.50.12
Start Time: Mon, 13 Jan 2020 23:27:16 +0100
Labels: <none>
Annotations: <none>
Status: Running
IP: 10.233.92.7
IPs:
IP: 10.233.92.7
Containers:
My issue is that I cannot access the app:
wget http://192.168.50.12:8080/wp-admin/install.php
Connecting to 192.168.50.12:8080... failed: Connection refused.
Neither does wget http://10.233.92.7:8080/wp-admin/install.php
work.
Is there any issue in the pod description or deployment process?
Thanks
With your current setup you need to run wget http://10.233.92.7:8080/wp-admin/install.php from within the cluster, i.e. by performing kubectl exec into another pod, because the 10.233.92.7 IP is valid only within the cluster.
You should create a service for exposing your pod. Create a ClusterIP type service (the default) for access from within the cluster. If you want access from outside the cluster, i.e. from your desktop, then create a NodePort or LoadBalancer type service.
Another way to access the application from your desktop is port forwarding. In this case you don't need to create a service.
Here is a tutorial for accessing pods using a NodePort service. In this case your node needs to have a public IP.
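For the port-forwarding route, a minimal sketch (the wordpress apache image serves on container port 80, which is also why port 8080 was refused):
$ kubectl port-forward pod/wp 8080:80
Then browse to http://localhost:8080/wp-admin/install.php from your desktop.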
The problem with your configuration is the lack of a service that would allow external access to your WordPress.
There are a lot of materials explaining the options and how they depend on the infrastructure that Kubernetes runs on.
Let me elaborate on 3 of them:
minikube
kubeadm
cloud provisioned (GKE, EKS, AKS)
The base of the WordPress configuration will be the same in each case.
Table of contents:
Running MySQL
Secret
PersistentVolumeClaim
Deployment
Service
Running WordPress
PersistentVolumeClaim
Deployment
Allowing external access
minikube
kubeadm
cloud provisioned (GKE)
There is a good tutorial on the Kubernetes site: HERE!
Running MySQL
Secret
As the official Kubernetes documentation puts it:
Kubernetes secret objects let you store and manage sensitive information, such as passwords, OAuth tokens, and ssh keys. Putting this information in a secret is safer and more flexible than putting it verbatim in a Pod definition or in a container image.
-- Kubernetes secrets
The example below is a YAML definition of a secret used for the MySQL password:
apiVersion: v1
kind: Secret
metadata:
  name: mysql-password
type: Opaque
data:
  password: c3VwZXJoYXJkcGFzc3dvcmQK
Take a specific look at:
password: c3VwZXJoYXJkcGFzc3dvcmQK
This password is base64 encoded.
To create this password, invoke the following command from your terminal (use echo -n instead if you don't want a trailing newline included in the encoded value):
$ echo "YOUR_PASSWORD" | base64
Paste the output into the YAML definition and apply it with:
$ kubectl apply -f FILE_NAME.
You can check if it was created correctly with:
$ kubectl get secret mysql-password -o yaml
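If you want to double-check that the stored value decodes back to what you expect, a quick sketch:
$ kubectl get secret mysql-password -o jsonpath='{.data.password}' | base64 --decode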
PersistentVolumeClaim
MySQL requires dedicated space for storing its data. There is official documentation explaining it: Persistent Volumes
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
The above YAML will create a storage claim for MySQL. Apply it with the command:
$ kubectl apply -f FILE_NAME.
Deployment
Create a YAML definition of a deployment from the official example and adjust it if you changed any of the object names:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-password
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
Take a specific look at the part below, which passes the secret password to the MySQL pod:
- name: MYSQL_ROOT_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mysql-password
      key: password
Apply it with command: $ kubectl apply -f FILE_NAME.
Service
What was missing in your configuration was service objects. These objects allow communication with other pods, external traffic, etc. Look at the example below:
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
  - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
This definition will create an object which points to the MySQL pod.
It will create a DNS entry named wordpress-mysql that resolves to the IP address of the pod.
It will not expose the pod to external traffic, as that is not needed.
Apply it with command: $ kubectl apply -f FILE_NAME.
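You can verify the DNS entry from inside the cluster with a throwaway pod (a quick sketch; busybox:1.28 is used because its nslookup behaves well with cluster DNS):
$ kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup wordpress-mysql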
Running WordPress
PersistentVolumeClaim
Just like MySQL, WordPress requires dedicated space for storing its data. Create it with the example below:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Apply it with command: $ kubectl apply -f FILE_NAME.
Deployment
Create a YAML definition of WordPress as in the example below:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.8-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-password
              key: password
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wp-pv-claim
Take a specific look at:
- name: WORDPRESS_DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mysql-password
      key: password
This part passes the secret value to the deployment.
The definition below tells WordPress where MySQL is located:
- name: WORDPRESS_DB_HOST
  value: wordpress-mysql
Apply it with command: $ kubectl apply -f FILE_NAME.
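At this point you can check that both tiers are up (a quick sketch using the shared app label):
$ kubectl get pods -l app=wordpress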
Allowing external access
There are many different approaches for configuring external access to applications.
Minikube
Configuration could differ between different hypervisors.
For example Minikube can expose WordPress to external traffic with:
NodePort
apiVersion: v1
kind: Service
metadata:
  name: wordpress-nodeport
spec:
  type: NodePort
  selector:
    app: wordpress
    tier: frontend
  ports:
  - name: wordpress-port
    protocol: TCP
    port: 80
    targetPort: 80
After applying this definition you will need to enter the minikube IP address with the appropriate port in your web browser.
This port can be found with the command:
$ kubectl get svc wordpress-nodeport
Output of the above command:
wordpress-nodeport NodePort 10.76.9.15 <none> 80:30173/TCP 8s
In this case it is 30173.
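Alternatively, minikube can print the ready-made URL for you (assuming a standard minikube install):
$ minikube service wordpress-nodeport --url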
LoadBalancer
In this case a NodePort will be created as well!
apiVersion: v1
kind: Service
metadata:
  name: wordpress-loadbalancer
  labels:
    app: wordpress
spec:
  ports:
  - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
Ingress resource
Please refer to this link: Minikube: create-an-ingress-resource
Also you can refer to this Stack Overflow post
Kubeadm
With the Kubernetes clusters provided by kubeadm there are:
NodePort
The configuration process is the same as in minikube. The only difference is that it will create a NodePort on every node in the cluster. After that you can enter the IP address of any of the nodes with the appropriate port. Be aware that you will need to be on the same network, with no firewall blocking your access.
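To find the node IP addresses to use, for example (a quick sketch):
$ kubectl get nodes -o wide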
LoadBalancer
You can create a LoadBalancer object with the same YAML definition as in minikube. The problem is that with kubeadm provisioning on a bare-metal cluster, the LoadBalancer will not get an IP address. One of the options is: MetalLB
Ingress
Ingress resources share the same problem as LoadBalancer on kubeadm-provisioned infrastructure. As above, one of the options is: MetalLB.
Cloud Provisioned
There are many options, strictly related to the cloud that Kubernetes runs on. Below is an example of configuring an Ingress resource with the NGINX controller on GKE:
Apply both of the YAML definitions:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.1/deploy/static/mandatory.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.1/deploy/static/provider/cloud-generic.yaml
Apply the NodePort definition from the minikube section
Create the Ingress resource:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: wordpress-nodeport
          servicePort: wordpress-port
Apply it with command: $ kubectl apply -f FILE_NAME.
Check if the Ingress resource got an address from the cloud provider with the command:
$ kubectl get ingress
The output should look like that:
NAME HOSTS ADDRESS PORTS AGE
ingress * XXX.XXX.XXX.X 80 26m
After entering the IP address from the above command in your browser, you should get the WordPress installation page.
The cloud-provisioned example can be used for kubeadm-provisioned clusters with MetalLB configured.

Kubernetes MySQL image persistent volume is non-empty during init

I am working through the persistent disks tutorial found here, while also creating it as a StatefulSet instead of a Deployment.
When I run the YAML file in GKE, the database fails to start; looking at the logs, it has the following error:
[ERROR] --initialize specified but the data directory has files in it. Aborting.
Is it possible to inspect the volume that was created to see what is in the directory? Otherwise, what am I doing wrong that is causing the disk to be non-empty?
Thanks
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datalayer-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
---
apiVersion: v1
kind: Service
metadata:
  name: datalayer-svc
  labels:
    app: myapplication
spec:
  ports:
  - port: 80
    name: dbadmin
  clusterIP: None
  selector:
    app: database
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: datalayer
spec:
  selector:
    matchLabels:
      app: myapplication
  serviceName: "datalayer-svc"
  replicas: 1
  template:
    metadata:
      labels:
        app: myapplication
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: database
        image: mysql:5.7.22
        env:
        - name: "MYSQL_ROOT_PASSWORD"
          valueFrom:
            secretKeyRef:
              name: mysql-root-password
              key: password
        - name: "MYSQL_DATABASE"
          value: "appdatabase"
        - name: "MYSQL_USER"
          value: "app_user"
        - name: "MYSQL_PASSWORD"
          value: "app_password"
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: datalayer-pv
          mountPath: /var/lib/mysql
      volumes:
      - name: datalayer-pv
        persistentVolumeClaim:
          claimName: datalayer-pvc
This issue could be caused by the lost+found directory on the filesystem of the PersistentVolume.
I was able to verify this by adding a k8s.gcr.io/busybox container (in the PVC, set accessModes: [ReadWriteMany], OR comment out the database container):
- name: init
  image: "k8s.gcr.io/busybox"
  command: ["/bin/sh","-c","ls -l /var/lib/mysql"]
  volumeMounts:
  - name: database
    mountPath: "/var/lib/mysql"
There are a few potential workarounds...
Most preferable is to use a subPath on the volumeMounts object. This uses a subdirectory of the PersistentVolume, which should be empty at creation time, instead of the volume root:
volumeMounts:
- name: database
  mountPath: "/var/lib/mysql"
  subPath: mysql
Less preferable workarounds include:
Use a one-time container to rm -rf /var/lib/mysql/lost+found (not a great solution, because the directory is managed by the filesystem and is likely to re-appear)
Use mysql:5 image, and add args: ["--ignore-db-dir=lost+found"] to the container (this option was removed in mysql 8)
Use mariadb image instead of mysql
More details might be available at docker-library/mysql issues: #69 and #186
You would usually check whether your volumes were mounted with:
kubectl get pods # gets you all the pods on the default namespace
# and
kubectl describe pod <pod-created-by-your-statefulset>
Then you can use these commands to check on your PVs and PVCs:
kubectl get pv # gets all the PVs on the default namespace
kubectl get pvc # same for PVCs
kubectl describe pv <pv-name>
kubectl describe pvc <pvc-name>
Then you can go to the GCP console, under Disks, and see if your disks were created.
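Or from the command line, assuming the gcloud SDK is pointed at the same project:
$ gcloud compute disks list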
You can add the --ignore-db-dir=lost+found argument:
containers:
- image: mysql:5.7
  name: mysql
  args:
  - "--ignore-db-dir=lost+found"
I use an init container to remove that directory (I'm using Postgres in my case):
initContainers:
- name: busybox
  image: busybox:latest
  args: ["rm", "-rf", "/var/lib/postgresql/data/lost+found"]
  volumeMounts:
  - name: nfsv-db
    mountPath: /var/lib/postgresql/data

Can't Share a Persistent Volume Claim for an EBS Volume between Apps

Is it possible to share a single persistent volume claim (PVC) between two apps (each using a pod)?
I read: Share persistent volume claims amongst containers in Kubernetes/OpenShift but didn't quite get the answer.
I tried to add a PHP app and a MySQL app (with persistent storage) within the same project. I deleted the original persistent volume (PV) and created a new one with ReadWriteMany access mode. I set the root password of the MySQL database, and the database works.
Then I added storage to the PHP app using the same persistent volume claim with a different subPath. I found that I can't turn on both apps. After I turn one on, the other gets stuck at creating container when I try to turn it on.
MySQL .yaml from the deployment step in OpenShift:
...
template:
  metadata:
    creationTimestamp: null
    labels:
      name: mysql
  spec:
    volumes:
    - name: mysql-data
      persistentVolumeClaim:
        claimName: mysql
    containers:
    - name: mysql
      ...
      volumeMounts:
      - name: mysql-data
        mountPath: /var/lib/mysql/data
        subPath: mysql/data
      ...
      terminationMessagePath: /dev/termination-log
      imagePullPolicy: IfNotPresent
    restartPolicy: Always
    terminationGracePeriodSeconds: 30
    dnsPolicy: ClusterFirst
PHP .yaml from deployment step:
template:
  metadata:
    creationTimestamp: null
    labels:
      app: wiki2
      deploymentconfig: wiki2
  spec:
    volumes:
    - name: volume-959bo    <<----
      persistentVolumeClaim:
        claimName: mysql
    containers:
    - name: wiki2
      ...
      volumeMounts:
      - name: volume-959bo
        mountPath: /opt/app-root/src/w/images
        subPath: wiki/images
      terminationMessagePath: /dev/termination-log
      imagePullPolicy: Always
    restartPolicy: Always
    terminationGracePeriodSeconds: 30
    dnsPolicy: ClusterFirst
    securityContext: {}
The volume mount names are different, but that shouldn't prevent the two pods from sharing the PVC. Or is the problem that they can't both mount the same volume at the same time? I can't get the termination log at /dev because if the pod can't mount the volume, it doesn't start, and I can't get the log.
The PVC's .yaml (oc get pvc -o yaml)
apiVersion: v1
items:
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    annotations:
      pv.kubernetes.io/bind-completed: "yes"
      pv.kubernetes.io/bound-by-controller: "yes"
      volume.beta.kubernetes.io/storage-class: ebs
      volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
    creationTimestamp: YYYY-MM-DDTHH:MM:SSZ
    name: mysql
    namespace: abcdefghi
    resourceVersion: "123456789"
    selfLink: /api/v1/namespaces/abcdefghi/persistentvolumeclaims/mysql
    uid: ________-____-____-____-____________
  spec:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: 1Gi
    volumeName: pvc-________-____-____-____-____________
  status:
    accessModes:
    - ReadWriteMany
    capacity:
      storage: 1Gi
    phase: Bound
kind: List
metadata: {}
resourceVersion: ""
selfLink: ""
Suspicious Entries from oc get events
Warning FailedMount {controller-manager }
Failed to attach volume "pvc-________-____-____-____-____________"
on node "ip-172-__-__-___.xx-xxxx-x.compute.internal"
with:
Error attaching EBS volume "vol-000a00a00000000a0" to instance
"i-1111b1b11b1111111": VolumeInUse: vol-000a00a00000000a0 is
already attached to an instance
Warning FailedMount {kubelet ip-172-__-__-___.xx-xxxx-x.compute.internal}
Unable to mount volumes for pod "the pod for php app":
timeout expired waiting for volumes to attach/mount for pod "the pod".
list of unattached/unmounted volumes=
[volume-959bo default-token-xxxxx]
I tried to:
turn on the MySQL app first, and then try to turn on the PHP app
found the PHP app can't start
turn off both apps
turn on the PHP app first, and then try to turn on the MySQL app
found the MySQL app can't start
The strange thing is that the event log never says it can't mount the volume for the MySQL app.
The remaining volume to mount is either default-token-xxxxx or volume-959bo (the volume name in the PHP app), but never mysql-data (the volume name in the MySQL app).
So the error seems to be caused by the underlying storage you are using, in this case EBS. The OpenShift docs actually specifically state that this is the case for block storage; see here.
I know this will work for both NFS and Glusterfs storage, and I have done this in numerous projects using these storage types, but unfortunately in your case it's not supported.

How to set up error reporting in Stackdriver from Kubernetes pods?

I'm a bit confused about how to set up error reporting in Kubernetes so that errors are visible in the Google Cloud Console / Stackdriver "Error Reporting".
According to the documentation
https://cloud.google.com/error-reporting/docs/setting-up-on-compute-engine
we need to enable fluentd's "forward input plugin" and then send exception data from our apps. I think this approach would have worked if we had set up fluentd ourselves, but it's already pre-installed on every node in a pod that just runs the gcr.io/google_containers/fluentd-gcp docker image.
How do we enable forward input on those pods and make sure that the HTTP port is available to every pod on the nodes? We also need to make sure this config is used by default when we add more nodes to our cluster.
Any help would be appreciated; maybe I'm looking at all this from the wrong angle?
The basic idea is to start a separate pod that receives structured logs over TCP and forwards it to Cloud Logging, similar to a locally-running fluentd agent. See below for the steps I used.
(Unfortunately, the logging support that is built into Docker and Kubernetes cannot be used - it just forwards individual lines of text from stdout/stderr as separate log entries which prevents Error Reporting from seeing complete stack traces.)
Create a docker image for a fluentd forwarder using a Dockerfile as follows:
FROM gcr.io/google_containers/fluentd-gcp:1.18
COPY fluentd-forwarder.conf /etc/google-fluentd/google-fluentd.conf
Where fluentd-forwarder.conf contains the following:
<source>
  type forward
  port 24224
</source>
<match **>
  type google_cloud
  buffer_chunk_limit 2M
  buffer_queue_limit 24
  flush_interval 5s
  max_retry_wait 30
  disable_retry_limit
</match>
Then build and push the image:
$ docker build -t gcr.io/###your project id###/fluentd-forwarder:v1 .
$ gcloud docker push gcr.io/###your project id###/fluentd-forwarder:v1
You need a replication controller (fluentd-forwarder-controller.yaml):
apiVersion: v1
kind: ReplicationController
metadata:
  name: fluentd-forwarder
spec:
  replicas: 1
  template:
    metadata:
      name: fluentd-forwarder
      labels:
        app: fluentd-forwarder
    spec:
      containers:
      - name: fluentd-forwarder
        image: gcr.io/###your project id###/fluentd-forwarder:v1
        env:
        - name: FLUENTD_ARGS
          value: -qq
        ports:
        - containerPort: 24224
You also need a service (fluentd-forwarder-service.yaml):
apiVersion: v1
kind: Service
metadata:
  name: fluentd-forwarder
spec:
  selector:
    app: fluentd-forwarder
  ports:
  - protocol: TCP
    port: 24224
Then create the replication controller and service:
$ kubectl create -f fluentd-forwarder-controller.yaml
$ kubectl create -f fluentd-forwarder-service.yaml
Finally, in your application, instead of using 'localhost' and 24224 to connect to the fluentd agent as described on https://cloud.google.com/error-reporting/docs/setting-up-on-compute-engine, use the values of the environment variables FLUENTD_FORWARDER_SERVICE_HOST and FLUENTD_FORWARDER_SERVICE_PORT.
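You can confirm those variables are injected into pods started after the service was created (a quick sketch; substitute your own application pod name):
$ kubectl exec YOUR_APP_POD -- env | grep FLUENTD_FORWARDER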
To add to Boris' answer: as long as errors are logged in the right format (see https://cloud.google.com/error-reporting/docs/troubleshooting) and Cloud Logging is enabled (you can see the errors in https://console.cloud.google.com/logs/viewer), errors will make it to Error Reporting without any further setup.
Boris' answer was great but was a lot more complicated than it really needed to be (there is no need to build a docker image). If you have kubectl configured on your local box (or you can use the Google Cloud Shell), copy and paste the following and it will install the forwarder in your cluster (I updated the version of fluentd-gcp from the above answer). My solution uses a ConfigMap to store the config file so it can be changed easily without rebuilding.
cat << EOF | kubectl create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-forwarder
data:
  google-fluentd.conf: |+
    <source>
      type forward
      port 24224
    </source>
    <match **>
      type google_cloud
      buffer_chunk_limit 2M
      buffer_queue_limit 24
      flush_interval 5s
      max_retry_wait 30
      disable_retry_limit
    </match>
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: fluentd-forwarder
spec:
  replicas: 1
  template:
    metadata:
      name: fluentd-forwarder
      labels:
        app: fluentd-forwarder
    spec:
      containers:
      - name: fluentd-forwarder
        image: gcr.io/google_containers/fluentd-gcp:2.0.18
        env:
        - name: FLUENTD_ARGS
          value: -qq
        ports:
        - containerPort: 24224
        volumeMounts:
        - name: config-vol
          mountPath: /etc/google-fluentd
      volumes:
      - name: config-vol
        configMap:
          name: fluentd-forwarder
---
apiVersion: v1
kind: Service
metadata:
  name: fluentd-forwarder
spec:
  selector:
    app: fluentd-forwarder
  ports:
  - protocol: TCP
    port: 24224
EOF
EOF