I'm currently trying to create a CronJob in Openshift that starts every day at 3:00 am. Creating the CronJob using a schedule with step values (e.g. "0 */1 * * *") works fine and the Jobs start correctly; however, when creating the CronJob with a specific schedule such as "0 3 * * *" or "0 3 */1 * *", the CronJob won't start at all.
I checked the time stamps in the monitoring section and the time stamps are displayed in the same time zone as used for the CronJob schedule.
Any ideas on how to solve or work around this issue, or is it possible that the CronJob uses a different time zone setting than the one displayed in the logs?
I am using openshift v3.9.102 and kubernetes v1.9.1+a0ce1bc657.
My CronJob configuration:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: update-prices
spec:
  schedule: "0 3 * * *"
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            parent: "cronjobpcurl"
        spec:
          containers:
            - name: curljob
              image: curlimages/curl:latest
              command: ["curl", "--insecure", "https://www.example.com"]
              imagePullPolicy: Always
          restartPolicy: Never
According to the documentation for CronJobs:
All cron job schedule times are based on the timezone of the master where the job is initiated.
So the first thing you might want to check is what timezone your OpenShift Master nodes have.
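For example, assuming you have SSH access to a master node (the hostname master-1 below is just a placeholder) and the host is systemd-based, something like this shows its timezone; the sample output is for illustration only:

$ ssh master-1 'timedatectl | grep "Time zone"'
       Time zone: UTC (UTC, +0000)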
Then, check if your cronjob is running at all ("LAST SCHEDULE", here it ran 14s ago):
$ oc get cronjob
NAME         SCHEDULE     SUSPEND   ACTIVE   LAST SCHEDULE   AGE
my-cronjob   0 19 * * *   False     0        14s             3m
You can also use oc describe cronjob my-cronjob to see the Events and when the CronJob last ran. It will also tell you when there has been an error:
$ oc describe cronjob my-cronjob
[..]
Last Schedule Time: Thu, 30 Apr 2020 19:00:00 +0200
Active Jobs: <none>
Events:
Type    Reason            Age   From                Message
----    ------            ----  ----                -------
Normal  SuccessfulCreate  80s   cronjob-controller  Created job my-cronjob-1588273680
Normal  SawCompletedJob   70s   cronjob-controller  Saw completed job: my-cronjob-1588273680
If you cannot see anything in the Events, have a look at the OpenShift Controller logs; there you should see something like this:
controller.go:597] quota admission added evaluator for: {batch jobs}
event.go:221] Event(v1.ObjectReference{Kind:"CronJob", Namespace:"myproject", Name:"my-cronjob", UID:"8369a749-8b15-11ea-807e-080027e8770f", APIVersion:"batch/v1beta1", ResourceVersion:"3736", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created job my-cronjob-1588273680
controller.go:597] quota admission added evaluator for: {batch jobs}
event.go:221] Event(v1.ObjectReference{Kind:"Job", Namespace:"myproject", Name:"my-cronjob-1588273680", UID:"e9a3d2c5-8b15-11ea-807e-080027e8770f", APIVersion:"batch/v1", ResourceVersion:"3837", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-cronjob-1588273680-q2tdc
[..]
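If the master turns out to be in a different timezone than you expected (UTC is common), a pragmatic workaround is to shift the schedule by the offset yourself. A minimal sketch, assuming the master runs in UTC and you want the job at 03:00 CET (UTC+1):

schedule: "0 2 * * *"  # 02:00 UTC corresponds to 03:00 CET; adjust for your own offset

Keep in mind that DST changes are not handled automatically this way, so the job will drift by an hour twice a year unless you update the schedule.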
Using https://github.com/helm/charts/tree/master/stable/mysql (all the code is here), it is cool being able to run mysql as part of my local kubernetes cluster (using docker kubernetes).
The problem though is that once I stop running the pod, and then run the pod again, all the data that was stored is now gone.
My question is: how do I keep the data that was added to the mysql pod? I have read about persistent volumes, and the mysql helm example from GitHub shows that it is using a PersistentVolumeClaim. I have also enabled persistence in the values.yaml file, but I still cannot see the data that was previously saved in the database.
My docker kubernetes version is currently 1.14.6.
Please verify your mysql Pod; you should notice the volumes and volumeMounts options:
volumeMounts:
  - mountPath: /var/lib/mysql
    name: data
...
volumes:
  - name: data
    persistentVolumeClaim:
      claimName: msq-mysql
In addition, please verify your PersistentVolume, PersistentVolumeClaim, and StorageClass:
kubectl get pv,pvc,pods,sc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
persistentvolume/pvc-2c6aa172-effd-11e9-beeb-42010a840083   8Gi        RWO            Delete           Bound    default/msq-mysql   standard                24m

NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/msq-mysql   Bound    pvc-2c6aa172-effd-11e9-beeb-42010a840083   8Gi        RWO            standard       24m

NAME                            READY   STATUS    RESTARTS   AGE     IP         NODE                                  NOMINATED NODE   READINESS GATES
pod/msq-mysql-b5c48c888-pz6p2   1/1     Running   0          4m28s   10.0.0.8   gke-te-1-default-pool-36546f4e-5rgw   <none>           <none>
Please run kubectl describe persistentvolumeclaim/msq-mysql (in your case change the PVC name).
You should notice that the PVC was provisioned successfully using gce-pd and is mounted by the msq-mysql Pod:
Normal ProvisioningSucceeded 26m persistentvolume-controller Successfully provisioned volume pvc-2c6aa172-effd-11e9-beeb-42010a840083 using kubernetes.io/gce-pd
Mounted By: msq-mysql-b5c48c888-pz6p2
I have created a table with one row, deleted the pod, and verified afterwards that the data is still there (as expected, everything is all right):
mysql> SELECT * FROM t;
+------+
| c |
+------+
| ala |
+------+
1 row in set (0.00 sec)
As to why "all the data that was stored is now gone":
As per helm chart docs:
The MySQL image stores the MySQL data and configurations at the /var/lib/mysql path of the container.
By default a PersistentVolumeClaim is created and mounted into that directory. In order to disable this functionality you can change the values.yaml to disable persistence and use an emptyDir instead.
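For reference, a persistence block in values.yaml for the stable/mysql chart might look like the sketch below; the key names follow the chart's documented defaults, but double-check them against the chart version you are actually using:

persistence:
  enabled: true
  # existingClaim: msq-mysql   # uncomment to reuse an existing PVC instead of creating one
  storageClass: standard       # assumption: a StorageClass named "standard" exists in your cluster
  accessMode: ReadWriteOnce
  size: 8Gi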
Most often the problem is with PV/PVC binding. It can also be a problem with a user-defined or non-default StorageClass.
So please verify the PV and PVC as stated above.
Take a look at StorageClass
A claim can request a particular class by specifying the name of a StorageClass using the attribute storageClassName. Only PVs of the requested class, ones with the same storageClassName as the PVC, can be bound to the PVC.
PVCs don’t necessarily have to request a class. A PVC with its storageClassName set equal to "" is always interpreted to be requesting a PV with no class, so it can only be bound to PVs with no class (no annotation or one set equal to ""). A PVC with no storageClassName is not quite the same and is treated differently by the cluster, depending on whether the DefaultStorageClass admission plugin is turned on.
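As an illustration, a claim that explicitly requests a class (here the class name standard is just an example) would look like the sketch below; omitting storageClassName lets the DefaultStorageClass admission plugin pick the default, while setting it to "" requests a PV with no class:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: msq-mysql
spec:
  storageClassName: standard   # example class name; use one that exists in your cluster
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi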
We are using Openshift Online Pro.
I want to document/script how to review OpenShift deployment revisions on the command line, to facilitate rolling back to specific revisions. In the Web Console there is a "History" tab for the deployment that shows the revision number and how long ago it was actioned:
https://www.dropbox.com/s/12z4gmuqdzlnurg/File%2005-03-2018%2C%2007%2048%2053.jpeg?dl=0
If I use oc get dc/backend on the command line, it only shows the current revision.
Is there a way on the command line to get deployment history data, to make it easy to script a rollback tool that goes back to specific revisions?
(Note: I am aware that oc rollback backend will roll back to the previous version, but in testing there are corner cases where that won't help and we would need to skip back two or more versions.)
Use:
oc describe dc/prometheus
It will show you something like:
Deployment #11 (latest):
        Name:           prometheus-11
        Created:        3 hours ago
        Status:         Complete
        Replicas:       1 current / 1 desired
        Selector:       app=prometheus,deployment=prometheus-11,deploymentconfig=prometheus
        Labels:         app=prometheus,openshift.io/deployment-config.name=prometheus
        Pods Status:    1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Deployment #10:
        Created:        5 hours ago
        Status:         Complete
        Replicas:       0 current / 0 desired
Deployment #9:
        Created:        6 hours ago
        Status:         Complete
        Replicas:       0 current / 0 desired
For a condensed version, use:
oc rollout history dc/prometheus
This will give you:
deploymentconfigs "prometheus"
REVISION   STATUS     CAUSE
9          Complete   manual change
10         Complete   manual change
11         Complete   manual change
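To script a rollback to a specific revision rather than just the previous one, oc rollback accepts a target version. A sketch, assuming your DeploymentConfig is named backend and you want to return to revision 9:

# Inspect that revision first (optional)
oc rollout history dc/backend --revision=9

# Roll back to it
oc rollback dc/backend --to-version=9

Note that the rollback itself creates a new revision whose configuration matches revision 9.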
I am trying to run a sonatype/nexus3 image on OpenShift Online v3 Pro. If I just use the web console to create a new app from the image, it assigns it only 512Mi and it dies with OOM. It did get created, though, and logged a lot of Java output before it died of out of memory. When using the web console there doesn't appear to be a way to set the memory on the image. When I try to edit the YAML of the pod, it doesn't let me edit the memory limit.
Reading the docs about memory limits it suggests that I can run with this:
oc run nexus333 --image=sonatype/nexus3 --limits=memory=750Mi
Then it doesn't even start. It dies with:
{kubelet ip-172-31-59-148.ec2.internal} Error: Error response from daemon: {"message":"create c30deb38b3c26252bf1218cc898fbf1c68d8fc14e840076710c211d58ed87a59: mkdir /var/lib/docker/volumes/c30deb38b3c26252bf1218cc898fbf1c68d8fc14e840076710c211d58ed87a59: permission denied"}
More information from oc get events:
FIRSTSEEN LASTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
16m 16m 1 nexus333-1-deploy Pod Normal Scheduled {default-scheduler } Successfully assigned nexus333-1-deploy to ip-172-31-50-97.ec2.internal
16m 16m 1 nexus333-1-deploy Pod spec.containers{deployment} Normal Pulling {kubelet ip-172-31-50-97.ec2.internal} pulling image "registry.reg-aws.openshift.com:443/openshift3/ose-deployer:v3.6.173.0.21"
16m 16m 1 nexus333-1-deploy Pod spec.containers{deployment} Normal Pulled {kubelet ip-172-31-50-97.ec2.internal} Successfully pulled image "registry.reg-aws.openshift.com:443/openshift3/ose-deployer:v3.6.173.0.21"
15m 15m 1 nexus333-1-deploy Pod spec.containers{deployment} Normal Created {kubelet ip-172-31-50-97.ec2.internal} Created container
15m 15m 1 nexus333-1-deploy Pod spec.containers{deployment} Normal Started {kubelet ip-172-31-50-97.ec2.internal} Started container
15m 15m 1 nexus333-1-rftvd Pod Normal Scheduled {default-scheduler } Successfully assigned nexus333-1-rftvd to ip-172-31-59-148.ec2.internal
15m 14m 7 nexus333-1-rftvd Pod spec.containers{nexus333} Normal Pulling {kubelet ip-172-31-59-148.ec2.internal} pulling image "sonatype/nexus3"
15m 10m 19 nexus333-1-rftvd Pod spec.containers{nexus333} Normal Pulled {kubelet ip-172-31-59-148.ec2.internal} Successfully pulled image "sonatype/nexus3"
15m 15m 1 nexus333-1-rftvd Pod spec.containers{nexus333} Warning Failed {kubelet ip-172-31-59-148.ec2.internal} Error: Error response from daemon: {"message":"create 3aa35201bdf81d09ef4b09bba1fc843b97d0339acfef0c30cecaa1fbb6207321: mkdir /var/lib/docker/volumes/3aa35201bdf81d09ef4b09bba1fc843b97d0339acfef0c30cecaa1fbb6207321: permission denied"}
I am not sure why I cannot assign more memory when using the web console. I am also not sure why running it with oc run dies with the mkdir error. Can anyone tell me how to run sonatype/nexus3 on OpenShift Online Pro?
Looking in the documentation I see that Nexus is a Java VM application.
When using Java 8, memory usage can be DRAMATICALLY IMPROVED using only the following 2 runtime Java VM options:
... "-XX:+UnlockExperimentalVMOptions", "-XX:+UseCGroupMemoryLimitForHeap" ...
I just deployed my container (Spring Boot JAR) that consumed over 650 MB RAM. With just these two (new) options RAM consumption dropped to just 270 MB!!!
So, with these 2 runtime settings all OOM's are left far behind! Enjoy!
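If you want to try those flags with the sonatype/nexus3 image on OpenShift, the usual route is to pass extra JVM arguments through an environment variable rather than overriding the command. A sketch for the container spec is below; INSTALL4J_ADD_VM_PARAMS is the variable the sonatype/nexus3 image has historically used, but it may replace the image's default VM parameters entirely, so verify the variable name and defaults against the image documentation for your tag:

env:
  - name: INSTALL4J_ADD_VM_PARAMS   # assumption: supported by the sonatype/nexus3 tag you use
    value: "-Xms512m -Xmx1200m -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap"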
You may also want to follow along with the tutorial in the OpenShift docs (https://docs.openshift.com/online/dev_guide/app_tutorials/maven_tutorial.html).
I have had success deploying this in OpenShift Online Pro.
Okay, the mkdir /var/lib/docker/volumes/ permission denied seems to mean that the image needs a /nexus-data mount and that this is being refused. I saw that by deploying from the web console (which dies with OOM) and then editing the YAML of the created pod to see the generated volume mount.
Creating the pod with the following YAML, using cat nexus3_pod.ephemeral.yaml | oc create -f -, with the volume mount and explicit memory settings, the container now starts up:
apiVersion: "v1"
kind: "Pod"
metadata:
name: "nexus3"
labels:
name: "nexus3"
spec:
containers:
-
name: "nexus3"
resources:
requests:
memory: "1200Mi"
limits:
memory: "1200Mi"
image: "sonatype/nexus3"
ports:
-
containerPort: 8081
name: "nexus3"
volumeMounts:
- mountPath: /nexus-data
name: nexus3-1
volumes:
- emptyDir: {}
name: nexus3-1
Notes
The image sets -Xmx1200m, as documented at sonatype/docker-nexus3. So if you assign less than 1200Mi of memory, it will crash with OOM when the heap grows over the limit. You may as well set the request and the limit to the max heap size anyway.
When the allocated memory was too low it crashed just as it was setting up the DB, which corrupted the DB log, which meant it then got into a crash loop ("couldn't load 4 byte from 0 byte file") when I recreated it with more memory. It seems that with an emptyDir the files hang around between crash restarts and memory changes (that's documented behaviour, I think). I had to recreate a pod with a different name to get a clean emptyDir, with assigned memory of 1200Mi, to get it all to start.
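If you want the /nexus-data contents to survive pod deletion (not just container restarts), a sketch of the same volume backed by a PersistentVolumeClaim instead of emptyDir looks like this; nexus3-data is a hypothetical claim you would have to create separately:

  volumes:
    - name: nexus3-1
      persistentVolumeClaim:
        claimName: nexus3-data   # assumption: a PVC you create beforehand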
I have a django app running on Openshift 3. I need to run certain manage.py commands on a regular basis. In Openshift 2 I used the Cron gear and now in Openshift 3 I want to use the CronJob pod type.
I want to create a pod for the cronjob, use the same source as the django app is using, but not expose it.
For example:
W1 - Django app
D1 - Postgres DB
M1 - django app for manage.py jobs, run as a cronjob pod.
Any help is appreciated.
You want to use a scheduled job.
https://docs.openshift.com/container-platform/3.5/dev_guide/cron_jobs.html
https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/
https://blog.openshift.com/openshift-jobs/
Note that at this time (OpenShift 3.5), you have to use batch/v2alpha1 as the API version. Be careful of out of date documentation showing older version labels.
What I am not sure of is how you can easily reference the image associated with an existing imagestream (produced when you used the S2I builder to build your application) if you want to use the same image. The base Kubernetes object for this expects you to refer to the image via the image registry. You would thus need to work that out by looking at the imagestream and copying the image registry IP and image details over by hand.
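As a sketch of working that out by hand, you can read the internal registry repository straight off the imagestream and paste it into the cron job's image field; myapp below is a placeholder for your imagestream name, and the printed address will differ in your cluster:

$ oc get imagestream myapp -o jsonpath='{.status.dockerImageRepository}'
172.30.1.1:5000/myproject/myapp

You would then use something like 172.30.1.1:5000/myproject/myapp:latest as the image in the cron job spec.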
UPDATE 1
See:
https://stackoverflow.com/a/45227960/128141
for details of how, from OpenShift 3.6, you can have it resolve the imagestream name automatically. That mechanism is still alpha status in 3.6, but it does work.
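Roughly, that mechanism boils down to annotating the object so that OpenShift resolves the imagestream tag in the image field for you. A heavily hedged sketch, based on the 3.6 alpha local image lookup feature (verify the annotation name and placement against the linked answer and your cluster version before relying on it):

metadata:
  name: djangomanage
  annotations:
    alpha.image.policy.openshift.io/resolve-names: "*"   # assumption: 3.6 alpha feature

With that in place, the container's image can reference the imagestream tag directly, e.g. image: djangoapp:latest (djangoapp being a placeholder for your imagestream).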
I've gotten it to work by specifying the image name in the YAML. I then tried to get it to work as part of the template, but ran into an error when trying to use the batch/v1 version on this server:
Cannot create cron job "djangomanage". The API version batch/v1 for kind CronJob is not supported by this server.
My template code is
- apiVersion: batch/v1
  kind: CronJob
  metadata:
    name: djangomanage
  spec:
    schedule: "*/5 * * * *"
    jobTemplate:
      spec:
        template:
          spec:
            containers:
              - name: djangomanage
                image: '${NAME}:latest'
                env:
                  - name: APP_SCRIPT
                    value: "/opt/app-root/src/cron.sh"
            restartPolicy: Never
cron.sh:
python /opt/app-root/src/manage.py
You need to update line 1 with this:
- apiVersion: batch/v1beta1
See the link below:
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#cronjob-v1beta1-batch
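To confirm which batch API versions your cluster actually serves before editing the template, you can list them (the output below is just an example and will vary by cluster version):

$ oc api-versions | grep batch
batch/v1
batch/v1beta1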
After spinning up the newest CDK v2.4 (https://developers.redhat.com/products/cdk/download/), I get the following when trying to deploy gitlab-ce, per GitLab's directions at https://about.gitlab.com/2016/06/28/get-started-with-openshift-origin-3-and-gitlab/
Time         Kind and Name                     Reason and Message
2:21:21 PM   Pod gitlab-ce-redis-1-1digf       Failed scheduling: SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "gitlab-ce-redis-data", which is unexpected. (6 times in the last minute)
2:21:20 PM   Pod gitlab-ce-1-89qc8             Failed scheduling: SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "gitlab-ce-etc", which is unexpected. (6 times in the last minute)
2:21:19 PM   Pod gitlab-ce-postgresql-1-yatd9  Failed scheduling: SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "gitlab-ce-postgresql", which is unexpected.
The PVCs seem to be getting created and then stuck in a "Pending" state:

NAME                   STATUS    VOLUME   CAPACITY   ACCESSMODES   AGE
gitlab-ce-data         Pending                                     5d
gitlab-ce-etc          Pending                                     5d
gitlab-ce-postgresql   Pending                                     5d
gitlab-ce-redis-data   Pending                                     5d
How can I troubleshoot these errors and get the PVs created?
PVs need to be created by an admin user, so log in as admin:
oc login -u system:admin
You may use a token for the previous step if the master is a remote host. Then just create the PV:
$ cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: foobar
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /tmp/foo

$ oc create -f pv.yaml
The Retain, ReadWriteMany, and 5Gi values may be different in your case; check the docs. hostPath works only in a single-node cluster; otherwise you need to use NFS or similar.
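Also note that a PV binds to a single PVC, so you will need one PV per pending claim, and each PV's capacity and access modes must cover what the corresponding claim requests. After creating them, the claims should leave the Pending state; you can check what a claim asks for and whether it bound with:

$ oc get pvc gitlab-ce-etc -o yaml    # shows the requested size and access modes
$ oc get pvc                          # STATUS should change from Pending to Bound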