Cronjob of existing Pod - openshift

I have a django app running on Openshift 3. I need to run certain manage.py commands on a regular basis. In Openshift 2 I used the Cron gear and now in Openshift 3 I want to use the CronJob pod type.
I want to create a pod for the cronjob, use the same source as the django app is using, but not expose it.
For example:
W1 - Django app
D1 - Postgres DB
M1 - django app for manage.py jobs, run as a cronjob pod.
Any help is appreciated.

You want to use a scheduled job.
https://docs.openshift.com/container-platform/3.5/dev_guide/cron_jobs.html
https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/
https://blog.openshift.com/openshift-jobs/
Note that at this time (OpenShift 3.5), you have to use batch/v2alpha1 as the API version. Be careful of out of date documentation showing older version labels.
What I am not sure of is how you can easily reference the image associated with an existing imagestream, i.e. the one produced when you used the S2I builder to build your application, when you want to use the same image. The base Kubernetes object for this expects you to refer to the image from the image registry. You would thus need to work that out by looking at the imagestream and copying the image registry IP and image details over by hand.
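For example, one way to dig that out by hand with oc would be something like this (a sketch; the imagestream name is a placeholder for your application's):
# Full repository for the imagestream in the internal registry
oc get imagestream <your-app> -o jsonpath='{.status.dockerImageRepository}'
# Exact image currently behind the :latest tag
oc get istag <your-app>:latest -o jsonpath='{.image.dockerImageReference}'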
UPDATE 1
See:
https://stackoverflow.com/a/45227960/128141
for details of how from OpenShift 3.6 you can have it resolve the imagestream name automatically. That mechanism is still alpha status in 3.6, but does work.
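If memory serves, the mechanism from that answer is an annotation on the CronJob itself, roughly along these lines (check the linked answer for the exact form):
apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: djangomanage
  annotations:
    # Alpha in 3.6: lets a bare image name resolve against the imagestream
    alpha.image.policy.openshift.io/resolve-names: '*'
spec:
  # ... schedule, jobTemplate, etc. as in the template further down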

I've gotten it to work by specifying the image name in the YAML, but when I then tried to get it to work as part of the template, I ran into an error when trying to use the batch/v1 version on this server:
Cannot create cron job "djangomanage". The API version batch/v1 for kind CronJob is not supported by this server.
My template code is
- apiVersion: batch/v1
  kind: CronJob
  metadata:
    name: djangomanage
  spec:
    schedule: "*/5 * * * *"
    jobTemplate:
      spec:
        template:
          spec:
            containers:
              - name: djangomanage
                image: '${NAME}:latest'
                env:
                  - name: APP_SCRIPT
                    value: "/opt/app-root/src/cron.sh"
            restartPolicy: Never
CRON.SH
python /opt/app-root/src/manage.py

You need to update line 1 of the template to this:
- apiVersion: batch/v1beta1
See the link below:
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#cronjob-v1beta1-batch
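In other words, the head of the template above would become something like this (a sketch; use whichever apiVersion your cluster actually supports, e.g. batch/v2alpha1 on OpenShift 3.5 as noted earlier):
- apiVersion: batch/v1beta1
  kind: CronJob
  metadata:
    name: djangomanage
  spec:
    schedule: "*/5 * * * *"
    # ... rest unchanged from the template above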

Related

mkdir /.gitlab-runner: permission denied running GitLab Runner in Kubernetes deployed via Helm

I'm trying to deploy the GitLab Runner (15.7.1) onto an on-premise Kubernetes cluster and getting the following error:
PANIC: loading system ID file: saving system ID state file: creating directory: mkdir /.gitlab-runner: permission denied
This is occurring with both the 15.7.1 image (Ubuntu?) and the alpine3.13-v15.7.1 image. Looking at the deployment, it looks like it should be trying to use /home/gitlab-runner, but for some reason it is trying to use root (/), which is a protected directory.
Anyone else experience this issue or have a suggestion as to what to look at?
I am using the Helm chart (0.48.0) using a copy of the images from dockerhub (simply moved into a local repository as internet access is not available from the cluster). Connectivity to GitLab appears to be working, but the error causes the overall startup to fail. Full logs are:
Registration attempt 4 of 30
Runtime platform arch=amd64 os=linux pid=33 revision=6d480948 version=15.7.1
WARNING: Running in user-mode.
WARNING: The user-mode requires you to manually start builds processing:
WARNING: $ gitlab-runner run
WARNING: Use sudo for system-mode:
WARNING: $ sudo gitlab-runner...
Created missing unique system ID system_id=r_Of5q3G0yFEVe
PANIC: loading system ID file: saving system ID state file: creating directory: mkdir /.gitlab-runner: permission denied
I have tried the 15.7.1 image, the alpine3.13-v15.7.1 image, and the gitlab-runner-ocp:amd64-v15.7.1 image and searched the values.yaml for anything relevant to the path. Looking at the deployment template, it appears that it ought to be using /home/gitlab-runner as the directory (instead of /) [though the docs suggested it was /home].
As for "what was I expecting", of course I was expecting that it would "just work" :)
So, I resolved this (and other) issues with:
Updated helm deployment template to mount an empty volume at /.gitlab-runner
[separate issue] explicitly added builds_dir and environment [per gitlab-org/gitlab-runner#3511 (comment 114281106)].
These two steps appeared to be sufficient to get the Helm chart deployment working.
You can easily create and mount the emptyDir (in case you are creating the gitlab-runner with a Kubernetes manifest *.yml file):
volumes:
  - emptyDir: {}
    name: gitlab-runner
volumeMounts:
  - name: gitlab-runner
    mountPath: /.gitlab-runner
-------------------- OR --------------------
volumeMounts:
  - name: root-gitlab-runner
    mountPath: /.gitlab-runner
volumes:
  - name: root-gitlab-runner
    emptyDir:
      medium: "Memory"

How do I use an imagestream from another namespace in openshift?

I have been breaking my head over the following:
I have a set of buildconfigs that build images and create imagestreams for it in the "openshift" namespace. This gives me for example the netclient-userspace imagestream.
krist#MacBook-Pro netmaker % oc get is netclient-userspace
NAME IMAGE REPOSITORY TAGS UPDATED
netclient-userspace image-registry.openshift-image-registry.svc:5000/openshift/netclient-userspace latest About an hour ago
What I have however not been able to figure out is how to use this imagestream in a deployment in a different namespace.
Take for example this:
kind: Pod
apiVersion: v1
metadata:
  name: netclient-test
  namespace: "kvb-netclient-test"
spec:
  containers:
    - name: netclient
      image: netclient-userspace:latest
When I deploy this I get errors...
Failed to pull image "netclient-userspace:latest": rpc error: code =
Unknown desc = reading manifest latest in
docker.io/library/netclient-userspace: errors: denied: requested
access to the resource is denied unauthorized: authentication required
So OpenShift goes and looks for the image on Docker Hub. It shouldn't. How do I tell OpenShift to use the imagestream here?
When using an ImageStreamTag for a Deployment image source, you need to use the image.openshift.io/triggers annotation. It instructs OpenShift to replace the image: attribute in a Deployment with the value of an ImageStreamTag (and to redeploy it when the ImageStreamTag changes in the future).
Importantly, note both the annotation and the image: ' ' value with an explicit space character in the YAML string.
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    image.openshift.io/triggers: '[{"from":{"kind":"ImageStreamTag","name":"content-netclient-userspace:latest","namespace":"openshift"},"fieldPath":"spec.template.spec.containers[?(#.name==\"netclient\")].image"}]'
  name: netclient-test
  namespace: "kvb-netclient-test"
spec:
  ...
  template:
    ...
    spec:
      containers:
        - command:
            - ...
          image: ' '
          name: netclient
          ...
I will also mention that, in order to pull images from different namespaces, it may be required to authorize the Deployment's service account to do so: OpenShift Docs.
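For example, granting pull access across the two namespaces from the question would look roughly like this (based on the documented system:image-puller role):
# Let all service accounts in kvb-netclient-test pull images from the openshift namespace
oc policy add-role-to-group system:image-puller system:serviceaccounts:kvb-netclient-test -n openshift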

Kubernetes -- Helm -- Mysql Chart loses stored data after stopping pod

Using https://github.com/helm/charts/tree/master/stable/mysql (all the code is here), it is cool being able to run mysql as part of my local kubernetes cluster (using docker kubernetes).
The problem though is that once I stop running the pod, and then run the pod again, all the data that was stored is now gone.
My question is how do I keep the data that was added to the mysql pod? I have read about persistent volumes, and the mysql helm example from github is showing that it is using PersistentVolumeClaim. I have also enabled persistence on the values.yaml file, but I cannot seem to have the same data that was saved in the database.
My docker kubernetes version is currently 1.14.6.
Please verify your mysql pod. You should notice the volumes and volumeMounts options:
volumeMounts:
  - mountPath: /var/lib/mysql
    name: data
...
volumes:
  - name: data
    persistentVolumeClaim:
      claimName: msq-mysql
In addition, please verify your PersistentVolume, PersistentVolumeClaim, and StorageClass:
kubectl get pv,pvc,pods,sc:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-2c6aa172-effd-11e9-beeb-42010a840083 8Gi RWO Delete Bound default/msq-mysql standard 24m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/msq-mysql Bound pvc-2c6aa172-effd-11e9-beeb-42010a840083 8Gi RWO standard 24m
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/msq-mysql-b5c48c888-pz6p2 1/1 Running 0 4m28s 10.0.0.8 gke-te-1-default-pool-36546f4e-5rgw <none> <none>
Please run kubectl describe persistentvolumeclaim/msq-mysql (in your case you should change the PVC name).
You will notice that the PVC was provisioned successfully using gce-pd and is mounted by the msq-mysql pod:
Normal ProvisioningSucceeded 26m persistentvolume-controller Successfully provisioned volume pvc-2c6aa172-effd-11e9-beeb-42010a840083 using kubernetes.io/gce-pd
Mounted By: msq-mysql-b5c48c888-pz6p2
I created a table with one row, deleted the pod, and verified after that (as expected, everything is alright):
mysql> SELECT * FROM t;
+------+
| c |
+------+
| ala |
+------+
1 row in set (0.00 sec)
As for why "all the data that was stored is now gone":
As per helm chart docs:
The MySQL image stores the MySQL data and configurations at the /var/lib/mysql path of the container.
By default a PersistentVolumeClaim is created and mounted into that directory. In order to disable this functionality you can change the values.yaml to disable persistence and use an emptyDir instead.
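For reference, the persistence block in that chart's values.yaml looks roughly like this (a sketch; verify the key names against your chart version):
persistence:
  enabled: true
  # Leave storageClass unset/empty to fall back to the cluster's default StorageClass
  storageClass: ""
  accessMode: ReadWriteOnce
  size: 8Gi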
Mostly the problem is with PV/PVC binding. It can also be a problem with a user-defined or non-default StorageClass.
So please verify the PV and PVC as stated above.
Take a look at StorageClass
A claim can request a particular class by specifying the name of a StorageClass using the attribute storageClassName. Only PVs of the requested class, ones with the same storageClassName as the PVC, can be bound to the PVC.
PVCs don’t necessarily have to request a class. A PVC with its storageClassName set equal to "" is always interpreted to be requesting a PV with no class, so it can only be bound to PVs with no class (no annotation or one set equal to ""). A PVC with no storageClassName is not quite the same and is treated differently by the cluster, depending on whether the DefaultStorageClass admission plugin is turned on.
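For illustration, a claim requesting a specific class would look something like this (a sketch reusing the names from the output above):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: msq-mysql
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard   # omit this field to let the DefaultStorageClass admission plugin pick the default
  resources:
    requests:
      storage: 8Gi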

API error (500): manifest unknown: manifest unknown

It fails to pull the image with a SHA256 digest identifier.
Unfortunately this is a side-effect of DockerHub removing backwards compatibility for Docker 1.9 daemons. When images are pushed using Docker 1.10, pull-by-id will fail for older daemons (which includes OpenShift masters importing metadata from the Hub). You can work around this by pulling the centos image and pushing it to the internal registry.
At the current time, using Docker 1.9 on your hosts will avoid this issue.
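A rough sketch of that workaround (the registry address and project are placeholders; adjust to your cluster):
# Pull the image from Docker Hub, retag it for the internal registry, and push it
docker pull docker.io/centos/mysql-56-centos7:latest
docker login -u "$(oc whoami)" -p "$(oc whoami -t)" <registry-ip>:5000
docker tag docker.io/centos/mysql-56-centos7:latest <registry-ip>:5000/<project>/mysql-56-centos7:latest
docker push <registry-ip>:5000/<project>/mysql-56-centos7:latest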
You can apply a workaround for this issue by removing the Image Change Trigger and removing the hash from the image attribute in the container spec.
Modify the build config:
strategy:
  dockerStrategy:
    from:
      kind: ImageStreamTag
      name: mysql-56-centos7
Replace it with:
strategy:
  dockerStrategy:
    from:
      kind: DockerImage
      name: docker.io/centos/mysql-56-centos7:latest

Container-VM Image with GPD Volumes fails with "Failed to get GCE Cloud Provider. plugin.host.GetCloudProvider returned <nil> instead"

I currently try to switch from the "Container-Optimized Google Compute Engine Images" (https://cloud.google.com/compute/docs/containers/container_vms) to the "Container-VM" Image (https://cloud.google.com/compute/docs/containers/vm-image/#overview). In my containers.yaml, I define a volume and a container using the volume.
apiVersion: v1
kind: Pod
metadata:
  name: workhorse
spec:
  containers:
    - name: postgres
      image: postgres:9.5
      imagePullPolicy: Always
      volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: postgres-storage
      gcePersistentDisk:
        pdName: disk-name
        fsType: ext4
This setup worked fine with the "Container-Optimized Google Compute Engine Images", however fails with the "Container-VM". In the logs, I can see the following error:
May 24 18:33:43 battleship kubelet[629]: E0524 18:33:43.405470 629 gce_util.go:176]
Error getting GCECloudProvider while detaching PD "disk-name":
Failed to get GCE Cloud Provider. plugin.host.GetCloudProvider returned <nil> instead
Thanks in advance for any hint!
This happens only when the kubelet is run without the --cloud-provider=gce flag. The problem, unless it is something different, depends on how GCP is launching Container-VMs.
Please contact the Google Cloud Platform folks.
Note if this happens to you when using GCE: add the --cloud-provider=gce flag to the kubelet on all your workers. This only applies to 1.2 cluster versions because, if I'm not wrong, there is an ongoing attach/detach redesign targeted for 1.3 clusters which will move this business logic out of the kubelet.
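For example, on a worker node the flag just needs to end up on the kubelet command line; where you set it depends on how the node was provisioned (systemd unit, /etc/default/kubelet, or a startup script), so treat this as a sketch:
# /etc/default/kubelet -- path and variable name are assumptions and vary by distro/installer
KUBELET_OPTS="--cloud-provider=gce <your existing flags>"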
In case someone is interested in the attach/detach redesign, here is its corresponding GitHub issue: https://github.com/kubernetes/kubernetes/issues/20262