How to create persistent volumes using OpenShift API - openshift

I am building an OpenShift client that is using OpenShift's REST API to perform various operations on the cluster. I would like to have this client create a persistent volume. The default developer user cannot create persistent volumes but the administrator can. I am trying to find the best way to create a persistent volume in a default OpenShift origin deployment without requiring user input and using only the REST APIs, without any oc usage.
I see two possible solutions:
Create the persistent volume as the user admin. However, I cannot figure out how to use the APIs as an admin since this user account has no token. I have tried to examine oc logs but cannot reverse engineer how it can authenticate as an admin.
Add the permission to create persistent volumes to the user developer. I would like to avoid this, but if it comes to that I am willing to take this solution into consideration. Does anybody know what kind of permission the developer user needs to be able to create a persistent volume?
How to create a persistent volume using only the OpenShift API?

You can create a user or service account for persistent volume manipulation.
The roles it would need are cluster roles, and there is already an existing one called "system:persistent-volume-provisioner":
$ oc get clusterrole/system:persistent-volume-provisioner -o yaml --as system:admin
apiVersion: authorization.openshift.io/v1
kind: ClusterRole
metadata:
  annotations:
    authorization.openshift.io/system-only: "true"
    openshift.io/reconcile-protect: "false"
  creationTimestamp: 2018-03-16T13:18:45Z
  name: system:persistent-volume-provisioner
  resourceVersion: "134"
  selfLink: /apis/authorization.openshift.io/v1/clusterroles/system%3Apersistent-volume-provisioner
  uid: 8e253e28-291c-11e8-b0f7-36c91e93ae8e
rules:
- apiGroups:
  - ""
  attributeRestrictions: null
  resources:
  - persistentvolumes
  verbs:
  - create
  - delete
  - get
  - list
  - watch
- apiGroups:
  - ""
  attributeRestrictions: null
  resources:
  - persistentvolumeclaims
  verbs:
  - get
  - list
  - update
  - watch
- apiGroups:
  - storage.k8s.io
  attributeRestrictions: null
  resources:
  - storageclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  attributeRestrictions: null
  resources:
  - events
  verbs:
  - create
  - list
  - patch
  - update
  - watch
If this cluster role grants more than what you need, just create a new one with fewer permissions.
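For example, here is a minimal sketch of the whole flow. The names pv-creator, myproject, my-pv and the API URL are placeholders; the one-time setup below uses oc as cluster admin (it could equally be done through the REST API), after which your client only needs the token and plain REST calls:
# One-time setup as cluster admin (placeholder names)
oc create serviceaccount pv-creator -n myproject
oc adm policy add-cluster-role-to-user system:persistent-volume-provisioner \
    system:serviceaccount:myproject:pv-creator

# Fetch the service account token (older clusters; newer oc releases use 'oc create token')
TOKEN=$(oc sa get-token pv-creator -n myproject)

# From here on, pure REST: create a PV through the Kubernetes API served by OpenShift
curl -k -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  https://openshift.example.com:8443/api/v1/persistentvolumes \
  -d '{
        "apiVersion": "v1",
        "kind": "PersistentVolume",
        "metadata": {"name": "my-pv"},
        "spec": {
          "capacity": {"storage": "1Gi"},
          "accessModes": ["ReadWriteOnce"],
          "hostPath": {"path": "/mnt/data/my-pv"}
        }
      }'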

Related

Kubernetes -- Helm -- Mysql Chart loses stored data after stopping pod

Using https://github.com/helm/charts/tree/master/stable/mysql (all the code is here), it is cool being able to run mysql as part of my local kubernetes cluster (using docker kubernetes).
The problem though is that once I stop running the pod, and then run the pod again, all the data that was stored is now gone.
My question is how do I keep the data that was added to the mysql pod? I have read about persistent volumes, and the mysql helm example from github is showing that it is using PersistentVolumeClaim. I have also enabled persistence on the values.yaml file, but I cannot seem to have the same data that was saved in the database.
My docker kubernetes version is currently 1.14.6.
Please verify your mysql Pod. You should notice the volumes and volumeMounts options:
volumeMounts:
- mountPath: /var/lib/mysql
  name: data
...
volumes:
- name: data
  persistentVolumeClaim:
    claimName: msq-mysql
In addition, please verify your PersistentVolume, PersistentVolumeClaim, and StorageClass:
kubectl get pv,pvc,pods,sc

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
persistentvolume/pvc-2c6aa172-effd-11e9-beeb-42010a840083   8Gi        RWO            Delete           Bound    default/msq-mysql   standard                24m

NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/msq-mysql   Bound    pvc-2c6aa172-effd-11e9-beeb-42010a840083   8Gi        RWO            standard       24m

NAME                            READY   STATUS    RESTARTS   AGE     IP         NODE                                  NOMINATED NODE   READINESS GATES
pod/msq-mysql-b5c48c888-pz6p2   1/1     Running   0          4m28s   10.0.0.8   gke-te-1-default-pool-36546f4e-5rgw   <none>           <none>
Please run kubectl describe persistentvolumeclaim/msq-mysql (in your case, change the PVC name accordingly).
You can see that the PVC was provisioned successfully using gce-pd and is mounted by the msq-mysql Pod:
Normal ProvisioningSucceeded 26m persistentvolume-controller Successfully provisioned volume pvc-2c6aa172-effd-11e9-beeb-42010a840083 using kubernetes.io/gce-pd
Mounted By: msq-mysql-b5c48c888-pz6p2
I created a table with one row, deleted the pod, and verified afterwards that the data is still there (as expected, everything is fine):
mysql> SELECT * FROM t;
+------+
| c    |
+------+
| ala  |
+------+
1 row in set (0.00 sec)
As for why all the data that was stored is now gone, as per the Helm chart docs:
The MySQL image stores the MySQL data and configurations at the /var/lib/mysql path of the container.
By default a PersistentVolumeClaim is created and mounted into that directory. In order to disable this functionality you can change the values.yaml to disable persistence and use an emptyDir instead.
Most often the problem is with PV/PVC binding. It can also be a problem with a user-defined or non-default StorageClass.
So please verify pv,pvc as stated above.
Take a look at StorageClass
A claim can request a particular class by specifying the name of a StorageClass using the attribute storageClassName. Only PVs of the requested class, ones with the same storageClassName as the PVC, can be bound to the PVC.
PVCs don’t necessarily have to request a class. A PVC with its storageClassName set equal to "" is always interpreted to be requesting a PV with no class, so it can only be bound to PVs with no class (no annotation or one set equal to ""). A PVC with no storageClassName is not quite the same and is treated differently by the cluster, depending on whether the DefaultStorageClass admission plugin is turned on.
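If the PVC is missing or unbound, it is also worth double-checking the persistence block in your values.yaml. A minimal sketch using the keys the linked chart exposes (the storageClass value here is only an example; it must name a class that exists in your cluster, or be omitted to use the default):
# values.yaml (excerpt)
persistence:
  enabled: true
  # storageClass: standard   # example value; omit to use the cluster's default StorageClass
  accessMode: ReadWriteOnce
  size: 8Gi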

Role Openshift : Allow to rollback a previous version of a deployment without having the right to change it

I am looking for a way to allow a user to roll back to a previous version of a deployment without having the right to modify the deployment (images, environment variables, strategy, ...).
I'm trying to create a role for developers who will only use CI/CD pipelines to deploy a new version, but I would like them to be able to roll back if necessary.
The role currently contains:
- apiGroups:
  - ""
  - apps.openshift.io
  attributeRestrictions: null
  resources:
  - deploymentconfigs
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  - apps.openshift.io
  attributeRestrictions: null
  resources:
  - deploymentconfigs/scale
  verbs:
  - create
  - delete
  - deletecollection
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - ""
  - apps.openshift.io
  attributeRestrictions: null
  resources:
  - deploymentconfigrollbacks
  - deploymentconfigs/instantiate
  - deploymentconfigs/rollback
  verbs:
  - create
- apiGroups:
  - ""
  - apps.openshift.io
  attributeRestrictions: null
  resources:
  - deploymentconfigs/log
  - deploymentconfigs/status
  verbs:
  - get
  - list
  - watch
I get the following error:
$ oc rollback DC_NAME --to-version=19
Error from server (Forbidden): deploymentconfigs.apps.openshift.io "DC_NAME" is forbidden: User "foo" cannot update deploymentconfigs.apps.openshift.io in the namespace "NAMESPACE_NAME": User "foo" cannot update deploymentconfigs.apps.openshift.io in project "NAMESPACE_NAME"
Or
$ oc rollout undo dc DC_NAME --to-revision=19
Error from server (Forbidden): deploymentconfigs.apps.openshift.io "DC_NAME" is forbidden: User "foo" cannot update deploymentconfigs.apps.openshift.io in the namespace "NAMESPACE_NAME": User "foo" cannot update deploymentconfigs.apps.openshift.io in project "NAMESPACE_NAME"
If I add the verb "update" to the resource "deploymentconfigs", I no longer get any errors, but then the developer can modify the deployment.
Do you have any ideas?
Thanks
Which access level does the user logged in to OpenShift have?
First, check that with the command below:
oc get rolebindings | grep your-username
You need a user with a role such as edit or admin to be able to run the rollback command.
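For instance, a quick way to check and, if appropriate, grant that level of access (the project name and username below are placeholders):
# Check which role bindings the user already has in the project
oc get rolebindings -n my-project | grep developer

# Grant the built-in edit role (placeholder names; run as a project or cluster admin)
oc adm policy add-role-to-user edit developer -n my-project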

Cronjob of existing Pod

I have a django app running on Openshift 3. I need to run certain manage.py commands on a regular basis. In Openshift 2 I used the Cron gear and now in Openshift 3 I want to use the CronJob pod type.
I want to create a pod for the cronjob, use the same source as the django app is using, but not expose it.
For example:
W1 - Django app
D1 - Postgres DB
M1 - django app for manage.py jobs, run as a cronjob pod.
Any help is appreciated.
You want to use a scheduled job.
https://docs.openshift.com/container-platform/3.5/dev_guide/cron_jobs.html
https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/
https://blog.openshift.com/openshift-jobs/
Note that at this time (OpenShift 3.5), you have to use batch/v2alpha1 as the API version. Be careful of out of date documentation showing older version labels.
What I am not sure of is how you can easily reference the image associated with an existing imagestream produced when you used the S2I builder to build your application, if you want to use the same image. The base Kubernetes object for this expects you to refer to the image from the image registry. You would thus need to work that out by looking at the imagestream and copying the image registry IP and image details over by hand.
UPDATE 1
See:
https://stackoverflow.com/a/45227960/128141
for details of how from OpenShift 3.6 you can have it resolve the imagestream name automatically. That mechanism is still alpha status in 3.6, but does work.
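For orientation, a rough sketch of what the linked answer describes; the annotation below is taken from that answer and is alpha as of OpenShift 3.6, so verify it against your cluster version:
# Sketch only: asking OpenShift to resolve a bare image stream tag name in the pod template.
jobTemplate:
  spec:
    template:
      metadata:
        annotations:
          alpha.image.policy.openshift.io/resolve-names: '*'
      spec:
        containers:
        - name: manage
          image: myapp:latest   # image stream tag in the same project, not a full registry reference
        restartPolicy: Never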
I've gotten it to work by specifying the image name in the YAML, but when I then tried to get it to work as part of the template, I ran into an error when trying to use the batch/v1 version on this server:
Cannot create cron job "djangomanage". The API version batch/v1 for kind CronJob is not supported by this server.
My template code is:
- apiVersion: batch/v1
  kind: CronJob
  metadata:
    name: djangomanage
  spec:
    schedule: "*/5 * * * *"
    jobTemplate:
      spec:
        template:
          spec:
            containers:
            - name: djangomanage
              image: '${NAME}:latest'
              env:
              - name: APP_SCRIPT
                value: "/opt/app-root/src/cron.sh"
            restartPolicy: Never
cron.sh:
python /opt/app-root/src/manage.py
You need to update the apiVersion on the first line to this:
- apiVersion: batch/v1beta1
see link below:
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#cronjob-v1beta1-batch
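As a quick sanity check before picking an apiVersion, you can list which batch API groups your server actually serves (the output below is illustrative, not from your cluster):
oc api-versions | grep batch
# e.g.
# batch/v1
# batch/v1beta1    (where CronJob lives on newer clusters)
# batch/v2alpha1   (older 3.x clusters)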

RunDeck / ACL / Custom / for non-admin group

I'm having trouble getting a user that belongs to the group "user" to have access (at least read) to projects. I've read and tried several examples I found on the internet; none seems to work.
What I need for now is: allow any user who belongs to the group "user" to read the project named MYPROJECT. I have the following, saved in a file named user.aclpolicy under /etc/rundeck. I have waited 60+ seconds. I've also tried restarting RunDeck. No luck.
I keep getting:
You have no authorized access to projects.
Contact your administrator. (User roles: raka, user)
description: application access to a project
application: 'rundeck'
for:
  resource:
    - equals:
        kind: project
      deny: [create] # deny create of projects
    - equals:
        kind: system
      allow: [read] # allow read of system info
    - equals:
        kind: user
      deny: [admin] # allow modify user profiles
  project:
    - equals:
        name: 'MYPROJECT'
      allow: [read] # allow access
      deny: [import,export,configure,delete] # deny admin actions
  storage:
    - deny: [read,create,update,delete] # allow access for /keys/* storage content
by:
  group: user
What's wrong with the YAML above? I've also checked the web.xml under /var/lib/rundeck/exp/webapp/WEB-INF to make sure the role-name "user" is registered there.
My realm.properties contains this line:
raka:greentooth60,user
I've also tried the following, basically copying whatever was there for the "admin" group. I also tried putting it directly in admin.aclpolicy instead of a separate file. Still no luck.
description: User, all access.
context:
  project: '.*' # all projects
for:
  resource:
    - allow: '*' # allow read/create all kinds
  adhoc:
    - allow: '*' # allow read/running/killing adhoc jobs
  job:
    - allow: '*' # allow read/write/delete/run/kill of all jobs
  node:
    - allow: '*' # allow read/run for all nodes
by:
  group: user
RunDeck version: Rundeck 2.6.9-1 cafe bonbon indigo tower 2016-08-03
This is a debian installation of RunDeck (.deb). Which log file(s) can I look at to analyze situations like this?
Thanks,
Raka
RunDeck ACLs can be counter-intuitive and take some time to get used to. For visibility, especially when you are starting out writing RunDeck ACL policies, it is better to only set what users are allowed to do, instead of denying access. By default, nothing is allowed, so you only really need to add allow statements to give users access to resources.
RunDeck needs ACL policies for both the "application" context, and the "project" context. You are specifying read access to the project in the application context, and access to all jobs by name (in your case .*) in the project context, but there you also need to give access to read the resource type job in order for jobs to be visible. See the example below.
Useful logs
For troubleshooting RunDeck I have found the following logs useful:
tail -f /var/log/rundeck/{rundeck.log,service.log}
Testing ACL policies
If you want to test specific user actions against your ACL files, you can use the tool rd-acl which is installed together with RunDeck. For example, to test that a member of group user can read the job restart some server in the project MYPROJECT, you can do:
rd-acl test -p 'MYPROJECT' -g user -c project -j 'restart some server' -a read
For more details, see the rd-acl manual.
Read-only ACL example
Here is an example (tested on Rundeck 2.6.9-1) that should give anyone in the group "user" access to read everything on your RunDeck server:
context:
  application: rundeck
description: "normal users will only have read permissions"
for:
  project:
    - match:
        name: '.*'
      allow: [read]
  system:
    - match:
        name: '.*'
      allow: [read]
by:
  group: user

---

context:
  project: '.*'
description: "normal users will only have read permissions"
for:
  resource:
    - equals:
        kind: 'node'
      allow: [read,refresh]
    - equals:
        kind: 'job'
      allow: [read]
    - equals:
        kind: 'event'
      allow: [read]
  job:
    - match:
        name: '.*'
      allow: [read]
  node:
    - match:
        nodename: '.*'
      allow: [read,refresh]
by:
  group: user
Another thing you can stumble upon when dealing with "You have no authorized access to projects" is file permissions.
If for whatever reason you created the aclpolicy file by simply copying it as the root user, you will need to change the owner and group to 'rundeck' (unless you changed the user RunDeck runs under, of course).
That made me lose 30 minutes today, hope this helps someone.
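For reference, assuming the default rundeck service user and group of a .deb install (adjust if you run RunDeck under a different account), the fix is a one-liner:
sudo chown rundeck:rundeck /etc/rundeck/user.aclpolicy
sudo ls -l /etc/rundeck/   # verify owner/group on all *.aclpolicy files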

Container-VM Image with GPD Volumes fails with "Failed to get GCE Cloud Provider. plugin.host.GetCloudProvider returned <nil> instead"

I am currently trying to switch from the "Container-Optimized Google Compute Engine Images" (https://cloud.google.com/compute/docs/containers/container_vms) to the "Container-VM" image (https://cloud.google.com/compute/docs/containers/vm-image/#overview). In my containers.yaml, I define a volume and a container that uses the volume.
apiVersion: v1
kind: Pod
metadata:
  name: workhorse
spec:
  containers:
    - name: postgres
      image: postgres:9.5
      imagePullPolicy: Always
      volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: postgres-storage
      gcePersistentDisk:
        pdName: disk-name
        fsType: ext4
This setup worked fine with the "Container-Optimized Google Compute Engine Images", however it fails with the "Container-VM" image. In the logs, I can see the following error:
May 24 18:33:43 battleship kubelet[629]: E0524 18:33:43.405470 629 gce_util.go:176]
Error getting GCECloudProvider while detaching PD "disk-name":
Failed to get GCE Cloud Provider. plugin.host.GetCloudProvider returned <nil> instead
Thanks in advance for any hint!
This happens only when kubelet is run without the --cloud-provider=gce flag. The problem, unless it is something different, depends on how GCP launches Container-VMs.
Please contact the Google Cloud Platform folks.
Note if this happens to you when using GCE: add the --cloud-provider=gce flag to kubelet on all your workers. This only applies to 1.2 cluster versions because, if I'm not wrong, there is an ongoing attach/detach redesign targeted for 1.3 clusters which will move this business logic out of kubelet.
In case someone is interested in the attach/detach redesign, here is its corresponding GitHub issue: https://github.com/kubernetes/kubernetes/issues/20262
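For the --cloud-provider flag mentioned above, a rough sketch of checking and applying it on a worker node; where exactly kubelet arguments are defined depends on how your nodes launch kubelet (systemd unit, environment file, or startup script), so treat this as an outline rather than exact paths:
# Check whether kubelet is already running with a cloud provider configured
ps aux | grep '[k]ubelet' | grep -o 'cloud-provider=[a-z]*'

# Add --cloud-provider=gce to the kubelet arguments wherever your node defines them,
# then restart the service (systemd shown here as an assumption):
sudo systemctl restart kubelet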