Disable delete namespace - Kubernetes

I created a namespace called qc for the qc environment.
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.namespace.name | quote }}
kubectl create -f namespace.yaml
But I can delete this namespace at any time by running kubectl delete namespace qc.
How can I prevent user-created namespaces from being deleted?
Thank you

You do not want to disable deletion of Namespaces for your kubernetes-admin user, although it would be possible. If there are other people or services interacting with your cluster, you need to define Users and/or Service Accounts for them and bind (Cluster)Roles to them that whitelist only the permissions they need. Have a look at Users in Kubernetes and Using RBAC Authorization in the official Kubernetes documentation.
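For illustration, here is a minimal sketch of such a whitelist: a ClusterRole that lets a user see namespaces but not delete them, bound to a user. The user name dev-user and the role names are hypothetical, not taken from the question.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-viewer
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "list", "watch"]   # note: no "delete" verb
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: namespace-viewer-binding
subjects:
- kind: User
  name: dev-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: namespace-viewer
  apiGroup: rbac.authorization.k8s.io
A user bound only to this role can list namespaces but gets a Forbidden error on kubectl delete namespace qc.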

Related

How to delete a project in the Red Hat OpenShift web UI without permissions?

I tried the OpenShift Red Hat Kubernetes distro and now there are 2 projects that I need to delete. I can only log in as the user 'erjcan'; this is my primary account and it does not seem to be allowed to perform admin actions.
The 'delete' button is inactive in the GUI console, and I tried to create a role for myself but can't.
I tried to create an admin-like role and assume it as a user, but that is not allowed either.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: all-stuff
  namespace: erjcan-stage
rules:
- apiGroups:
  - ''
  resources:
  - '*'
  verbs:
  - '*'
The code above gives me an RBAC "not allowed" error:
An error occurred
roles.rbac.authorization.k8s.io "all-stuff" is forbidden: user "erjcan" (groups=["system:authenticated:oauth" "system:authenticated"]) is attempting to grant RBAC permissions not currently held: {APIGroups:[""], Resources:["*"], Verbs:["*"]}
I tried to delete via the CLI, but I can only log in as the erjcan user.
Logged into "https://api.sandbox-m2.ll9k.p1.openshiftapps.com:6443" as "erjcan" using the token provided.
You have access to the following projects and can switch between them with 'oc project <projectname>':
erjcan-dev
* erjcan-stage
Using project "erjcan-stage".
bash-4.4 ~ $
bash-4.4 ~ $ oc delete project erjan-dev
Error from server (Forbidden): projects.project.openshift.io "erjan-dev" is forbidden: User "erjcan" cannot delete resource "projects" in API group "project.openshift.io" in the namespace "erjan-dev"
bash-4.4 ~ $ oc delete project erjcan-dev
Error from server (Forbidden): projects.project.openshift.io "erjcan-dev" is forbidden: User "erjcan" cannot delete resource "projects" in API group "project.openshift.io" in the namespace "erjcan-dev"
How do I delete a project in the Red Hat OpenShift GUI console?
You appear to be talking about Red Hat's Developer Sandbox, which indeed does not allow you to delete projects. There's no way around that: RBAC is specifically set up so that you cannot create or delete projects.
You don't say why you need to delete the projects. They will go away eventually due to inactivity. But if you just want a clean slate, or just need to remove what you have inside a project, you do have permission to delete everything in the project (just not the project itself).
oc delete all --all will remove everything inside the current project. Obviously, use that command with great care: there is no confirmation or warning. (The first "all" means all object types: pods, deployments, routes, etc.; the --all flag says "yes, I'm deliberately not providing a filter or any other subset, I really mean delete all of the objects I'm specifying".)
Similarly, the following two commands should clean up both of your projects. (Although the projects themselves will still exist.)
oc delete all --all -n erjcan-stage
oc delete all --all -n erjcan-dev
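If you want to preview what would be removed first, you can list the objects beforehand. A simple sanity check, using the project names from the question:
oc get all -n erjcan-stage
oc get all -n erjcan-dev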

Pod level route restriction

EDITED:
I have a service running in OpenShift on 2 pods; let's call them P1 and P2.
The service does two things:
An API
Listening to Kafka messages from a topic and then processing them.
Is there a way I can restrict all calls made to the API to P1 only, and all Kafka processing to P2 only?
My suggestion may not fit your requirements, but if each pod runs in its own project, the following approach is available.
First, configure each pod's source IP statically using an Egress IP at the project level; refer to Enabling Static IPs for External Project Traffic for more details.
$ oc patch netnamespace p1_project -p '{"egressIPs": ["1.1.1.1"]}'
$ oc patch netnamespace p2_project -p '{"egressIPs": ["2.2.2.2"]}'
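To confirm the egress IPs were applied, you can inspect the netnamespace objects afterwards (this assumes the openshift-sdn network plugin, which is what provides the netnamespace resource):
$ oc get netnamespace p1_project -o yaml
$ oc get netnamespace p2_project -o yaml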
After that, you can whitelist each pod's IP on the corresponding route; refer to Route-specific IP Whitelists for more details.
kind: Route
metadata:
  name: R1
  annotations:
    haproxy.router.openshift.io/ip_whitelist: 1.1.1.1

kind: Route
metadata:
  name: R2
  annotations:
    haproxy.router.openshift.io/ip_whitelist: 2.2.2.2
I hope it helps you.
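If the routes already exist, the same whitelist annotation can be applied without editing YAML, for example (a sketch using the route names R1 and R2 from above):
$ oc annotate route R1 haproxy.router.openshift.io/ip_whitelist=1.1.1.1 --overwrite
$ oc annotate route R2 haproxy.router.openshift.io/ip_whitelist=2.2.2.2 --overwrite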

Kubernetes -- Helm -- Mysql Chart loses stored data after stopping pod

Using https://github.com/helm/charts/tree/master/stable/mysql (all the code is there), it is great being able to run MySQL as part of my local Kubernetes cluster (using Docker Kubernetes).
The problem, though, is that once I stop the pod and then run it again, all the data that was stored is gone.
My question is: how do I keep the data that was added to the MySQL pod? I have read about persistent volumes, and the mysql Helm example from GitHub shows that it uses a PersistentVolumeClaim. I have also enabled persistence in the values.yaml file, but I still cannot see the data that was previously saved in the database.
My Docker Kubernetes version is currently 1.14.6.
Please verify your MySQL Pod; you should see volumeMounts and volumes options:
volumeMounts:
- mountPath: /var/lib/mysql
  name: data
...
volumes:
- name: data
  persistentVolumeClaim:
    claimName: msq-mysql
In addition, please verify your PersistentVolume, PersistentVolumeClaim, and StorageClass:
kubectl get pv,pvc,pods,sc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-2c6aa172-effd-11e9-beeb-42010a840083 8Gi RWO Delete Bound default/msq-mysql standard 24m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/msq-mysql Bound pvc-2c6aa172-effd-11e9-beeb-42010a840083 8Gi RWO standard 24m
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/msq-mysql-b5c48c888-pz6p2 1/1 Running 0 4m28s 10.0.0.8 gke-te-1-default-pool-36546f4e-5rgw <none> <none>
Please run kubectl describe persistentvolumeclaim/msq-mysql (in your case, change the PVC name accordingly).
You will notice that the PVC was provisioned successfully using gce-pd and is mounted by the msq-mysql Pod:
Normal ProvisioningSucceeded 26m persistentvolume-controller Successfully provisioned volume pvc-2c6aa172-effd-11e9-beeb-42010a840083 using kubernetes.io/gce-pd
Mounted By: msq-mysql-b5c48c888-pz6p2
I created a table with one row, deleted the pod, and verified again afterwards (as expected, everything was still there):
mysql> SELECT * FROM t;
+------+
| c    |
+------+
| ala  |
+------+
1 row in set (0.00 sec)
As to why "all the data that was stored is now gone", the Helm chart docs say:
The MySQL image stores the MySQL data and configurations at the /var/lib/mysql path of the container.
By default a PersistentVolumeClaim is created and mounted into that directory. In order to disable this functionality you can change the values.yaml to disable persistence and use an emptyDir instead.
Most often there is a problem with the PV/PVC binding. It can also be a problem with a user-defined or non-default StorageClass.
So please verify the PV and PVC as stated above.
Take a look at StorageClass
A claim can request a particular class by specifying the name of a StorageClass using the attribute storageClassName. Only PVs of the requested class, ones with the same storageClassName as the PVC, can be bound to the PVC.
PVCs don’t necessarily have to request a class. A PVC with its storageClassName set equal to "" is always interpreted to be requesting a PV with no class, so it can only be bound to PVs with no class (no annotation or one set equal to ""). A PVC with no storageClassName is not quite the same and is treated differently by the cluster, depending on whether the DefaultStorageClass admission plugin is turned on.
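As a concrete illustration, here is a values.yaml fragment for the stable/mysql chart that requests an explicit StorageClass. This is a sketch based on the chart's documented persistence keys; the class name standard and the 8Gi size are assumptions taken from the output above.
persistence:
  enabled: true
  storageClass: standard
  accessMode: ReadWriteOnce
  size: 8Gi
You would then install or upgrade the release with helm install -f values.yaml stable/mysql and run kubectl get pvc to confirm the claim binds to the expected class.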

How to change istio ingress loadbalancer external IP

I want to change my Istio ingress load balancer IP, but when I try updating the YAML file it does not get updated.
NAME TYPE CLUSTER-IP EXTERNAL-IP
istio-ingressgateway LoadBalancer 10.123.196.149 52.174.141.126
I have to change the EXTERNAL-IP to a different IP.
The easiest way is to copy the configuration of the istio-ingressgateway service and then delete the service. In the configuration file, delete the uid, the creationTimestamp line, and the status property. Then recreate the service from the configuration file. That will work for you.
If the public IP that you own is A.B.C.D, you need to add this to the spec section of the istio-ingressgateway service:
loadBalancerIP: A.B.C.D
You probably need to save that service's YAML or JSON, add the loadBalancerIP line, then delete the service, and finally recreate it from the saved YAML/JSON.
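A sketch of that sequence with kubectl (assuming the service lives in the usual istio-system namespace and A.B.C.D is your reserved IP):
kubectl get svc istio-ingressgateway -n istio-system -o yaml > ingressgateway.yaml
# edit ingressgateway.yaml: remove metadata.uid, metadata.resourceVersion,
# metadata.creationTimestamp and the whole status: section, then add
# "loadBalancerIP: A.B.C.D" under spec:
kubectl delete svc istio-ingressgateway -n istio-system
kubectl apply -f ingressgateway.yaml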
Just run:
kubectl patch svc istio-ingressgateway --namespace istio-system --patch '{"spec": { "loadBalancerIP": "<your-reserved-static-ip>" }}'
Reference: https://knative.dev/docs/serving/gke-assigning-static-ip-address/#step-2-update-the-external-ip-of-istio-ingressgateway-service

How to deploy OpenShift Origin 10 (OKD) on one node with GlusterFS

I am able to install OKD on one node and scale it up to multiple nodes accordingly.
But now I want to install OKD with GlusterFS on one node and then extend this to multiple nodes.
Currently I am getting an error that at least three nodes are required. How can I bypass this check in Ansible?
As per the GitHub documentation I have three options:
Configuring a new, natively-hosted GlusterFS cluster. In this scenario, GlusterFS pods are deployed on nodes in the OpenShift cluster which are configured to provide storage.
Configuring a new, external GlusterFS cluster. In this scenario, the cluster nodes have the GlusterFS software pre-installed but have not been configured yet. The installer will take care of configuring the cluster(s) for use by OpenShift applications.
Using existing GlusterFS clusters. In this scenario, one or more GlusterFS clusters are assumed to be already setup. These clusters can be either natively-hosted or external, but must be managed by a heketi service.
Can option 2 or 3 be used to start with one node and extend accordingly? I have installed a GlusterFS cluster on one node and extended it to a second node, but how do I introduce it to OpenShift?
https://imranrazakh.blogspot.com/2018/08/
I found one way to install GlusterFS on one node. Below is an all-in-one installation with GlusterFS.
I changed the inventory file as below:
[OSEv3:children]
masters
nodes
etcd
glusterfs
[OSEv3:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
ansible_ssh_user=root
openshift_deployment_type=origin
openshift_enable_origin_repo=false
openshift_disable_check=disk_availability,memory_availability
os_firewall_use_firewalld=true
openshift_public_hostname=console.1.1.0.1.nip.io
openshift_master_default_subdomain=apps.1.1.0.1.nip.io
openshift_storage_glusterfs_is_native=false
openshift_storage_glusterfs_storageclass=true
openshift_storage_glusterfs_heketi_is_native=true
openshift_storage_glusterfs_heketi_executor=ssh
openshift_storage_glusterfs_heketi_ssh_port=22
openshift_storage_glusterfs_heketi_ssh_user=root
openshift_storage_glusterfs_heketi_ssh_sudo=false
openshift_storage_glusterfs_heketi_ssh_keyfile="/root/.ssh/id_rsa"
[masters]
1.1.0.1 openshift_ip=1.1.0.1 openshift_schedulable=true
[etcd]
1.1.0.1 openshift_ip=1.1.0.1
[nodes]
1.1.0.1 openshift_ip=1.1.0.1 openshift_node_group_name="node-config-all-in-one" openshift_schedulable=true
[glusterfs]
1.1.0.1 glusterfs_devices='[ "/dev/vdb" ]'
Now we have to tweak the Ansible scripts, as they expect three nodes, by adding --durability none in the following task file:
openshift-ansible/roles/openshift_storage_glusterfs/tasks/heketi_init_db.yml
The following is the updated snippet:
- name: Create heketi DB volume
  command: "{{ glusterfs_heketi_client }} setup-openshift-heketi-storage --image {{ glusterfs_heketi_image }} --listfile /tmp/heketi-storage.json --durability none"
  register: setup_storage
Since by default it creates a StorageClass that expects a replicated environment, we have to create a custom StorageClass like the one below with volumetype: none:
oc create -f - <<EOT
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-nr-storage
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
parameters:
  resturl: http://heketi-storage.glusterfs.svc:8080
  restuser: admin
  secretName: heketi-storage-admin-secret
  secretNamespace: glusterfs
  volumetype: none
provisioner: kubernetes.io/glusterfs
volumeBindingMode: Immediate
EOT
Now you can create storage dynamically from the web console :) Any suggestions for improvement are welcome.
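To verify that dynamic provisioning works with the new class, you can create a test claim (a sketch; the claim name and size are hypothetical):
oc create -f - <<EOT
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-test-claim
spec:
  storageClassName: glusterfs-nr-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOT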
Next I will check how I can extend it.