How to make a PSP take priority over another PSP in Kubernetes - kubernetes-psp

We have implemented privileged and non-privileged PSPs for our k8s cluster. However, to run Litmus Chaos experiments in the cluster, we want to implement a third PSP, psp-litmus (the official Litmus PSP), and exclude litmus-sa from honoring the unprivileged PSP.
Currently the unprivileged PSP ClusterRoleBinding is applied to all namespaces except a few system-critical namespaces, which are whitelisted under the privileged PSP.
ClusterRoleBinding for unprivileged:
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
What we want:
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts   # but excluding litmus-ns
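RBAC subjects cannot express exclusions, so the usual pattern is the other way around: leave the group binding alone and additionally grant the Litmus service account `use` of the more permissive PSP. A sketch under assumptions (the Litmus namespace is `litmus`, the service account is `litmus-sa`, and the official PSP is named `psp-litmus` - adjust to your actual names):

```yaml
# Assumed names: namespace "litmus", SA "litmus-sa", PSP "psp-litmus".
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-litmus-user
rules:
  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["psp-litmus"]   # grant "use" of only this PSP
    verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-litmus-user
  namespace: litmus   # namespaced binding: applies only in the Litmus namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-litmus-user
subjects:
  - kind: ServiceAccount
    name: litmus-sa
    namespace: litmus
```

When a pod is admitted, the PSP admission controller considers every PSP the creating user/SA may `use`: policies that validate the pod without mutating it win over mutating ones, and ties are broken alphabetically by name. So the restricted PSP still applies to `system:serviceaccounts` generally, but litmus-sa's pods can be admitted under psp-litmus as long as that policy allows them as-is.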

Related

setting up kerberos, timezone and ansible.cfg in Customized setup of AWX-Operator in airgapped environment

I'm really new to this and I'm going to set up a functional proof of concept with k3s.
I'm able to set up a default AWX environment with the following files after doing make deploy from a downloaded version of awx-operator.
What I now need is to get everything away from UTC to Europe/Oslo as the timezone, and
to make WinRM remoting work (unsure how, as with the default deployment Kerberos login will fail; might be due to time skew but also a missing krb5 config). How do I configure the AWX operator to set up krb5 functionally?
I'd also like to mount a local version of /etc/ansible/ansible.cfg persistently, so that even if I restart the server etc. this file will still be read from the host and used by awx-operator.
I saw k8tz, but it doesn't seem to be installable without internet access, and I haven't fully grasped its setup yet.
I kindly ask for full YAML examples, as I'm not that great yet at understanding the full buildup.
---
apiVersion: v1
kind: Secret
metadata:
  name: awx-postgres-configuration
  namespace: awx
stringData:
  host: awx-postgres-13
  port: "5432"
  database: awx
  username: awx
  password: MyPassword
  type: managed
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  name: awx-admin-password
  namespace: awx
stringData:
  password: MyPassword
type: Opaque
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: awx-postgres-volume
  namespace: awx
spec:
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 8Gi
  storageClassName: awx-postgres-volume
  hostPath:
    path: /data/postgres-13
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: awx-projects-volume
  namespace: awx
spec:
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 2Gi
  storageClassName: awx-projects-volume
  hostPath:
    path: /data/projects
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: awx-projects-claim
  namespace: awx
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 2Gi
  storageClassName: awx-projects-volume
Then the last part, which creates the pods etc.:
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
  namespace: awx
spec:
  # Set the replicas count to scale AWX pods
  #replicas: 3
  admin_user: admin
  admin_password_secret: awx-admin-password
  ingress_type: ingress
  ingress_tls_secret: awx-secret-tls
  hostname: my.domain.com # Replace with the host FQDN and DO NOT use an IP.
  postgres_configuration_secret: awx-postgres-configuration
  postgres_storage_class: awx-postgres-volume
  postgres_storage_requirements:
    requests:
      storage: 8Gi
  projects_persistence: true
  projects_existing_claim: awx-projects-claim
  web_resource_requirements: {}
  ee_resource_requirements: {}
  task_resource_requirements: {}
  no_log: "false"
  #ldap_cacert_secret: awx-ldap-cert
UPDATE:
I got krb5 Kerberos to work by creating a ConfigMap:
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: awx-extra-config
  namespace: awx
data:
  krb5.conf: |-
    # To opt out of the system crypto-policies configuration of krb5, remove the
    # symlink at /etc/krb5.conf.d/crypto-policies which will not be recreated.
    includedir /etc/krb5.conf.d/

    [logging]
        default = FILE:/var/log/krb5libs.log
        kdc = FILE:/var/log/krb5kdc.log
        admin_server = FILE:/var/log/kadmind.log

    [libdefaults]
        default_realm = MYDOMAIN.COM
        ticket_lifetime = 24h
        renew_lifetime = 7d

    [realms]
        MYDOMAIN.COM = {
            kdc = pdc.mydomain.com
            admin_server = pdc.mydomain.com
        }

    [domain_realm]
        .MYDOMAIN.COM = MYDOMAIN.COM
        mydomain.com = MYDOMAIN.COM
then referring to this in the last deployment file:
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
  namespace: awx
spec:
  web_extra_volume_mounts: |
    - name: krb5-conf
      mountPath: /etc/krb5.conf
      subPath: krb5.conf
  task_extra_volume_mounts: |
    - name: krb5-conf
      mountPath: /etc/krb5.conf
      subPath: krb5.conf
  ee_extra_volume_mounts: |
    - name: krb5-conf
      mountPath: /etc/krb5.conf
      subPath: krb5.conf
  extra_volumes: |
    - name: krb5-conf
      configMap:
        defaultMode: 420
        items:
          - key: krb5.conf
            path: krb5.conf
        name: awx-extra-config
Got the timezone to work as well; only the mount of the host file /etc/ansible/ansible.cfg is still missing.
Here is the solution I used:
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
  namespace: awx
spec:
  ee_extra_env: |
    - name: TZ
      value: Europe/Paris
  task_extra_env: |
    - name: TZ
      value: Europe/Paris
  web_extra_env: |
    - name: TZ
      value: Europe/Paris
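The remaining piece, ansible.cfg, can likely be handled the same way as krb5.conf: put the file in a ConfigMap and mount it with a subPath. A sketch under assumptions (the ansible.cfg content and the volume name `ansible-cfg` are placeholders; the AWX spec fields are the same ones used for krb5.conf above). Note that the `extra_volumes` and `*_extra_volume_mounts` lists replace rather than append, so in practice the krb5-conf entries would need to be kept alongside these:

```yaml
# Sketch (untested): store ansible.cfg in the existing awx-extra-config
# ConfigMap (only the new key shown here) and mount it via subPath.
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: awx-extra-config
  namespace: awx
data:
  ansible.cfg: |-
    # placeholder content - put your real ansible.cfg here
    [defaults]
    host_key_checking = False
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
  namespace: awx
spec:
  task_extra_volume_mounts: |
    - name: ansible-cfg
      mountPath: /etc/ansible/ansible.cfg
      subPath: ansible.cfg
  ee_extra_volume_mounts: |
    - name: ansible-cfg
      mountPath: /etc/ansible/ansible.cfg
      subPath: ansible.cfg
  extra_volumes: |
    - name: ansible-cfg
      configMap:
        defaultMode: 420
        items:
          - key: ansible.cfg
            path: ansible.cfg
        name: awx-extra-config
```

A ConfigMap survives pod and server restarts because it lives in the cluster datastore; if the file really must be read from the host filesystem, a hostPath entry in `extra_volumes` would be the literal alternative, at the cost of tying the pods to one node.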

mysql service pending in kubernetes

I created a .yaml file to create a MySQL service on Kubernetes for my internal application, but it's unreachable. I can reach the application and also phpMyAdmin, but the database is not working properly: the mysql pod is stuck in Pending status.
The .yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: cust
  labels:
    app: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: mysql
          image: mysql
          imagePullPolicy: Never
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: flaskapi-cred
                  key: db_root_password
          ports:
            - containerPort: 3306
              name: db-container
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: cust
  labels:
    app: db
spec:
  ports:
    - port: 3306
      protocol: TCP
      name: mysql
  selector:
    app: db
  type: LoadBalancer
kubectl get all output is:
NAME                                         READY   STATUS    RESTARTS   AGE
pod/flaskapi-deployment-59bcb745ff-gl8xn     1/1     Running   0          117s
pod/mysql-99fb77bf4-sbhlj                    0/1     Pending   0          118s
pod/phpmyadmin-deployment-5fc964bf9d-dk59t   1/1     Running   0          118s

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/flaskapi-deployment     1/1     1            1           117s
deployment.apps/mysql                   0/1     1            0           118s
deployment.apps/phpmyadmin-deployment   1/1     1            1           118s
I already did docker pull mysql.
Edit
Name:           mysql-99fb77bf4-sbhlj
Namespace:      z2
Priority:       0
Node:           <none>
Labels:         app=db
                pod-template-hash=99fb77bf4
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/mysql-99fb77bf4
Containers:
  mysql:
    Image:      mysql
    Port:       3306/TCP
    Host Port:  0/TCP
    Environment:
      MYSQL_ROOT_PASSWORD:  <set to the key 'db_root_password' in secret 'flaskapi-secrets'>  Optional: false
    Mounts:
      /var/lib/mysql from mysql-persistent-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gmbnd (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  mysql-persistent-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mysql-pv-claim
    ReadOnly:   false
  default-token-gmbnd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-gmbnd
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  3m44s  default-scheduler  persistentvolumeclaim "mysql-pv-claim" not found
  Warning  FailedScheduling  3m44s  default-scheduler  persistentvolumeclaim "mysql-pv-claim" not found
You are missing the volume to attach to the pod/deployment: a PVC is required because your deployment configuration uses one.
You can see it clearly: persistentvolumeclaim "mysql-pv-claim" not found.
You can apply the YAML below and try.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
As the error message clearly shows, persistentvolumeclaim "mysql-pv-claim" not found, you need to provision a PersistentVolumeClaim (PVC).
There is static & dynamic provisioning, but I'll explain static provisioning here, as it will be relatively easy for you to understand and set up. You need to create a PersistentVolume (PV) which the PVC will use. There are various types of volumes, which you can read about here.
Which type of volume you create is your choice, depending on your environment and needs. A simple example would be the volume type hostPath.
Create a PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  namespace: cust
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    # The configuration here specifies that the volume is at /tmp/data on the cluster's node
    path: "/tmp/data"
And then create a PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  namespace: cust
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeName: mysql-pv-volume
Once the PVC is successfully created, your deployment shall go through.

AWS EKS ALB ingress controller is not working, and HOST is not populating

I am trying to implement a simple "hello world" on EKS with the ALB ingress controller.
My goal is to:
Create a cluster
Deploy an Ingress to access it using an ELB
The following things have been done:
Created the EKS cluster
Added the ALB ingress controller
C:\workspace\eks>kubectl get po -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
alb-ingress-controller-5f96d7df77-mdrw2   1/1     Running   0          4m1s
Created the application as below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "2048-deployment"
  namespace: "2048-game"
  labels:
    app: "2048"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "2048"
  template:
    metadata:
      labels:
        app: "2048"
    spec:
      containers:
        - image: alexwhen/docker-2048
          imagePullPolicy: Always
          name: "2048"
          ports:
            - containerPort: 80
The Service is as follows:
apiVersion: v1
kind: Service
metadata:
  name: "service-2048"
  namespace: "2048-game"
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app: "2048"
The Ingress is as below:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "2048-ingress"
  namespace: "2048-game"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: 2048-ingress
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: "service-2048"
              servicePort: 80
The output is as below: the ADDRESS is not populated with the ELB hostname, and the app is not accessible from outside.
C:\sample>kubectl get ingress/2048-ingress -n 2048-game
NAME           HOSTS   ADDRESS   PORTS   AGE
2048-ingress   *                 80      71s
Update:
Found the following error in the alb-ingress-controller-5f96d7df77-mdrw2 logs; not able to find how to fix it:
kubebuilder/controller "msg"="Reconciler error" "error"="failed to build LoadBalancer configuration due to failed to resolve 2 qualified subnet for ALB. Subnets must contains these tags: 'kubernetes.io/cluster/ascluster': ['shared' or 'owned'] and 'kubernetes.io/role/elb': ['' or '1']. See https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/controller/config/#subnet-auto-discovery for more details. Resolved qualified subnets: '[]'" "controller"="alb-ingress-controller" "request"={"Namespace":"default","Name":"ingress-default-dev"}
The subnets where the EKS nodes reside should be tagged as described in
https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html#vpc-subnet-tagging
If your subnets are not tagged with kubernetes.io/cluster/<cluster-name>=shared etc., you can also try passing the subnets in the Ingress annotations like below:
alb.ingress.kubernetes.io/subnets: subnet-xxxxxx, subnet-xxxxxx
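For context, that annotation sits alongside the existing ones in the Ingress metadata; something like the sketch below, where the subnet IDs are placeholders to replace with your own public subnets:

```yaml
# Sketch: listing subnets explicitly so the controller skips tag-based
# auto-discovery. subnet-aaaa1111 / subnet-bbbb2222 are placeholders.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "2048-ingress"
  namespace: "2048-game"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/subnets: subnet-aaaa1111, subnet-bbbb2222
```

For an internet-facing scheme the listed subnets should be public (have a route to an internet gateway), one per availability zone.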

How to create a MySQL cluster within an Istio injected namespace?

I'm currently trying to create a 2-node MySQL cluster on Kubernetes 1.13.10 using the Oracle MySQL Operator. It works fine within a standard namespace. However, once it's created within an Istio 1.4 injected namespace, the MySQL agent, which is in charge of setting up the replication, returns the following error:
Error bootstrapping cluster: failed to create new cluster: SystemError: RuntimeError: Dba.create_cluster: ERROR: Error starting cluster: The port '33061' for localAddress option is already in use. Specify an available port to be used with localAddress option or free port '33061'.
I was not able to find any support on this so far.
How can I configure Istio to enable the agent to manage the replication?
Below are my YAML manifests:
apiVersion: v1
kind: Namespace
metadata:
  name: test
  labels:
    istio-injection: enabled
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mysql-agent
  namespace: test
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: mysql-agent
  namespace: test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: mysql-agent
subjects:
  - kind: ServiceAccount
    name: mysql-agent
    namespace: test
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test-mysql-data0
  labels:
    namespace: test
    type: data
    app: mysql
spec:
  storageClassName: hostpath
  persistentVolumeReclaimPolicy: Retain
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 2Gi
  hostPath:
    path: /data/test/mysql/data0
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test-mysql-data1
  labels:
    namespace: test
    type: data
    app: mysql
spec:
  storageClassName: hostpath
  persistentVolumeReclaimPolicy: Retain
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 2Gi
  hostPath:
    path: /share/test/mysql/data1
    type: DirectoryOrCreate
---
apiVersion: v1
kind: Secret
metadata:
  name: mysql-root-user-secret
  namespace: test
stringData:
  password: password
---
apiVersion: mysql.oracle.com/v1alpha1
kind: Cluster
metadata:
  name: mysql
  namespace: test
  labels:
    app: mysql
    namespace: test
spec:
  multiMaster: true
  members: 2
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: "v1alpha1.mysql.oracle.com/cluster"
                operator: In
                values:
                  - mysql
          topologyKey: "kubernetes.io/hostname"
  rootPasswordSecret:
    name: mysql-root-user-secret
  volumeClaimTemplate:
    metadata:
      name: data
    spec:
      storageClassName: hostpath
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 2Gi
      selector:
        matchLabels:
          namespace: test
          type: data
          app: mysql

Error when uploading a yml with cluster policy bindings in OpenShift: "already exists"

When I try to execute this:
oc create -f custom_clusterPolicyBinding.yml
Error from server: error when creating "custom_clusterPolicyBinding.yml": clusterpolicybindings ":default" already exists
oc version
oc v1.4.1
kubernetes v1.4.0+776c994
features: Basic-Auth GSSAPI Kerberos SPNEGO
This is the custom_clusterPolicyBinding.yml:
apiVersion: v1
kind: ClusterPolicyBinding
metadata:
  name: custom
policyRef:
  name: custom
roleBindings:
  - name: custom:label-nodos
    roleBinding:
      groupNames:
        - pachi
      metadata:
        name: custom:label-nodos
      roleRef:
        name: custom:label-nodos
      subjects:
        - kind: Group
          name: pachi
      userNames: null
The cluster role binding custom:label-nodos already exists:
oc get clusterrolebinding | grep custom:label-nodos
custom:label-nodos    /custom:label-nodos
And the content of the cluster role binding YAML is:
apiVersion: v1
groupNames: null
kind: ClusterRoleBinding
metadata:
  name: custom:label-nodos
roleRef:
  name: custom:label-nodos
subjects: []
userNames: null
Any idea?
Do not directly edit policy. There is only one cluster policy and cluster policy binding.
Instead you would want to create a clusterrole with content similar to this (edit it to grant the permissions you want to give out):
apiVersion: v1
kind: ClusterRole
metadata:
  name: some-user
rules:
  - apiGroups:
      - project.openshift.io
      - ""
    resources:
      - projects
    verbs:
      - list
And a clusterrolebinding with content like this (edit it to bind to the correct subjects):
apiVersion: v1
kind: ClusterRoleBinding
metadata:
  name: some-users
roleRef:
  name: some-user
subjects:
  - kind: User
    name: foo
You can also use the oadm policy add-*role-to-* commands to help with binding roles:
add-cluster-role-to-group
add-cluster-role-to-user
add-role-to-group
add-role-to-user