What does attributeRestrictions mean in Openshift Container platform object schema definition? - openshift

apiVersion:
kind:
spec:
status:
  evaluationError:
  rules:
  - apiGroups:
    attributeRestrictions:
      Raw:
    nonResourceURLs:
    resourceNames:
    resources:
    verbs:

Related

setting up kerberos, timezone and ansible.cfg in Customized setup of AWX-Operator in airgapped environment

I'm really new to this and I'm going to set up a functional proof of concept with k3s.
I'm able to set up a default AWX environment with the following files after running make deploy from a downloaded version of awx-operator.
What I now need is to move everything away from UTC to Europe/Oslo as the timezone, and to get WinRM remoting working (I'm unsure how, as with the default deployment Kerberos login fails; this might be due to time skew, but also to a missing krb5 config). How do I configure the AWX operator to set up krb5 functionally?
I'd also like to persistently mount a local version of /etc/ansible/ansible.cfg, so that even if I restart the server this file is still read from the host and used by awx-operator.
I saw k8tz, but it doesn't seem to be installable without internet access, and I haven't fully grasped its setup yet.
I kindly ask for full YAML examples, as I'm not yet great at understanding the full buildup.
---
apiVersion: v1
kind: Secret
metadata:
  name: awx-postgres-configuration
  namespace: awx
stringData:
  host: awx-postgres-13
  port: "5432"
  database: awx
  username: awx
  password: MyPassword
  type: managed
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  name: awx-admin-password
  namespace: awx
stringData:
  password: MyPassword
type: Opaque
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: awx-postgres-volume
  namespace: awx
spec:
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 8Gi
  storageClassName: awx-postgres-volume
  hostPath:
    path: /data/postgres-13
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: awx-projects-volume
  namespace: awx
spec:
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 2Gi
  storageClassName: awx-projects-volume
  hostPath:
    path: /data/projects
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: awx-projects-claim
  namespace: awx
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 2Gi
  storageClassName: awx-projects-volume
Then the last part, which creates the pods etc.:
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
  namespace: awx
spec:
  # Set the replicas count to scale AWX pods
  #replicas: 3
  admin_user: admin
  admin_password_secret: awx-admin-password
  ingress_type: ingress
  ingress_tls_secret: awx-secret-tls
  hostname: my.domain.com # Replace with your host FQDN and DO NOT use an IP.
  postgres_configuration_secret: awx-postgres-configuration
  postgres_storage_class: awx-postgres-volume
  postgres_storage_requirements:
    requests:
      storage: 8Gi
  projects_persistence: true
  projects_existing_claim: awx-projects-claim
  web_resource_requirements: {}
  ee_resource_requirements: {}
  task_resource_requirements: {}
  no_log: "false"
  #ldap_cacert_secret: awx-ldap-cert
UPDATE:
I got krb5/Kerberos to work by creating a ConfigMap:
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: awx-extra-config
  namespace: awx
data:
  krb5.conf: |-
    # To opt out of the system crypto-policies configuration of krb5, remove the
    # symlink at /etc/krb5.conf.d/crypto-policies which will not be recreated.
    includedir /etc/krb5.conf.d/

    [logging]
        default = FILE:/var/log/krb5libs.log
        kdc = FILE:/var/log/krb5kdc.log
        admin_server = FILE:/var/log/kadmind.log

    [libdefaults]
        default_realm = MYDOMAIN.COM
        ticket_lifetime = 24h
        renew_lifetime = 7d

    [realms]
        MYDOMAIN.COM = {
            kdc = pdc.mydomain.com
            admin_server = pdc.mydomain.com
        }

    [domain_realm]
        .MYDOMAIN.COM = MYDOMAIN.COM
        mydomain.com = MYDOMAIN.COM
Then, referring to this ConfigMap in the last deployment file:
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
  namespace: awx
spec:
  web_extra_volume_mounts: |
    - name: krb5-conf
      mountPath: /etc/krb5.conf
      subPath: krb5.conf
  task_extra_volume_mounts: |
    - name: krb5-conf
      mountPath: /etc/krb5.conf
      subPath: krb5.conf
  ee_extra_volume_mounts: |
    - name: krb5-conf
      mountPath: /etc/krb5.conf
      subPath: krb5.conf
  extra_volumes: |
    - name: krb5-conf
      configMap:
        defaultMode: 420
        items:
          - key: krb5.conf
            path: krb5.conf
        name: awx-extra-config
Got the timezone to work as well. Now only the mount of the host file /etc/ansible/ansible.cfg is missing.
Here is the solution I used for the timezone:
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
  namespace: awx
spec:
  ee_extra_env: |
    - name: TZ
      value: Europe/Paris
  task_extra_env: |
    - name: TZ
      value: Europe/Paris
  web_extra_env: |
    - name: TZ
      value: Europe/Paris
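For the still-missing ansible.cfg mount, the same extra_volumes pattern used above for krb5.conf looks like a plausible route. The sketch below is untested; the volume name and hostPath are assumptions, and a hostPath volume only works when the pod runs on the node that holds the file (which holds on a single-node k3s setup):

---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
  namespace: awx
spec:
  task_extra_volume_mounts: |
    - name: ansible-cfg            # assumed volume name
      mountPath: /etc/ansible/ansible.cfg
      subPath: ansible.cfg
  ee_extra_volume_mounts: |
    - name: ansible-cfg
      mountPath: /etc/ansible/ansible.cfg
      subPath: ansible.cfg
  extra_volumes: |
    - name: ansible-cfg
      hostPath:
        path: /etc/ansible         # host directory assumed to contain ansible.cfg
        type: Directory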

Traefik IngressRoute redirect doesn’t work

I have set up the following IngressRoutes for the default path and wp-*:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: external-1
  namespace: marketing
spec:
  entryPoints:
    - web
    - websecure
  routes:
    - match: Host(`example.com`) || Host(`www.example.com`)
      kind: Rule
      services:
        - name: wordpress
          port: 80
      middlewares:
        - name: https-redirect
  tls:
    secretName: prod-cert
and
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: wp-admin-1
  namespace: marketing
spec:
  entryPoints:
    - web
    - websecure
  routes:
    - match: Host(`example.com`) || Host(`www.example.com`) && PathPrefix(`/wp-login.php`,`/wp-login.php/`, `/wp-admin/`)
      kind: Rule
      services:
        - name: wordpress
          port: 80
      middlewares:
        - name: secured-restricted
  tls:
    secretName: prod-cert
Middleware:
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: secured-restricted
  namespace: marketing
spec:
  chain:
    middlewares:
      - name: https-redirect
      - name: permited-ips
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: https-redirect
  namespace: marketing
spec:
  redirectScheme:
    scheme: https
    permanent: true
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: permited-ips
  namespace: marketing
spec:
  ipWhiteList:
    sourceRange:
      - "#.#.#.#/28" # redacted CIDR
https://www.example.com --> works
https://example.com --> gets Forbidden
https://example.com works only when I access it from a whitelisted IP (#.#.#.#/28).
So it looks like the external-1 IngressRoute is not being hit.
What is wrong with this setup?
Splitting the rules in the following way fixed the issue:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: external-1
  namespace: marketing
spec:
  entryPoints:
    - web
    - websecure
  routes:
    - match: Host(`example.com`)
      kind: Rule
      services:
        - name: wordpress
          port: 80
    - match: Host(`www.example.com`)
      kind: Rule
      services:
        - name: wordpress
          port: 80
      middlewares:
        - name: https-redirect
  tls:
    secretName: prod-cert
and
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: wp-admin-1
  namespace: marketing
spec:
  entryPoints:
    - web
    - websecure
  routes:
    - match: Host(`example.com`) && PathPrefix(`/wp-login.php`,`/wp-login.php/`, `/wp-admin/`)
      kind: Rule
      services:
        - name: wordpress
          port: 80
    - match: Host(`www.example.com`) && PathPrefix(`/wp-login.php`,`/wp-login.php/`, `/wp-admin/`)
      kind: Rule
      services:
        - name: wordpress
          port: 80
      middlewares:
        - name: secured-restricted
  tls:
    secretName: prod-cert
You probably forgot a bracket: && binds tighter than ||, so your original match parsed as Host(`example.com`) || (Host(`www.example.com`) && PathPrefix(...)), which is why bare example.com hit the IP-restricted route. Your original configuration might just work by changing the rule to this:
(Host(`example.com`) || Host(`www.example.com`)) && PathPrefix(`/wp-login.php`,`/wp-login.php/`, `/wp-admin/`)

mysql service pending in kubernetes

I created a .yaml file to create a MySQL service on Kubernetes for my internal application, but it's unreachable. I can reach the application and also phpMyAdmin, but the database is not working properly: the mysql pod is stuck in Pending status.
.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: cust
  labels:
    app: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: mysql
          image: mysql
          imagePullPolicy: Never
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: flaskapi-cred
                  key: db_root_password
          ports:
            - containerPort: 3306
              name: db-container
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: cust
  labels:
    app: db
spec:
  ports:
    - port: 3306
      protocol: TCP
      name: mysql
  selector:
    app: db
  type: LoadBalancer
kubectl get all output is:
NAME READY STATUS RESTARTS AGE
pod/flaskapi-deployment-59bcb745ff-gl8xn 1/1 Running 0 117s
pod/mysql-99fb77bf4-sbhlj 0/1 Pending 0 118s
pod/phpmyadmin-deployment-5fc964bf9d-dk59t 1/1 Running 0 118s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/flaskapi-deployment 1/1 1 1 117s
deployment.apps/mysql 0/1 1 0 118s
deployment.apps/phpmyadmin-deployment 1/1 1 1 118s
I already did docker pull mysql.
Edit
Name: mysql-99fb77bf4-sbhlj
Namespace: z2
Priority: 0
Node: <none>
Labels: app=db
pod-template-hash=99fb77bf4
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/mysql-99fb77bf4
Containers:
mysql:
Image: mysql
Port: 3306/TCP
Host Port: 0/TCP
Environment:
MYSQL_ROOT_PASSWORD: <set to the key 'db_root_password' in secret 'flaskapi-secrets'> Optional: false
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-gmbnd (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim
ReadOnly: false
default-token-gmbnd:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-gmbnd
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 3m44s default-scheduler persistentvolumeclaim "mysql-pv-claim" not found
Warning FailedScheduling 3m44s default-scheduler persistentvolumeclaim "mysql-pv-claim" not found
You are missing the volume to attach to the pod/deployment: the PVC is required because your deployment configuration uses it. You can see it clearly in the event: persistentvolumeclaim "mysql-pv-claim" not found.
You can apply the below YAML and try:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
As the error message clearly shows, persistentvolumeclaim "mysql-pv-claim" not found, you need to provision a PersistentVolumeClaim (PVC).
There is static and dynamic provisioning, but I'll explain static provisioning here as it is relatively easy to understand and set up. You need to create a PersistentVolume (PV) which the PVC will use. There are various types of volumes, about which you can read here.
Which type of volume you create is your choice, depending on your environment and needs. A simple example is the hostPath volume type.
Create a PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  namespace: cust
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    # The configuration here specifies that the volume is at /tmp/data on the cluster's Node
    path: "/tmp/data"
And then create a PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  namespace: cust
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeName: mysql-pv-volume
Once the PVC is successfully created and bound, your deployment should go through.

How to create a MySQL cluster within an Istio injected namespace?

I'm currently trying to create a 2-node MySQL cluster on Kubernetes 1.13.10 using the Oracle MySQL Operator. It works fine within a standard namespace. However, once it's created within an Istio 1.4 injected namespace, the MySQL agent, which is in charge of setting up the replication, returns the following error:
Error bootstrapping cluster: failed to create new cluster: SystemError: RuntimeError: Dba.create_cluster: ERROR: Error starting cluster: The port '33061' for localAddress option is already in use. Specify an available port to be used with localAddress option or free port '33061'.
I was not able to find any support on this so far.
How can I configure Istio so that the agent can manage the replication?
Below are my yaml manifests:
apiVersion: v1
kind: Namespace
metadata:
  name: test
  labels:
    istio-injection: enabled
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mysql-agent
  namespace: test
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: mysql-agent
  namespace: test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: mysql-agent
subjects:
  - kind: ServiceAccount
    name: mysql-agent
    namespace: test
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test-mysql-data0
  labels:
    namespace: test
    type: data
    app: mysql
spec:
  storageClassName: hostpath
  persistentVolumeReclaimPolicy: Retain
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 2Gi
  hostPath:
    path: /data/test/mysql/data0
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test-mysql-data1
  labels:
    namespace: test
    type: data
    app: mysql
spec:
  storageClassName: hostpath
  persistentVolumeReclaimPolicy: Retain
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 2Gi
  hostPath:
    path: /share/test/mysql/data1
    type: DirectoryOrCreate
---
apiVersion: v1
kind: Secret
metadata:
  name: mysql-root-user-secret
  namespace: test
stringData:
  password: password
---
apiVersion: mysql.oracle.com/v1alpha1
kind: Cluster
metadata:
  name: mysql
  namespace: test
  labels:
    app: mysql
    namespace: test
spec:
  multiMaster: true
  members: 2
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: "v1alpha1.mysql.oracle.com/cluster"
                operator: In
                values:
                  - mysql
          topologyKey: "kubernetes.io/hostname"
  rootPasswordSecret:
    name: mysql-root-user-secret
  volumeClaimTemplate:
    metadata:
      name: data
    spec:
      storageClassName: hostpath
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 2Gi
      selector:
        matchLabels:
          namespace: test
          type: data
          app: mysql
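No working fix appears in this thread, but one generally applicable Istio mechanism is worth noting: ports that a pod binds for its own peer-to-peer traffic can be excluded from the sidecar's iptables interception with pod annotations. Whether the operator's Cluster resource can propagate annotations to the pods it creates would need checking against the MySQL Operator docs; the fragment below only illustrates the standard Istio annotations themselves, on a hypothetical pod template:

# Hedged sketch: keep Envoy away from the group-replication port 33061.
metadata:
  annotations:
    traffic.sidecar.istio.io/excludeInboundPorts: "33061"
    traffic.sidecar.istio.io/excludeOutboundPorts: "33061"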

How to set INGRESS_HOST and INGRESS_PORT and access GATEWAY_URL

How do I set INGRESS_HOST and INGRESS_PORT for a sample yaml file whose sidecar was added via automatic sidecar injection?
I am using Windows 10 with Docker, Kubernetes, and Istio; kubectl and istioctl are installed.
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  labels:
    app: helloworld
spec:
  ports:
    - port: 5000
      name: http
  selector:
    app: helloworld
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helloworld-v1
  labels:
    version: v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
        version: v1
    spec:
      containers:
        - name: helloworld
          image: istio/examples-helloworld-v1
          resources:
            requests:
              cpu: "100m"
          imagePullPolicy: Never
          ports:
            - containerPort: 5000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helloworld-v2
  labels:
    version: v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
        version: v2
    spec:
      containers:
        - name: helloworld
          image: istio/examples-helloworld-v2
          resources:
            requests:
              cpu: "100m"
          imagePullPolicy: Never
          ports:
            - containerPort: 5000
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: helloworld-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
    - "*"
  gateways:
    - helloworld-gateway
  http:
    - match:
        - uri:
            exact: /hello
      route:
        - destination:
            host: helloworld
            port:
              number: 5010
I'm getting 503 Service Temporarily Unavailable when trying to hit my sample service.
Please first verify that your selector labels are correct and that your Service is connected to the Deployment's pods.
You have 'version: v1' and 'version: v2' on the Deployments but not on the Service. That's why the service gives 503 Unavailable; if the issue were in the pod or the service itself, you would get a 502 Bad Gateway or similar.
Istio traffic flows like:
ingress-gateway -> virtual-service -> destination-rule [optional] -> service
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
    - "*"
  gateways:
    - helloworld-gateway
  http:
    - match:
        - uri:
            exact: /hello
      route:
        - destination:
            host: helloworld
            port:
              number: 5000 # <--- change
Welcome to SO, @Sreedhar!
How to set INGRESS_HOST and INGRESS_PORT
These two environment variables are not adjustable inside the manifest (static) files you use to create Deployments/Pods on a K8S cluster. They serve only as placeholders to ease end-user access, from outside, to an application just deployed on an Istio-enabled Kubernetes cluster. The values of INGRESS_HOST/INGRESS_PORT are filled in from information that is auto-generated by the cluster during resource creation and available only in live objects.
Where the ingress takes its IP address from, you can read in official documentation here:
https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
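In practice, these placeholders are usually exported with kubectl queries against the live istio-ingressgateway Service, as the Istio docs do. A sketch, assuming the default install in the istio-system namespace with a LoadBalancer-type gateway (NodePort setups read the nodePort fields instead):

```shell
# External IP of the ingress gateway's LoadBalancer (empty until the cluster assigns one)
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null)
# Port named http2 on the gateway service (80 by default)
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].port}' 2>/dev/null)
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
# Once the VirtualService port is fixed, the sample is reachable at:
#   curl http://$GATEWAY_URL/hello
echo "$GATEWAY_URL"
```

With both variables set, GATEWAY_URL is simply the host:port of the ingress gateway.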
For the Bad Gateway issue, as suggested previously by @Harsh Manvar, you have specified an invalid port in the VirtualService (5010 where the service listens on 5000).