Setting up Kerberos, timezone and ansible.cfg in a customized AWX Operator setup in an air-gapped environment - k3s

I'm really new to this and I'm setting up a functional proof of concept with k3s.
I'm able to set up a default AWX environment with the following files after running make deploy from a downloaded version of awx-operator.
What I now need is to move everything away from UTC to Europe/Oslo as the timezone,
and to make WinRM remoting work (unsure how, as with the default deployment Kerberos login fails; it might be due to time skew, but the krb5 config is also missing). How do I configure the AWX operator to set up krb5 functionally?
I'd also like to mount a local version of /etc/ansible/ansible.cfg persistently, so that even if I restart the server this file will still be read from the host and used by the AWX deployment.
I saw k8tz, but it doesn't seem installable without internet access, and I haven't fully grasped its setup yet.
I kindly ask for full YAML examples, as I'm not that good yet at understanding the full buildup.
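(A note on the time-skew suspicion: Kerberos compares absolute UTC timestamps, so a container displaying UTC instead of Europe/Oslo is purely cosmetic. Authentication only breaks when the actual clock of the k3s host drifts more than the allowed skew, typically 5 minutes, from the KDC; keeping the host NTP-synced addresses that part. The TZ setting below only changes the displayed timezone.)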
---
apiVersion: v1
kind: Secret
metadata:
  name: awx-postgres-configuration
  namespace: awx
stringData:
  host: awx-postgres-13
  port: "5432"
  database: awx
  username: awx
  password: MyPassword
  type: managed
type: Opaque
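(The two type keys are not a duplicate: the first, indented under stringData, is the operator's own setting telling it to deploy a managed PostgreSQL, while the trailing type: Opaque is the Kubernetes Secret type.)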
---
apiVersion: v1
kind: Secret
metadata:
  name: awx-admin-password
  namespace: awx
stringData:
  password: MyPassword
type: Opaque
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: awx-postgres-volume
  namespace: awx
spec:
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 8Gi
  storageClassName: awx-postgres-volume
  hostPath:
    path: /data/postgres-13
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: awx-projects-volume
  namespace: awx
spec:
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 2Gi
  storageClassName: awx-projects-volume
  hostPath:
    path: /data/projects
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: awx-projects-claim
  namespace: awx
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 2Gi
  storageClassName: awx-projects-volume
Then the last part, which creates the AWX instance and its pods:
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
  namespace: awx
spec:
  # Set the replicas count to scale AWX pods
  #replicas: 3
  admin_user: admin
  admin_password_secret: awx-admin-password
  ingress_type: ingress
  ingress_tls_secret: awx-secret-tls
  hostname: my.domain.com # Replace with the host FQDN; DO NOT use an IP.
  postgres_configuration_secret: awx-postgres-configuration
  postgres_storage_class: awx-postgres-volume
  postgres_storage_requirements:
    requests:
      storage: 8Gi
  projects_persistence: true
  projects_existing_claim: awx-projects-claim
  web_resource_requirements: {}
  ee_resource_requirements: {}
  task_resource_requirements: {}
  no_log: "false"
  #ldap_cacert_secret: awx-ldap-cert
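Note that ingress_tls_secret refers to awx-secret-tls, which is not among the files above. Assuming it is created separately, for example with kubectl create secret tls awx-secret-tls --cert=tls.crt --key=tls.key -n awx, the equivalent manifest would be a sketch like this (certificate and key contents are placeholders):
---
apiVersion: v1
kind: Secret
metadata:
  name: awx-secret-tls
  namespace: awx
type: kubernetes.io/tls
stringData:
  tls.crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  tls.key: |
    -----BEGIN PRIVATE KEY-----
    ...
    -----END PRIVATE KEY-----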
UPDATE:
I got Kerberos (krb5) to work by creating a ConfigMap:
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: awx-extra-config
  namespace: awx
data:
  krb5.conf: |-
    # To opt out of the system crypto-policies configuration of krb5, remove the
    # symlink at /etc/krb5.conf.d/crypto-policies which will not be recreated.
    includedir /etc/krb5.conf.d/

    [logging]
    default = FILE:/var/log/krb5libs.log
    kdc = FILE:/var/log/krb5kdc.log
    admin_server = FILE:/var/log/kadmind.log

    [libdefaults]
    default_realm = MYDOMAIN.COM
    ticket_lifetime = 24h
    renew_lifetime = 7d

    [realms]
    MYDOMAIN.COM = {
        kdc = pdc.mydomain.com
        admin_server = pdc.mydomain.com
    }

    [domain_realm]
    .MYDOMAIN.COM = MYDOMAIN.COM
    mydomain.com = MYDOMAIN.COM
Then I referenced this in the last deployment file:
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
  namespace: awx
spec:
  web_extra_volume_mounts: |
    - name: krb5-conf
      mountPath: /etc/krb5.conf
      subPath: krb5.conf
  task_extra_volume_mounts: |
    - name: krb5-conf
      mountPath: /etc/krb5.conf
      subPath: krb5.conf
  ee_extra_volume_mounts: |
    - name: krb5-conf
      mountPath: /etc/krb5.conf
      subPath: krb5.conf
  extra_volumes: |
    - name: krb5-conf
      configMap:
        defaultMode: 420
        items:
          - key: krb5.conf
            path: krb5.conf
        name: awx-extra-config
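To verify the mount, one option is to exec into a running container and check the file, e.g. kubectl exec -it deployment/awx-task -n awx -- cat /etc/krb5.conf (deployment and container names vary by operator version; older versions run web and task in a single awx deployment), and then test ticket acquisition with kinit user@MYDOMAIN.COM from the same shell.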
I got the timezone to work as well; only the mount of the host file /etc/ansible/ansible.cfg is still missing.
Here is the solution I used for the timezone:
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
  namespace: awx
spec:
  ee_extra_env: |
    - name: TZ
      value: Europe/Paris
  task_extra_env: |
    - name: TZ
      value: Europe/Paris
  web_extra_env: |
    - name: TZ
      value: Europe/Paris
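For the still-missing ansible.cfg part, here is a sketch that follows the same ConfigMap pattern as krb5.conf (assumptions: the settings can live in a ConfigMap instead of being read live from the host's /etc/ansible/ansible.cfg, and the ConfigMap name awx-ansible-cfg plus its contents are placeholders of mine):
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: awx-ansible-cfg
  namespace: awx
data:
  ansible.cfg: |
    [defaults]
    # placeholder settings; put the real ansible.cfg content here
    host_key_checking = False
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
  namespace: awx
spec:
  task_extra_volume_mounts: |
    - name: ansible-cfg
      mountPath: /etc/ansible/ansible.cfg
      subPath: ansible.cfg
  ee_extra_volume_mounts: |
    - name: ansible-cfg
      mountPath: /etc/ansible/ansible.cfg
      subPath: ansible.cfg
  extra_volumes: |
    - name: ansible-cfg
      configMap:
        defaultMode: 420
        items:
          - key: ansible.cfg
            path: ansible.cfg
        name: awx-ansible-cfg
Because all of these snippets patch the same AWX resource, the krb5-conf entries shown earlier and these ansible-cfg entries have to be combined into one spec: each *_extra_volume_mounts list and the extra_volumes list holds both items.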

Related

MySQL deployment deleting my database in Kubernetes

I created a MySQL deployment that other pods connect to. I connect remotely to create the database and tables, but at some point in the MySQL lifecycle my database gets deleted. How can I keep it from being deleted?
I thought about creating a static pod, but I don't know if this solves my problem. My structure follows below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: rafaelribeirosouza86/shopping:myql
          name: mysql
          imagePullPolicy: Always
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          # secret:
          #   secretName: mysql-pass
          #   items:
          #     - key: password
          persistentVolumeClaim:
            claimName: mysql-pv-claim
      imagePullSecrets:
        - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  # clusterIP: None
  ports:
    - port: 3306
  selector:
    app: mysql
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
Does anyone have an idea how I can resolve this?

How to make a MySQL pod save data in a Persistent Volume

I started using Kubernetes to understand concepts like pods, objects and so on. I started learning about Persistent Volumes and Persistent Volume Claims, and from my understanding, if I save data from a MySQL pod to a persistent volume, the data is saved no matter if I delete the MySQL pod; the data stays on the volume. But I don't think it works in my case...
I have a Spring Boot pod where I save data in a MySQL pod. Data is saved and I can retrieve it, but when I restart my pods, delete or replace them, that saved data is lost, so I think I messed something up. Can you give me a hint, please? Thanks...
Below are my Kubernetes files:
MySQL pod:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels: # must match Service and Deployment labels
        app: mysql
    spec:
      containers:
        - image: mysql:5.7
          args:
            - "--ignore-db-dir=lost+found"
          name: mysql # name of the db
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-secret # name of the secret obj
                  key: password # which value from inside the secret to take
            - name: MYSQL_ROOT_USER
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: username
            - name: MYSQL_DATABASE
              valueFrom:
                configMapKeyRef:
                  name: db-config
                  key: name
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts: # mount volume obtained from PVC
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql # mounting in the container will be here
      volumes:
        - name: mysql-persistent-storage # obtaining volume from PVC
          persistentVolumeClaim:
            claimName: mysql-pv-claim # can use the same claim in different pods
---
apiVersion: v1
kind: Service
metadata:
  name: mysql # DNS name
  labels:
    app: mysql
spec:
  ports:
    - port: 3306
      targetPort: 3306
  selector: # mysql pod should contain the same label
    app: mysql
  clusterIP: None # we use DNS
Persistent Volume and Persistent Volume Claim files:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim # name of our PVC
  labels:
    app: mysql
spec:
  volumeName: host-pv # claim the volume created with this name
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1 # version of our PV
kind: PersistentVolume # kind of obj we are going to create
metadata:
  name: host-pv # name of our PV
spec: # spec of our PV
  capacity: # size
    storage: 4Gi
  volumeMode: Filesystem # storage type: File or Block
  storageClassName: standard
  accessModes:
    - ReadWriteOnce # multiple pods can use this PV, but only from a single node
    # - ReadOnlyMany  # read-only, on multiple nodes
    # - ReadWriteMany # read-write on multiple nodes; not for the hostPath type
  hostPath: # which type of PV
    path: "/mnt/data"
    type: DirectoryOrCreate
  persistentVolumeReclaimPolicy: Retain
My Spring Boot K8s file:
apiVersion: v1
kind: Service
metadata:
  name: book-service
spec:
  selector:
    app: book-example
  ports:
    - protocol: 'TCP'
      port: 8080
      targetPort: 8080
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: book-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: book-example
  template:
    metadata:
      labels:
        app: book-example
    spec:
      containers:
        - name: book-container
          image: cinevacineva/kubernetes_book_pv:latest
          imagePullPolicy: Always
          # ports:
          #   - containerPort: 8080
          env:
            - name: DB_HOST
              valueFrom:
                configMapKeyRef:
                  name: db-config
                  key: host
            - name: DB_NAME
              valueFrom:
                configMapKeyRef:
                  name: db-config
                  key: name
            - name: DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: db-user
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-user
                  key: password
# & minikube -p minikube docker-env | Invoke-Expression links the docker images we create with minikube, so we don't have to push them anymore
...if I save data from a MySQL pod to a persistent volume, the data is saved no matter if I delete the MySQL pod... but I don't think it works in my case...
Your previous data will not be available when the pod switches nodes. To use hostPath you don't really need a PVC/PV. Try:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  ...
spec:
  ...
  template:
    ...
    spec:
      ...
      nodeSelector: # <-- make sure your pod runs on the same node
        <node label>: <value unique to the mysql node>
      volumes: # <-- mount the data path on the node, no pvc/pv required.
        - name: mysql-persistent-storage
          hostPath:
            path: /mnt/data
            type: DirectoryOrCreate
      containers:
        - name: mysql
          ...
          volumeMounts: # <-- let mysql write to it
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
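The node label in nodeSelector can be anything you attach to the chosen node yourself, e.g. kubectl label nodes <node-name> mysql-node=true and then mysql-node: "true" as the selector; the built-in kubernetes.io/hostname label works as well.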

Mount a file (.sql) from minikube/host into a deployment (MySQL)

I'm trying to mount a file from the host running a minikube cluster (Hyper-V) into a MySQL container via a deployment YAML. I tried adding the file to the minikube VM (with ssh) and then mounting it into the deployment with a PV and claim, and I also tried mounting from the localhost running minikube (my computer), but I still don't see the file.
The current configuration is: the Hyper-V VM running minikube has a folder named data, and inside this folder is the file I want to transfer to the container (pod).
PV Yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sqlvolume
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /data
claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: sqlvolume
  name: sqlvolume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}
deployment.yaml (MySQL)
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: C:\Users\itayb\Desktop\K8S-Statefulset-NodeJs-App-With-MySql\kompose.exe convert
    kompose.version: 1.24.0 (7c629530)
  labels:
    io.kompose.service: mysql
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: C:\Users\itayb\Desktop\K8S-Statefulset-NodeJs-App-With-MySql\kompose.exe convert
        kompose.version: 1.24.0 (7c629530)
      creationTimestamp: null
      labels:
        io.kompose.service: mysql
    spec:
      containers:
        - env:
            - name: MYSQL_DATABASE
              value: crud
            - name: MYSQL_ROOT_PASSWORD
              value: root
          image: mysql
          name: mysql
          ports:
            - containerPort: 3306
          volumeMounts:
            - mountPath: /data
              name: sqlvolume
          # resources:
          #   requests:
          #     memory: "64Mi"
          #     cpu: "250m"
          #   limits:
          #     memory: "128Mi"
          #     cpu: "500m"
      hostname: mysql
      restartPolicy: Always
      volumes:
        - name: sqlvolume
          persistentVolumeClaim:
            claimName: sqlvolume
status: {}
I don't mind how this is achieved: I have Hyper-V minikube running on my computer, and I want to transfer the file mysql.sql from the host (or from the PV I created) to the pod.
How can I achieve that?
You can try with a hostPath type PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/<file_name>"
PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim
spec:
  volumeName: "pv-volume"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Deployment (changed PVC name):
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: C:\Users\itayb\Desktop\K8S-Statefulset-NodeJs-App-With-MySql\kompose.exe convert
    kompose.version: 1.24.0 (7c629530)
  labels:
    io.kompose.service: mysql
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: C:\Users\itayb\Desktop\K8S-Statefulset-NodeJs-App-With-MySql\kompose.exe convert
        kompose.version: 1.24.0 (7c629530)
      creationTimestamp: null
      labels:
        io.kompose.service: mysql
    spec:
      containers:
        - env:
            - name: MYSQL_DATABASE
              value: crud
            - name: MYSQL_ROOT_PASSWORD
              value: root
          image: mysql
          name: mysql
          ports:
            - containerPort: 3306
          volumeMounts:
            - mountPath: /data
              name: sqlvolume
          resources:
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "128Mi"
              cpu: "500m"
      hostname: mysql
      restartPolicy: Always
      volumes:
        - name: sqlvolume
          persistentVolumeClaim:
            claimName: pv-claim
status: {}
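A side note on getting mysql.sql actually executed (an assumption about the goal here): the official mysql image runs any *.sql files it finds in /docker-entrypoint-initdb.d, but only on the very first initialization, i.e. when /var/lib/mysql is still empty. So mounting the volume there instead of /data would import the dump automatically:
# fragment of the container spec above; only the mountPath changes
volumeMounts:
  - mountPath: /docker-entrypoint-initdb.d # *.sql files here run on first initialization
    name: sqlvolume
For getting the file into the VM in the first place, minikube ssh works, and recent minikube versions also offer minikube cp.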

MySQL service pending in Kubernetes

I created a .yaml file to create a MySQL service on Kubernetes for my internal application, but it's unreachable. I can reach the application and also phpMyAdmin, but the database itself isn't working properly: I'm stuck with Pending status on the mysql pod.
.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: cust
  labels:
    app: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: mysql
          image: mysql
          imagePullPolicy: Never
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: flaskapi-cred
                  key: db_root_password
          ports:
            - containerPort: 3306
              name: db-container
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: cust
  labels:
    app: db
spec:
  ports:
    - port: 3306
      protocol: TCP
      name: mysql
  selector:
    app: db
  type: LoadBalancer
kubectl get all output is:
NAME READY STATUS RESTARTS AGE
pod/flaskapi-deployment-59bcb745ff-gl8xn 1/1 Running 0 117s
pod/mysql-99fb77bf4-sbhlj 0/1 Pending 0 118s
pod/phpmyadmin-deployment-5fc964bf9d-dk59t 1/1 Running 0 118s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/flaskapi-deployment 1/1 1 1 117s
deployment.apps/mysql 0/1 1 0 118s
deployment.apps/phpmyadmin-deployment 1/1 1 1 118s
I already did docker pull mysql.
Edit
Name:           mysql-99fb77bf4-sbhlj
Namespace:      z2
Priority:       0
Node:           <none>
Labels:         app=db
                pod-template-hash=99fb77bf4
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/mysql-99fb77bf4
Containers:
  mysql:
    Image:      mysql
    Port:       3306/TCP
    Host Port:  0/TCP
    Environment:
      MYSQL_ROOT_PASSWORD:  <set to the key 'db_root_password' in secret 'flaskapi-secrets'>  Optional: false
    Mounts:
      /var/lib/mysql from mysql-persistent-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gmbnd (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  mysql-persistent-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mysql-pv-claim
    ReadOnly:   false
  default-token-gmbnd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-gmbnd
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  3m44s  default-scheduler  persistentvolumeclaim "mysql-pv-claim" not found
  Warning  FailedScheduling  3m44s  default-scheduler  persistentvolumeclaim "mysql-pv-claim" not found
You are missing the volume to attach to the pod or deployment. A PVC is required, as your deployment configuration is using one.
You can see it clearly: persistentvolumeclaim "mysql-pv-claim" not found.
You can apply the YAML below and try:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
As the error message clearly shows, persistentvolumeclaim "mysql-pv-claim" not found, you need to provision a PersistentVolumeClaim (PVC).
There are static and dynamic provisioning; I'll explain static provisioning here, as it is relatively easy to understand and set up. You need to create a PersistentVolume (PV) which the PVC will use. There are various types of volumes, about which you can read here.
Which type of volume you want to create is your choice, depending on your environment and needs. A simple example is the hostPath volume type.
Create a PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  namespace: cust
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    # The configuration here specifies that the volume is at /tmp/data on the cluster's node
    path: "/tmp/data"
And then create a PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  namespace: cust
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeName: mysql-pv-volume
Once the PVC is successfully created, your deployment shall go through.
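You can confirm the claim bound with kubectl get pvc -n cust; once it reports STATUS Bound, the Pending pod should get scheduled.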

How to create a MySQL cluster within an Istio injected namespace?

I'm currently trying to create a 2-node MySQL cluster on Kubernetes 1.13.10 using the Oracle MySQL Operator. It works fine in a standard namespace. However, once it's created within an Istio 1.4 injected namespace, the MySQL agent that is in charge of setting up the replication returns the following error:
Error bootstrapping cluster: failed to create new cluster: SystemError: RuntimeError: Dba.create_cluster: ERROR: Error starting cluster: The port '33061' for localAddress option is already in use. Specify an available port to be used with localAddress option or free port '33061'.
I was not able to find any support on this so far.
How can I configure Istio to enable the agent to manage the replication?
Below are my YAML manifests:
apiVersion: v1
kind: Namespace
metadata:
  name: test
  labels:
    istio-injection: enabled
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mysql-agent
  namespace: test
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: mysql-agent
  namespace: test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: mysql-agent
subjects:
  - kind: ServiceAccount
    name: mysql-agent
    namespace: test
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test-mysql-data0
  labels:
    namespace: test
    type: data
    app: mysql
spec:
  storageClassName: hostpath
  persistentVolumeReclaimPolicy: Retain
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 2Gi
  hostPath:
    path: /data/test/mysql/data0
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test-mysql-data1
  labels:
    namespace: test
    type: data
    app: mysql
spec:
  storageClassName: hostpath
  persistentVolumeReclaimPolicy: Retain
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 2Gi
  hostPath:
    path: /share/test/mysql/data1
    type: DirectoryOrCreate
---
apiVersion: v1
kind: Secret
metadata:
  name: mysql-root-user-secret
  namespace: test
stringData:
  password: password
---
apiVersion: mysql.oracle.com/v1alpha1
kind: Cluster
metadata:
  name: mysql
  namespace: test
  labels:
    app: mysql
    namespace: test
spec:
  multiMaster: true
  members: 2
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: "v1alpha1.mysql.oracle.com/cluster"
                operator: In
                values:
                  - mysql
          topologyKey: "kubernetes.io/hostname"
  rootPasswordSecret:
    name: mysql-root-user-secret
  volumeClaimTemplate:
    metadata:
      name: data
    spec:
      storageClassName: hostpath
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 2Gi
      selector:
        matchLabels:
          namespace: test
          type: data
          app: mysql
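One avenue that may be worth trying (an assumption on my part, not a verified fix): the error suggests the Envoy sidecar is intercepting or already bound to the group-replication port 33061, and Istio supports pod annotations that exclude specific ports from the sidecar's iptables redirection. A sketch of those annotations, assuming they can be propagated to the pods the operator creates:
# Hypothetical pod-template annotations; verify whether the operator's
# Cluster resource can forward these to the MySQL pods it manages.
metadata:
  annotations:
    traffic.sidecar.istio.io/excludeInboundPorts: "33061"
    traffic.sidecar.istio.io/excludeOutboundPorts: "33061"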