How do I use MySQL Operator with minikube - mysql

I went through the MySQL Operator documentation on GitHub and followed it, but I kept getting these errors:
Patching failed with inconsistencies: (('remove', ('status', 'kopf'), {'dummy': '2022-07-14T15:51:30.145945'}, None),)
0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
Back-off restarting failed container
Here are the Kubernetes files used:
kubectl apply -f https://raw.githubusercontent.com/mysql/mysql-operator/trunk/deploy/deploy-crds.yaml
kubectl apply -f https://raw.githubusercontent.com/mysql/mysql-operator/trunk/deploy/deploy-operator.yaml
apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mycluster
spec:
  secretName: mypwds
  tlsUseSelfSigned: true
  instances: 3
  router:
    instances: 1
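(For reference, spec.secretName above points at a credentials secret that the operator documentation has you create before applying this manifest. A minimal sketch, with a placeholder password:

kubectl create secret generic mypwds \
  --from-literal=rootUser=root \
  --from-literal=rootHost=% \
  --from-literal=rootPassword=changeme

If this secret is missing or malformed, the cluster pods can also fail to start.)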
I don't know what to do. Any help will be much appreciated.

Related

Unable to run MySQL 8.0 on AKS, getting the errors [MY-012574] [InnoDB] Unable to lock ./#innodb_redo/#ib_redo0 and Unable to open

Unable to run MySQL 8.0 on AKS using PVC; below are my manifest files.
I've just encountered a similar issue myself (when switching from mysql:5.5 to mysql:8).
The key to resolving it when using Azure Files appears to be setting the nobrl mount option on the StorageClass:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mysql-sc-azurefile
provisioner: file.csi.azure.com
allowVolumeExpansion: true
mountOptions:
  - file_mode=0777
  - mfsymlinks
  - uid=999
  - dir_mode=0777
  - gid=999
  - actimeo=30
  - cache=strict
  - nobrl
parameters:
  skuName: Standard_LRS
Additionally, you may need to set the securityContext on the deployment to use 999 (to prevent MySQL attempting to switch the user at startup):
securityContext:
  runAsUser: 999
  runAsGroup: 999
  fsGroup: 999
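To actually use the class above, a PVC has to request it by name. A minimal sketch, where the claim name and size are assumptions:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data          # hypothetical name
spec:
  accessModes:
    - ReadWriteMany         # Azure Files supports RWX
  storageClassName: mysql-sc-azurefile
  resources:
    requests:
      storage: 5Gi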

Docker Desktop Nginx ingress has no external IP sometimes

I reset my entire Docker Desktop to factory settings and enabled Kubernetes.
Then I ran kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/cloud/deploy.yaml and waited for the ingress controller to be ready.
Then, I deploy my application, which includes several services and an ingress definition.
The ingress is as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 100m
spec:
  ingressClassName: nginx
  rules:
    - host: test.project.com
      http:
        paths:
          - path: "/.*"
            pathType: "Prefix"
            backend:
              service:
                name: test-frontend
                port:
                  number: 80
Checking on the service, I get:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
test-frontend ClusterIP 10.104.106.210 <none> 80/TCP 40m
kubectl get services -n ingress-nginx returns
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.100.44.33 <pending> 80:30753/TCP,443:31632/TCP 51m
ingress-nginx-controller-admission ClusterIP 10.97.85.58 <none> 443/TCP 51m
kubectl get ingresses returns
NAME CLASS HOSTS ADDRESS PORTS AGE
test-ingress nginx test.project.com 80 31m
As you can see, Docker Desktop or the ingress controller is not properly binding the ingress to localhost, as it usually does. What I've been doing for the last several weeks is constantly stopping, restarting, rebuilding, and resetting my deployments, services, ingresses, nodes, my computer, and Docker Desktop until it suddenly starts working. I have never been able to find out what actually fixes it; it seems almost random whether it works or not, and when it stops working.
The only interesting thing I can find involves the events of the test-ingress:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 35m (x3 over 42m) nginx-ingress-controller Scheduled for sync
Normal Sync 27m (x2 over 28m) nginx-ingress-controller Scheduled for sync
Normal Sync 7m55s (x2 over 14m) nginx-ingress-controller Scheduled for sync
Edit: It started working again after a restart of my desktop. Leaving this up for any ideas as to how to prevent this or how to fix it faster next time, as this is the 5th or 6th time this has happened.
Maybe try
kubectl expose deployment test-ingress-deployment --type=NodePort --port=8080 --name=test-ingress-service -n demo --dry-run=client -o yaml > mypod-service.yaml
to generate the YAML template for the service.
Then create the service by applying that YAML file, and then apply the ingress YAML file.
On Windows 10 this assigns a random NodePort (e.g. 9999) that can be accessed at the "minikube ip":9999/* URL.
The host name is not really set unless it is in the hosts file; the ingress can still be accessed via the IP. An ingress is an endpoint giving access to multiple services regardless of namespace, but the services have to be exposed directly.
If the hosts file is not updated with the minikube IP and the host name, the ingress stays in "Scheduled for sync".
It should work with Hyper-V:
https://local/hello
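For what it's worth, the hosts-file entry this answer refers to would look something like the following (assuming minikube ip returns 192.168.49.2; your IP will differ):

# C:\Windows\System32\drivers\etc\hosts
192.168.49.2  test.project.com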

How to delete pending pods in kubernetes?

I have two pending pods which I cannot delete by any means. Could you help?
OS: CentOS 7.8
Docker: 1.13.1
Kubernetes: v1.20.1
[root@master-node ~]# k get pods --all-namespaces (note: k = kubectl alias)
NAMESPACE NAME READY STATUS RESTARTS AGE
default happy-panda-mariadb-master-0 0/1 Pending 0 11m
default happy-panda-mariadb-slave-0 0/1 Pending 0 49m
default whoami 1/1 Running 0 5h13m
[root@master-node ~]# k describe pod/happy-panda-mariadb-master-0
Name: happy-panda-mariadb-master-0
Namespace: default
Priority: 0
Node: <none>
Labels: app=mariadb
chart=mariadb-7.3.14
component=master
controller-revision-hash=happy-panda-mariadb-master-7b55b457c9
release=happy-panda
statefulset.kubernetes.io/pod-name=happy-panda-mariadb-master-0
IPs: <none>
Controlled By: StatefulSet/happy-panda-mariadb-master
Containers:
mariadb:
Image: docker.io/bitnami/mariadb:10.3.22-debian-10-r27
Port: 3306/TCP
Host Port: 0/TCP
Liveness: exec [sh -c password_aux="${MARIADB_ROOT_PASSWORD:-}"
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-happy-panda-mariadb-master-0
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: happy-panda-mariadb-master
Optional: false
default-token-wpvgf:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-wpvgf
Optional: false
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 15m default-scheduler 0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
Warning FailedScheduling 15m default-scheduler 0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
[root@master-node ~]# k get events
LAST SEEN TYPE REASON OBJECT MESSAGE
105s Normal FailedBinding persistentvolumeclaim/data-happy-panda-mariadb-master-0 no persistent volumes available for this claim and no storage class is set
105s Normal FailedBinding persistentvolumeclaim/data-happy-panda-mariadb-slave-0 no persistent volumes available for this claim and no storage class is set
65m Warning FailedScheduling pod/happy-panda-mariadb-master-0 0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
I already tried deleting in various ways, but nothing worked (I also tried deleting from the dashboard):
kubectl delete pod happy-panda-mariadb-master-0 --namespace="default"
k delete deployment mysql-1608901361
k delete pod/happy-panda-mariadb-master-0 -n default --grace-period 0 --force
Could you advise me on this?
kubectl delete rc <replica set name>
Or you forgot to specify storageClassName: manual in the PersistentVolumeClaim.
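As a sketch of what that would look like (the claim name is taken from the events above; the size and access mode are assumptions):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-happy-panda-mariadb-master-0
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi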
You should delete the StatefulSet which controls the pods instead of deleting the pods directly. The reason the pods are not getting deleted is that the StatefulSet controller recreates them after you delete them.
kubectl delete statefulset happy-panda-mariadb-master
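Since the pod labels show these come from a Helm release (release=happy-panda), deleting the release is another way to remove both the master and slave StatefulSets in one go; a sketch, assuming the chart was installed with Helm:

helm delete happy-panda
# PVCs created for a StatefulSet survive its deletion and may need removing too:
kubectl delete pvc data-happy-panda-mariadb-master-0 data-happy-panda-mariadb-slave-0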

MountVolume.SetUp failed for volume "...": mount failed: exit status 32

Using OpenShift, one pod keeps pending because the NFS server cannot be mounted (the NFS server can be mounted manually from the command line, but cannot be mounted from the Pod).
I have installed nfs-common, so that is not the root cause. I tried to install nfs-utils, but it failed with the error message:
E: Unable to locate package nfs-utils
I also tried libnfs12 and libnfs-utils; they failed the same way as nfs-utils. I also ran apt-get update and apt-get upgrade to solve the package-locating problem, but they did not help.
Here is the YAML file for connecting to the NFS server:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test01
  labels:
    disktype: baas
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /baas
    server: 9.111.140.47
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
After running oc describe pod/mypod on the pending Pod, this is the output:
Warning FailedMount 14s kubelet, localhost MountVolume.SetUp failed for volume "pv-test01" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/origin/cluster-up/root/openshift.local.clusterup/openshift.local.volumes/pods/267db6f2-d875-11e9-80ba-005056bc3ce0/volumes/kubernetes.io~nfs/pv-test01 --scope -- mount -t nfs 9.111.140.47:/baas /var/lib/origin/cluster-up/root/openshift.local.clusterup/openshift.local.volumes/pods/267db6f2-d875-11e9-80ba-005056bc3ce0/volumes/kubernetes.io~nfs/pv-test01
Output: Running scope as unit run-28094.scope.
mount: wrong fs type, bad option, bad superblock on 9.111.140.47:/baas,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try
dmesg | tail or so.
So how can I mount the NFS server from the Pod? Should I keep trying to install nfs-utils? If yes, how can I install it?
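For context, the "you might need a /sbin/mount.<type> helper program" line in the output above means the mount.nfs helper is missing on the host that performs the mount, which is the node running the kubelet, not the Pod. A sketch of checking and installing it on the node (package names differ by distro; nfs-common is the Debian/Ubuntu name, nfs-utils the RHEL/CentOS one):

# run on the node, not inside the Pod
which mount.nfs || sudo apt-get install -y nfs-common   # Debian/Ubuntu
# sudo yum install -y nfs-utils                         # RHEL/CentOS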

Kubernetes doesn't recover service after minion failure

I am testing Kubernetes redundancy features with a testbed made of one master and three minions.
Case: I am running a service with 3 replicas on minions 1 and 2, with minion3 stopped:
[root@centos-master ajn]# kubectl get nodes
NAME STATUS AGE
centos-minion3 NotReady 14d
centos-minion1 Ready 14d
centos-minion2 Ready 14d
[root@centos-master ajn]# kubectl describe pods $MYPODS | grep Node:
Node: centos-minion2/192.168.0.107
Node: centos-minion1/192.168.0.155
Node: centos-minion2/192.168.0.107
Test: after starting minion3 and stopping minion2 (on which 2 pods were running):
[root@centos-master ajn]# kubectl get nodes
NAME STATUS AGE
centos-minion3 Ready 15d
centos-minion1 Ready 14d
centos-minion2 NotReady 14d
Result: the service doesn't recover from the minion failure, and Kubernetes continues showing pods on the failed minion:
[root@centos-master ajn]# kubectl describe pods $MYPODS | grep Node:
Node: centos-minion2/192.168.0.107
Node: centos-minion1/192.168.0.155
Node: centos-minion2/192.168.0.107
Expected result (at least in my understanding): the service should have been rebuilt on the currently available minions 1 and 3.
As far as I understand, the role of the Service kind is to make the deployment "globally" available, so we can refer to it independently of where its pods are in the cluster.
Am I doing something wrong?
I'm using the following YAML spec:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-www 
spec:
  replicas: 3
  selector:
    app:  nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
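For completeness, the question doesn't show the Service itself; one matching this ReplicationController's labels might look like the following (a sketch; the name is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: nginx-www
spec:
  selector:
    app: nginx      # matches the pod template's label above
  ports:
    - port: 80
      targetPort: 80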
It looks like you're always reading the same pods that are referenced in $MYPODS. Pod names are created dynamically by the ReplicationController, so instead of kubectl describe pods $MYPODS, try:
kubectl get pods -l app=nginx -o wide
This will always give you the currently scheduled pods for your app.
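To keep watching as the controller reschedules pods onto healthy nodes, the -w (watch) flag can be added:

kubectl get pods -l app=nginx -o wide -w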