How can K3s work with an embedded etcd store?

According to the K3s docs it is possible to run K3s with an embedded etcd store. As I understand it, this means etcd is running in the Kubernetes cluster. This poses a bootstrap challenge:
When K3s first starts, it has no data store, so it doesn't know the desired state or which pods to run.
If it doesn't run any pods, it has no etcd and doesn't know it should start etcd pods.
This looks like a chicken-and-egg problem: is embedded etcd really running in the cluster as a pod, and how is it started?
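For reference, the K3s docs describe enabling embedded etcd with flags on the k3s server binary itself; the datastore is managed in-process rather than deployed as a pod, which is how the chicken-and-egg is avoided. A minimal sketch, assuming a recent K3s release (check the docs for your version):

    # First server: --cluster-init tells the k3s server process to
    # initialize its embedded etcd datastore (no pod involved).
    k3s server --cluster-init

    # Additional servers join the existing embedded etcd cluster:
    k3s server --server https://<first-server-ip>:6443 --token <token>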

Related

OpenShift OKD 4.5 on VMware

I am getting a connection timeout when running the bootstrap command.
Do you have any networking configuration suggestions, in case I am missing something?
It says the Kubernetes API call is timing out.
This is obviously very hard to debug without having access to your environment. Some tips to debug the OKD installation:
Before starting the installation, make sure your environment meets all the prerequisites. Often, the problem lies with a faulty DNS / DHCP / networking setup. Potentially deploy a separate VM into the network to check if everything works as expected.
The bootstrap node and the master nodes are deployed with the SSH key you specify, so in vCenter, get the IP of the machines that are already deployed and use SSH to connect to them. Once on a machine, use sudo crictl ps and sudo crictl logs <container-id> to review the logs of the running containers, focusing on these components:
kube-apiserver
etcd
machine-controller
In your case, the API is not coming up, so reviewing the logs of the above components will likely show the root cause.
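A sketch of that workflow (the IP is a placeholder taken from vCenter; OKD nodes accept SSH as the core user):

    # SSH to the bootstrap node with the key supplied at install time
    ssh core@<bootstrap-ip>

    # List running containers, filtering for the control-plane components
    sudo crictl ps | grep -E 'kube-apiserver|etcd|machine'

    # Review the logs of a suspect container (ID from the previous command)
    sudo crictl logs <container-id>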

MySQL with Docker/Kubernetes

I am trying to deploy a MySQL Docker image to Kubernetes. I have managed most of the tasks, and the Docker image is up and running in Docker, but one final thing is missing from the Kubernetes deployment.
MySQL has a configuration option, 'MYSQL_ROOT_HOST', that states from which host a user can log on. Configuring that for Docker is no problem, since Docker networking uses '172.17.0.1' for bridging.
The problem with Kubernetes is that this value must be the IP of the pod connecting to the MySQL pod, and every time a pod starts, this IP changes.
I tried to use the label of the pod connecting to the MySQL pod, but MySQL still expects the pod's IP instead of a DNS name.
Do you have an idea how I can overcome this problem? I can't even figure out how this should work if I enable autoscaling for the pod that connects to MySQL, since the replicas will all have different IPs.
Thx for answers....
As @RyanDowson and @siloko mentioned, you should use a Service, Ingress, or Helm charts for these purposes.
You can find additional information on the Service, Ingress, and Helm Charts documentation pages.
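A minimal sketch of the Service approach (the names and labels here are illustrative, not taken from your deployment): the Service gives the MySQL pod a stable DNS name, so connecting pods no longer need to know any pod IP, and MYSQL_ROOT_HOST can be set to '%' to accept connections from any host:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: mysql
    spec:
      selector:
        app: mysql        # must match the labels on your MySQL pod
      ports:
        - port: 3306
    EOF

    # Client pods connect via the stable DNS name instead of a pod IP:
    #   mysql -h mysql.default.svc.cluster.local -u root -p

This also covers the autoscaling case: however many replicas the client Deployment has, they all resolve the same Service name.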

Are pods managed by a Deployment restarted when updating a Kubernetes cluster

The documentation says that only pods that are managed by a Replication Controller will be restarted after a Kubernetes cluster update on Google Container Engine.
What about the pods that are managed by a Deployment?
In this case the documentation's language is too narrow. Any pods that are managed by a controller (Replication Controller, ReplicaSet, DaemonSet, Deployment, etc.) will be restarted. The warning is for folks who have created pods without a corresponding controller. Because nodes are replaced with new nodes (rather than upgraded in place), pods without a controller ensuring that they remain running will simply disappear.
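For illustration (the image and names are hypothetical), the difference is visible with kubectl: a controller-managed pod is recreated on a surviving node, while a bare pod is not:

    # Pods created through a controller are rescheduled when their node
    # is replaced during the upgrade:
    kubectl create deployment web --image=nginx

    # A bare pod has no controller watching it; if its node is replaced,
    # the pod simply disappears:
    kubectl run standalone --image=nginx --restart=Never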

Kubernetes: run persistent pods cassandra/mysql in Ubuntu servers

I'm a newbie at Kubernetes and I'm having trouble understanding how I can run persistent pods (Cassandra or MySQL ones) on Ubuntu servers.
Correct me if I'm wrong: Kubernetes can scale pods up or down when it sees that we need more CPU, but we are not talking about static code here, we're talking about data that is present on other nodes. So what will a pod do when it receives a request from the load balancer? Also, Kubernetes has the power to destroy nodes when it sees that traffic has decreased, so how can we avoid losing data and disturbing the environment?
You should use volumes to map a directory in the container to a persistent disk on the host or other storage.
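A minimal sketch of that approach (the claim name, size, and image are assumptions; your cluster needs a storage provisioner or a pre-created PersistentVolume):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: mysql-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: changeme             # example only
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql   # data survives pod restarts
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: mysql-data
    EOF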

Kubernetes on GCE

I am trying to set up Kubernetes on GCE. Let's say there are 20 minions in the Kubernetes cluster, and two services deployed with type LoadBalancer, each with 2 replicas. K8s will then put 2 pods on two different minions per service. My question is: would the rest of the minions, which are not running any pods, also get the iptables rules in the chains KUBE-PORTALS-CONTAINER and KUBE-PORTALS-HOST for these two services? At least that is my observation, but I would like to confirm whether this is just how Kubernetes works on GCE or how K8s behaves irrespective of where it is deployed. Is the reason that any service should be reachable from any minion, no matter whether the minion is part of that service or not? Let me know if there is a better community for this question.
Would the rest of the minions, which are not running any pods, also get the iptables rules in the chains KUBE-PORTALS-CONTAINER and KUBE-PORTALS-HOST for these two services?
Yes. The load balancer can forward external traffic to any node in the cluster, so every node needs to be able to receive traffic for a service and forward it to the appropriate pod.
I would like to confirm whether this is just how Kubernetes works on GCE, or whether this is how K8s behaves irrespective of where it is deployed.
The iptables rule for services within the cluster is the same regardless of where Kubernetes is deployed. Making a service externally accessible differs slightly depending on where you deploy your cluster (e.g. on AWS you'd create the service as type NodePort instead of type LoadBalancer), so the iptables rule for services that are externalized can vary a bit.
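To see this for yourself on a node (a sketch; the chain names match the kube-proxy version from the question):

    # Dump the NAT table on any minion; the per-service chains appear
    # on every node, not just the ones running the service's pods:
    sudo iptables-save -t nat | grep KUBE-PORTALS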