Kubernetes dependencies kubelet and master processes - fedora

On CoreOS, Kubernetes master processes (apiserver, kube-proxy, controller-manager, and podmaster) run in Docker, while the kubelet runs as a systemd service outside Docker.
Would it be recommended to run the master processes v1.1+ and kubelet v1.0.3 together on the master host?
The reason I am asking is that CentOS Atomic Host ships with Kubernetes v1.0.3, but we would like to upgrade the master processes to v1.1+ by running them in Docker instead of as systemd services directly on the OS (CentOS intends to run all components as systemd services).
Thanks,
Andrej

I'm an advocate of running all Kubernetes services directly on the OS, so forgive me if my answer is very opinionated.
You have to ask yourself whether running everything in a container makes sense at such a low level, considering that you have to mount so many libraries from your host and can't benefit from systemd's journal while your services run in containers. In my case the benefit was not obvious.
On top of that, as you mentioned, running the kubelet inside a container is not 100% supported yet. Running Kubernetes as systemd services is also a perfectly valid pattern, technically speaking, so you shouldn't skip updates just because you can't run everything inside a container. However, you should not mix versions (1.0 and 1.1).
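To make the systemd pattern concrete, here is roughly what one of those units looks like. This is a minimal sketch: the path and the flag are illustrative, not the exact units CentOS Atomic ships.
cat <<'EOF' | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
After=etcd.service

[Service]
# real deployments pass more flags here (bind address, service cluster IP range, ...)
ExecStart=/usr/bin/kube-apiserver --etcd-servers=http://127.0.0.1:2379
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver
sudo systemctl start kube-apiserver
# and here is the journal benefit mentioned above:
sudo journalctl -u kube-apiserver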

Related

Openshift OKD 4.5 on VMware

I am getting a connection timeout when running the command during bootstrap.
Are there any networking configuration suggestions in case I am missing something?
It says the Kubernetes API call timed out.
This is obviously very hard to debug without access to your environment. Some tips for debugging the OKD installation:
Before starting the installation, make sure your environment meets all the prerequisites. Often, the problem lies with a faulty DNS / DHCP / networking setup. Potentially deploy a separate VM into the network to check if everything works as expected.
The bootstrap node and the master nodes are deployed with the SSH key you specify, so in vCenter, get the IPs of the machines that are already deployed and use SSH to connect to them. Once on a machine, use sudo crictl ps and sudo crictl logs <container-id> to review the logs of the running containers, focusing on these components:
kube-apiserver
etcd
machine-controller
In your case, the API is not coming up, so reviewing the logs of the above components will likely show the root cause.
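Concretely, a debugging session looks roughly like this (the IP and container ID are placeholders, and core is the default user on the Fedora CoreOS nodes OKD deploys):
ssh core@<bootstrap-ip>
sudo crictl ps -a                # list containers, including ones that have exited
sudo crictl logs <container-id>  # e.g. the kube-apiserver or etcd container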

Can't run docker containers for Jenkins and MySQL at the same time on EC2

I'm trying to set up an environment on AWS EC2 with two Docker containers, one for Jenkins and one for MySQL.
But when I try to run a MySQL container, the Jenkins container gets killed.
So I tried to run the Jenkins container again, but then the EC2 instance stopped completely.
I guess this is because I'm using a Free Tier instance, but could anyone explain what's causing this issue?
I'd really appreciate it!
Can you share the commands or configuration files you're using to run these two containers? I suspect it was a coincidence that the Jenkins container failed and the EC2 instance stopped working at the same time. If the two containers were given the same name, Docker would throw an error; in any other case, Docker simply creates the new container, which is entirely independent of the existing one.
When you say you're using the free tier, what do you mean by this: the AWS Free Tier? It is unlikely that this had any impact on the software running on your instance.
If you can provide this additional information, I'd be more than happy to help you continue troubleshooting this issue.
EDIT: Removed the claim that the AWS Free Tier may cause container interruptions. The Linux Out of Memory Killer does, in fact, make this a possibility, as noted in the comments by @akazuko. Could you please also provide the output of journalctl -xeu docker in your response? It will indicate whether or not the OOM Killer is responsible. Be sure to trigger the error once or twice before running the command so that the relevant log entries are present.
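In the meantime, here is a quick way to check for OOM kills yourself (the container names are assumptions based on your description):
# kernel messages left behind by the OOM Killer
dmesg | grep -i "killed process"
# Docker also records whether a container was OOM-killed
docker inspect --format '{{.State.OOMKilled}}' jenkins
docker inspect --format '{{.State.OOMKilled}}' mysql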

Is it possible to containerize a cluster?

Is it possible to containerize a minishift or minikube cluster? So that I can docker run -it the container, and oc/kubectl get the resources inside?
The Dockerfile could look like this:
FROM alpine:latest
RUN **minishift installation**
ENTRYPOINT ["minishift", "start"]
We currently have a product that has a minishift cluster in a VM, so I was wondering if we can transition it from VM to Container.
Theoretically, yes, this might be possible, but it should not be done this way.
While I do understand running Minikube or Minishift inside a virtual machine, I cannot understand why you would want to run it inside a container. Both of them are just single-node, easy-to-use Kubernetes or OpenShift distributions.
If you already have a cluster, why not use it to run the app you want?
By deploying Minikube or Minishift into a container, you are creating huge overhead for your application.
You might be interested in reading the blog post Running systemd within a Docker Container. It is a bit dated, but it might be what you are looking for. Daniel Walsh also posted a 2019 update to that 2014 post; you can find it here.
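If you do go down that road anyway, the pattern from those posts boils down to giving the container access to cgroups so systemd can run as PID 1. A rough sketch, not the exact invocation from the posts, with a hypothetical image name:
docker build -t minishift-in-docker .
docker run -it --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro minishift-in-docker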

IPFS nodes in Docker and Web UI

This question refers to the following project, which is about integrating Fabric Blockchain and IPFS.
The architecture basically comprises a swarm of Docker containers that should communicate with each other (three containers: two peer nodes and one server node). Every container is an IPFS node and has a separate configuration.
I am trying to run a dockerized environment of an IPFS cluster of nodes and view the Web UI that comes with it. I set up the app by running all the steps described, and then supposedly I would be able to see the Web UI at this address:
http://127.0.0.1:5001
Everything seems to be set up and configured as it should be (I checked every docker logs <container>). Nevertheless, all I get is an empty page.
When I try to view my local IPFS repository via
https://webui.ipfs.io/#/welcome
I get a message that this is probably caused by a CORS error (which makes sense), and it is suggested to change the IPFS configuration in order to bypass it (see the attached screenshot).
I tried to implement the suggested fix by changing the headers in the configuration, but it doesn't seem to have any effect.
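For reference, this is the kind of change I am applying, run inside each container (the container name is a placeholder):
docker exec <container> ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["https://webui.ipfs.io", "http://127.0.0.1:5001"]'
docker exec <container> ipfs config --json API.HTTPHeaders.Access-Control-Allow-Methods '["PUT", "POST", "GET"]'
docker restart <container>   # the daemon only reads its config at startup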
The confusion stems from the fact that after setting up the containers, we have three different containers with three configurations, and in addition the IPFS daemon is running inside each one of them. Outside the containers, the IPFS daemon is not running.
I don't know if the IPFS daemon outside the containers should be running.
I'm not sure which configuration (if not all) I should modify.
Should I use a reverse proxy to solve this?
Useful Info
The development is done in an Ubuntu Linux VM that meets all the necessary requirements.

Scaling mysql in Docker

I'm looking to scale MySQL on a swarm that could potentially involve multiple servers. What is the best way to ensure that the data is in sync between the containers on the different servers?
I realise that in a standard configuration without Docker I'd have to set up replication. I'm wondering if there is a way that is more suitable and easier to deploy with Docker.
Docker Compose and Docker Swarm are great tools for scaling in Docker environments, but currently MySQL database scaling is not possible with either. Reasons:
Scaling is designed for stateless containers
Master-slave configuration is not possible in Docker Swarm
No replication method is available over the Docker overlay network
Maybe in the future we will have the technology to enable RDBMS scaling.
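For contrast, this is the kind of scaling Swarm handles well, with a hypothetical stateless service:
docker service create --name web --replicas 2 nginx
docker service scale web=5   # works because no replica holds unique state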
This is what database replication is for.
https://dev.mysql.com/doc/refman/5.7/en/replication.html
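As a rough sketch of how replication maps onto containers (names, password, and flags are illustrative, and the replica still needs CHANGE MASTER TO ... configured afterwards):
# source with binary logging enabled, a prerequisite for replication
docker run -d --name mysql-source -e MYSQL_ROOT_PASSWORD=secret \
  mysql:5.7 --server-id=1 --log-bin=mysql-bin
# replica with a distinct server ID
docker run -d --name mysql-replica -e MYSQL_ROOT_PASSWORD=secret \
  mysql:5.7 --server-id=2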
You can try a MariaDB Galera cluster set up under Docker; however, you will need additional steps to provision it, a load balancer, and a node that monitors the state of your containers (it is a lot of work and not easy).
Also, if you have multiple nodes in Docker Swarm, you need to set up an NFS server for Docker to share files.
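For the shared-files part, Docker's local volume driver can mount NFS directly (the server address and export path are placeholders):
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=<nfs-server-ip>,rw \
  --opt device=:/exported/path \
  mysql-data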
There is a tool called ClusterControl, with free and paid versions:
https://severalnines.com/product/clustercontrol/docker-mysql-database-management