Restart Docker Containers in Sequence after Server Reboot - mysql

There are 3 Docker containers that need to be restarted automatically whenever the server reboots.
We can start the containers using restart policies, such as
sudo docker run --restart=always -d your_image
but because one container is linked to another, they need to be started in sequence.
Question: Is there a way to automatically restart Docker containers in sequence?

Docker doesn't have an option for this, and doing so is an anti-pattern for microservices. Instead, each container should gracefully return errors when its dependencies aren't available, or, as a fallback, you can use something like a wait-for-it command in your container's entrypoint to wait for your dependencies to be available. I'd also recommend against using "links" and instead placing all your services on their own Docker network, letting the built-in DNS resolution handle service discovery for you.
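A minimal sketch of that pattern, assuming a MySQL dependency and an application image (your_app_image is a placeholder) that already bundles a wait-for-it.sh script and a start-app.sh launcher:
# put both containers on a user-defined network so the app can reach MySQL by name
docker network create app-net
docker run -d --name mysql --network app-net --restart=always -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
# block until mysql:3306 accepts connections, then start the app
docker run -d --name app --network app-net --restart=always \
  --entrypoint /wait-for-it.sh your_app_image mysql:3306 -- /start-app.sh
After a reboot the two containers can come back in any order; the app resolves the hostname mysql via the built-in DNS and only starts its work once port 3306 is reachable.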

Related

Running two docker containers on one Azure App service plan

I have two containers that need to run on the same machine.
The first container is the server and the second one is the agent.
Server image tag name: local-wptserver
Agent image tag name: local-wptagent
I can get it to work locally so I am trying to deploy it to the cloud (Azure) for the team's consumption.
This is how I am running it locally:
docker run -d -p 4000:80 local-wptserver
docker run -d -p 4001:80 --network="host" -e "SERVER_URL=http://localhost:4000/work/" -e "LOCATION=EastUS_wptdriver" local-wptagent
This basically sets up the server and the agent to talk to each other so once I start making API calls to the server it schedules the job with the agent and returns me the results.
However since my images are now in Azure Container registry, how do I get the container to instantiate with those extra parameters (--network="host" -e "SERVER_URL=http://localhost:4000/work/" -e "LOCATION=EastUS_wptdriver") when it gets deployed to a web app?
Is this something I can add in the docker file, prior to creating the image? If yes, how?
Note: I am using the same Azure app service plan to ensure that the two web apps (built from two different repositories in the Azure container registry: server and agent) are on the same machine.
When you want to deploy multiple containers in an Azure Web App, you need to use a docker-compose file to deploy them. You can follow the steps in the example.
Also, the --network option is not supported by docker-compose in Azure Web App; you can see all the supported compose options here. But don't worry, the containers can communicate with each other through their ports.
Since you push the images to Azure Container Registry, you should use ACR as the Docker registry, which means you need to set the environment variables for your ACR; the details are here. And there are also many more details you could find in the link I provided.
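A minimal docker-compose.yml sketch of what that could look like, assuming the registry is named myregistry.azurecr.io (the service names wptserver and wptagent are placeholders); within the same compose deployment the agent can reach the server by service name instead of localhost:
version: '3.3'
services:
  wptserver:
    # server image pulled from ACR
    image: myregistry.azurecr.io/local-wptserver:latest
    ports:
      - "80:80"
  wptagent:
    # agent image pulled from ACR; extra parameters become environment entries
    image: myregistry.azurecr.io/local-wptagent:latest
    environment:
      - SERVER_URL=http://wptserver/work/
      - LOCATION=EastUS_wptdriver
The extra runtime parameters therefore live in the compose file rather than in the Dockerfile or a docker run command.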

Is it possible to containerize a cluster?

Is it possible to containerize a Minishift or Minikube cluster, so that I can docker run -it the container and oc/kubectl get the resources inside?
The Dockerfile could be like:
FROM alpine:latest
RUN **minishift installation**
ENTRYPOINT ["minishift", "start"]
We currently have a product that has a minishift cluster in a VM, so I was wondering if we can transition it from VM to Container.
Theoretically, yes, this might be possible, but it should not be done this way.
While I do understand running Minikube or Minishift inside a Virtual Machine, I cannot understand why you would like to run it inside a container. Both of those are just single-node, easy-to-use Kubernetes or OpenShift distributions.
If you already have a cluster, why not use it to run the app that you want?
By deploying Minikube or MiniShift into a container, you are creating a huge overhead for your application.
You might be interested in reading the blog Running systemd within a Docker Container. It is a bit dated but it might be something that you are looking for. Daniel Walsh also posted an update in 2019 for that post from 2014; you can find it here.
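For reference, the pattern those posts describe roughly boils down to running a systemd-enabled image with the cgroup and tmpfs mounts it expects. This is only a sketch with a placeholder image name, not something I would recommend for Minishift:
# give systemd read-only access to the host cgroups plus writable /run and /tmp
docker run -d --name systemd-test \
  --tmpfs /run --tmpfs /tmp \
  -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
  your_systemd_enabled_image /usr/sbin/init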

Host multicontainer application on Bluemix?

I have an application that makes use of a docker-compose file to stand up in a Docker environment. Is there a way I can port/publish this multi-container application to IBM Bluemix?
The IBM Containers service presently has two distinct flavors you can use. The first is Container Groups (backed by Docker containers; this flavor also supports docker-compose files).
Your comment above seems to indicate that you want to create a Docker container? You can do that from the service too. If you want to run docker machine, you will not be able to do that on the first flavor with Container Groups, or on the Kubernetes service (which is currently still in beta).
The new version of the service is container orchestration backed by Kubernetes and managed by SoftLayer. You can use it in much the same way you use docker-compose, except your Docker container cloud is managed by Kubernetes rather than by you, the user.
Sure! Try out this tutorial to get started:
https://console.ng.bluemix.net/docs/containers/container_single_ui.html#container_compose_intro

mysql docker container start with a fixed ip

Hi, I have a MySQL container running as a service, and other services connect to it with a JDBC URL using ip:port.
Sometimes the server needs to reboot, and the IP address of the MySQL container will change, so the JDBC URL has to be modified for every service that needs to connect to MySQL.
Is there a way to 'docker start' a container with a fixed IP address?
I've tried --ip but it's not working.
docker version 1.11.2
You can preset an IP to a container, but this must be done when you create the container (in the docker run).
https://docs.docker.com/engine/reference/run/
To preset an IP for a container, you have to add the switch --ip="desired_ip_here" to the docker run.
Also, you can use tools like supervisord to manage your processes and restart services without stopping the container.
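On Docker 1.11 the --ip switch is only honored on a user-defined network, so a minimal sketch would be to create a network with a known subnet first (the subnet, names, and password below are just examples):
# create a network whose subnet you control
docker network create --subnet=172.20.0.0/16 mysql-net
# give the MySQL container a fixed address on that network
docker run -d --name mysql --net mysql-net --ip 172.20.0.10 \
  -e MYSQL_ROOT_PASSWORD=secret -p 3306:3306 mysql:5.7
The JDBC URL can then stay jdbc:mysql://172.20.0.10:3306/yourdb across reboots, or the other services can simply join mysql-net and connect by container name instead of IP (jdbc:mysql://mysql:3306/yourdb).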

Kubernetes dependencies kubelet and master processes

On CoreOS, Kubernetes master processes (apiserver, kube-proxy, controller-manager and podmaster) run in Docker, while the kubelet process runs as a systemd process outside Docker.
Would it be recommended to run the master processes V1.1+ and kubelet V1.0.3 together on the master host?
The reason I am asking is that CentOS Atomic Host ships with Kubernetes V1.0.3, but we would like to upgrade the master processes to V1.1+ by running them in Docker instead of as system services directly on the OS (CentOS intends to run all components as systemd services).
Thanks,
Andrej
I'm an advocate of running all Kubernetes services directly on the OS, so forgive me if my answer is very opinionated.
You have to ask yourself if running everything in a container makes sense at such a low level, considering that you have to mount so many libs from your host and can't benefit from systemd's journal while your services run in containers. In my case the benefit was not obvious.
On top of that, as you mentioned, running the kubelet inside a container is not 100% supported yet. Technically speaking, running Kubernetes as systemd services is also a totally valid pattern, so you shouldn't avoid updates on the grounds that you can't run everything inside a container. However, you should not mix versions (1.0 and 1.1).