Host a multi-container application on Bluemix?

I have an application that uses a docker-compose file to stand up in a Docker environment. Is there a way I can port/publish this multi-container application to IBM Bluemix?

The IBM Containers service currently comes in two distinct flavors. The first is Container Groups, which is backed by Docker containers; the service also supports docker-compose files.
Your comment above seems to indicate that you want to create a Docker container; you can do that from the service too. If you want to run Docker Machine, however, you will not be able to do that with Container Groups, nor on the Kubernetes service (it is currently still in beta).
The second, newer flavor of the service is container orchestration backed by Kubernetes and managed by SoftLayer. You can use it in much the same way you use docker-compose, except that your container cloud is managed by Kubernetes rather than by you, the user.

Sure! Try out this tutorial to get started:
https://console.ng.bluemix.net/docs/containers/container_single_ui.html#container_compose_intro
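For context, the compose file the tutorial works with is an ordinary docker-compose.yml. A minimal sketch might look like the following, assuming your images have already been pushed to your Bluemix registry namespace (the <namespace> and image names below are placeholders, not taken from the question):

version: '2'
services:
  web:
    image: registry.ng.bluemix.net/<namespace>/my-web-app   # placeholder image
    ports:
      - "80:80"
    depends_on:
      - db
  db:
    image: registry.ng.bluemix.net/<namespace>/my-db        # placeholder image

The point is that the same file you use locally should carry over largely unchanged; only the image references need to point at the Bluemix registry.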

Related

Running two docker containers on one Azure App service plan

I have two containers that need to run on the same machine.
The first container is the server and the second one is the agent.
Server image tag name: local-wptserver
Agent image tag name: local-wptagent
I can get it to work locally so I am trying to deploy it to the cloud (Azure) for the team's consumption.
This is how I am running it locally:
docker run -d -p 4000:80 local-wptserver
docker run -d -p 4001:80 --network="host" -e "SERVER_URL=http://localhost:4000/work/" -e "LOCATION=EastUS_wptdriver" local-wptagent
This basically sets up the server and the agent to talk to each other so once I start making API calls to the server it schedules the job with the agent and returns me the results.
However, since my images are now in Azure Container Registry, how do I get the container to instantiate with those extra parameters (--network="host" -e "SERVER_URL=http://localhost:4000/work/" -e "LOCATION=EastUS_wptdriver") when it gets deployed to a web app?
Is this something I can add in the Dockerfile, prior to creating the image? If yes, how?
Note: I am using the same Azure App Service plan to ensure that the two web apps (built from two different repositories in the Azure Container Registry: server and agent) are on the same machine.
When you want to deploy multiple containers in an Azure Web App, you need to use a docker-compose file to deploy them. You can follow the steps in the example.
Note that the --network option is not supported by docker-compose in an Azure Web App; you can see all the supported compose options here. But don't worry, the containers can still communicate with each other through their ports.
Since you push your images to Azure Container Registry, you should use ACR as the Docker registry. That means you need to set the registry environment variables for your ACR; the details are here. There are many more details you can find in the links I provided.
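For illustration, a docker-compose.yml along the lines the answer describes might look like the sketch below. The registry name myregistry.azurecr.io is a placeholder, the image names are taken from the question, and the agent reaches the server by service name over the compose network instead of --network="host" and localhost:

version: '3.3'
services:
  wptserver:
    image: myregistry.azurecr.io/local-wptserver
    ports:
      - "4000:80"
  wptagent:
    image: myregistry.azurecr.io/local-wptagent
    environment:
      # The server's container port 80 is reachable via the service name,
      # so no host networking is needed.
      - SERVER_URL=http://wptserver/work/
      - LOCATION=EastUS_wptdriver
    depends_on:
      - wptserver

The environment values replace the -e flags from the local docker run commands; the port mappings may still need adjusting to whatever port the Web App exposes publicly.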

Is it possible to containerize a cluster?

Is it possible to containerize a minishift or minikube cluster? So that I can docker run -it the container, and oc/kubectl get the resources inside?
The Dockerfile could be like:
FROM alpine:latest
RUN **minishift installation**
ENTRYPOINT ["minishift", "start"]
We currently have a product that has a minishift cluster in a VM, so I was wondering if we can transition it from VM to Container.
Theoretically, yes, this might be possible, but it should not be done this way.
While I do understand running Minikube or Minishift inside a virtual machine, I cannot understand why you would like to run it inside a container. Both of those are just single-node, easy-to-use Kubernetes or OpenShift distributions.
If you already have a cluster, why not use it to run the app that you want?
By deploying Minikube or Minishift into a container, you are creating a huge overhead for your application.
You might be interested in reading the blog post Running systemd within a Docker Container. It is a bit dated, but it might be something that you are looking for. Daniel Walsh also posted a 2019 update to that post from 2014; you can find it here.

Create a Docker container with MySQL/MariaDB database

I'm planning to migrate my application stack to Docker. Let me describe the services I'm currently using:
HAProxy, which is used for SSL termination on all services' connections (HTTP and raw TCP) and forwards traffic to the services below.
Nginx, which serves static files, like updates and some information pages.
Node.js, which runs the main applications.
MySQL (MariaDB), the database used and shared by all the applications.
My question is about the database.
What's the proper way of running MariaDB in this case?
Install and run it inside my container, along with the other services?
Run the official image in a separate container, and link my container to it with the --link option of Docker's run command?
Does the first option have any disadvantage?
The TeamSpeak Docker container uses the second option, and that's what made me question the correct way of running the database in my case, but I particularly feel more inclined to package all the services inside my own image.
Docker Philosophy: Split your application into microservices and use a container for each microservice.
In your case, I recommend a MariaDB container; using the official (library) image gives you easier update management, but feel free to use your own custom image.
Add an HAProxy container, an nginx container, and a Node.js container.
This way you have divided your application into microservices, and you can upgrade, manage, and troubleshoot them more easily in an isolated environment.
If you are thinking about delivering your application to end users via Docker, a simple docker-compose file will do the trick for easily launching the required containers.
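As a rough illustration of that layout, a compose file might look something like the sketch below; the image tags, volume paths, and credentials are assumptions and would need to be adapted to your stack:

version: '3.3'
services:
  haproxy:
    image: haproxy:2.8
    ports:
      - "443:443"                    # SSL termination happens here
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
    depends_on:
      - nginx
      - app
  nginx:
    image: nginx:alpine
    volumes:
      - ./static:/usr/share/nginx/html:ro   # static files, updates, info pages
  app:
    build: ./app                     # your Node.js application
    environment:
      - DB_HOST=db                   # the database is reachable by service name
  db:
    image: mariadb:10.11
    environment:
      - MARIADB_ROOT_PASSWORD=change-me
    volumes:
      - db-data:/var/lib/mysql       # keep the data outside the container
volumes:
  db-data:

Each service stays in its own container and can be rebuilt or upgraded independently, which is exactly the isolation the answer argues for.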

How to connect WordPress and MySQL running in independent containers

WordPress is running inside a Docker container on hostA and MySQL is running inside a Docker container on hostB. Is it possible to link these two containers so they can communicate with each other? Is it even possible to do something like this?
Any help on this is much appreciated, as I am pretty new to Docker.
I cannot answer your question, but there is a part in the documentation about this:
https://docs.docker.com/engine/userguide/networking/default_network/container-communication/
You will find a section called: Communication between containers
Yes, this is possible with a Docker overlay network.
The setup is not as easy as setting up a link or a private network on the same host.
You will have to configure a key-value store to get this working.
Here is the relevant docker documentation.
An overlay network:
https://docs.docker.com/engine/userguide/networking/dockernetworks/#an-overlay-network
Here are the steps for setup
https://docs.docker.com/engine/userguide/networking/get-started-overlay/
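To make that concrete, here is a rough sketch of what the two sides could look like once an overlay network (called app-net here, a made-up name) has been created as described in the linked guides. Each host runs its own compose file, and the containers should be able to reach each other by service name across hosts:

# hostA: docker-compose.yml (sketch; passwords are placeholders)
version: '2'
services:
  wordpress:
    image: wordpress
    ports:
      - "80:80"
    environment:
      - WORDPRESS_DB_HOST=mysql       # resolved over the shared overlay network
      - WORDPRESS_DB_PASSWORD=example
    networks:
      - app-net
networks:
  app-net:
    external: true                    # created beforehand, e.g. docker network create -d overlay app-net

# hostB: docker-compose.yml (sketch)
version: '2'
services:
  mysql:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=example
    networks:
      - app-net
networks:
  app-net:
    external: true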
In my opinion, it's not bad to isolate the app and database containers and connect them outside the Docker network. If you end up adding a key/value store like Consul, you can always leverage the service discovery that comes along with it to dynamically discover the services.
I would go for https://github.com/weaveworks/weave.
Weave Net creates a virtual network that connects Docker containers across multiple hosts and enables their automatic discovery.
It might be overkill for your use case, but it would be very helpful if you want to move the containers around in the future.

Docker-compose Kubernetes ENV properties interoperability

I'm building my staging environment using docker-compose, for an application that previously ran in Google Cloud using Kubernetes.
My application was configured using ENV properties provided inside the Kubernetes container, and now, after switching to docker-compose, I have a different naming convention for the linked services.
I can think of a few solutions to my problem:
Change my application to support alternative configurations, so that it would work with both docker-compose and Kubernetes.
Create aliases in docker-compose or Kubernetes so that the configuration would always be available in a single format in both environments, and I would not need to touch my application configuration.
Maybe some other way, which I don't see
I want to go with the second solution, but I don't know exactly how to configure it. Any ideas?
You could use the environment section to define docker-compose variables like PARAM1=${PARAM2}. In this case, docker-compose will provide the same variables that Kubernetes has.
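As a rough sketch of that idea (the variable names and images below are made up for illustration), the environment section can recreate the names the application already expects from Kubernetes, or forward values from the host shell:

version: '2'
services:
  app:
    image: myapp:staging              # placeholder image name
    environment:
      # Recreate the Kubernetes-style variables so the application
      # configuration does not have to change.
      - DB_SERVICE_HOST=db            # compose DNS name of the linked service
      - DB_SERVICE_PORT=5432
      # Or forward a value, as the answer suggests; ${PARAM2} is substituted
      # from the shell (or .env file) where docker-compose runs.
      - PARAM1=${PARAM2}
    depends_on:
      - db
  db:
    image: postgres:13
    environment:
      - POSTGRES_PASSWORD=change-me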