I am new to containerisation and starting to use it on one of the clouds (AWS, Azure, or GCP). While reading about the difference between VMs and containers, I understood that we should use either VMs or containers for app deployment.
So if I set up my own container environment on the cloud (instead of using the AWS/Azure container service), I eventually end up creating containers on top of VMs. This defeats the whole purpose of containerisation!
Is my understanding correct? Below is an image of a VM, a container, and a container on a VM.
VMs, Containers and 'Containers on VM'
Your understanding is essentially correct. However, the cloud providers themselves usually also install their container platforms on VMs.
A VM has a higher overhead than a container but also gives you better security. (It is easier to break out of a container than out of a real virtualization platform.)
Because of these security concerns, providers usually set up their container environment on a VM and only provision containers belonging to the same customer on that VM.
Related
I would like to know the best way to monitor Linux containers in an Azure Web App. The main parameters I want to monitor are the containers' memory usage, CPU, health, and so on.
I tried Azure Monitor's Containers section, but I don't see any containers listed from my Azure App Service. I think Azure Monitor mainly covers containers from AKS and Azure Container Instances.
In the Diagnose and Solve Problems blade in the Azure portal you can see memory usage, CPU, etc. There is no per-container monitoring, because multiple containers run for a single web app: the main app container, the Kudu container, the middleware container (if enabled), and the MSI container (if enabled).
Please let me know if you have further queries.
I have deployed two containers in a pod. One is privileged and the other is a normal user container. I want to restrict communication between the two containers so that they cannot access each other or talk to each other over localhost.
Containers in the same Pod share the same network namespace and can reach each other on localhost. It is as if they were processes running on the same computer/machine/VM.
See here:
https://kubernetes.io/docs/concepts/workloads/pods/pod/#resource-sharing-and-communication
and here:
https://kubernetes.io/docs/concepts/cluster-administration/networking/
As long as the privileged container has no applications listening, there is nothing for the two containers to talk to each other over. If it does have an application listening, make sure to add some kind of authentication so that unwanted communication is rejected.
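To make the shared-namespace behaviour concrete, here is a minimal Pod sketch; the container names, images, and port are illustrative assumptions, not taken from the question:

```yaml
# Minimal sketch: two containers in one Pod share the network namespace.
# Container names, images, and the port are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo
spec:
  containers:
  - name: web
    image: nginx:alpine              # serves on port 80 inside the shared namespace
  - name: sidecar
    image: curlimages/curl
    # Reaches the other container over localhost; no Service or Pod IP needed.
    command: ["sh", "-c", "while true; do curl -s http://localhost:80/ > /dev/null; sleep 10; done"]
```

There is no built-in Kubernetes mechanism to firewall this localhost traffic between the two containers; if that isolation matters, the usual approach is to run them in separate Pods.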
I have an application that uses a docker-compose file to stand up in a Docker environment. Is there a way I can port/publish this multi-container application to IBM Bluemix?
The IBM Containers service currently has two distinct flavors. The first is Container Groups, which is backed by Docker containers; the service also supports docker-compose files.
Your comment above seems to indicate that you want to create a Docker container; you can do that from the service too. If you want to run Docker Machine, you will not be able to do that with Container Groups, nor on the Kubernetes service (currently; it is still in beta).
The second, newer flavor of the service is container orchestration backed by Kubernetes and managed on SoftLayer. You can use it in much the same way you use docker-compose, except that your container cloud is managed by Kubernetes rather than by you, the user.
Sure! Try out this tutorial to get started:
https://console.ng.bluemix.net/docs/containers/container_single_ui.html#container_compose_intro
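For a sense of what the service accepts, a minimal docker-compose file might look like the sketch below; the service names, images, and registry path are assumptions for illustration, not taken from the tutorial:

```yaml
# Hypothetical two-service app; all names, images, and paths are illustrative.
version: "2"
services:
  web:
    image: registry.ng.bluemix.net/mynamespace/myapp   # assumed private-registry path
    ports:
      - "80:3000"
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
```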
I'm planning to migrate my application stack to Docker. Let me describe the services I'm currently using:
HAProxy, which is used for SSL termination on all services' connections (HTTP and raw TCP), and forwards traffic to the services below.
Nginx, which serves static files, like updates and some information pages.
Node.js, which runs the main applications.
MySQL (MariaDB), the database used and shared by all the applications.
My question is about the database.
What's the proper way of running MariaDB in this case?
Install and run it inside my container, along with the other services?
Run the official image in a separate container, and link my container to it with the --link option of Docker's run command?
Does the first option have any disadvantage?
The TeamSpeak Docker container uses the second option, and that is what made me question the correct way of running the database in my case, but I feel more inclined to package all the services inside my own image.
Docker Philosophy: Split your application into microservices and use a container for each microservice.
In your case, I recommend a MariaDB container. Using the official (library) image gives you easier update management, but feel free to use your own custom image.
Add an HAProxy container, an nginx container, and a Node.js container.
This way you have divided your application into microservices, and you can upgrade, manage, and troubleshoot each of them more easily in an isolated environment.
If you are thinking about delivering your application to end users via Docker, a simple docker-compose file will do the trick for easily launching the required containers; a sketch of such a file follows.
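A minimal sketch of such a compose file, assuming illustrative image tags, build contexts, ports, and credentials (adjust everything to your setup):

```yaml
# Sketch of the four-service stack; names, ports, paths, and credentials
# are illustrative assumptions.
version: "2"
services:
  haproxy:
    image: haproxy:1.7
    ports:
      - "443:443"                             # SSL termination happens here
    volumes:
      - ./haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
    depends_on:
      - nginx
      - app
  nginx:
    image: nginx:alpine
    volumes:
      - ./static:/usr/share/nginx/html:ro     # static files and info pages
  app:
    build: ./app                              # the Node.js application
    depends_on:
      - db
  db:
    image: mariadb:10.1
    environment:
      MYSQL_ROOT_PASSWORD: changeme
    volumes:
      - db-data:/var/lib/mysql                # keep data outside the container
volumes:
  db-data:
```

Each service stays in its own container, so the database can be upgraded or backed up independently of the application containers.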
WordPress is running inside a Docker container on hostA and MySQL is running inside a Docker container on hostB. Is it possible to link these two containers so that they can communicate with each other?
Any help on this is much appreciated, as I am pretty new to Docker.
I cannot answer your question fully, but there is a part of the documentation about this:
https://docs.docker.com/engine/userguide/networking/default_network/container-communication/
There you will find a section called "Communication between containers".
Yes, this is possible with a Docker overlay network.
The setup is not as easy as setting up a link or a private network on the same host.
You will have to configure a key-value store to get this working.
Here is the relevant docker documentation.
An overlay network:
https://docs.docker.com/engine/userguide/networking/dockernetworks/#an-overlay-network
Here are the steps for setting it up:
https://docs.docker.com/engine/userguide/networking/get-started-overlay/
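Once the key-value store is in place and both daemons are configured against it, an overlay network created on one host becomes visible on the other. A rough compose sketch, assuming compose file version 2 and placeholder names:

```yaml
# docker-compose.yml on hostA; all names are placeholders, and the
# overlay network "appnet" is assumed to have been created beforehand
# with: docker network create -d overlay appnet
version: "2"
services:
  wordpress:
    image: wordpress
    ports:
      - "80:80"
    environment:
      WORDPRESS_DB_HOST: mysql   # resolved by service name across hosts via the overlay
    networks:
      - appnet
networks:
  appnet:
    external: true
```

A matching compose file on hostB would attach its mysql service to the same external appnet network, after which WordPress can reach it by name.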
In my opinion, it is not bad to isolate the app and database containers and connect them outside the Docker network. If you end up adding a key/value store such as Consul, you can always leverage the service discovery that comes with it to discover the services dynamically.
I would go for https://github.com/weaveworks/weave.
Weave Net creates a virtual network that connects Docker containers across multiple hosts and enables their automatic discovery.
It might be overkill for your use case, but it would be very helpful if you want to move the containers around in the future.