Monitoring of Azure Web App Linux-based containers

I would like to know the best way to monitor Linux containers in an Azure Web App. The main parameters I want to monitor are the containers' memory usage, CPU, and overall health.
I tried Azure Monitor's Containers section, but I don't see any containers listed from my Azure App Service. I think Azure Monitor's container views mainly cover containers from AKS and Azure Container Instances.

In the Diagnose and Solve Problems blade in the Azure portal, you can see memory usage, CPU, and so on. We don't have per-container monitoring, because multiple containers run for a single web app: the main app container, the Kudu container, the middleware container (if enabled), and the MSI container (if enabled).
Please let me know if you have further queries.
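
For pulling the same app-level (not per-container) numbers programmatically, here is a minimal sketch using the Azure CLI. The resource group and app names are placeholders, and it assumes the standard App Service metrics (such as MemoryWorkingSet and CpuTime) exposed through Azure Monitor:

```bash
# Minimal sketch: query app-level memory metrics for a Linux Web App.
# "myResourceGroup" and "myWebApp" are placeholder names.

APP_ID=$(az webapp show \
  --resource-group myResourceGroup \
  --name myWebApp \
  --query id --output tsv)

# MemoryWorkingSet is a standard App Service metric; repeat with CpuTime
# for CPU. Note these cover the app as a whole, not the individual
# platform containers (Kudu, middleware, etc.).
az monitor metrics list \
  --resource "$APP_ID" \
  --metric "MemoryWorkingSet" \
  --interval PT1H \
  --output table
```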

Related

What is the difference between the application console and the cluster console?

What is the difference between the application console and the cluster console in the OpenShift enterprise version? I am new to OpenShift and confused by the terminology. My mental model (an analogy) is that OpenShift sits like the Linux kernel in a system: on top of it are containers, and to orchestrate them we have Kubernetes. However, the actual architecture of OpenShift is the exact opposite. Please correct me.
OpenShift is one of the available Kubernetes distributions; it adds enterprise-level services such as authentication, authorization, and multitenancy.
The web console provides two perspectives: Administrator and Developer. The Developer perspective provides workflows for developer use cases such as creating, deploying, and monitoring applications, while the Administrator perspective is for managing cluster resources, users, and projects. Depending on your role, you will see a different set of views in the main menu.
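
As a rough illustration of that role dependence, you can query your own permissions from the command line. A small sketch, assuming a recent oc client; the project name is hypothetical:

```bash
# Show the currently logged-in user.
oc whoami

# Cluster-level action: a user with only a developer role
# will typically be denied here.
oc auth can-i create namespaces

# Project-level action ("my-project" is a hypothetical name):
# typically allowed for developers in their own project.
oc auth can-i create deployments -n my-project
```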

Containerisation setup on Cloud

I am new to containerisation and am starting to use it on one of the clouds (AWS, Azure, or GCP). While reading about the difference between VMs and containers, I understood that we should use either VMs or containers for app deployment.
So if I set up my own container environment on the cloud (instead of using the AWS/Azure/GCP container service), I eventually end up creating containers on top of VMs. That seems to defeat the whole purpose of containerisation!
Is my understanding correct? Below is an image comparing the three setups.
[Image: VMs, Containers, and 'Containers on a VM']
Your understanding is essentially correct. However, such cloud providers usually also run their container setups on VMs.
A VM has higher overhead than a container but also gives you better security. (It is easier to break out of a container than out of a real virtualization platform.)
Because of these security issues with containers, providers usually set up their container environment on VMs and only provision containers belonging to the same customer on a given VM.

Create a Docker container with MySQL/MariaDB database

I'm planning to migrate my application stack to Docker. Let me describe the services I'm currently using:
HAProxy, which is used for SSL termination on all services' connections (HTTP and raw TCP), and forwards traffic to the services below.
Nginx, which serves static files, like updates and some information pages.
Node.js, which runs the main applications.
MySQL (MariaDB), the database used and shared by all the applications.
My question is about the database.
What's the proper way of running MariaDB in this case?
Install and run it inside my container, along with the other services?
Run the official image in a separate container, and link my container to it with the --link option of Docker's run command?
Does the first option have any disadvantage?
The TeamSpeak Docker container uses the second option, which is what made me question the correct way of running the database in my case, but I feel more inclined to package all the services inside my own image.
Docker Philosophy: Split your application into microservices and use a container for each microservice.
In your case, I recommend a MariaDB container. Using the official (library) image gives you easier update management, but feel free to use your own custom image.
Add an HAProxy container, an Nginx container, and a Node.js container.
This way you divide your application into microservices, and you can upgrade, manage, and troubleshoot each of them more easily in an isolated environment.
If you are thinking about delivering your application to end users via Docker, a simple docker-compose file will take care of launching the required containers, as sketched below.
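
Here is a minimal sketch of what such a compose file could look like. The image tags, service names, and the placeholder password are illustrative, not a tested configuration; a real HAProxy container would also need its haproxy.cfg mounted:

```bash
# Sketch: one container per microservice, wired together by docker-compose.
cat > docker-compose.yml <<'EOF'
version: "2"
services:
  db:
    image: mariadb:10              # official library image: easy updates
    environment:
      MYSQL_ROOT_PASSWORD: example # placeholder secret
    volumes:
      - db-data:/var/lib/mysql     # persist data outside the container
  app:
    build: ./app                   # your Node.js application
    depends_on:
      - db                         # the app reaches MariaDB at hostname "db"
  static:
    image: nginx:stable            # serves static files and info pages
  lb:
    image: haproxy:2.4             # SSL termination; needs haproxy.cfg mounted
    ports:
      - "443:443"
    depends_on:
      - app
      - static
volumes:
  db-data:
EOF

# Launch the whole stack.
docker-compose up -d
```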

How to connect WordPress and MySQL running in independent containers

WordPress is running inside a Docker container on hostA, and MySQL is running inside a Docker container on hostB. Is it possible to link these two containers so that they can communicate with each other? Is something like this even possible?
Any help on this is much appreciated, as I am pretty new to Docker.
I cannot answer your question fully, but there is a part of the documentation about this:
https://docs.docker.com/engine/userguide/networking/default_network/container-communication/
You will find a section called 'Communication between containers'.
Yes, this is possible with a Docker overlay network.
The setup is not as easy as setting up a link or a private network on the same host.
You will have to configure a key-value store to get it working.
Here is the relevant Docker documentation.
An overlay network:
https://docs.docker.com/engine/userguide/networking/dockernetworks/#an-overlay-network
Here are the steps for setup
https://docs.docker.com/engine/userguide/networking/get-started-overlay/
In my opinion, it's not bad to isolate the app and database containers and connect them from outside a single-host Docker network. If you end up adding a key/value store like Consul, you can always leverage the service discovery that comes along with it to dynamically discover the services.
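
For a concrete picture, here is a sketch of the cross-host setup on a reasonably recent Docker engine (1.12+), where swarm mode provides a built-in key-value store instead of an external one like Consul. Host addresses and the join token are placeholders:

```bash
# On hostA: initialise swarm mode (provides the built-in KV store
# that overlay networks need). <hostA-ip> is a placeholder.
docker swarm init --advertise-addr <hostA-ip>
# This prints a "docker swarm join --token ..." command.

# On hostB: join the swarm with the printed token.
docker swarm join --token <token> <hostA-ip>:2377

# On hostA (the manager): create an attachable overlay network
# that spans both hosts.
docker network create --driver overlay --attachable app-net

# On hostB: run MySQL attached to the overlay network.
docker run -d --name db --network app-net \
  -e MYSQL_ROOT_PASSWORD=example mysql:5.7

# On hostA: WordPress resolves the database by its container name "db".
docker run -d --name wordpress --network app-net \
  -e WORDPRESS_DB_HOST=db -e WORDPRESS_DB_PASSWORD=example \
  -p 80:80 wordpress
```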
I would go for https://github.com/weaveworks/weave.
Weave Net creates a virtual network that connects Docker containers across multiple hosts and enables their automatic discovery.
It might be overkill for your use case, but it would be very helpful if you want to move the containers around in the future.
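
A rough sketch of the Weave approach, following the Weave Net documentation; the host address is a placeholder, and you should verify the install step against the current docs:

```bash
# On each host: install the weave script (per the Weave Net docs).
sudo curl -L git.io/weave -o /usr/local/bin/weave
sudo chmod a+x /usr/local/bin/weave

# On hostA: start the Weave router.
weave launch

# On hostB: start Weave and peer with hostA (<hostA-ip> is a placeholder).
weave launch <hostA-ip>

# Point the Docker client at Weave's proxy so new containers join the
# Weave network and get DNS-based discovery automatically.
eval $(weave env)

# Then run the containers as usual; WordPress can reach "db" by name.
docker run -d --name db -e MYSQL_ROOT_PASSWORD=example mysql:5.7       # on hostB
docker run -d --name wordpress -e WORDPRESS_DB_HOST=db \
  -e WORDPRESS_DB_PASSWORD=example -p 80:80 wordpress                  # on hostA
```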

Google Container Engine Architecture

I was exploring the architecture of Google's IaaS/PaaS offerings, and I am confused as to how GKE (Google Container Engine) runs in Google data centers. From this article (http://www.wired.com/2012/07/google-compute-engine/) and also from some of the Google I/O 2012 sessions, I gathered that GCE (Google Compute Engine) runs the provisioned VMs using KVM (Kernel-based Virtual Machine); these VMs run inside Google's cgroups-based containers (this allows Google to schedule user VMs the same way they schedule their existing container-based workloads, probably using Borg/Omega). Now, how does Kubernetes figure into this, given that it makes you run Docker containers on GCE-provisioned VMs and not on bare metal? If my understanding is correct, then Kubernetes-scheduled Docker containers run inside KVM VMs, which themselves run inside Google cgroups containers scheduled by Borg/Omega...
Also, how does Kubernetes networking fit into Google's existing GCE Andromeda software-defined networking?
I understand that this is a very low-level architectural question, but I feel that understanding the internals will improve my understanding of how user workloads eventually run on bare metal. I am also curious whether the whole arrangement of running containers on VMs inside containers is necessary from a performance point of view; for example, doesn't networking performance degrade with multiple layers? Google mentions in its Borg paper (http://research.google.com/pubs/archive/43438.pdf) that they run their container-based workloads without a VM (they don't want to pay the "cost of virtualization"). I understand the logic of running public external workloads in VMs (better isolation, a more familiar model, heterogeneous workloads, etc.), but with Kubernetes, couldn't our workloads be scheduled directly on bare metal, just like Google's own workloads?
It is possible to run Kubernetes on both virtual and physical machines; see this link. Google Cloud Platform only offers virtual machines as a service, which is why Google Container Engine is built on top of virtual machines.
In Borg, containers can be of arbitrary sizes, and they don't pay any resource penalty for odd-sized tasks.