Deploying one instance of a Docker container on Apache Mesos/Marathon - containers

I have tried using the Marathon framework to deploy just one instance of a MySQL container through the web UI, to test the functions of Apache Mesos. The problem is that it ran and deployed many containers at a time, even though I had specified only one instance. After making the process "sleep for 10s" to investigate the problem, I found out that it was actually running 4 containers at a time. Any help?
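For reference, a Marathon app definition pinned to a single instance looks roughly like this (a sketch; the image, resources and password are placeholder assumptions):

    {
      "id": "mysql-test",
      "instances": 1,
      "cpus": 0.5,
      "mem": 512,
      "container": {
        "type": "DOCKER",
        "docker": {
          "image": "mysql:5.6",
          "network": "BRIDGE",
          "portMappings": [{ "containerPort": 3306, "hostPort": 0 }]
        }
      },
      "env": { "MYSQL_ROOT_PASSWORD": "secret" }
    }

Note that even with "instances": 1, Marathon relaunches a task that keeps failing, which can look like several containers being started over time.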

Related

Can't run docker containers for Jenkins and MySQL at the same time on EC2

I'm trying to set up an environment on AWS EC2
with two Docker containers, for Jenkins and MySQL respectively.
But when I try to run the MySQL container, the Jenkins container gets killed.
So I tried to run the Jenkins container again, but then the EC2 instance stopped completely.
I guess this is because I'm using a free tier instance, but could anyone explain what's causing this issue?
I'd really appreciate it!
Can you share the commands or configuration files you're using to run these two containers? I suspect it was a coincidence that the Jenkins container failed and the EC2 instance stopped working at the same time. If the Jenkins and MySQL containers were given the same container name, Docker would throw an error. In any other case, Docker will simply create the new container, which is entirely independent of the other one.
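For example, with distinct names the two containers run completely independently. A minimal sketch, assuming the official images (ports, tags and the password are placeholders):

    docker run -d --name jenkins -p 8080:8080 jenkins/jenkins:lts
    docker run -d --name mysql -e MYSQL_ROOT_PASSWORD=secret -p 3306:3306 mysql:5.7
    # reusing a name is what triggers a conflict error:
    docker run -d --name jenkins mysql:5.7   # fails: name "jenkins" is already in use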
When you say you're using the free tier, what do you mean by this? The AWS Free Tier? It is unlikely that using it had any impact on the software running on your instance.
If you can provide this additional information I'd be more than happy to help you continue troubleshooting this issue.
EDIT: Retracted the claim that the AWS Free Tier is unlikely to cause container interruptions. The Linux Out of Memory (OOM) Killer does, in fact, make this a possibility, as noted in the comments by @akazuko. Could you please also provide the output of journalctl -xeu docker in your response? It will indicate whether or not the OOM Killer is responsible. Be sure to trigger the error once or twice before running that command, so that the relevant log entries are present.
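A quick way to check for OOM kills (the container name is a placeholder):

    # Docker service logs around the time of the failure
    journalctl -xeu docker
    # kernel messages left behind by the OOM killer
    dmesg | grep -iE 'out of memory|killed process'
    # flag that Docker sets when a container was OOM-killed
    docker inspect --format '{{.State.OOMKilled}}' <container>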

IPFS nodes in Docker and Web UI

This question refers to the following project, which is about integrating Fabric blockchain and IPFS.
The architecture basically comprises a swarm of Docker containers that should communicate with each other (three containers: two peer nodes and one server node). Every container is an IPFS node and has a separate configuration.
I am trying to run a dockerized environment of an IPFS cluster of nodes and view the Web UI that comes with it. I set up the app by following all the steps described, after which I should supposedly be able to see the Web UI at this address:
http://127.0.0.1:5001
Everything seems to be set up and configured as it should be (I checked docker logs <container> for every container). Nevertheless, all I get is an empty page.
When I try to view my local IPFS repository via
https://webui.ipfs.io/#/welcome
I get a message that this is probably caused by a CORS error (which makes sense), and it suggests changing the IPFS configuration in order to bypass the CORS error. (Screenshot of the suggested CORS fix.)
I tried to implement the suggested solution by changing the headers in the configuration, but it doesn't seem to have any effect.
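For context, the suggested fix amounts to something like the following, run inside each container and followed by a daemon restart (a sketch based on the usual go-ipfs CORS keys; <container> is a placeholder):

    docker exec <container> ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["https://webui.ipfs.io", "http://127.0.0.1:5001"]'
    docker exec <container> ipfs config --json API.HTTPHeaders.Access-Control-Allow-Methods '["PUT", "POST", "GET"]'
    docker restart <container>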
The confusion stems from the fact that after setting up the containers we have three different containers with three configurations, and in addition the IPFS daemon is running inside each of them. Outside the containers the IPFS daemon is not running.
I don't know whether the IPFS daemon outside the containers should be running.
I'm not sure which configuration (if not all of them) I should modify.
Should I use a reverse proxy to solve this?
Useful Info
The development is done in a Linux-Ubuntu VM that meets all the necessary requirements.

Understanding Docker for providing services like web, mysql or similar

I have several questions regarding Docker.
First my project:
I have a blog on a shared host and want to move it to the cloud, so that I have the whole server side in my own hands and can scale the server to my needs.
My first intention was to set up a nice Ubuntu 14 LTS server with Nginx, PHP 7 and MySQL. But I think it's not that easy to transfer such a server to another cloud, e.g. from GCE to AWS. I then thought about using Docker, as a friend told me how easy it is to set up containers and how easy it is to move them from one server to another.
I then read a lot about Docker, but stumbled upon a few things I wondered about.
In my understanding, Docker just runs services like PHP, MySQL or similar, but doesn't hold data, right?
Where would I store all the data, like the database, nginx.conf, php.ini and all the files I want to serve with Nginx (i.e. /var/www/)?
Are they stored on the host system? If so, wouldn't moving a Docker setup be no easier than moving a whole server?
Do I really gain an advantage from using Docker to serve a WordPress blog or another website using MySQL and so on?
Thanks in advance
Your data is either stored on the host machine, or it is attached to the Docker containers remotely (using a network-attached block device).
When you store your data on the host machine, you have a number of options.
The data can be 'inside' one of your containers (e.g. your mysql databases live inside your mysql container).
You can mount one or more directories from your host machine inside your containers. So then the data lives on your host.
You can create Docker volumes or Docker volume containers that are used to store your data. These volumes or volume containers are mounted inside the container with your application. The data then lives in directories managed by Docker.
For details of these options, see the Docker documentation on volumes; a short sketch of the bind-mount and volume options follows below.
The last option is that you mount remote storage to your docker containers. Flocker is one of the options you have for this.
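As an illustration of the bind-mount and Docker-volume options above (a sketch; paths, names and the image tag are assumptions):

    # option 2: bind-mount a host directory, so the data lives on the host
    docker run -d --name mysql -v /srv/mysql-data:/var/lib/mysql mysql:5.6

    # option 3 (alternative): a named volume managed by Docker
    docker volume create mysql-data
    docker run -d --name mysql -v mysql-data:/var/lib/mysql mysql:5.6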
At my work I've set up a host (i.e. server) that runs a number of services in docker containers. The data for each of these services 'lives' in a Docker data volume container.
This way, the data and the services are completely separated. That allows me to start, stop, upgrade and delete the containers that are running my services without affecting the data.
I have also made separate Docker containers, started by cron, that back up the data from the data volume containers.
For MySQL, the backup container connects to the mysql container and executes mysqldump remotely.
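Such a backup run might look roughly like this (a sketch; the container name, link and credentials are assumptions):

    # throwaway client container dumps all databases from the "mysql" container
    docker run --rm --link mysql:mysql mysql:5.6 \
      sh -c 'exec mysqldump -h mysql -u root -psecret --all-databases' > /backups/all-databases.sql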
I can also run the (same) containers that are running my services on my development machine, using the data that I backed up from the production server.
This is useful, for instance, to test upgrading mysql from 5.6 to 5.7.

Create a Docker container with MySQL/MariaDB database

I'm planning to migrate my application stack to Docker. Let me describe the services I'm currently using:
HAProxy, which is used for SSL termination on all services' connections (HTTP and raw TCP connections), and forwards traffic to the services below.
Nginx, which serves static files, like updates and some information pages.
Node.js, which runs the main applications.
MySQL (MariaDB), the database used and shared by all the applications.
My question is about the database.
What's the proper way of running MariaDB in this case?
Install and run it inside my container, along with the other services?
Run the official image in a separate container, and link my container to it with the --link option of Docker's run command?
Does the first option have any disadvantage?
The TeamSpeak Docker container uses the second option, and that's what made me question the correct way of running the database in my case, but I feel more inclined to package all the services inside my own image.
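For reference, the second option with --link looks roughly like this (a sketch; image and container names are placeholders):

    docker run -d --name mariadb -e MYSQL_ROOT_PASSWORD=secret mariadb:10
    # "db" becomes the database hostname inside the "app" container
    docker run -d --name app --link mariadb:db my-node-app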
Docker Philosophy: Split your application into microservices and use a container for each microservice.
In your case, I recommend a separate MariaDB container. Using the official (library) image gives you easier update management, but feel free to use your own custom image.
Then add an HAProxy container, an Nginx container and a Node.js container.
This way you divide your application into microservices, and you can upgrade, manage and troubleshoot them more easily, each in an isolated environment.
If you are thinking about delivering your application to end users via Docker, a simple docker-compose file will do the trick for launching the required containers easily, as sketched below.
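A minimal sketch of such a compose file (service names, images and the password are assumptions):

    cat > docker-compose.yml <<'EOF'
    version: "2"
    services:
      haproxy:
        image: haproxy
        ports:
          - "443:443"
        volumes:
          - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro   # SSL termination config
        depends_on: [app, nginx]
      nginx:
        image: nginx
        volumes:
          - ./static:/usr/share/nginx/html:ro
      app:
        build: ./app          # the Node.js application
        depends_on: [db]
      db:
        image: mariadb:10
        environment:
          MYSQL_ROOT_PASSWORD: secret   # placeholder
        volumes:
          - db-data:/var/lib/mysql
    volumes:
      db-data:
    EOF
    docker-compose up -d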

How to run a MySQL container using Apache Mesos/Marathon

I'm trying to use Apache Marathon to run my container-based application.
For this I've installed Mesos, ZooKeeper, Marathon and Docker. Is there anything else I need to install?
I'm trying the simple Docker-based application from this guide:
https://mesosphere.github.io/marathon/docs/application-basics.html
I am not able to run it; the UI only shows "Deploying", and
Marathon gives: INFO delaying /basic-3 due to backoff.
Is the procedure I followed correct? Any help is much appreciated. I've installed my master and slave on the same machine.
Thanks
Could you first check whether your cluster is set up correctly?
Check in the Mesos UI (hostname:5050 by default) whether the slaves are registered.
Can you run a simple Marathon job such as 'sleep 30' to check the Marathon configuration? (See the sketch below.)
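A minimal test app of that sort can be posted to the Marathon API like this (a sketch, assuming Marathon listens on localhost:8080):

    cat > sleep-test.json <<'EOF'
    {
      "id": "sleep-test",
      "cmd": "sleep 30",
      "cpus": 0.1,
      "mem": 16,
      "instances": 1
    }
    EOF
    curl -X POST -H "Content-Type: application/json" \
         http://localhost:8080/v2/apps -d @sleep-test.json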
Joerg
P.S. You could also check whether Mesos is currently pulling the Docker image, which might take a while. Therefore you might want to look into the Mesos logs...