I'm looking to scale MySQL on a swarm that could potentially involve multiple servers. What is the best way to ensure that the data is in sync between the containers on the different servers?
I realise that in a standard configuration without Docker I'd have to set up replication. I'm wondering if there is a way to do it that is more suitable and easier to deploy with Docker.
docker-compose and Docker Swarm are great tools for scaling in Docker environments, but MySQL scaling is currently not practical in either of them, for a few reasons:
Swarm's scaling model is designed for stateless containers (see the sketch below)
A master-slave configuration cannot be set up through Docker Swarm itself
The Docker overlay network provides no replication mechanism
Perhaps in the future we will have the technology to scale an RDBMS this way.
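A quick illustration of the stateless-scaling point above (the service name and password are placeholders): Swarm will happily replicate a MySQL service, but nothing keeps the copies in sync.
docker service create --name db -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
docker service scale db=3
# Result: three independent databases, each with its own data directory — not a cluster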
This is what database replication is for.
https://dev.mysql.com/doc/refman/5.7/en/replication.html
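For reference, a minimal sketch of a classic MySQL 5.7 master/replica setup, assuming two containers that can reach each other by the hostnames mysql-master and mysql-replica (names, passwords and the binlog coordinates below are placeholders; take the real coordinates from SHOW MASTER STATUS on the master):
# my.cnf on the master: a unique server ID plus binary logging
[mysqld]
server-id = 1
log-bin   = mysql-bin

# my.cnf on the replica: just a distinct server ID
[mysqld]
server-id = 2

-- On the master: create a user the replica will connect as
CREATE USER 'repl'@'%' IDENTIFIED BY 'repl_password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

-- On the replica: point it at the master and start replicating
CHANGE MASTER TO
  MASTER_HOST='mysql-master',
  MASTER_USER='repl',
  MASTER_PASSWORD='repl_password',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;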
You can try a MariaDB Galera cluster set up under Docker; however, you need additional steps to provision it, a load balancer, and a node that monitors the state of your containers (it is a lot of work and not easy).
And if you have multiple nodes in your Docker swarm, you need to set up an NFS server so that Docker can share files between them.
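To give a feel for it, here is a rough sketch of bootstrapping the first Galera node with the official mariadb image on a single host; the network name, image tag and Galera library path are assumptions you should verify against the image you actually use, and a multi-host setup additionally needs an overlay network or host networking:
docker network create galera-net

# Bootstrap the first node; gcomm:// with no peers starts a new cluster
docker run -d --name galera-node1 --network galera-net \
  -e MYSQL_ROOT_PASSWORD=secret \
  mariadb:10.1 \
  --wsrep-on=ON \
  --wsrep-provider=/usr/lib/galera/libgalera_smm.so \
  --wsrep-cluster-address=gcomm:// \
  --binlog-format=ROW \
  --default-storage-engine=InnoDB

# Further nodes would join with --wsrep-cluster-address=gcomm://galera-node1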
There is also a tool called ClusterControl, available in free and paid versions:
https://severalnines.com/product/clustercontrol/docker-mysql-database-management
I have a server with 96 GB of RAM, and I would like to run a few Spring Boot applications on it. They all need a MySQL DB.
I am having a hard time deciding the best way to utilize the server to obtain the best isolation and performance.
I am thinking of the following:
Create a VM just for the MySQL server
A VM for each Spring Boot application
Now, should I run MySQL and the Spring Boot apps directly within the VMs, or run them in Docker? I can see no immediate benefit in doing so, but if I later need to create a cluster for my apps, would having Docker images be better?
Or, if you were me, what would you do?
Thanks
You want best isolation and performance?
Trust the isolation provided by your Docker container. It's a primary design objective.
Don't add unnecessary layers (i.e. a VM inside which to host your Docker container) — adding a VM layer would incur performance impact, and it sounds like you don't have to.
Containerising MySQL requires thought, since it's inherently stateful.
If you wanted to do this: I'd at least store the state (data and maybe config) outside of the container.
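A minimal sketch of that approach, keeping both data and config on the host via bind mounts (the host paths and the password are placeholders):
docker run -d --name mysql \
  -v /srv/mysql/data:/var/lib/mysql \
  -v /srv/mysql/conf:/etc/mysql/conf.d:ro \
  -e MYSQL_ROOT_PASSWORD=secret \
  mysql:5.6

# The container can now be stopped, removed and recreated
# without touching the data under /srv/mysql/data.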
You could get away with not containerising MySQL. I don't feel that databases are a good fit for the containerisation use-case, because:
they're stateful
scaling is not as trivial as "spin up another instance" (you have to set up replication, and synchronise and store a lot of state)
they don't undergo updates often
updates are not as trivial as "swap to the newer version of the container"
there's less requirement to "use the same version in all environments" (i.e. devs using MariaDB locally despite production running MySQL 5.6… this is more or less fine)
You should also consider using a managed database such as Amazon RDS. I recognise that you've a high-performance computer that you want to make use of, but it's worth weighing that up against the operational costs of maintaining and scaling the infrastructure yourself.
And yes: I'd make a container per Spring Boot application, and run those containers directly. As I said: trust Docker's isolation — or at least look up whether it's been breached, and whether that's an acceptable risk according to your threat model (and whether a VM would've saved you in any reported cases of vulnerability).
As for where to deploy those Docker containers (i.e. locally on your Fast Computer, versus deploying to the cloud): depends whether you want to optimize for operational costs (i.e. it's easier to manage everything on the cloud and not have to interact with any physical machinery) or try to make the most of your Fast Computer (and deploy everything directly to that computer).
Presumably there's some way to remotely manage the orchestration of the Docker containers on your Fast Computer. That could give you a lot of the benefits of deploying to cloud.
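For instance, more recent Docker releases than were current at the time of writing let the client talk to a remote engine over SSH, so you could orchestrate the Fast Computer from a laptop (the hostname and user below are placeholders):
export DOCKER_HOST=ssh://you@fast-computer
docker ps          # lists containers on the remote machine
docker service ls  # likewise works against a remote swarm manager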
What you are looking for is called Docker Swarm. It allows you to deploy Docker containers and scale them with minimal effort.
To "dockerize" your spring boot applications, you only have to build an image with a Dockerfile, like this:
# Base image providing a Java 8 runtime
FROM java:8
# Declare /tmp as a volume (Spring Boot's embedded Tomcat writes its working files there)
VOLUME /tmp
# Copy the built jar into the image root
ADD spring-boot-0.0.1-SNAPSHOT.jar /springboot-appname.jar
# Give the jar a fresh modification time (helps Spring Boot serve static content with sensible cache headers)
RUN bash -c 'touch /springboot-appname.jar'
# Launch the app; the egd setting speeds up startup by using a non-blocking entropy source
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/springboot-appname.jar"]
To build this image, execute:
docker build -t name-application-img .
To deploy the image as a service inside the Docker Swarm, use:
docker service create -p {exposed-port}:{private-port} --name {service-name} --replicas 1 name-application-img
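Scaling the service up or down later is then a one-liner; Swarm starts or stops replicas to match the requested count:
docker service scale {service-name}=3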
You can create Docker images of your Spring Boot applications; they are easy to build and to scale up and down. Why not move MySQL into a Docker container too, and map its data volume to your disk? If you have all the applications inside Docker, they will be easy to manage (through docker-compose).
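A minimal docker-compose sketch along those lines (the image names, ports and host path are placeholders):
version: "2"
services:
  app:
    image: name-application-img
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: secret
    volumes:
      - /srv/mysql-data:/var/lib/mysql   # data mapped onto the host disk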
However, the downside is that if you have more than one MySQL container, you will have to worry about data replication and maintaining the same DB state across the multiple containers.
If I were you, I would just dockerize the Spring Boot apps!
I have an application that uses a docker-compose file to stand up in a Docker environment. Is there a way I can port/publish this multi-container application to IBM Bluemix?
The IBM Containers service presently has two distinct flavors. The first is Container Groups, which is backed by Docker containers and also supports docker-compose files. Your comment above seems to indicate that you want to create a Docker container; you can do that from this service too.
The new version of the service is container orchestration backed by Kubernetes and managed by SoftLayer. You can use it in much the same way you use docker-compose, except that your Docker container cloud is managed by Kubernetes rather than by you, the user.
Note that if you want to run docker-machine, you will not be able to do that on either the Container Groups service or the Kubernetes one (the latter is currently still in beta).
Sure! Try out this tutorial to get started:
https://console.ng.bluemix.net/docs/containers/container_single_ui.html#container_compose_intro
I have several questions regarding Docker.
First my project:
I have a blog on a shared host and want to move it to the cloud so that I have the whole server side in my own hands and the possibility of scaling the server to my needs.
My first intent was to set up a nice Ubuntu 14.04 LTS server with nginx, PHP 7 and MySQL. But I think it's not that easy to transfer such a server to another cloud, e.g. from GCE to AWS. I then thought about using Docker, as a friend told me how easy it is to set up containers and to move them from one server to another.
I then read a lot about docker but stumbled upon a few things I wondered about.
In my understanding, Docker just runs services like PHP, MySQL and the like, but doesn't hold data, right?
Where would I store all the data, like the database, nginx.conf, php.ini and all the files I want to serve with nginx (i.e. /var/www/)?
Are they stored on the host system? If so, wouldn't moving a Docker setup be no easier than moving a whole server?
Do I really gain an advantage from using Docker to serve a WordPress blog or another website using MySQL, and so on?
Thanks in advance
Your data is either stored on the host machine, or your data is attached to the Docker containers remotely (using a network-attached block device).
When you store your data on the host machine, you have a number of options.
The data can be 'inside' one of your containers (e.g. your mysql databases live inside your mysql container).
You can mount one or more directories from your host machine inside your containers. So then the data lives on your host.
You can create Docker volumes or Docker volume containers that are used to store your data. These volumes or volume containers are mounted inside the container with your application. The data then lives in directories managed by Docker.
For details of these options, see the Docker documentation on volumes.
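A short sketch of the second and third options (the container names, host path and password are placeholders):
# Option 2: bind-mount a host directory into the container
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret \
  -v /srv/mysql-data:/var/lib/mysql mysql:5.6

# Option 3: a named volume managed by Docker
docker volume create mysql-data
docker run -d --name db2 -e MYSQL_ROOT_PASSWORD=secret \
  -v mysql-data:/var/lib/mysql mysql:5.6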
The last option is that you mount remote storage to your docker containers. Flocker is one of the options you have for this.
At my work I've set up a host (i.e. server) that runs a number of services in docker containers. The data for each of these services 'lives' in a Docker data volume container.
This way, the data and the services are completely separated. That allows me to start, stop, upgrade and delete the containers that are running my services without affecting the data.
I have also made separate Docker containers that are started by cron and these back up the data from the data volume containers.
For mysql, the backup container connects to the mysql container and executes mysqldump remotely.
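A sketch of that pattern: a throwaway container on the same Docker network runs mysqldump against the mysql container (the network name, container name and credentials are placeholders; the redirect runs in the host shell, so the dump lands on the host):
docker run --rm --network app-net mysql:5.6 \
  sh -c 'exec mysqldump -h mysql -uroot -psecret --all-databases' \
  > /backups/all-databases.sql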
I can also run the (same) containers that are running my services on my development machine, using the data that I backed up from the production server.
This is useful, for instance, to test upgrading mysql from 5.6 to 5.7.
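Rehearsing that upgrade could look roughly like this (the names, password and dump path are placeholders):
# Start a fresh 5.7 container and load the production dump into it
docker run -d --name mysql57-test -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
# Wait for the server to finish initialising before loading the dump
docker exec -i mysql57-test sh -c 'exec mysql -uroot -psecret' \
  < /backups/all-databases.sql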
I'm planning to migrate my application stack to Docker. Let me describe the services I'm currently using:
HAProxy, which is used for SSL termination on all service's connections (HTTP and raw TCP connections), and forwards traffic to the services below.
Nginx, which serves static files, like updates and some information pages.
Node.js, which runs the main applications.
MySQL (MariaDB), the database used and shared by all the applications.
My question is about the database.
What's the proper way of running MariaDB in this case?
Install and run it inside my container, along with the other services?
Run the official image in a separate container, and link my container to it with the --link option of Docker's run command?
Does the first option have any disadvantage?
The TeamSpeak Docker container uses the second option, and that's what made me question the correct way of running the database in my case, though I feel more inclined to package all the services inside my own image.
Docker Philosophy: Split your application into microservices and use a container for each microservice.
In your case, I recommend a MariaDB container; using the official (library) image gives you easier update management, but feel free to use a custom image.
Then an HAProxy container, an nginx container and a Node.js container.
This way you divided your application into microservices, and you can upgrade, manage and troubleshoot them easier in an isolated environment.
If you are thinking about delivering your application to end users via Docker, a simple docker-compose file will do the trick for easily launching the required containers.
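A rough compose sketch for the stack described in the question (the images, ports and config paths are placeholders):
version: "2"
services:
  haproxy:
    image: haproxy:1.7
    ports:
      - "443:443"
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
  nginx:
    image: nginx:1.13
    volumes:
      - ./static:/usr/share/nginx/html:ro
  app:
    image: my-node-app      # your Node.js application image
  db:
    image: mariadb:10.1
    environment:
      MYSQL_ROOT_PASSWORD: secret
    volumes:
      - db-data:/var/lib/mysql
volumes:
  db-data: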
On CoreOS, the Kubernetes master processes (apiserver, kube-proxy, controller-manager and podmaster) run in Docker, while the kubelet process runs as a systemd process outside Docker.
Would it be recommended to run the master processes at v1.1+ and the kubelet at v1.0.3 together on the master host?
The reason I am asking is that CentOS Atomic Host ships with Kubernetes v1.0.3, but we would like to upgrade the master processes to v1.1+ by running them in Docker instead of as system services directly on the OS (CentOS intends to run all components as systemd services).
Thanks,
Andrej
I'm an advocate of running all Kubernetes services directly on the OS, so forgive me if my answer is very opinionated.
You have to ask yourself if running everything in a container makes sense at such a low level, considering that you have to mount so many libs from your host and can't benefit from systemd's journal while your services run in containers. In my case the benefit was not obvious.
On top of that, as you mentioned, running the kubelet inside a container is not 100% supported yet. Running Kubernetes as systemd services is also a totally valid pattern, technically speaking, so you shouldn't put off upgrades on the grounds that you can't run everything inside a container. However, you should not mix versions (1.0 and 1.1).