I have several questions regarding Docker.
First my project:
I have a blog on a shared host and want to move it to the cloud, so that I have the whole server side in my own hands and can scale the server to my needs.
My first intention was to set up a nice Ubuntu 14.04 LTS server with nginx, PHP 7 and MySQL. But I think it's not that easy to transfer such a server to another cloud, e.g. from GCE to AWS. I then thought about using Docker, as a friend told me how easy it is to set up containers and how easy it is to move them from one server to another.
I then read a lot about Docker, but stumbled upon a few things I wondered about.
In my understanding, Docker just runs services like PHP, MySQL and similar, but doesn't hold data, right?
Where would I store all the data, like the database, nginx.conf, php.ini and all the files I want to serve with nginx (i.e. /var/www/)?
Are they stored on the host system? If so, wouldn't moving a Docker setup be no easier than moving a whole server?
Do I really gain an advantage from using Docker to serve a WordPress blog or another website using MySQL and so on?
Thanks in advance
Your data is either stored on the host machine, or it is attached to the Docker containers remotely (using a network-attached block device).
When you store your data on the host machine, you have a number of options.
1. The data can live 'inside' one of your containers (e.g. your MySQL databases live inside your mysql container).
2. You can mount one or more directories from your host machine inside your containers; the data then lives on your host.
3. You can create Docker volumes or Docker volume containers to store your data. These volumes or volume containers are mounted inside the container running your application, and the data lives in directories managed by Docker.
For details of these options, see the Docker documentation on managing data in containers; a few sketches follow below.
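As a rough illustration of these three options (the image tag, paths and container names here are placeholders, not taken from the answer):

```
# 1. Data inside the container: nothing extra to do, but the data is
#    removed together with the container.
docker run -d --name mysql1 -e MYSQL_ROOT_PASSWORD=secret mysql:5.6

# 2. Bind-mount a host directory into the container; the data lives on the host.
docker run -d --name mysql2 -e MYSQL_ROOT_PASSWORD=secret \
  -v /srv/mysql-data:/var/lib/mysql mysql:5.6

# 3. Named Docker volume; the data lives in a directory managed by Docker.
docker volume create mysql-data
docker run -d --name mysql3 -e MYSQL_ROOT_PASSWORD=secret \
  -v mysql-data:/var/lib/mysql mysql:5.6
```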
The last option is to mount remote storage into your Docker containers. Flocker is one of the options you have for this.
At my work I've set up a host (i.e. a server) that runs a number of services in Docker containers. The data for each of these services 'lives' in a Docker data volume container.
This way, the data and the services are completely separated. That allows me to start, stop, upgrade and delete the containers that are running my services without affecting the data.
I have also made separate Docker containers, started by cron, that back up the data from the data volume containers.
For MySQL, the backup container connects to the mysql container and executes mysqldump remotely.
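A minimal sketch of such a remote dump, assuming the backup container and a container named 'mysql' share a user-defined network called 'backend' (the names and password are placeholders):

```
# MYSQL_PWD is read by the mysqldump client; the redirect runs in the host
# shell, so the dump ends up on the host, not in the short-lived container.
docker run --rm --network backend -e MYSQL_PWD=secret mysql:5.6 \
  mysqldump -h mysql -uroot --all-databases > /backups/all-databases.sql
```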
I can also run the (same) containers that are running my services on my development machine, using the data that I backed up from the production server.
This is useful, for instance, to test upgrading MySQL from 5.6 to 5.7.
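Loading such a dump into a fresh MySQL 5.7 container on the development machine might look like this (container name and password are again placeholders):

```
docker run -d --name mysql-dev -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
# ...once the server has finished initialising:
docker exec -i mysql-dev mysql -uroot -psecret < /backups/all-databases.sql
```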
Related
Yesterday, on my local machine, I finished my app: a Drupal-and-MySQL-on-Docker backend with a Next.js TypeScript front end. I have pushed my front end up to the Linode server I have running, and it works without any issues using GitHub push and pull. Is it this easy to push and pull Docker containers from Docker Hub, or am I doing this all wrong?
I already have Docker set up on my Linode server. I docker pushed my two containers, MySQL and Drupal, up to Docker Hub from my local machine, and then docker pulled them onto my Linode server. They were pulled as images, so I ran them as containers, and I now have both of them running on my Linode server. I have no clue whether this transfers all of the data, or whether I have to transfer other files over as well to make this work at all.
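For reference, the push/pull workflow described here usually amounts to something like the following ('youruser' and the image names are placeholders):

```
docker tag my-drupal youruser/my-drupal:latest
docker push youruser/my-drupal:latest

# On the Linode server:
docker pull youruser/my-drupal:latest
docker run -d --name drupal youruser/my-drupal:latest
```

Note that push and pull move images, i.e. filesystem layers, not running containers; data kept in volumes (which the official mysql and drupal images typically use) is not carried along.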
The problem is, I don't have any clue where to go from here, and I'm pretty sure this isn't right at all. On my local machine I had to have my MySQL container running, otherwise my Drupal container wouldn't run; but they can both run independently on the server as I have things now.
I also tested just building the MySQL and Drupal containers from scratch on the server, and they worked the same way as they did on my local machine. I just have no clue how to make them work the same after pulling them from the hub.
Is there a way to link these two pulled containers in the same way, so that the Drupal container runs against the MySQL container? Or am I completely misunderstanding how all of this works?
Also, I'm not even sure that all the files, configs and modules from my local containers were pushed up to the hub. Does pushing and pulling the containers do this, or am I again misunderstanding how this works?
I have a Go web application backed by a MySQL database. I need to deploy that web application on a number of servers provided by different vendors, so I am going to use Docker images to deploy it. What I need to know is: is it okay to keep the MySQL server in the same Docker image, or should I make a separate Docker image to deploy MySQL on those servers?
A rule of thumb with Docker which you should follow is "one application, one container". It's always best practice to have separate containers for the different parts of your application. The main reason is that if, down the line, you want to replace MySQL with some NoSQL database, you can simply kill the container and spin up a new one without worrying about it affecting your Go application.
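A minimal sketch of that two-container setup, with placeholder image and variable names:

```
docker network create appnet
docker run -d --name mysql --network appnet \
  -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=app mysql:5.7
docker run -d --name webapp --network appnet -p 8080:8080 \
  -e DB_HOST=mysql my-golang-app
# The Go app reaches the database at host 'mysql' over the shared network.
```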
I am running a MySQL database on a different VM (separate from the web server).
Because of the separate VM, I can protect the database by giving access only to the web server and closing all ports other than 3306.
Now, with Docker, I can set up a LAMP server in one container and MySQL in another. How secure and scalable is this solution?
I am not sure how this kind of thing works with container services!
> protect the database by giving access only to the web server and closing all ports other than 3306.
You can do that with Docker containers too. Take a look at Docker's EXPOSE instruction and port publishing.
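A sketch of that isolation with containers, using a user-defined network and placeholder names:

```
docker network create backend
# No -p flag: port 3306 is not published on the host at all.
docker run -d --name db --network backend \
  -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
docker run -d --name web --network backend -p 80:80 my-lamp-image
# Only containers on the 'backend' network (here, 'web') can reach 'db' on 3306.
```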
Not sure what you mean by "scalable" here. See the documentation for scaling containers in general. Usually it's not very difficult.
I'm planning to migrate my application stack to Docker. Let me describe the services I'm currently using:
- HAProxy, which is used for SSL termination on all the services' connections (HTTP and raw TCP), and forwards traffic to the services below.
- Nginx, which serves static files, like updates and some information pages.
- Node.js, which runs the main applications.
- MySQL (MariaDB), the database used and shared by all the applications.
My question is about the database.
What's the proper way of running MariaDB in this case?
1. Install and run it inside my container, along with the other services?
2. Run the official image in a separate container, and link my container to it with the --link option of Docker's run command (sketched after this question)?
Does the first option have any disadvantage?
The TeamSpeak Docker container uses the second option, which is what made me question the correct way of running the database in my case, though I feel more inclined to package all the services inside my own image.
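For reference, option 2 might look roughly like this (placeholder names; note that --link is a legacy feature, with user-defined networks being the more current alternative):

```
docker run -d --name mariadb -e MYSQL_ROOT_PASSWORD=secret mariadb:10.1
docker run -d --name app --link mariadb:db my-app-image
# Inside 'app', the database is reachable under the alias 'db'.
```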
Docker Philosophy: Split your application into microservices and use a container for each microservice.
In your case, I recommend a MariaDB container: using the official (library) image gives you easier update management, but feel free to use your own custom image.
Then an HAProxy container, an nginx container and a Node.js container.
This way you have divided your application into microservices, and you can upgrade, manage and troubleshoot them more easily in an isolated environment.
If you are thinking about delivering your application to end users via Docker, a simple docker-compose file will do the trick for easily launching the required containers.
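A minimal docker-compose.yml sketch for a stack like the one described; the images, ports and mounts are assumptions, not a tested configuration:

```
cat > docker-compose.yml <<'EOF'
version: "2"
services:
  haproxy:
    image: haproxy:1.6
    ports:
      - "443:443"
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
  nginx:
    image: nginx:stable
  app:
    image: my-node-app
  mariadb:
    image: mariadb:10.1
    environment:
      MYSQL_ROOT_PASSWORD: secret
    volumes:
      - dbdata:/var/lib/mysql
volumes:
  dbdata:
EOF
docker-compose up -d
```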
I'm looking to scale MySQL on a swarm that could potentially involve multiple servers. What is the best way to ensure that the data stays in sync between the containers on the different servers?
I realise that in a standard configuration without Docker I'd have to set up replication. I'm wondering if there is a way to do it that is more suitable and easier to deploy with Docker.
Docker Compose and Docker Swarm are great tools for scaling in Docker environments, but currently MySQL database scaling is not really achievable with docker-compose or Docker Swarm. Reasons:
- Scaling is designed for stateless containers
- A master-slave configuration is not possible in Docker Swarm
- No replication method is available over the Docker overlay network
Maybe in the future we will have the technology to enable RDBMS scaling.
This is what database replication is for.
https://dev.mysql.com/doc/refman/5.7/en/replication.html
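Starting a master and a replica as containers might look like this (placeholder names; the replication itself still has to be configured with CHANGE MASTER TO and so on, exactly as on plain servers):

```
docker network create dbnet
docker run -d --name mysql-master --network dbnet \
  -e MYSQL_ROOT_PASSWORD=secret \
  mysql:5.7 --server-id=1 --log-bin=mysql-bin
docker run -d --name mysql-replica --network dbnet \
  -e MYSQL_ROOT_PASSWORD=secret \
  mysql:5.7 --server-id=2
```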
You can try a MariaDB Galera cluster set up under Docker; however, you need additional steps to provision it, a load balancer, and a node that monitors the state of your containers (it is a lot of work and not easy).
And if you have multiple nodes in Docker Swarm, you need to set up an NFS server so that Docker can share files between the nodes.
There is a tool called ClusterControl, with free and paid versions:
https://severalnines.com/product/clustercontrol/docker-mysql-database-management