MySQL in container or different VM

I am running a MySQL database on a different VM (separate from the web server).
Because it's a separate VM, I can protect the database by granting access only to the web server and closing all ports other than 3306.
Now, with Docker, I could set up a LAMP server in one container and MySQL in another. How secure and scalable is this solution?
I am not sure how this kind of thing works with container services!

protect the database by granting access only to the web server and closing all ports other than 3306.
You can do that with Docker containers. Take a look at Docker's EXPOSE instruction and port publishing (-p/--publish).
I'm not sure what you mean by "scalable" here. See the documentation on scaling containers in general; it's usually not very difficult.
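For example, a minimal sketch (container and image names are illustrative): put both containers on a user-defined bridge network and don't publish the MySQL port to the host at all, so only the web container can reach the database:

    # Create an isolated bridge network for the stack
    docker network create lamp-net

    # No -p flag: port 3306 is reachable only from containers on lamp-net,
    # never from the outside
    docker run -d --name db --network lamp-net \
        -e MYSQL_ROOT_PASSWORD=secret mysql:8

    # The web container (my-lamp-image is a placeholder) reaches the
    # database via the hostname "db"
    docker run -d --name web --network lamp-net -p 80:80 my-lamp-image

Set up this way, the container arrangement is arguably tighter than the two-VM setup: 3306 is never opened on the host at all.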

Related

Docker MySQL Container Incoming Connection from Gateway Address Instead of Source Container

I have a basic stack of containers on their own user-defined network with a subnet of 172.21.0.0/16. My MySQL container's address is 172.21.0.2 and the PHP/Apache container's address is 172.21.0.3.
Until this point, MySQL was configured to permit incoming connections from PHP at 172.21.0.3, which made perfect sense. Now it seems as though the connections are coming from 172.21.0.1, the gateway, and this doesn't make much sense to me. My (basic to intermediate) understanding suggests that the gateway should only be used when traffic is destined for an address outside its local network - but obviously in this case MySQL and PHP/Apache are on the same network.
Two of our environments have started acting like this, and while it's a simple fix to permit connections from the gateway address, I'm hesitant to proceed without an understanding as to what has happened and why. This also seems to add extra delay to database queries within the application.
Logging in to an affected environment via phpMyAdmin displays "User: root@172.21.0.1" in the "Database Server" information pane. An unaffected environment displays "root@phpmyadmin_1.test_default" (user@[container].[network]).
Both environments are using the exact same images and the same version of Docker - 18.06.1-ce. Other than the Docker version upgrade, nothing has changed with regard to the docker-compose.yml I was using.
Why has my environment started acting like this? Should I prefer the connection coming in from the actual source, and not via the gateway? How can I return to that way of operation?
Thank you for any guidance or knowledge.
For anyone else who lands in a similar rut: I'm of the mind that this was caused by an upgrade of Docker from 18.03.1-ce to 18.06.1-ce via Docker's own repository. Performing a server reboot after this operation has (for now) restored sense to the networking of the stack.
The connection to my MySQL container is now correctly coming from the PHP/Apache container and not from the gateway address of the bridge network. The lag this introduced is gone, and I'm able to remove the privilege associated with the gateway address.
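If anyone wants to verify where connections appear to originate before and after such a fix, something like this should show it (container and network names are examples taken from the question):

    # Show the addresses Docker assigned on the user-defined network
    docker network inspect test_default --format '{{json .Containers}}'

    # Ask MySQL who is connected and from which host
    docker exec -it mysql_1 \
        mysql -uroot -p -e 'SELECT user, host FROM information_schema.processlist;'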

Why do I need to use localhost to connect to a Docker DB?

I'm currently in the process of learning Docker (using it on Windows and Linux). There is one thing I cannot understand, and I think it is better explained with an example.
I run a MySQL container, expose its ports, and then connect to it via a MySQL client such as MySQL Workbench. On Linux/Ubuntu I am able to connect to the DB running inside the container via its IP address, which I obtain by running "docker inspect CONTAINER_NAME". This makes perfect sense to me, and it is how I would connect to a database running on a server.
However, on Windows this approach doesn't work. I actually have to connect to localhost instead of the container's IP address. I understand that this has something to do with the fact that on Windows containers run inside a Linux VM, but in that case I would expect to use the VM's IP address to connect.
Why does this work the way it works? I struggle to understand it (I'm still a junior developer), and I would rather understand how it works than just memorise commands/IP addresses for different OSes.
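For what it's worth, the usual explanation is that publishing a port maps it onto the machine running the Docker daemon, and Docker for Windows additionally forwards the VM's published ports to localhost. A sketch (names are examples):

    # Publish the container's 3306 onto the Docker host
    docker run -d --name mysql-db -p 3306:3306 \
        -e MYSQL_ROOT_PASSWORD=secret mysql:5.7

    # On Windows, connect through the forwarded port
    mysql -h 127.0.0.1 -P 3306 -uroot -p

    # On Linux you can also reach the container's own address directly
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mysql-db

The container IP reported by docker inspect lives on a bridge network inside the VM on Windows, which is why it isn't routable from the Windows host.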

Understanding Docker for providing services like web, MySQL or similar

I have several questions regarding Docker.
First my project:
I have a blog on a shared host and want to move it to the cloud, to have the whole server side in my own hands and to be able to scale the server as needed.
My first intent was to set up a nice Ubuntu 14.04 LTS server with nginx, PHP 7 and MySQL. But I think it's not that easy to transfer such a server to another cloud, e.g. from GCE to AWS. I then thought about using Docker, as a friend told me how easy it is to set up containers and move them from one server to another.
I then read a lot about Docker, but a few things left me wondering.
In my understanding, Docker runs just services like PHP, MySQL or similar, but doesn't hold data, right?
Where would I store all the data, like the database, nginx.conf, php.ini and all the files I want to serve with nginx (i.e. /var/www/)?
Are they stored on the host system? If so, wouldn't moving a Docker setup be no easier than moving a whole server?
Do I really gain an advantage from using Docker to serve a WordPress blog or another website using MySQL and so on?
Thanks in advance
Your data is either stored on the host machine, or your data is attached to the Docker containers remotely (using a network-attached block device).
When you store your data on the host machine, you have a number of options.
The data can be 'inside' one of your containers (e.g. your mysql databases live inside your mysql container).
You can mount one or more directories from your host machine inside your containers. So then the data lives on your host.
You can create Docker volumes or Docker volume containers to store your data. These volumes or volume containers are mounted inside the container running your application, so the data lives in directories managed by Docker.
For details of these options, see the Docker documentation on volumes; a short sketch of the local options follows below.
The last option is to mount remote storage into your Docker containers. Flocker is one of the options you have for this.
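To make the local options concrete, a sketch with illustrative names and paths:

    # Option 2: bind-mount host directories into the container;
    # the data and config stay on the host filesystem
    docker run -d --name web \
        -v /srv/www:/var/www \
        -v /srv/nginx.conf:/etc/nginx/nginx.conf:ro \
        nginx

    # Option 3: a named Docker volume, managed by Docker itself
    docker volume create mysql-data
    docker run -d --name db \
        -v mysql-data:/var/lib/mysql \
        -e MYSQL_ROOT_PASSWORD=secret mysql:5.6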
At my work I've set up a host (i.e. a server) that runs a number of services in Docker containers. The data for each of these services lives in a Docker data volume container.
This way, the data and the services are completely separated. That allows me to start, stop, upgrade and delete the service containers without affecting the data.
I have also made separate Docker containers, started by cron, that back up the data from the data volume containers.
For MySQL, the backup container connects to the MySQL container and executes mysqldump remotely (a sketch follows below).
I can also run the (same) service containers on my development machine, using the data that I backed up from the production server.
This is useful, for instance, to test upgrading MySQL from 5.6 to 5.7.
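A rough sketch of such a backup job (network name, database host and credentials are placeholders, not my exact setup): cron starts a short-lived container that runs mysqldump against the MySQL container over the network and writes the dump into a bind-mounted directory:

    # crontab entry on the host: nightly dump at 03:00
    # 0 3 * * * /usr/local/bin/backup-mysql.sh

    # backup-mysql.sh
    docker run --rm --network my-net -v /srv/backups:/backups mysql:5.6 \
        sh -c 'mysqldump -h db -uroot -psecret --all-databases > /backups/dump-$(date +%F).sql'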

Create a Docker container with MySQL/MariaDB database

I'm planning to migrate my application stack to Docker. Let me describe the services I'm currently using:
HAProxy, which is used for SSL termination on all the services' connections (HTTP and raw TCP), and forwards traffic to the services below.
Nginx, which serves static files, like updates and some information pages.
Node.js, which runs the main applications.
MySQL (MariaDB), the database used and shared by all the applications.
My question is about the database.
What's the proper way of running MariaDB in this case?
Install and run it inside my container, along with the other services?
Run the official image in a separate container, and link my container to it with the --link option of Docker's run command?
Does the first option have any disadvantage?
The TeamSpeak Docker container uses the second option, and that's what made me question the correct way of running the database in my case; I still feel more inclined to package all the services inside my own image.
Docker philosophy: split your application into microservices and use a container for each microservice.
In your case, I recommend a MariaDB container. Using the official (library) image gives you easier update management, but feel free to use your own custom image.
Then an HAProxy container, an nginx container and a Node.js container.
This way you have divided your application into microservices, and you can upgrade, manage and troubleshoot them more easily, each in an isolated environment.
If you are thinking about delivering your application to end users via Docker, a simple docker-compose file will do the trick for launching the required containers, as sketched below.
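A minimal sketch of such a compose file (image names, build paths and the password are placeholders; haproxy and nginx would still need their real config files mounted in):

    # Write a minimal compose file, then bring the stack up
    cat > docker-compose.yml <<'EOF'
    version: "3"
    services:
      db:
        image: mariadb:10.3          # official library image
        environment:
          MYSQL_ROOT_PASSWORD: secret
        volumes:
          - db-data:/var/lib/mysql   # data survives container upgrades
      app:
        build: ./app                 # your Node.js application
        depends_on: [db]             # reachable from app as hostname "db"
      nginx:
        image: nginx
      haproxy:
        image: haproxy
        ports:
          - "443:443"
    volumes:
      db-data:
    EOF
    docker-compose up -d

Note that containers on the same compose network can reach each other by service name, which replaces the legacy --link option mentioned in the question.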

Gear to gear connection (Please read the full description first)

I have checked almost all the solutions, both in the OpenShift forum and here on Stack Overflow, but couldn't solve the problem.
Here is the situation:
I have a PHP server with load balancing in one gear.
I have a second gear for the MySQL server along with phpMyAdmin. At present OpenShift does not support load balancing for phpMyAdmin, so my second gear does not have any scaling feature.
Now I want to host a PHP app in the first gear and the database in the second gear. How do I connect them internally (ideally without port forwarding)? Unfortunately, I need all the commands from beginning to end.
Thank you.
You should just add the MySQL cartridge to your scaled application. It will still put the MySQL database on its own gear, but it will be accessible from your scaled application using the standard MySQL environment variables. You can view those variables by SSHing into your application and running env | grep -i mysql. If you decide to run your own second gear for the MySQL database (you still had to install a web cartridge anyway to do that, right?), then you will either have to use port forwarding for direct access, or you will have to write an API on that server that lets your application access the MySQL database.
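As a sketch, the output typically looks like this (the variable names follow OpenShift's MySQL cartridge convention; the values shown are placeholders):

    $ rhc ssh myapp
    > env | grep -i mysql
    OPENSHIFT_MYSQL_DB_HOST=127.0.0.1
    OPENSHIFT_MYSQL_DB_PORT=3306
    OPENSHIFT_MYSQL_DB_USERNAME=admin
    OPENSHIFT_MYSQL_DB_PASSWORD=secret

Your PHP code would then read the connection details with getenv('OPENSHIFT_MYSQL_DB_HOST') and so on, instead of hard-coding the other gear's address.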