Docker: split mysql databases into different data volume containers? - mysql

I have the following Docker containers:
web (nginx)
db (mysql)
The web container is linked to the db container. All standard stuff.
For data persistence, I want to take the data volume container approach.
I want to be able to run several websites using these 2 containers as the main application containers (well, technically, the web container runs the main user-facing application and the db sits behind that).
Let's say I have siteA, siteB, and siteC using databases A, B, and C respectively in mysql.
I would like to be able to partition these site's database data into 3 different data volume containers (dataA, dataB, dataC) for portability, and bring them together in a single deployment (one host running web, db, dataA, dataB, dataC all linked appropriately) when needed.
Is there a way to partition the separate mysql databases into their own data volume containers for portability? AFAIK, mysql stores all of its database data in /var/lib/mysql in some fashion that is not transparent in terms of which databases it is storing.
For example, it would be nice if the different databases being stored in mysql were mapped to known directories - in this case, /var/lib/mysql/A, /var/lib/mysql/B, and /var/lib/mysql/C. That way, all I would have to do is persist those directories and mount them in the db container. I don't think this is the case, though.

Why not just run multiple instances of the db container, mapping the directories as needed?
docker run -d -v /var/lib/mysql/A:/var/lib/mysql --name dbA my_docker/my_awesome_mysql
docker run -d -v /var/lib/mysql/B:/var/lib/mysql --name dbB my_docker/my_awesome_mysql
docker run -d -v /var/lib/mysql/C:/var/lib/mysql --name dbC my_docker/my_awesome_mysql
This would help if you ever needed to move dbA only to another host.
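Moving dbA then becomes an archive-and-copy job on the host. A minimal sketch, assuming the host-side layout from the commands above (the docker and scp steps are left as comments because they depend on your hosts; ./mysql-data stands in for /var/lib/mysql so the sketch can run anywhere):

```shell
# Stop the container first so the data files are consistent:
#   docker stop dbA
# Archive dbA's host-side data directory:
mkdir -p ./mysql-data/A
tar czf dbA-data.tar.gz -C ./mysql-data A
ls -lh dbA-data.tar.gz
# Copy to the new host, unpack, and start dbA there, e.g.:
#   scp dbA-data.tar.gz newhost:/tmp/
#   ssh newhost 'tar xzf /tmp/dbA-data.tar.gz -C /var/lib/mysql'
#   ssh newhost 'docker run -d -v /var/lib/mysql/A:/var/lib/mysql --name dbA my_docker/my_awesome_mysql'
```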

Related

Implicit per-container storage in Docker with MySQL

I have a container with MySQL that is configured to start with "-v /data:/var/lib/mysql" and therefore persists data between container restarts in a separate folder. This approach has some drawbacks; in particular, the user may not have write permissions for the specified directory. How exactly should the container be reconfigured to use Docker's implicit per-container storage, saving the MySQL data under /var/lib/docker/volumes so it can be reused after the container is stopped and started again? Or is it better to consider other persistence options?
What you show is called bind mounts.
What you request is called volumes.
Just create a volume and connect it:
docker volume create foo
docker run ... -v foo:/var/lib/mysql <image> <command>
And you're done! You can attach it to other containers as needed.
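The same named-volume approach can also be written down in a Compose file, which keeps the volume definition next to the service that uses it. A minimal sketch (service name, volume name, and password are illustrative):

```yaml
# docker-compose.yml sketch: "db" keeps its data in the named volume "foo",
# which Docker manages under /var/lib/docker/volumes
services:
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - foo:/var/lib/mysql
volumes:
  foo: {}
```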

Two mariadb instance using same persistent storage in docker

I want to start two MariaDB pods with the same persistent storage; at any point in time I should be able to access both instances, and the data should stay in sync between them.
I am trying to start two MariaDB instances using the same persistent volume in Kubernetes. I am able to start both instances. I am performing the steps below:
Creating a persistent volume
Creating a persistent volume claim
Starting mariadb-instance-1 using that claim name.
Starting mariadb-instance-2 using the same storage claim name.
Creating two services so both instances can be accessed from outside.
I am able to access instance-1, but when I try to access instance-2 it gives me an error: MySQL Error: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock'.
Please find the attached Dockerfiles.
Any help will be appreciated.
Please find below the Git repo with the db and storage YAML files I used to create the deployment.
https://github.com/chandan493/db-as-docker
You cannot run two MariaDB engines on the same storage, and if I understood you correctly, that is what you expected. Even if you mounted an RWX volume on two pods, pointing /var/lib/mysql of two separate MariaDB pods at the same place would result in a conflict between the database engines. For MariaDB clustering, look up MariaDB Galera, an almost fully synchronous replication system for MariaDB. But you'll need three database engines running for it to make sense.

Is MariaDB data lost after Docker setting change?

I've setup a basic MariaDB instance running in Docker - basically from starting the container using the Kitematic UI, changing the settings, and letting it run.
Today, I wanted to make a backup, so I used Kitematic to change the exposed port so I could access the database from another machine and run automated backups. After changing the port in Kitematic, it seems to have started a fresh MariaDB container (i.e. all my data seems to be gone).
Is that the expected behavior? And, more importantly, is there any way to recover the seemingly missing data, or has it been completely removed?
Also, if the data really is removed, what is the preferred way to change settings, such as the exposed ports, without losing all changes? docker commit?
Notes:
running docker 1.12.0 beta for OS X
docker ps -a shows the database status as "Up for X minutes" when the original had been up for several days
Thanks in advance!
UPDATE:
It looks like the recommended procedure to retain data (without creating a volume or similar) is to:
commit changes (e.g. docker commit <containerid> <name/tag>)
take the container offline
update settings such as exposed port or whatever else
run the image with committed changes
...taken from this answer.
Yes, this is expected behavior. If you want your data to be persistent, you should mount a volume from the host (via the --volume option of docker run) or from another container, and store your database files on that volume.
docker run --volume /path/on/your/host/machine:/var/lib/mysql mariadb
Losing changes is actually a core feature of containers, so it cannot be avoided. This way you can be sure that every docker run gives you a fresh environment without any leftover changes. If you want changes to be permanent, make them in your image's Dockerfile, not in the container itself.
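For example, a settings change such as a custom MySQL configuration belongs in the Dockerfile so it survives the container being re-created. A sketch (the config file name is hypothetical):

```dockerfile
FROM mariadb:10.6
# bake the settings change into the image instead of the running container
COPY my-custom.cnf /etc/mysql/conf.d/
# the data itself still belongs on a volume, not in the image
VOLUME /var/lib/mysql
```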
For more information please visit official documentation: https://docs.docker.com/engine/tutorials/dockervolumes/.
It looks like you don't mount a container volume onto a host path. You can read about volumes and storing data in containers here.
You need to run the container with the volume option:
$ docker run --name some-mariadb -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mariadb:tag
where /my/own/datadir is directory on host machine

Dockerfile volume with database - using volume for mutable user-servicable parts

This is taken from the official docker website Dockerfile best practices ..
VOLUME
The VOLUME instruction should be used to expose any database storage
area, configuration storage, or files/folders created by your docker
container. You are strongly encouraged to use VOLUME for any mutable
and/or user-serviceable parts of your image.
What is meant by using a volume for any mutable and user-serviceable parts of the image? Are there times when I should or shouldn't use a volume for databases? If so, why? Is this where you mount the actual data contents of the database separately from the Docker container?
Not a complete answer, but I found an example which might help. From the book "Build your own PAAS with Docker" by Oskar Hane, where he creates a container used only to host files for other containers, such as a MySQL container:
There is a VOLUME instruction for the Dockerfile, where you can define which directories to expose to other containers when this data volume container is added using --volumes-from attribute. In our data volume containers, we first need to add a directory for MySQL data. Let's take a look inside the MySQL image we will be using to see which directory is used for the data storage, and expose that directory to our data volume container so that we can own it:
RUN mkdir -p /var/lib/mysql
VOLUME ["/var/lib/mysql"]
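Putting those two instructions into a complete (illustrative) data-volume-container Dockerfile might look like this:

```dockerfile
# data-only image: it runs no service itself, it just defines the volume
FROM busybox
RUN mkdir -p /var/lib/mysql
VOLUME ["/var/lib/mysql"]
```

A container created from this image is then attached to the MySQL container with --volumes-from, e.g. docker run -d --volumes-from dataA mysql (the container name dataA is illustrative).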

Docker containers slow after restart in Azure VM

I'm experiencing some weirdness with docker.
I have an Ubuntu server VM running in Windows Azure.
If I start a new docker container for e.g. Wordpress like so:
sudo docker run --name some-wordpress --link some-mysql:mysql -p 80:80 -d wordpress
everything works nicely; I get a reasonably snappy site considering the low-end VM settings.
However, if I reboot the VM, and start the containers:
sudo docker start some-mysql
sudo docker start some-wordpress
The whole thing runs very slowly, the response time for a single page gets up to some 2-4 seconds.
Removing the containers and starting new ones makes everything run normally again.
What can cause this?
I suspect it has to do with disk usage. Does the MySQL container use local disk for storage? When you restart an existing Docker container, you reuse the existing volume, normally stored in a subfolder of /var/lib/docker, whereas a new container creates a new volume.
I found a few search results saying that Linux on Azure doesn't handle "soft" reboots well and that things don't get reconnected as they should. A "hard" reboot supposedly fixes that.
Not sure if it helps, my Docker experience is all from AWS.
Your containers are running on a disk which is stored in blob storage with a maximum of 500 IOPS per disk. You can avoid hitting the disk (not very realistic with MySQL), add more disks with striping (RAID0), or use SSDs (D-series in Azure). And depending on your use case, you might also rebase Docker completely onto ephemeral storage (/dev/sdb) - here's how for CoreOS. BTW, there are some (non-Docker) MySQL performance suggestions on azure.com.
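To check whether the disk really is the bottleneck after a reboot, a crude sequential-write test inside the VM can help; compare the throughput dd reports before and after the reboot. A sketch (sizes are illustrative; on Linux, adding oflag=direct to dd bypasses the page cache for a more honest number):

```shell
# write 16 MiB and let dd report the throughput on stderr
dd if=/dev/zero of=./ddtest.bin bs=1M count=16
```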