Move MySQL /var/lib/mysql to shared volume - mysql

I'm running a MySQL 8.0.18 container in Docker on a Synology NAS. I'm just using it to support another container (MediaWiki) on that box. MySQL has one volume mounted at the path /var/lib/mysql. I would like to move that to a shared volume so I can access it with File Station and also back it up periodically. How can I move it to the docker share shown below without breaking MySQL?
Here are the available shares on the Synology NAS.
Alternatively, is there a way I can simply copy /var/lib/mysql to the shared docker folder? That would work as well for periodic backups.
thanks,
russ
EDIT: Showing the result after following Zeitounator's plan. Before running the docker command (step 2) I created the mediawiki_mysql_backups and 12-29-2019 folders in File Station. After running steps 2 and 3, all the files from MySQL are here and I now have a nice backup!

1. Stop your current container to make sure MySQL is not running.
2. Create a dummy temp container where you mount the 2 needed volumes. I tend to use busybox for this kind of task => docker run -it --rm --name temp_mysql_copy -v mediaviki-mysql:/old-mysql -v /path/to/ttpe/docker:/new-mysql busybox:latest
3. Copy all files => cp -a /old-mysql/* /new-mysql/
4. Exit the dummy container (which will clean up after itself if you used the command above).
5. Create a new container with the mysql:8 image, mounting the new folder at /var/lib/mysql (you probably need to do this in your Synology Docker GUI).
6. If everything works as expected, delete the old mediaviki-mysql volume.
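For the periodic-backup part of the question, copying the files with MySQL stopped works, but a logical dump taken while the server is running is often more convenient. A minimal sketch, assuming the new container is named mediawiki-mysql and the shared folder is mounted at /volume1/docker (both names are illustrative, not from the post above):

# Dump all databases from the running container into the shared docker folder.
# Container name, credentials and target path are assumptions for illustration.
docker exec mediawiki-mysql \
  sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' \
  > /volume1/docker/mediawiki_mysql_backups/all-databases-$(date +%F).sql

Either way you end up with files that File Station can see and that the NAS can back up on a schedule.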

Related

Restore Docker MySQL database

I'm new to Docker and learning about it. I'm using a MySQL Docker container and I have created two databases with populated tables.
I've pushed the image to Docker Hub so I can use it on another device, but whenever I pull my MySQL repository and run it, I don't see any of my databases. I think I'm doing it the wrong way.
Mysql Databases from the pulled image
How can I push the MySQL image with its two databases to Docker Hub the right way?
Rather than have the database included in your image, you can have SQL scripts in your image that create the database and populate it with initial data.
If you put files ending in .sh, .sql or .sql.gz in the /docker-entrypoint-initdb.d directory in the image, they will be run the first time the container is run.
If you have an SQL script to initialize your database, you can include it in the image by having a Dockerfile like this
FROM mysql:latest
COPY initialize-database.sql /docker-entrypoint-initdb.d/
Then you can run the container and map /var/lib/mysql to a docker volume that will store the database like this
docker run --rm -e MYSQL_ROOT_PASSWORD=password -v mysql:/var/lib/mysql <my-image-name>
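One way to check that the init scripts took effect is to build the image, start it once, and list the databases. A small sketch, where the image tag, container name and password are placeholders of my own choosing:

# Build the image containing the init script and start it with a named volume.
docker build -t my-mysql-init .
docker run -d --name db -e MYSQL_ROOT_PASSWORD=password \
  -v mysql:/var/lib/mysql my-mysql-init

# Once initialization has finished, the databases created by
# initialize-database.sql should show up here.
docker exec db mysql -uroot -ppassword -e "SHOW DATABASES;"

Keep in mind the scripts in /docker-entrypoint-initdb.d only run when the data directory is empty, so reusing an existing volume skips the initialization step.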

Docker toolbox with mysql container problem on Windows 10 Home (and Pro)

I'm unable to run MySQL containers made from MySQL images with the database volume mapped to a folder on my host machine.
It doesn't matter whether the host folder is empty or already contains database files. I know that Docker Toolbox can only mount volumes on Windows from under c:\Users\, so my test folder is there.
I tried different (official and unofficial) MySQL images from 5.5 to latest with no result. Any time /var/lib/mysql in the container points to a folder on my host machine (c:\Users\someuser\testfolder), the container fails on startup with an InnoDB error ("InnoDB: Operating system error number 22 in a file operation" or "InnoDB: File ./ib_logfile0: 'aio write' returned OS error 122").
I tried modifying the container's /etc/my.cnf (under the [mysqld] section, using "docker cp") to add "innodb_use_native_aio=OFF" or "innodb_use_native_aio=0", and I even tried "docker run" with "--user 1000:50", with no result either.
As soon as I remove the mount from the container's /var/lib/mysql to my host folder, the container runs normally.
There are many similar questions, but none has a complete step-by-step solution for running a MySQL container with Docker Toolbox under Windows 10 (Home & Pro) so that the container works with an existing database on a host volume.
It took me a while to get an answer, but finally everything worked! For those who are new to Docker and have problems mounting the MySQL folder to the host, here is a short guide. Please note I chose the bitnami/mysql image for my experiments (for other images the folders may differ).
1. Create a folder c:\Users\[YourAccount]\MySQLData for MySQL data.
2. Create a folder c:\Users\[YourAccount]\MySQLConf for a custom MySQL config file.
3. Create a custom MySQL config file c:\Users\[YourAccount]\MySQLConf\my_custom.cnf and add two lines to it:
[mysqld]
innodb_use_native_aio=0
4. Now create and run the container, mounting your custom config and data folder to it:
docker run -d --name mysql -e ALLOW_EMPTY_PASSWORD="yes" \
  -v //c/Users/[YourAccount]/MySQLData:/bitnami/mysql/data \
  -v //c/Users/[YourAccount]/MySQLConf/my_custom.cnf:/opt/bitnami/mysql/conf/my_custom.cnf:ro \
  bitnami/mysql:latest
Hooray!
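If the container comes up cleanly, you can confirm the setting was picked up; a quick check, reusing the container name mysql and the empty root password from the run command above:

# Look for AIO-related messages or errors in the startup log.
docker logs mysql 2>&1 | grep -i aio

# Confirm MySQL is running with native AIO disabled.
docker exec -it mysql mysql -uroot -e "SHOW VARIABLES LIKE 'innodb_use_native_aio';"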

docker best way to run mysql

I'm new to Docker, and I have two microservices running in two containers; I would like to create a simple database for them.
I created it like this:
docker run --net=kajsnetwork -d -e MYSQL_ROOT_PASSWORD='mypassword' -v /storage/mysql1/mysql-datadir:/var/lib/mysql mysql
I enter the container using
docker exec -it containernumber /bin/bash
and then I created a database. But when I went to /var/lib/mysql on the host, there was nothing new there - no sign of the database I had created from inside the container. Did I do something wrong?
I would like to have the database data stored on the host, but the server running in a Docker container (is that a good solution?). How do I do it correctly?
You should not have to docker exec to create an instance: the container should already have one.
The doc mentions:
The -v /my/own/datadir:/var/lib/mysql part of the command mounts the /my/own/datadir directory from the underlying host system as /var/lib/mysql inside the container, where MySQL by default will write its data files.
So the order matters: the host directory comes first (before the colon) and the container path second.
The docker cmd option -v /storage/mysql1/mysql-datadir:/var/lib/mysql indicates that you are mounting host directory /storage/mysql1/mysql-datadir to /var/lib/mysql as a data volume of the container.
So if you check /var/lib/mysql from inside the container, you should see the same contents as /storage/mysql1/mysql-datadir on your host machine.
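In other words, the place to look on the host is the directory you mounted, not /var/lib/mysql. A quick way to see this, reusing the run command from the question (the container name mydb is just an example):

# Start the container with the host directory mounted as the data dir.
docker run --net=kajsnetwork -d --name mydb \
  -e MYSQL_ROOT_PASSWORD='mypassword' \
  -v /storage/mysql1/mysql-datadir:/var/lib/mysql \
  mysql

# On the host, the mounted directory (not /var/lib/mysql) fills up with data files.
ls /storage/mysql1/mysql-datadir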
More details:
https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-directory-as-a-data-volume

How do I backup data from MySQL container into a shared volume?

If I'm working with a containerized MySQL database that wasn't originally run with shared volume options, what's the easiest way to sort of externalize the data? Is there still a way to modify the container so that it shares its data with the Docker host in a specified directory?
Note: if you're still having problems with this question, please comment so I can improve it further.
Official Docker documentation provides a great overview on how to backup, restore, or migrate data volumes. For my problem, in particular, I did the following:
Run a throw-away Docker container that runs Ubuntu, shares volumes with the currently running MySQL container, and backs up the database data onto the local machine (as described in the overview):
docker run --rm --volumes-from some-mysql -v /path/to/local/directory:/backup ubuntu:15.10 tar cvf /backup/mysql.tar /var/lib/mysql
(The official MySQL Docker image uses /var/lib/mysql for storing data.)
The previous step results in the creation of /path/to/local/directory/mysql.tar on the Docker host. This can now be extracted like:
tar -xvf mysql.tar
(Assuming cd /path/to/local/directory.) The resulting var/lib/mysql directory can now be used as a shared volume with the same instance, or any other instance, of containerized MySQL.
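Going the other way - restoring that tarball into a fresh named volume so a new MySQL container can use it - can look roughly like this (the volume and container names are examples of mine, not from the answer above):

# Extract the backup into a new named volume mounted at /var/lib/mysql.
docker run --rm -v mysql-restored:/var/lib/mysql \
  -v /path/to/local/directory:/backup ubuntu:15.10 \
  bash -c "cd / && tar xvf /backup/mysql.tar"

# Start a MySQL container on top of the restored data.
docker run -d --name restored-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw \
  -v mysql-restored:/var/lib/mysql mysql

Because the restored data directory is already initialized, the server keeps the credentials stored in it; the MYSQL_ROOT_PASSWORD variable is only used on a first-time initialization.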

Installation of system tables failed! boot2docker tutum/mysql mount file volume on Mac OS

I have trouble mounting a volume on tutum/mysql container on Mac OS.
I am running boot2docker 1.5
When I run
docker run -v $HOME/mysql-data:/var/lib/mysql tutum/mysql /bin/bash -c "/usr/bin/mysql_install_db"
I get this error:
Installation of system tables failed! Examine the logs in /var/lib/mysql for more information.
Running the above command also creates an empty $HOME/mysql-data/mysql folder.
The tutum/mysql container runs smoothly when no mounting occurs.
I have successfully mounted a folder on the nginx demo container, which means that the boot2docker is setup correctly for mounting volumes.
I would guess that it's just a permissions issue. Either find the uid of the mysql user inside the container and chown the mysql-data dir to that user, or use a data container to hold the volumes.
For more information on data containers see the official docs.
Also note that as the Dockerfile declares volumes, mounting is taking place whether or not you use -v argument to docker run - it just happens in a directory on the host controlled by Docker (under /var/lib/docker) instead of a directory chosen by you.
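A sketch of the permissions route (the uid printed will depend on the image; 999 below is only a placeholder):

# Find the uid/gid of the mysql user inside the image.
docker run --rm --entrypoint id tutum/mysql mysql

# On the boot2docker VM, give that user ownership of the host directory,
# replacing 999:999 with the uid:gid printed above.
sudo chown -R 999:999 $HOME/mysql-data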
I also had a problem starting a MySQL Docker container with the error "Installation of system tables failed". There were no changes to the Docker image, and there was no recent update to my machine or Docker. One thing I was doing differently was using images that could take up 5 GB of memory or more during testing.
After cleaning up dangling images and volumes, I was able to start the MySQL image as usual.
This blog seems to have good instructions and explains all the variations of cleaning up with Docker.
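For reference, the cleanup itself can be done with the prune commands (Docker 1.13 or newer; older versions need the docker rmi $(docker images -f dangling=true -q) form instead):

# Remove dangling images left behind by rebuilds.
docker image prune

# Remove volumes not referenced by any container - this deletes data,
# so review the list before confirming.
docker volume prune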