If I'm working with a containerized MySQL database that wasn't originally run with shared volume options, what's the easiest way to externalize its data? Is there still a way to modify the container so that it shares its data with the Docker host in a specified directory?
Note: if you're still having problems with this question, please comment so I can improve it further.
The official Docker documentation provides a great overview of how to back up, restore, or migrate data volumes. For my problem in particular, I did the following:
Run a throw-away Docker container that runs Ubuntu, shares volumes with the currently running MySQL container, and backs up the database data to the local machine (as described in the overview):
docker run --rm --volumes-from some-mysql -v /path/to/local/directory:/backup ubuntu:15.10 tar cvf /backup/mysql.tar /var/lib/mysql
(The official MySQL Docker image uses /var/lib/mysql for storing data.)
The previous step results in the creation of /path/to/local/directory/mysql.tar on the Docker host. This can now be extracted like:
tar -xvf mysql.tar
(Assuming you cd /path/to/local/directory first.) The resulting directory (var/lib/mysql under the extraction directory) can now be used as a shared volume with the same instance, or any other instance, of containerized MySQL.
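For example, here is a hedged sketch of mounting the restored directory into a fresh MySQL container (the container name is a placeholder, not part of the original steps):

# assumes the tar was extracted in /path/to/local/directory
# no MYSQL_ROOT_PASSWORD is needed: the entrypoint skips initialization when the data directory already has files
docker run -d --name restored-mysql \
  -v /path/to/local/directory/var/lib/mysql:/var/lib/mysql \
  mysql:latest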
Related
I'm running a container with MySQL 8.0.18 in Docker on a Synology NAS. I'm just using it to support another container (MediaWiki) on that box. MySQL has one volume mounted at the path /var/lib/mysql. I would like to move that to a shared volume so I can access it with File Station and also periodically back it up. How can I move it to the docker share shown below without breaking MySQL?
Here are the available shares on the Synology NAS.
Alternatively, is there a way I can simply copy that /var/lib/mysql volume to the shared docker folder? That should work as well for periodic backups.
thanks,
russ
EDIT: Showing the result after following Zeitounator's plan. Before running the docker command (step 2) I created the mediawiki_mysql_backups and 12-29-2019 folders in File Station. After running steps 2 and 3, all the files from MySQL are here and I now have a nice backup!
Stop your current container to make sure mysql is not running.
Create a dummy temp container where you mount the 2 needed volumes. I tend to use busybox for this kind of task => docker run -it --rm --name temp_mysql_copy -v mediaviki-mysql:/old-mysql -v /path/to/ttpe/docker:/new-mysql busybox:latest
Copy all files => cp -a /old-mysql/* /new-mysql/
Exit the dummy container (which will clean up after itself if you used the above command)
Create a new container with the mysql:8 image, mounting the new folder at /var/lib/mysql (you probably need to do this in your Synology Docker GUI; see the CLI sketch after this list).
If everything works as expected, delete the old mediaviki-mysql volume
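For reference, a hedged command-line equivalent of step 5 (the container name is a placeholder, and the host path is whichever shared folder you copied the files into):

# no MYSQL_ROOT_PASSWORD is needed here: the entrypoint skips initialization when the data directory already contains files
docker run -d --name mediawiki-mysql-new \
  -v /path/to/ttpe/docker:/var/lib/mysql \
  mysql:8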
I am completely new to Docker and everything that has to do with it. Last semester I built a MySQL database locally with MySQL Workbench and connected a Java project to it. This year I need to make this run in a Docker container. I have pulled the Dockerfile from GitHub and I am using Portainer to manage Docker.
My teacher wants the following:
He wants me to put my code in a repository which he created for me
Then he wants to pull my project, which should include a Dockerfile, so that he doesn't need to manually rebuild my MySQL database.
So how can I do this? Do I need to change the MySQL Dockerfile? Or should I use the default one and then initialize my database in my Java code?
This is my first post here on Stack Overflow and I am not that advanced in programming (only 2 years of experience with Java), so if I can give you any more information please let me know. I hope this is the right way to post questions here.
I am thankful to everyone helping me out!
Greets, Luciore
You also need to place your SQL file on GitHub, and it should be accessible from your Docker environment.
When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d. Files will be executed in alphabetical order.
(from the MySQL image documentation on Docker Hub)
And yes, you also need to modify your Dockerfile. Here is an example.
This builds a MySQL 8 based Docker image, downloads the SQL file from GitHub, and initializes the database on first boot; the database name (classicmodels in this sample) could be anything, since any valid SQL file placed in /docker-entrypoint-initdb.d will be executed on boot.
FROM mysql:8
RUN apt-get update && apt-get install -y curl
RUN curl -fsSL https://raw.githubusercontent.com/Adiii717/doctor-demo-app/master/sample_database.sql -o /docker-entrypoint-initdb.d/sampledb.sql
Build the docker image
docker build -t mysql8 .
Run the docker container
docker run --rm --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -it mysql8
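As a quick sanity check (my addition, assuming sample_database.sql creates the classicmodels database), you can list the databases once the container is up:

docker exec -it some-mysql mysql -uroot -pmy-secret-pw -e "SHOW DATABASES;"

classicmodels should appear in the output if the init script ran.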
I am unable to run MySQL containers made from MySQL images with database volumes mapped to a folder on my host machine.
It doesn't matter if the host folder is empty or contains existing database files. I do know that Docker Toolbox can mount volumes on Windows only from c:\Users\, so my test folder is under that one.
I have tried different (official and unofficial) MySQL images from 5.5 to latest with no result. Any time the container's /var/lib/mysql points to a folder on my host machine (c:\Users\someuser\testfolder), I get an InnoDB error when the container starts ("InnoDB: Operating system error number 22 in a file operation" or "InnoDB: File ./ib_logfile0: 'aio write' returned OS error 122").
I have tried modifying the container's /etc/my.cnf (under the [mysqld] section, using the "docker cp" command), adding "innodb_use_native_aio=OFF" or "innodb_use_native_aio=0" (sometimes even both), and I have even tried running "docker run" with "--user 1000:50", with no result either.
As soon as I remove the mount from the container's /var/lib/mysql to my host folder, the container runs normally.
There are many similar questions, but none has a complete step-by-step solution for running a MySQL container under Docker Toolbox on Windows 10 (Home & Pro) so that the container works with an existing database on the host's volumes.
It took me a while to get an answer, but finally everything worked! For those who are new to Docker and have problems mounting the MySQL folder to the host, here is a short guide. Please note I chose the bitnami/mysql image for my experiments (for other images the folders can differ).
1. Create a folder c:\Users\[YourAccount]\MySQLData for MySQL data.
2. Create a folder c:\Users\[YourAccount]\MySQLConf for a custom MySQL config file.
3. Create a custom MySQL config file c:\Users\[YourAccount]\MySQLConf\my_custom.cnf and add two lines to it:
[mysqld]
innodb_use_native_aio=0
4. Now create and run the container mounting your custom config and data folder to it:
docker run -d --name mysql -e ALLOW_EMPTY_PASSWORD="yes" \
-v //c/Users/[YourAccount]/MySQLData:/bitnami/mysql/data \
-v //c/Users/[YourAccount]/MySQLConf/my_custom.cnf:/opt/bitnami/mysql/conf/my_custom.cnf:ro \
bitnami/mysql:latest
Hooray!
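A quick, optional check of my own to confirm the mount is really working:

docker logs mysql    # startup should complete with no InnoDB "aio" errors

and c:\Users\[YourAccount]\MySQLData should now contain MySQL's data files.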
I am a complete newbie with all of this Docker stuff. I've read on some sites that there should be one image per running application. This means that to run WordPress I would need at least 2 images: one for MySQL and another for WordPress (and Apache). In fact, the official WordPress Docker image does not include MySQL; it requires an external connection.
But I've found some images in which MySQL is embedded in the image alongside WordPress and Apache. This gives you a more portable image, because you only need that one to deploy on any server. But if the system is already running a MySQL container, you are wasting resources.
So, my question is whether WordPress should run in the same image as MySQL or not. And if not, how should I move all the MySQL data to a different location?
The standard way is to have a container per service, so you will have a container for MySQL and another one for Apache/PHP with the application.
If you are going to use the official MySQL container and you want to persist the data, you just have to mount a volume from your host to the datadir in the MySQL container:
$ docker run --name some-mysql -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
This will create a folder in the /my/own/datadir path of your host with all the content of MySQL.
You can find more information about that at this link:
https://github.com/docker-library/docs/tree/master/mysql#where-to-store-data
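To make the one-container-per-service idea concrete, here is a hedged sketch of running MySQL and WordPress as two containers on a user-defined network (network and container names, the port, and the password are illustrative, not from the linked docs):

docker network create wp-net
docker run -d --name some-mysql --network wp-net \
  -v /my/own/datadir:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=my-secret-pw -e MYSQL_DATABASE=wordpress \
  mysql:5.7
docker run -d --name some-wordpress --network wp-net -p 8080:80 \
  -e WORDPRESS_DB_HOST=some-mysql -e WORDPRESS_DB_PASSWORD=my-secret-pw \
  wordpress:latest

WordPress reaches MySQL by the container name some-mysql, and the database files still live on the host in /my/own/datadir.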
I have trouble mounting a volume on tutum/mysql container on Mac OS.
I am running boot2docker 1.5
When I run
docker run -v $HOME/mysql-data:/var/lib/mysql tutum/mysql /bin/bash -c "/usr/bin/mysql_install_db"
I get this error:
Installation of system tables failed! Examine the logs in /var/lib/mysql for more information.
Running the above command also creates an empty $HOME/mysql-data/mysql folder.
The tutum/mysql container runs smoothly when no mounting occurs.
I have successfully mounted a folder on the nginx demo container, which means that the boot2docker is setup correctly for mounting volumes.
I would guess that it's just a permissions issue. Either find the uid of the mysql user inside the container and chown the mysql-data dir to that user, or use a data container to hold the volumes.
For more information on data containers see the official docs.
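As a rough illustration of the data-container option (names are illustrative; with this approach the data lives under Docker's own storage inside the boot2docker VM rather than in a folder you pick):

docker create -v /var/lib/mysql --name mysql-data busybox   # exists only to own the volume
docker run -d --name mysql --volumes-from mysql-data tutum/mysql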
Also note that because the Dockerfile declares volumes, mounting takes place whether or not you pass a -v argument to docker run - it just happens in a directory on the host controlled by Docker (under /var/lib/docker) instead of a directory chosen by you.
I also had a problem starting a MySQL Docker container with the error "Installation of system tables failed". There were no changes to the Docker image, and there had been no recent updates to my machine or to Docker. One thing I was doing differently was using images that could take up 5 GB of memory or more during testing.
After cleaning up dangling images and volumes, I was able to start the mysql image as usual.
This blog seems to have good instructions and explains all the variations of cleaning up with Docker.
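For reference, on a reasonably recent Docker the built-in prune commands cover the same ground (use them with care, since they delete data):

docker image prune     # remove dangling images
docker volume prune    # remove volumes not used by any container
docker system prune    # remove stopped containers, unused networks and dangling images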