docker best way to run mysql - mysql

I'm new to Docker, and I have two microservices running in two containers and I would like to create a simple database for them.
I created it like this:
docker run --net=kajsnetwork -d -e MYSQL_ROOT_PASSWORD='mypassword' -v /storage/mysql1/mysql-datadir:/var/lib/mysql mysql
I enter the container using
docker exec -it containernumber /bin/bash
and then I created a database... But when I went to /var/lib/mysql on the host there was nothing new - no database that I had created from inside the container. Did I do something wrong?
I would like to have the database data stored on the host, but the server running in a Docker container (is that a good solution?). How do I do it correctly?

You should not have to docker exec to create an instance: the container should already have one.
The doc mentions:
The -v /my/own/datadir:/var/lib/mysql part of the command mounts the /my/own/datadir directory from the underlying host system as /var/lib/mysql inside the container, where MySQL by default will write its data files.
So the order matters: the host directory comes first, then the container path it is mounted on.
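If the goal is just to have a database created automatically on first start, the official image also supports a MYSQL_DATABASE environment variable; a minimal sketch reusing the network, password and host path from the question (the database name mydb is illustrative):
docker run --net=kajsnetwork -d \
  -e MYSQL_ROOT_PASSWORD='mypassword' \
  -e MYSQL_DATABASE=mydb \
  -v /storage/mysql1/mysql-datadir:/var/lib/mysql \
  mysql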

The docker option -v /storage/mysql1/mysql-datadir:/var/lib/mysql indicates that you are mounting the host directory /storage/mysql1/mysql-datadir to /var/lib/mysql as a data volume of the container.
So if you check /var/lib/mysql from the container you should see the same contents as /storage/mysql1/mysql-datadir on your host machine.
More details:
https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-directory-as-a-data-volume
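To verify that the data really ends up on the host, you can compare the two directories (a quick check, assuming the paths from the question, with CONTAINER standing in for your container name or id):
# on the host
ls /storage/mysql1/mysql-datadir
# inside the container
docker exec -it CONTAINER ls /var/lib/mysql
Both listings should show the same files (ibdata1, the mysql/ system directory, and a directory per database you create).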

Related

Resuming docker mysql instance after restarting

I'm using docker to run a mysql 5.6 instance on my localhost (which is running ubuntu 20.04), using these instructions. When I create a new container for the database I use the following command
sudo docker run --name mysql-56-container -p 127.0.0.1:3310:3306 -e MYSQL_ROOT_PASSWORD=rootpassword -d mysql:5.6
That serves the intended purpose; I'm able to create the database using port 3310 and get on with what I want to do.
However, when I reboot my machine, I am unable to get back into MySQL 5.6 using that port again.
When I list containers, I see none listed:
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
So I try to recreate it and am told that it already exists:
$ sudo docker run --name mysql-56-container -p 127.0.0.1:3310:3306 -e MYSQL_ROOT_PASSWORD=rootpassword -d mysql:5.6
docker: Error response from daemon: Conflict. The container name "/mysql-56-container" is already in use by container "a05582bff8fc02da37929d2fa2bba2e13c3b9eb488fa03fcffb09348dffd858f". You have to remove (or rename) that container to be able to reuse that name.
See 'docker run --help'.
So I try starting it but with no luck:
$ sudo docker start my-56-container
Error response from daemon: No such container: my-56-container
Error: failed to start containers: my-56-container
I clearly am not understanding how this works so my question is, how do I resume work on databases I've created in a docker container after I reboot?
docker ps just lists running containers. If you reboot your laptop, all of them will be stopped. You can use docker ps --all or docker container ls --all to list all containers (running or stopped). You can read more about it in the docker ps command line reference.
Once a container is created, you cannot create another one with the same name. That is the reason your second docker run is failing.
You should use docker start instead. But you are trying to start a container with a different name: your docker start command uses a container named my-56-container, while it is actually called mysql-56-container. Please check your first docker run command in the question.
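Putting that together, a sketch of the commands to run after a reboot (container name taken from the first docker run in the question):
# list all containers, including stopped ones
sudo docker ps --all
# start the existing container by its actual name
sudo docker start mysql-56-container
# it should now be running and reachable on 127.0.0.1:3310 again
sudo docker ps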

Load MySQL Employees Sample Database into an already running docker container on Windows

I have a MySQL Docker container running on my local Windows machine. I want to load the Employees database into that docker container.
Employees Database Reference: https://dev.mysql.com/doc/employee/en/
I tried using MySQL Workbench and "Run SQL Script", but it throws the error below:
[WinError 32] The process cannot access the file because it is being used by another process:
'C:\\Users\\roul\\AppData\\Local\\Temp\\tmp4fbw2bb4.cnf'
After reading some articles I think one option may be to mount the script's location as a volume into the container and run the script from the Docker command prompt, but I'm unable to do that.
Anyone here have already done that?
Find the datadir of your MySQL server:
SHOW VARIABLES WHERE variable_Name LIKE "datadir"
Copy the contents of the folder to your datadir (the /. copies the contents of the folder, not the folder itself; you may want to refine this so you don't clutter the datadir):
docker cp test_db-master/. CONTAINER:/var/lib/mysql/
Run the script inside the container:
docker exec -i CONTAINER /bin/bash -c "cd /var/lib/mysql/ && /usr/bin/mysql -u root --password=123456 < /var/lib/mysql/employees.sql"
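To confirm the import worked, you can query the new schema from outside the container (same container name and password as above; the employees database and table names come from the sample data itself):
docker exec -i CONTAINER /usr/bin/mysql -u root --password=123456 -e 'SELECT COUNT(*) FROM employees.employees;'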

Are SQL dumps moved into a docker container's `docker-entrypoint-initdb.d` encrypted?

I'm dumping a database into a sql dump:
docker exec mysql sh -c 'exec mysqldump --all-databases -uroot -ppassword' > all-databases.sql
Then I'm using a Dockerfile to build a mysql image and run as a container:
FROM mysql:5.6.41
# needed for initialization
ENV MYSQL_ROOT_PASSWORD=whateverPassword
ADD all-databases.sql /docker-entrypoint-initdb.d/
EXPOSE 3306
When I run the container and exec into it, can I access the all-databases.sql file and see the contents of my database in plaintext in the docker image?
Currently if I look into /docker-entrypoint-initdb.d/ it says all-databases.sql but I don't know where that file is stored/if it's encrypted.
If you docker exec into the container, the file will be unencrypted. (It's just a text file and you can look at it with more on most image bases.)
However, if you can run any Docker command at all, then generally it's trivial to get unrestricted root access on the system. (Consider using docker run -v /etc:/host-etc to add yourself to /etc/sudoers or to allow root logins with no password.)
Also remember that anyone who has the image can docker run it and see the file there, if that matters to your security concerns. If you have root access on the host anyway, you can find the file without too much effort under /var/lib/docker. They can also easily run docker history to see the database root password you've set.
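For example, the ENV line with the root password shows up in the image metadata, so anyone with the image can see it with (image name is illustrative):
docker history --no-trunc my-mysql-image
The layer created by the ENV MYSQL_ROOT_PASSWORD instruction appears in the CREATED BY column in plain text.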

Can I use fig to initialise a persisting database in docker?

I am trying to automate the installation and running of set of linked docker containers using fig. The configuration is composed of a container running RStudio linked to a container running MySQL, such that I can query the MySQL database from RStudio.
On first run, I would like to create the MySQL container from the base MySQL image, and populate it with a user and database. From the command line, something like this:
#Get the latest database file
wget -P /tmp http://ergast.com/downloads/f1db.sql.gz && gunzip -f /tmp/f1db.sql.gz
#Create the database container with user, password and database
docker run --name ergastdb -e MYSQL_USER=ergast -e MYSQL_ROOT_PASSWORD=mrd -e MYSQL_DATABASE=f1db -d mysql
#Populate the database
docker run -it --link=ergastdb:mysql -v /tmp:/tmp/import --rm mysql sh -c 'exec mysql -h$MYSQL_PORT_3306_TCP_ADDR -P$MYSQL_PORT_3306_TCP_PORT -uergast -pmrd f1db < /tmp/import/f1db.sql'
#Fire up RStudio and link to the MySQL db
docker run --name f1djd -p 8788:8787 --link ergastdb:db -d rocker/hadleyverse
If I could get hold of a database image with the data preloaded, I guess that something like the following fig.yml script could link the elements?
gdrive:
  command: echo created
  image: busybox
  volumes:
    - "~/Google Drive/shareddata:/gdrive"
dbdata:
  image: mysql_preloaded
  environment:
    - MYSQL_USER=ergast
    - MYSQL_ROOT_PASSWORD=mrd
    - MYSQL_DATABASE=f1db
rstudio:
  image: rocker/hadleyverse
  links:
    - dbdata:db
  ports:
    - "8788:8787"
  volumes_from:
    - gdrive
My question is: can I use a one-shot fig step to create the dbdata container, then perhaps mount a persistent volume, link to it and initialise the database, presumably as part of an initial fig up? If I then start and stop containers, I don't want to run the db initialisation step again, just link to the data volume container that contains the data I previously installed.
I also notice that the MySQL Docker image looks like it will support arbitrary datadir definitions (Update entrypoints to read DATADIR from the MySQL configuration directly instead of assuming /var/lib/mysql). As I understand it, the current definition of the MySQL image prevents mounting (and hence persisting) the database contents within the database container. I guess this might make it possible to create a mysql_preloaded image, but I don't think the latest version of the MySQL docker script has been pushed to Docker Hub just yet, and I can't quite think my way through how fig might then be able to make use of this alternative pathway.
Some options:
1. Edit the fig.yml to run a custom command that is different than the default image command/entrypoint. From http://www.fig.sh/yml.html (example):
   command: bundle exec thin -p 3000
2. Start the container locally, modify it and then commit it as a new image.
3. Modify the MySQL image docker-entrypoint.sh file to do your custom initialization:
   https://github.com/docker-library/mysql/blob/567028d4e177238c58760bcd69a8766a8f026e2a/5.7/docker-entrypoint.sh
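A rough sketch of the third option, assuming you build your own image from a local checkout of the docker-library/mysql repository (the tag my-custom-mysql is illustrative):
# get the image sources referenced above
git clone https://github.com/docker-library/mysql.git
cd mysql/5.7   # or whichever version directory you need
# edit docker-entrypoint.sh to add your custom initialization, then build
docker build -t my-custom-mysql .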
Couldn't you just roll your own version of the MySQL docker image? The official one from MySQL "upstream" is available at https://github.com/mysql/mysql-docker/blob/mysql-server/5.7/Dockerfile
What if you simply make your own copy of that, remove the VOLUME line (line 11) and then you can
docker build -t my_mysql .
docker run -d --name=empty_db my_mysql ...
# add data to the database running in the container
docker commit empty_db primed_db
docker rm -v empty_db
docker run -d --name=instance1 primed_db
docker run -d --name=instance2 primed_db
which should leave you with two running "identical" but fully isolated instances.

Possible to run two instances of docker containers on one mysql database container?

There are three containers:
Container A : web server
Container B : a replica of the web server in Container A
Container Z : mysql datastore container for Container A
Can I run Containers A and B at the same time using Container Z as the mysql datastore? Will it corrupt the mysql data store?
The containers are run as follows:
Container Z :
docker run --name mysql_datastore -it busybox:mysql_datastore true
Container A:
docker run -it -p 80:80 --volumes-from mysql_datastore --name webservera -h webservera centos:webseverwithmysql /bin/bash
Container B :
docker run -it -p 81:81 --volumes-from mysql_datastore --name webserverb -h webserverb centos:webseverwithmysql /bin/bash
Hopefully one of these interpretations is correct.
Can I run multiple mysql daemons in different containers that all share a single data volume?
No, each daemon needs a separate data directory to avoid conflicts. You could put multiple data directories in the shared volume, but the result of that is multiple completely separate databases. - source
Can I run multiple containers that connect to a single mysql database container?
Yes it is possible to allow multiple containers to connect to a single database container, but not by sharing volumes. Container Z will run the mysql daemon and other containers can connect to it via tcp sockets. The official mysql repo readme has steps to get started:
First start Container Z.
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=mysecretpassword -d mysql
Then run other containers that you want to connect to the database with something like this:
docker run --name webservera --link some-mysql:mysql -d application-that-uses-mysql
Docs for the --link flag. Container linking adds a hostfile entry for the link alias so you don't have to find the address manually. Your webserver's database configuration would look something like this:
jdbc:mysql://address=(protocol=tcp)(host=mysql)(port=3306)(user=root)(password=mysecretpassword)
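For a quick way to test the link from another container, you can run the mysql client against the linked database using the environment variables that --link injects (roughly along these lines, reusing the names from above):
docker run -it --rm --link some-mysql:mysql mysql sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
If that drops you into a mysql prompt, webservera and webserverb can reach the database the same way over TCP.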
I hope this helps.