How to pre-configure and prefill the official MySQL docker container?

I want to build a new MySQL image based on the official MySQL docker image, one that already includes my settings for the global variables and my create-database SQL file in the docker-entrypoint-initdb.d folder, so that I can reduce the number of parameters I need to pass when running the image (e.g. -e MYSQL_USER, -e MYSQL_DATABASE and even -e MYSQL_ROOT_PASSWORD='rootsecret').
How do I add all my settings and create new images so I can simply run docker run mysql:config1, docker run mysql:config2 and so on?

You could build your own MySQL Docker image using a Dockerfile: configure the username, password and everything else you might need, build the image, optionally push it to Docker Hub, and then launch new containers from that image.
An example Dockerfile that builds an Ubuntu image with a MySQL server inside would look something like the one below (save it to a file called Dockerfile):
FROM ubuntu:latest
RUN apt-get update \
    && apt-get install -y apt-utils \
    && { \
        echo debconf debconf/frontend select Noninteractive; \
        echo mysql-community-server mysql-community-server/data-dir select ''; \
        echo mysql-community-server mysql-community-server/root-pass password 'Desired-Password'; \
        echo mysql-community-server mysql-community-server/re-root-pass password 'Desired-Password'; \
        echo mysql-community-server mysql-community-server/remove-test-db select true; \
    } | debconf-set-selections \
    && apt-get install -y mysql-server mysql-client
Then build the image like this (run the command in the folder where the Dockerfile is saved):
docker build -t my-ubuntu-mysql-docker .
Optionally push it to Docker Hub, and then start a new container from it like this:
docker run -d -p 2222:22 -p 3306:3306 --name my-ubuntu-mysql-docker my-ubuntu-mysql-docker
Here host port 2222 is mapped to SSH port 22 of the container, and host port 3306 is mapped to the MySQL port of the container.
I hope this helps!

The following has to be written into the Dockerfile:
FROM mysql:latest
LABEL Name=mylabel Version=0.0.1
COPY path/to/sh/sql/sql.gz/files /docker-entrypoint-initdb.d
ENV MYSQL_ROOT_PASSWORD='rootpassword'
As stated in the documentation on the official Docker website:
When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d. Files will be executed in alphabetical order.
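For example, assuming the COPY line above points at a folder containing an init script such as create-db.sql (an illustrative name), the resulting image could be built and run without any -e flags, exactly as asked in the question:
# build one pre-configured image per configuration
docker build -t mysql:config1 .
# the root password and init scripts are baked in, so no -e parameters are needed
docker run -d --name db1 mysql:config1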

What you would want to do is modify the MySQL image's entrypoint. Also note that you do not need to pass all the parameters; most of them are optional.
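As a minimal sketch of that last point (the exact commands are illustrative): only a root-password setting is strictly required by the official image, while MYSQL_DATABASE and MYSQL_USER are optional extras.
# only the root password is strictly required
docker run -d -e MYSQL_ROOT_PASSWORD=rootsecret mysql:latest
# MYSQL_DATABASE and MYSQL_USER/MYSQL_PASSWORD are optional additions
docker run -d -e MYSQL_ROOT_PASSWORD=rootsecret -e MYSQL_DATABASE=mydb mysql:latest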

Related

How to migrate data from docker container to a newly created volume?

I have a Docker container that I've created simply by installing Docker on Ubuntu and doing:
sudo docker run -i -t ubuntu /bin/bash
I immediately started installing Java and some other tools, spent some time with it, and stopped the container by
exit
Then I wanted to add a volume and realised that this is not as straightforward as I thought it would be. If I use sudo docker run -v /somedir ... then I end up with a fresh new container, so I'd have to install Java and do what I've already done before just to arrive at a container with a mounted volume.
All the documentation about mounting a folder from the host seems to imply that mounting a volume is something that can be done when creating a container. So the only option I have to avoid reconfiguring a new container from scratch is to commit the existing container to a repository and use that as the basis of a new one whilst mounting the volume.
Is this indeed the only way to add a volume to an existing container?
You can commit your existing container (that is, create a new image from the container's changes) and then run it with your new mounts.
Example:
$ docker ps -a
CONTAINER ID   IMAGE          COMMAND       CREATED              STATUS                          PORTS   NAMES
5a8f89adeead   ubuntu:14.04   "/bin/bash"   About a minute ago   Exited (0) About a minute ago           agitated_newton
$ docker commit 5a8f89adeead newimagename
$ docker run -ti -v "$PWD/somedir":/somedir newimagename /bin/bash
If it's all OK, stop your old container, and use this new one.
You can also commit a container using its name, for example:
docker commit agitated_newton newimagename
That's it :)
There is no way to add a volume to a running container, but to achieve this objective you can use the commands below.
Copy files/folders between a container and the local filesystem:
docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH
docker cp [OPTIONS] SRC_PATH CONTAINER:DEST_PATH
For reference see:
https://docs.docker.com/engine/reference/commandline/cp/
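As a rough illustration (the container, image and path names here are made up), you could copy the data out of the old container and bind-mount that directory into a new one:
# copy the directory you care about from the old container to the host
docker cp old-container:/var/lib/mysql ./mysql-data
# start a new container from the same image with that directory mounted
docker run -d -v "$PWD/mysql-data":/var/lib/mysql --name new-container myimage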
I've successfully mounted the /home/<user-name> folder of my host to the /mnt folder of an existing (not running) container. You can do it in the following way:
Open the configuration file corresponding to the stopped container, which can be found at /var/lib/docker/containers/99d...1fb/config.v2.json (it may be config.json for older versions of Docker).
Find the MountPoints section, which was empty in my case: "MountPoints":{}. Next, replace its contents with something like this (you can copy the proper contents from another container that has the desired settings):
"MountPoints":{"/mnt":{"Source":"/home/<user-name>","Destination":"/mnt","RW":true,"Name":"","Driver":"","Type":"bind","Propagation":"rprivate","Spec":{"Type":"bind","Source":"/home/<user-name>","Target":"/mnt"},"SkipMountpointCreation":false}}
or the same (formatted):
"MountPoints": {
"/mnt": {
"Source": "/home/<user-name>",
"Destination": "/mnt",
"RW": true,
"Name": "",
"Driver": "",
"Type": "bind",
"Propagation": "rprivate",
"Spec": {
"Type": "bind",
"Source": "/home/<user-name>",
"Target": "/mnt"
},
"SkipMountpointCreation": false
}
}
Restart the docker service: service docker restart
This worked for me on Ubuntu 18.04.1 with Docker 18.09.0.
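If you are unsure which directory under /var/lib/docker/containers belongs to your container, a small sketch like the following can help locate the file (the container name is a placeholder, and the daemon is stopped while editing so it does not overwrite the change):
CID=$(docker inspect --format '{{.Id}}' <container-name>)
sudo service docker stop
sudo vi /var/lib/docker/containers/$CID/config.v2.json   # edit the MountPoints section here
sudo service docker start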
Jérôme Petazzoni has a pretty interesting blog post on how to Attach a volume to a container while it is running. This isn't something that's built into Docker out of the box, but possible to accomplish.
As he also points out:
This will not work on filesystems which are not based on block devices.
It will only work if /proc/mounts correctly lists the block device node (which, as we saw above, is not necessarily true).
Also, I only tested this on my local environment; I didn’t even try on a cloud instance or anything like that
YMMV
Unfortunately, the option to mount a volume is only available on the run command.
docker run --help
-v, --volume list Bind mount a volume (default [])
There is a way to work around this, though, so you won't have to reinstall the applications you've already set up in your container.
Export your container
docker container export -o ./myimage.docker mycontainer
Import as an image
docker import ./myimage.docker myimage
Then docker run -i -t -v /somedir --name mycontainer myimage /bin/bash
A note about Docker Windows containers, after I had to search for a solution to this problem for a long time!
Conditions:
Windows 10
Docker Desktop (latest version)
using Docker Windows containers with the image microsoft/mssql-server-windows-developer
Problem:
I wanted to mount a host directory into my Windows container.
Solution, as partially described here:
create docker container
docker run -d -p 1433:1433 -e sa_password=<STRONG_PASSWORD> -e ACCEPT_EULA=Y microsoft/mssql-server-windows-developer
go to command shell in container
docker exec -it <CONTAINERID> cmd.exe
create DIR
mkdir DirForMount
stop container
docker container stop <CONTAINERID>
commit container
docker commit <CONTAINERID> <NEWIMAGENAME>
delete old container
docker container rm <CONTAINERID>
create new container with new image and volume mounting
docker run -d -p 1433:1433 -e sa_password=<STRONG_PASSWORD> -e ACCEPT_EULA=Y -v C:\DirToMount:C:\DirForMount <NEWIMAGENAME>
This solved the problem for me with Docker Windows containers.
My answer is a little different. You can stop your container, add the volume, and restart it. To do so, follow these steps:
docker volume create ubuntu-volume
docker stop <container-name>
sudo docker run -i -t --mount source=ubuntu-volume,target=<target-path-in-container> ubuntu /bin/bash
You can stop and remove the container, append the new volume to the existing ones in your startup script, and restart from the image. As long as the already existing volumes keep the data, you shouldn't experience any loss of information. This should also work the same way with a Dockerfile and Docker Compose.
For example (Solr image):
(initial script)
#!/bin/sh
docker pull solr:8.5
docker stop my_solr
docker rm my_solr
docker create \
--name my_solr \
-v "/XXXX/docker/solr/solrdata":/var/solr \
-p 8983:8983 \
--restart unless-stopped \
--user 1000:1000 \
-e SOLR_HEAP=1g \
--log-opt max-size=10m \
--log-opt max-file=3 \
solr:8.5
docker cp /home/XXXX/docker/solr/XXXXXXXX.jar my_solr:/opt/solr/contrib/dataimporthandler-extras/lib
docker start my_solr
Script with the second volume added:
#!/bin/sh
docker pull solr:8.5
docker stop my_solr
docker rm my_solr
docker create \
--name my_solr \
-v "/XXXX/docker/solr/solrdata":/var/solr \
-v "/XXXX/backups/solr_snapshot_folder":/var/solr_snapshots \
-p 8983:8983 \
--restart unless-stopped \
--user 1000:1000 \
-e SOLR_HEAP=1g \
--log-opt max-size=10m \
--log-opt max-file=3 \
solr:8.5
docker cp /home/XXXX/docker/solr/XXXXXXXX.jar my_solr:/opt/solr/contrib/dataimporthandler-extras/lib
docker start my_solr
Use a symlink to an already mounted drive:
ln -s Source_path target_path_which_is_already_mounted_on_the_running_docker
The best way is to copy all the files and folders into a directory on your local file system with docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH, where:
SRC_PATH is on the container
DEST_PATH is on the localhost
Then do docker-compose down, attach a volume to the same DEST_PATH, and run the Docker containers again with docker-compose up -d.
Add the volume in docker-compose.yml as follows:
volumes:
- DEST_PATH:SRC_PATH
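Put together, the workflow could look roughly like this (the service, container and path names are placeholders):
# copy the data out of the running container to the host
docker cp my-service-container:/var/lib/mysql ./data
# stop and remove the old containers
docker-compose down
# add "- ./data:/var/lib/mysql" under volumes: for the service in docker-compose.yml
docker-compose up -d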

Openproject: How to set config/configuration.yml in docker environment

I want to configure the Docker version of OpenProject with configuration.yml. Where does the file have to be stored, or where can I find it? None of the given external directories, .asset and .pgconfig, contains the yml file.
You can mount single files into your container. So we can adjust the example from the docs like this to include your own configuration.yml:
sudo mkdir -p /var/lib/openproject/{pgdata,assets}
printf "production:\n disable_password_login: true" > /var/lib/openproject/configuration.yml
docker run -d -p 8080:80 --name openproject -e SECRET_KEY_BASE=secret \
-v /var/lib/openproject/pgdata:/var/openproject/pgdata \
-v /var/lib/openproject/assets:/var/openproject/assets \
-v /var/lib/openproject/configuration.yml:/app/config/configuration.yml \
openproject/community:11
This, for instance, will disable the password login in OpenProject via the configuration.yml. Usually you would do this via env variables (-e OPENPROJECT_DISABLE__PASSWORD__LOGIN=true) but there are configurations such as for SAML which are indeed easier to just define in the configuration.yml instead.
The file inside of the container is /app/config/configuration.yml.
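To verify that the mount took effect (assuming the container name openproject from the command above), you can print the file from inside the running container:
# show the mounted configuration file inside the container
docker exec openproject cat /app/config/configuration.yml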

Docker image with mysql and .war file on ECS

I am totally new to the Docker community and I am trying to create a custom container image with MySQL and a .war file inside, and to run it on an AWS EC2 instance. I've tried a lot but I cannot figure this out.
To build the container image I run this:
docker build -t <name-of-image> -f Dockerfile .
I suppose the Dockerfile content should contain something like:
FROM mysql:latest
ENV TARGETD /opt/apache-tomcat-9.0.35
ENV WAR /target/NewWebApp.war
RUN apt-get -y update
RUN apt-get -y upgrade
# Create database
RUN mkdir /usr/sql
#RUN CHMOD 644 /usr/sql
ADD db.sql /usr/sql/db.sql
RUN mysql -h localhost -P 3306 --protocol=tcp -u root start && \
mysql -u root -e < /usr/sql/db.sql
EXPOSE 3306
ADD ${WAR} ${TARGETD}/webapps
And to run (deploy) the image I use:
docker run -d -p 8080:3306 <name-of-image>:latest
I have already installed Tomcat on port 8080.
What can I do in order to run this image and to be able to access it through AWS EC2?

Create database and schema for mySQL in docker

I am creating a Docker image with Tomcat and MySQL. I have a .war file that I can push to Tomcat, and the Docker image is working as expected.
But the application also needs a MySQL database in the same Docker image (I do not want to run multiple images, as this is fairly small and for demonstration purposes only).
I am using a Tomcat image as the base and installing MySQL on it. The base OS is Ubuntu.
Here is my Dockerfile:
#Get the base
FROM davidcaste/debian-tomcat:tomcat8
#Add mySql
RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
RUN apt-get -y update
RUN apt-get -y install wget zip gcc
RUN { \
echo mysql-community-server mysql-community-server/data-dir select ''; \
echo mysql-community-server mysql-community-server/root-pass password ''; \
echo mysql-community-server mysql-community-server/re-root-pass password ''; \
echo mysql-community-server mysql-community-server/remove-test-db select false; \
} | debconf-set-selections \
&& apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y mysql-server
RUN /etc/init.d/mysql start
RUN wget http://github.com/xxxx/xxxx/blob/master/xxxx/src/main/resources/sql/create-schema.sql
RUN cp create-schema.sql /usr/
RUN wget http://github.com/xxxx/xxxx/blob/master/xxxx/src/main/resources/sql/metadata.sql
RUN cp metadata.sql /usr/
#RUN mysql -- this gives error
#RUN create database test; -- this gives error
#Get the Web Application from Nexus
RUN wget "http://mynexus:8081/nexus/service/local/artifact/maven/redirect?g=org.my&a=my-app&r=repo&e=war&v=LATEST" --content-disposition -O app.war
#Copy the war file
RUN cp app.war /opt/tomcat/webapps/
EXPOSE 8080
CMD ["catalina.sh", "run"]
Without the MySQL-related items (create database etc.) the docker build works and it runs well. But I am not able to understand how to create the database using my schema and metadata SQL files.
You can run the SQL commands after the container starts up, but not while you build the image. One option would be to override the entrypoint and do it there. Another option would be to use docker-compose: first bring up a plain MySQL container, and afterwards create the database and schema with an additional container that runs a bash script.
See, for instance, here to get an idea of how that works. Another option is to pass the SQL-related settings as ENV variables, as depicted in one of the answers at the above link.
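A minimal sketch of the first option, assuming a wrapper script named entrypoint.sh (an illustrative name) that is copied into the image and set as its ENTRYPOINT instead of the plain catalina.sh CMD; the marker file used to load the schema only once is also made up:
#!/bin/sh
# entrypoint.sh -- start MySQL, load the schema on first run, then hand control to Tomcat
service mysql start
if [ ! -f /var/lib/mysql/.schema_loaded ]; then
    mysql -u root < /usr/create-schema.sql
    mysql -u root < /usr/metadata.sql
    touch /var/lib/mysql/.schema_loaded
fi
exec catalina.sh run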

Dockerfile and background running mysql server

I have some problems...
Firstly, I have a Dockerfile where I define all the steps, like updating the system, installing MySQL and changing the MySQL root password.
Then I set an ENTRYPOINT so my container starts the MySQL server on start-up.
I have two problems:
- When I start the container, it restarts every 10 seconds.
- When I use exec to enter the container it says: "No docker with such id".
This is my Dockerfile:
# Set the base image
FROM ubuntu:14.04
MAINTAINER redigaffi
RUN apt-get update \
&& apt-get -y install mysql-server \
&& service mysql start \
&& mysqladmin -u root password FEGj5nmKYRha
ENTRYPOINT service mysql start \
&& bash
#VOLUME /root/mysql:/var/lib/mysql:rw -- please pass -v when running this container instead, since the Dockerfile has no access to host files
EXPOSE 3306
I put bash at the end of the ENTRYPOINT because without it the container just closes, so this way it stays up in the background.
I have tried many commands to run this container:
docker run -d df0bb600c10f /bin/bash # This one closes the container after 2 seconds
docker run -d --restart=always df0bb600c10f /bin/bash # This one remains, but restarts every 10 seconds and I can't access this container using exec.
Please help, what is wrong?
Thank you!
Try using supervisor. This article shows the steps.
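As an illustration only (not the linked article's exact steps), a Dockerfile along these lines runs mysqld under supervisord so the container keeps a long-running foreground process instead of exiting once the ENTRYPOINT finishes:
FROM ubuntu:14.04
RUN apt-get update \
    && apt-get -y install mysql-server supervisor
# minimal, illustrative supervisord config: stay in the foreground and manage mysqld
RUN printf '[supervisord]\nnodaemon=true\n\n[program:mysqld]\ncommand=/usr/bin/mysqld_safe\n' \
    > /etc/supervisor/conf.d/mysqld.conf
EXPOSE 3306
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]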