I wish to build a docker image with an initialised database.
This initial data will contain a default client ref value 'XXX'.
Here's my Dockerfile:
FROM mysql/mysql-server:5.7
COPY data.sql /docker-entrypoint-initdb.d/
When starting up the container of this image, I need to replace the ref value with the user's particular value, 'ABCD', which will come from an environment variable set by a docker compose file.
So the update query to run is something like:
update client set ref='ABCD' where ref='XXX'
How do I get the Dockerfile to do this? I don't think it can be a RUN command as I don't want the update to be part of the image build, but part of the startup of the image (it's fine if it runs this update on every startup).
I have all the usual mysql env variables set (MYSQL_ROOT_PASSWORD/MYSQL_ROOT_HOST/MYSQL_DATABASE/MYSQL_USER/MYSQL_PASSWORD) and plan to add another env var holding the desired ref value. Keen to see how this could be done as raw commands as well as a script.
You're right: everything that you want to be persistent should be in the Dockerfile or in the build: section of your docker-compose file.
Since you want the update to run only in the docker run or docker-compose up phase, you can use CMD or ENTRYPOINT to execute it.
There are several ways to do this, but let me recommend the following one:
docker-compose.yml
version: '3'
services:
  your-service:
    env_file: ./your-mysql-env-file.env
    image: mysql/mysql-server:5.7
    volumes:
      - ./data.sql:/docker-entrypoint-initdb.d/data.sql
      - ./your-init-sql-commands.sql:/docker-entrypoint-initdb.d/your-init-sql-commands.sql
    container_name: your-container-name
    command:
      - /bin/bash
      - -c
      - |
        /etc/init.d/mysqld start
        mysql -u $${MYSQL_USER} -p$${MYSQL_PASSWORD} -h $${MYSQL_ROOT_HOST} $${MYSQL_DATABASE} < /docker-entrypoint-initdb.d/your-init-sql-commands.sql
        [other commands if needed...]
Two details matter here: the $$ stops Compose from interpolating the variables on the host, so the container's shell sees them, and there must be no space after -p, otherwise mysql prompts for a password and treats the value as a database name.
Note that your update client set ref='ABCD' where ref='XXX' should be defined inside your-init-sql-commands.sql, so if you want to change the update later, you won't need to rebuild the image.
In your-mysql-env-file.env, define your MYSQL env variables so they are available inside the container.
Note that I've replaced your Dockerfile with an image: section and COPY with volumes: in docker-compose.yml, but all these steps can also be done with a Dockerfile and docker run.
If you don't need multiple commands, just collapse the multi-line block into a single line.
I hope this helps.
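Since you also asked for a script variant: the stock mysql images additionally execute any *.sh file found in /docker-entrypoint-initdb.d during first initialisation (not on every startup), and those scripts see the container's environment. A minimal sketch, assuming a CLIENT_REF variable that you would set yourself in the compose file (it is not something the image defines):
#!/bin/bash
# 99-set-client-ref.sh -- place in /docker-entrypoint-initdb.d/
# Runs once, after data.sql, when the data directory is first initialised.
# CLIENT_REF is a hypothetical env var supplied via docker-compose.
mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" "${MYSQL_DATABASE}" \
  -e "UPDATE client SET ref='${CLIENT_REF}' WHERE ref='XXX';"
If you need the update to run on every startup rather than only on first initialisation, the command: override above remains the way to go.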
Related
I want to deploy data slots to distributed MySQL databases via middleware, so I need one MySQL docker container running two instances on different ports, e.g. 3306 and 3316.
I've tried many approaches, such as:
Adding mysql_3316.sh:
#!/bin/bash
/entrypoint.sh --defaults-file=/etc/mysql/my_3316.cnf
in the rc.local:
#!/bin/sh -e
/usr/local/bin/mysql_3316.sh || exit 1
exit 0
and modified the Dockerfile like below,
RUN touch /etc/mysql/my_3316.cnf
COPY mysql_3316.sh /usr/local/bin/mysql_3316.sh
RUN chmod +x /usr/local/bin/mysql_3316.sh
COPY rc.local /etc/rc.local
RUN chmod +x /etc/rc.local
RUN chown root:root /etc/rc.local
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306 3316
CMD ["mysqld"]
It doesn't work when the mysql container comes up, but the 3316 instance does work when I run the /entrypoint.sh --defaults-file=/etc/mysql/my_3316.cnf line manually.
I also tried init.d:
RUN touch /etc/mysql/my_3316.cnf
COPY mysql_3316.sh /etc/init.d/mysql_3316
RUN chmod +x /etc/init.d/mysql_3316
RUN update-rc.d mysql_3316 defaults 99
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306 3316
CMD ["mysqld"]
That doesn't work either.
I tried crontab:
@reboot /usr/local/bin/mysql_3316.sh
#Don't remove the empty line at the end of this file. It is required to run the cron job
and the Dockerfile as that,
COPY mysql_3316.sh /usr/local/bin/mysql_3316.sh
RUN chmod +x /usr/local/bin/mysql_3316.sh
ADD crontab /etc/cron.d/docker-cron
RUN chmod +x /etc/cron.d/docker-cron
RUN crontab /etc/cron.d/docker-cron
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306 3316
CMD ["mysqld"]
That doesn't work either.
I've spent a lot of time on this and have almost given up...
Any suggestions are welcome.
Here is the docker-compose.yml for mysql:
services:
  mysql:
    image: mysql:latest
    container_name: mysql
    hostname: mysql
    restart: unless-stopped
    networks:
      dockernet:
        ipv4_address: 172.18.0.5
    ports:
      - 3306:3306
      - 3316:3316
    volumes:
      - /Docker/mysql/:/var/lib/mysql/
      - ./docker/mysql/mysql/my.cnf:/etc/mysql/my.cnf
      - ./docker/mysql/mysql/my_3316.cnf:/etc/mysql/my_3316.cnf
      - ./docker/mysql/mysql/logs/:/var/log/mysql/
      - ./docker/mysql/mysql/init/:/docker-entrypoint-initdb.d/
    entrypoint: ['/entrypoint.sh', '--default-authentication-plugin=mysql_native_password']
Normally you do NOT want to run more than one process in the same container. Despite your title I really think that what you are looking for is to start two containers, both from a MySQL image.
You should not need to change any startup scripts, Dockerfile or anything else to start up similar containers bound to different ports.
Remember that the EXPOSE command only exposes the ports to different containers, not to the outside world.
To access the port you need to use the -p flag with your docker run: https://docs.docker.com/engine/reference/run/#expose-incoming-ports
You can use the same docker image from the same Dockerfile. Just give a different -p parameter on each run.
Edit:
You added your docker-compose.yml after my initial response. Using docker-compose will make my advice about -p obsolete, and you should use the ports: section of the docker-compose.yml to vary the port numbers instead.
This answer, however, might not be what you are looking for because based on your comment I think I do not fully understand your use case here.
Use the stock mysql container and just run:
docker run -p3306:3306 --name mysql1 mysql
docker run -p3316:3306 --name mysql2 mysql
# plus appropriate -d -e ... -v ... as needed on both commands
Don't try to build your own image and definitely don't try to run two servers with expected different lifetimes in a single container.
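In compose form, a minimal sketch of the two-container approach could look like this (service names, volume names, and the password are illustrative, not taken from your files):
version: '3'
services:
  mysql1:
    image: mysql:latest
    environment:
      - MYSQL_ROOT_PASSWORD=changeme
    ports:
      - 3306:3306   # host 3306 -> container 3306
    volumes:
      - mysql1_data:/var/lib/mysql
  mysql2:
    image: mysql:latest
    environment:
      - MYSQL_ROOT_PASSWORD=changeme
    ports:
      - 3316:3306   # host 3316 -> container 3306
    volumes:
      - mysql2_data:/var/lib/mysql
volumes:
  mysql1_data:
  mysql2_data:
Both servers still listen on 3306 inside their own containers; only the host-side port differs, and each instance keeps its own isolated data volume.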
I discovered Docker last week and have been playing around with it for a while now.
Now I want to deploy a website inside a container. The website is already finished and I have all the files on my host system. It needs PHP, Java, Tomcat and - here is the problem - a MySQL db.
So I created a Dockerfile, using alpine:latest as the base image, and installed the applications named above one by one.
FROM alpine:latest
ENV http_proxy http://not_important/
RUN apk update
RUN apk --no-cache --quiet add openjdk8
RUN apk --no-cache --quiet add nano
RUN apk --no-cache --quiet add php7
RUN apk --no-cache --quiet add mysql
RUN apk --no-cache --quiet add phpmyadmin
RUN mkdir -p /usr/local/tomcat/
COPY apache-tomcat-9.0.4.tar.gz /usr/local/tomcat/
RUN cd /usr/local/tomcat/ && tar xzf /usr/local/tomcat/apache-tomcat-9.0.4.tar.gz
RUN mv /usr/local/tomcat/apache-tomcat-9.0.4/* /usr/local/tomcat
RUN rm -r /usr/local/tomcat/apache-tomcat-9.0.4
RUN rm -r /usr/local/tomcat/apache-tomcat-9.0.4.tar.gz
CMD ["/usr/local/tomcat/bin/catalina.sh", "run"]
But now I don't really know how to finish. How can I start the MySQL db and access it with phpMyAdmin?
I run the container with the following command:
docker run --name alpine_custom -dit -p 30000:8080 -p31000:80 alpine:custom
Tomcat is running on port 30000 without a problem, and I want phpMyAdmin to be accessible over port 31000. I do have a working MySQL db on my host and manage it with phpMyAdmin (meaning there are two containers, with the phpMyAdmin container linked to the database)...
Is it even possible to do it the way I want, or do I have to deploy a second container with a database linked to my alpine container (and a third one with phpMyAdmin...)?
I am thankful for every answer, thank you in advance
Sincerely
Telvanis :)
PS: I know the Dockerfile isn't very good, but I think it's enough for my needs ^^
Try to avoid having it "all-in-one".
This is the idea behind Docker, to go from something "monolithic" to something which is separated to components. This approach gives you an advantage when you want to scale up/down your app, update specific components without rebuilding the whole app... etc.
Try to avoid the installation & configuration of every technology on your own
I remember trying to do this with MySQL myself. I spent a lot of time and got no result, and ended up using the official image. Installing software inside Docker can have tricky parts and is not the same as the installation one does in a VM.
So, I would propose to start searching for the official images of the technologies that you are trying to put into use. Docker hub has plenty and most of them also provide guidelines on how to use/configure them. For example:
https://hub.docker.com/r/phpmyadmin/phpmyadmin/
https://hub.docker.com/_/mysql/
https://hub.docker.com/_/openjdk/
...you get the idea.
Your running containers will have names. Docker offers a DNS mechanism so that your containers can connect to each other by using these names. For example, if you have a container for your MySQL database named my_app_db listening on port 5000, configure the phpMyAdmin container to connect there. An important notice here: don't try this on the default bridge network, because it will not work; define your own network, as sketched below.
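A minimal sketch of that with raw commands (the network and container names are mine, purely illustrative; PMA_HOST is the phpmyadmin image's variable for the database host):
# user-defined network, so container names resolve via Docker's DNS
docker network create test-network
docker run -d --name my_app_db --network test-network -e MYSQL_ROOT_PASSWORD=test123 mysql:latest
docker run -d --name my_phpmyadmin --network test-network -p 8090:80 -e PMA_HOST=my_app_db phpmyadmin/phpmyadmin:latest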
Dealing with 3,4,5... or maybe more containers will make you type commands to build them, run them, start/stop them. Here is where docker-compose comes in and proves to be very handy. Within a docker-compose.yml file, you can define a "composition" of inter-connecting containers and handle them with single commands like docker-compose up, docker-compose down etc...
Working example:
comes from here, but is slightly modified...
docker-compose.yml file:
version: '2'
services:
  mysql:
    image: mysql:latest
    container_name: phpmyadmin_testing_mysql
    environment:
      - MYSQL_ROOT_PASSWORD=test123
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
    container_name: phpmyadmin_testing
    volumes:
      - /sessions
    ports:
      - 8090:80
    environment:
      - PMA_ARBITRARY=1
      - TESTSUITE_PASSWORD=test123
    depends_on:
      - mysql
To run, simply use docker-compose up. To connect, use:
server: phpmyadmin_testing_mysql (the name of the MySQL container)
username: root
password: test123
Documentation for the mysql docker image says:
When a container is started for the first time [...] it will execute files with extensions .sh and .sql that are found in /docker-entrypoint-initdb.d. You can easily populate your mysql services by mounting a SQL dump into that directory and provide custom images with contributed data.
So at first I did this in my docker-compose.yml:
version: '2'
services:
  db:
    image: mysql:5.7
    volumes:
      - .:/docker-entrypoint-initdb.d:ro
When I ran docker-compose build and docker-compose up the container was created and the sql files in the current directory were executed. So far all good.
But if I want to deploy these containers to another machine (using docker-machine), mounting /docker-entrypoint-initdb.d as a volume won't work, since that machine won't have access to my machine's . directory.
So then I tried to extend the mysql:5.7 image:
FROM mysql:5.7
COPY ./*.sql /docker-entrypoint-initdb.d/
And did this in my docker-compose.yml:
version: '2'
services:
  db:
    build:
      context: .
      dockerfile: Dockerfile
However, when I then run docker-compose build and docker-compose up on the second machine and try to run my application, the *.sql files in the current directory aren't executed. None of my tables are created.
Why doesn't my second approach work?
EDIT:
Ah, wait. I have asked the wrong question. The problem is not that the second approach doesn't work, it is that the second approach doesn't work when running it on the local docker-machine running in Virtualbox. The second approach actually works when I use it on my host machine (i.e. not using docker-machine).
I found the issue. The problem was that I thought docker-compose rm -f destroyed any volumes attached to the containers, but I was wrong. So the containers I thought were starting for the first time were in fact reusing the database created by an earlier up. The SQL files weren't run because it wasn't actually the first time the containers started. Duh. Thanks Ken for pointing me in the right direction.
Turns out that not even using docker-compose rm -v removes the volumes. I had to list them with docker volume ls and then remove them manually with docker volume rm <volume>.
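If there are many of them, the dangling ones can be listed and removed in one (destructive, so double-check the list first) pass:
docker volume ls -q -f dangling=true                     # list candidates first
docker volume rm $(docker volume ls -q -f dangling=true)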
Another Docker-specific way to clean up volumes:
docker system prune
This will remove stopped containers, dangling images, and unused networks. Adding -a also removes all unused images rather than just dangling ones, and on newer Docker versions you must pass --volumes for it to touch volumes at all.
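For example (each command asks for confirmation before deleting anything):
docker system prune                 # stopped containers, dangling images, unused networks
docker system prune -a --volumes    # additionally: all unused images and unused volumes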
Using the official MySQL Docker image, I don't understand how to mount the data directory to a specifc point on the host. The Dockerfile of the image sets
VOLUME /var/lib/mysql
so database data should be stored "somewhere" on the host. I want to be more specific in my docker-compose file, so I tried the following:
mysql:
  image: mysql:5.7
  environment:
    - MYSQL_ROOT_PASSWORD=password
    - MYSQL_DATABASE=mydb
  volumes:
    - ./database/mysql:/var/lib/mysql
Starting with docker-compose up everything works fine, but the ./database/mysql directory on the host stays empty, whereas /var/lib/mysql in the container contains data. Is there a problem in my configuration? Or do I misunderstand how to use volumes?
docker-compose will always try to preserve data volumes, so that you don't lose any data within them. If you started with a data volume, then changed to a host volume, you may still get the data volume.
To correct this, run docker-compose stop && docker-compose rm -f, which will remove your containers and your data volumes (this will erase any data in your data volumes). On the next docker-compose up, you should see it using the host volume.
Edit: As of Compose 1.6 you can run docker-compose down -v instead of the two commands above.
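So the two equivalent clean-up paths look like this (both erase the data volumes, so only run them when you really want a fresh database):
# pre-1.6
docker-compose stop
docker-compose rm -f
# 1.6 and later: one command, -v also removes the volumes
docker-compose down -v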
I am trying to automate the installation and running of set of linked docker containers using fig. The configuration is composed of a container running RStudio linked to a container running MySQL, such that I can query the MySQL database from RStudio.
On first run, I would like to create the MySQL container from the base MySQL image, and populate it with a user and database. From the command line, something like this:
#Get the latest database file
wget -P /tmp http://ergast.com/downloads/f1db.sql.gz && gunzip -f /tmp/f1db.sql.gz
#Create the database container with user, password and database
docker run --name ergastdb -e MYSQL_USER=ergast -e MYSQL_ROOT_PASSWORD=mrd -e MYSQL_DATABASE=f1db -d mysql
#Populate the database
docker run -it --link=ergastdb:mysql -v /tmp:/tmp/import --rm mysql sh -c 'exec mysql -h$MYSQL_PORT_3306_TCP_ADDR -P$MYSQL_PORT_3306_TCP_PORT -uergast -pmrd f1db < /tmp/import/f1db.sql'
#Fire up RStudio and link to the MySQL db
docker run --name f1djd -p 8788:8787 --link ergastdb:db -d rocker/hadleyverse
If I could get hold of a database image with the data preloaded, I guess that something like the following fig.yml script could link the elements?
gdrive:
  command: echo created
  image: busybox
  volumes:
    - "~/Google Drive/shareddata:/gdrive"
dbdata:
  image: mysql_preloaded
  environment:
    - MYSQL_USER=ergast
    - MYSQL_ROOT_PASSWORD=mrd
    - MYSQL_DATABASE=f1db
rstudio:
  image: rocker/hadleyverse
  links:
    - dbdata:db
  ports:
    - "8788:8787"
  volumes_from:
    - gdrive
My question is, can I use a one-shot fig step to create the dbdata container, then perhaps mount a persistent volume, link to it and initialise the database, presumably as part of an initial fig up. If I then start and stop containers, I don't want to run the db initialisation step again, just link to the data volume container that contains the data I previously installed.
I also notice that the MySQL docker image looks like it will support arbitrary datadir definitions (Update entrypoints to read DATADIR from the MySQL configuration directly instead of assuming /var/lib/docker). As I understand it, the current definition of the MySQL image prevents mounting (and hence persisting) the database contents within the database container. I guess this might make it possible to create a mysql_preloaded image, but I don't think the latest version of the MySQL docker script has been pushed to Docker Hub just yet, and I can't quite see how fig would then be able to make use of this alternative pathway.
Some options:
Edit the fig.yml to run a custom command that is different than the default image command/entrypoint.
From http://www.fig.sh/yml.html (example):
command: bundle exec thin -p 3000
Start the container locally, modify it and then commit it as a new image.
Modify the MySQL image docker-entrypoint.sh file to do your custom initialization.
https://github.com/docker-library/mysql/blob/567028d4e177238c58760bcd69a8766a8f026e2a/5.7/docker-entrypoint.sh
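A lighter-weight variant of that last option: rather than editing docker-entrypoint.sh, the /docker-entrypoint-initdb.d mechanism quoted earlier in this thread can bake the dump into a custom image. A sketch, where f1db.sql is the dump downloaded in the question and the tag mysql_preloaded matches the name used in the fig.yml:
FROM mysql:5.7
# Executed automatically the first time a fresh data directory is initialised;
# MYSQL_USER / MYSQL_ROOT_PASSWORD / MYSQL_DATABASE still come from fig.yml at run time.
COPY f1db.sql /docker-entrypoint-initdb.d/
Note that this populates the database on the container's first start rather than at build time, because the base image's VOLUME keeps the data directory outside the image, which is exactly the constraint the answer below works around by removing the VOLUME line.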
Couldn't you just roll your own version of the MySQL docker image? The official one from MySQL "upstream" is available at https://github.com/mysql/mysql-docker/blob/mysql-server/5.7/Dockerfile
What if you simply make your own copy of that, remove the VOLUME line (line 11) and then you can
docker build -t my_mysql .
docker run -d --name=empty_db my_mysql ...
# add data to the database running in the container
docker commit empty_db primed_db
docker rm -v empty_db
docker run -d --name=instance1 primed_db
docker run -d --name=instance2 primed_db
which should leave you with two running "identical" but fully isolated instances.