Docker integration test with static data at launch - mysql

I have been struggling with this issue for a while now. I want to do a performance test using a specific set of data.
To achieve this I am using Docker-compose and I have exported a couple of .sql files.
When I first build and run the container using docker-compose build and docker-compose up, the dataset is fine. Then I run my test (which also inserts data). After running my test I want to perform it again, so I relaunch the container using docker-compose up. But this time, for some reason I don't understand, the data that was inserted the last time (by my test) is still there, so I get different behaviour.
At the moment I have the following Dockerfile:
FROM mysql:5.7
ENV MYSQL_DATABASE=dev_munisense1 \
    MYSQL_ROOT_PASSWORD=pass
EXPOSE 3306
ADD docker/data/ /docker-entrypoint-initdb.d/
I did this because I read that the mysql Docker image runs everything in /docker-entrypoint-initdb.d/ when the database is initialized. The first time it works properly.
I have also tried what these posts suggested:
How do i migrate mysql data directory in docker container?
http://txt.fliglio.com/2013/11/creating-a-mysql-docker-container/
How to create populated MySQL Docker Image on build time
How to make a docker image with a populated database for automated tests?
And a couple of other, similar posts.
None of them seem to work.
How can I make sure the dataset is exactly the same each time I launch the container, without having to rebuild the image each time (which takes quite long because of the large dataset)?
Thanks in advance
EDIT:
I have also tried running the container with different arguments like:
docker-compose up --force-recreate --build mysql, but without success. The container is rebuilt and restarted, but the database is still affected by my test. Currently the only solution to my problem is to remove the entire container and image.

I managed to fix the issue (with the mysql image) by doing the following:
1. Change the mount point of the SQL storage (this is what actually caused the problem). I used the solution suggested here: How to create populated MySQL Docker Image on build time, but I did it by running a sed command: RUN sed -i 's|/var/lib/mysql|/var/lib/mysql2|g' /etc/mysql/my.cnf
2. Add my scripts to a folder inside the container.
3. Run the import.sh script, which inserts the data using the daemon (using the wait-for-it.sh script).
4. Remove the SQL scripts.
5. Expose the port as usual.
The Dockerfile looks like this (the variables are used to select different SQL files; I wanted multiple versions of the image):
FROM mysql:5.5.54
ADD https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh /utils/wait-for-it.sh
COPY docker/import.sh /usr/local/bin/
RUN sed -i 's|/var/lib/mysql|/var/lib/mysql2|g' /etc/mysql/my.cnf
ARG MYSQL_DATABASE
ARG MYSQL_USER
ARG MYSQL_PASSWORD
ARG MYSQL_ROOT_PASSWORD
ARG MYSQL_ALLOW_EMPTY_PASSWORD
ARG DEVICE_INFORMATION
ARG LAST_NODE_STATUS
ARG V_NODE
ARG NETWORKSTATUS_EVENTS
ENV MYSQL_DATABASE=$MYSQL_DATABASE \
    MYSQL_USER=$MYSQL_USER \
    MYSQL_PASSWORD=$MYSQL_PASSWORD \
    MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD \
    DEVICE_INFORMATION=$DEVICE_INFORMATION \
    LAST_NODE_STATUS=$LAST_NODE_STATUS \
    V_NODE=$V_NODE \
    MYSQL_ALLOW_EMPTY_PASSWORD=$MYSQL_ALLOW_EMPTY_PASSWORD
#Set up tables
COPY docker/data/$DEVICE_INFORMATION.sql /usr/local/bin/device_information.sql
COPY docker/data/$NETWORKSTATUS_EVENTS.sql /usr/local/bin/networkstatus_events.sql
COPY docker/data/$LAST_NODE_STATUS.sql /usr/local/bin/last_node_status.sql
COPY docker/data/$V_NODE.sql /usr/local/bin/v_node.sql
RUN chmod 777 /usr/local/bin/import.sh && chmod 777 /utils/wait-for-it.sh && \
/bin/bash /entrypoint.sh mysqld --user='root' & /bin/bash /utils/wait-for-it.sh -t 0 localhost:3306 -- /usr/local/bin/import.sh; exit
RUN rm -f /usr/local/bin/*.sql
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306
CMD ["mysqld"]
The import.sh script looks like this:
#!/bin/bash
echo "Going to insert the device information"
mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DATABASE < /usr/local/bin/device_information.sql
echo "Going to insert the last_node_status"
mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DATABASE < /usr/local/bin/last_node_status.sql
echo "Going to insert the v_node"
mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DATABASE < /usr/local/bin/v_node.sql
echo "Going to insert the networkstatus_events"
mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DATABASE < /usr/local/bin/networkstatus_events.sql
echo "Database now has the following tables"
mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DATABASE --execute="SHOW TABLES;"
So now all I have to do to start my performance tests is
#!/usr/bin/env bash
echo "Shutting down previous containers"
docker-compose -f docker-compose.yml down
docker-compose -f docker-compose-test-10k-half.yml down
docker-compose -f docker-compose-test-100k-half.yml down
docker-compose -f docker-compose-test-500k-half.yml down
docker-compose -f docker-compose-test-1m-half.yml down
echo "Launching rabbitmq container"
docker-compose up -d rabbitmq & sh wait-for-it.sh -t 0 -h localhost -p 5672 -- sleep 5;
echo "Going to execute 10k test"
docker-compose -f docker-compose-test-10k-half.yml up -d mysql_10k & sh wait-for-it.sh -t 0 -h localhost -p 3306 -- sleep 5 && ./networkstatus-event-service --env=performance-test --run-once=true;
docker-compose -f docker-compose-test-10k-half.yml stop mysql_10k
This is followed by a couple more of these blocks (slightly different, because of the different container names).

Running docker-compose down after your tests will destroy everything associated with your docker-compose.yml
Docker Compose is a container life cycle manager and by default it tries to keep everything across multiple runs. As Stas Makarov mentions, there is a VOLUME defined in the mysql image that persists the data outside of the container.
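In practice this means the state can be reset without rebuilding the image by also removing the volumes. A minimal sketch, run from the directory containing the docker-compose.yml:
# -v removes named volumes and the anonymous volume behind /var/lib/mysql,
# so the next start re-runs the scripts in /docker-entrypoint-initdb.d/
docker-compose down -v
docker-compose up -d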

Related

Run bash script after MySQL Docker container starts (every time, not just the initial time)

I am trying to get a bash script to run when my MySQL container starts. Not the initial time when there are no databases to create, but subsequent times (so placing the files in docker-entrypoint-initdb.d will not work).
My objective is to re-build my container with some database upgrade scripts (schema changes, etc). The thought being I deploy the container with the initial scripts and deploy subsequent updates with my database upgrades as the application ages. It seems like this would be an easy task, but I am having trouble.
Most of the things I have tried came from suggestions I found googling. Here are things I have tried with no success:
Modify the entrypoint.sh (and /usr/local/bin/docker-entrypoint.sh) in the Dockerfile build to add in a call to my script.
This does not even seem to be called, which I suspect is a sign, but my database starts (also note it creates my schema fine the first time)
I do this with a RUN sed in my Dockerfile and have confirmed my changes exist after the container starts
Tried running my script on startup by:
adding a script to /etc/rc.d/rc.local
adding a restart cron job (well, I tried, but the Oracle Linux distro doesn’t have it)
modifying /etc/bashrc
adding a script to /etc/profile.d/
appending to /etc/profile.d/sh.local
Tried adding a command to my docker-compose.yml, but it said that wasn’t found.
My actual database upgrade script works great when I log in to the container manually and execute it. All of my experiments above have been just touching a file or echoing to a file as a proof of concept. Once I get that working, I'll add in the logic to wait for MySQL to start and then run my actual script.
Dockerfile:
FROM mysql:8.0.32
VOLUME /var/lib/mysql
## these are my experiments
RUN sed -i '/main "$@"/a echo "run the script here" > /usr/tmp/XXX' /entrypoint.sh
RUN sed -i '/main "$@"/a echo "run the script here" > /usr/tmp/XXX' /usr/local/bin/docker-entrypoint.sh
RUN echo "touch /usr/tmp/XXX" >> /etc/profile.d/sh.local
RUN sed -i '/doublesourcing/a echo "run the script here" > /usr/tmp/XXX' /etc/bashrc
I build and run it using:
docker build -t mysql-database -f Dockerfile .
docker run -it --rm -d -p 3306:3306 --name database -v ~/Docker-Volume-Share/database:/var/lib/mysql mysql-database
Some other information that may be useful
I am using a volume on the host. I’ve run my experiments with an existing schema as well as by deleting this directory so it starts fresh
I am using mysql:8.0.32 as the image (Oracle Linux Server release 8.7)
Docker version 20.10.22, build 3a2c30b
Host OS is macOS 13.2.1
Thanks in advance for any tips and guidance!
It sounds like you are trying to run a script after the MySQL container has started and the initial setup has been completed. Here are a few suggestions:
1. Use a custom entrypoint script
You can create a custom entrypoint script that runs after the default entrypoint script included in the MySQL container image. In your Dockerfile, copy your custom entrypoint script into the container and set it as the entrypoint. Here's an example:
FROM mysql:8.0.32
COPY custom-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/custom-entrypoint.sh
ENTRYPOINT ["custom-entrypoint.sh"]
In your custom entrypoint script, you can check if the database already exists and run your upgrade script if it does. Here's an example:
#!/bin/bash
set -e
# Run the default entrypoint script
/docker-entrypoint.sh "$@"
# Check if the database already exists
if mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "use my_database"; then
# Run your upgrade script
/path/to/upgrade-script.sh
fi
2. Use a Docker Compose file
If you're using Docker Compose, you can specify a command to run after the container has started. Here's an example:
version: '3'
services:
  database:
    image: mysql:8.0.32
    volumes:
      - ~/Docker-Volume-Share/database:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: mypassword
    command: >
      bash -c "
      /docker-entrypoint.sh mysqld &
      while ! mysqladmin ping -hlocalhost --silent; do sleep 1; done;
      /path/to/upgrade-script.sh
      "
This command runs the default entrypoint script in the background, waits for MySQL to start, and then runs your upgrade script.
I hope these suggestions help you achieve your goal!

Docker containers communication - can't access database [duplicate]

I am deploying a few different docker containers, mysql being the first one. I want to run scripts as soon as database is up and proceed to building other containers. The script has been failing because it was trying to run when the entrypoint script, which sets up mysql (from this official mysql container), was still running.
sudo docker run --name mysql -e MYSQL_ROOT_PASSWORD=MY_ROOT_PASS -p 3306:3306 -d mysql
[..] wait for mysql to be ready [..]
mysql -h 127.0.0.1 -P 3306 -u root --password=MY_ROOT_PASS < MY_SQL_SCRIPT.sql
Is there a way to wait for a signal that the mysql entrypoint setup script has finished inside the docker container? A bash sleep seems like a suboptimal solution.
EDIT: I went for a bash script like this. Not the most elegant and kind of brute force, but it works like a charm. Maybe someone will find it useful.
OUTPUT="Can't connect"
while [[ $OUTPUT == *"Can't connect"* ]]
do
OUTPUT=$(mysql -h $APP_IP -P :$APP_PORT -u yyy --password=xxx < ./my_script.sql 2>&1)
done
You can install the mysql-client package and use mysqladmin to ping the target server. This is useful when working with multiple docker containers. Combine it with sleep to create a simple wait loop:
while ! mysqladmin ping -h"$DB_HOST" --silent; do
  sleep 1
done
This little bash loop waits for mysql to be open; it shouldn't require any extra packages to be installed:
until nc -z -v -w30 $CFG_MYSQL_HOST 3306
do
  echo "Waiting for database connection..."
  # wait for 5 seconds before checking again
  sleep 5
done
This was more or less mentioned in comments to other answers, but I think it deserves its own entry.
First of all you can run your container in the following manner:
docker run --name mysql -e MYSQL_ROOT_PASSWORD=MY_ROOT_PASS --health-cmd='mysqladmin ping --silent' -d mysql
There is also an equivalent in the Dockerfile.
With that command your docker ps and docker inspect will show you health status of your container. For mysql in particular this method has the advantage of mysqladmin being available inside the container, so you do not need to install it on the docker host.
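For reference, the Dockerfile equivalent mentioned above would look roughly like this (a sketch mirroring the same ping command; the interval and retry values are arbitrary):
FROM mysql:5.7
# report the container as healthy once mysqladmin can reach the server
HEALTHCHECK --interval=5s --timeout=3s --retries=30 \
  CMD mysqladmin ping --silent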
Then you can simply loop in a bash script to wait for the status to become healthy. The following bash script was created by Dennis.
function getContainerHealth {
  docker inspect --format "{{.State.Health.Status}}" $1
}

function waitContainer {
  while STATUS=$(getContainerHealth $1); [ $STATUS != "healthy" ]; do
    if [ $STATUS == "unhealthy" ]; then
      echo "Failed!"
      exit -1
    fi
    printf .
    lf=$'\n'
    sleep 1
  done
  printf "$lf"
}
Now you can do this in your script:
waitContainer mysql
and your script will wait until the container is up and running. The script will exit if the container becomes unhealthy, which can happen if, for example, the docker host runs out of memory and mysql cannot allocate enough of it for itself.
I've found that using the mysqladmin ping approach isn't always reliable, especially if you're bringing up a new DB. In that case, even if you're able to ping the server, you might be unable to connect if the user/privilege tables are still being initialized. Instead I do something like the following:
while ! docker exec db-container mysql --user=foo --password=bar -e "SELECT 1" >/dev/null 2>&1; do
  sleep 1
done
So far I haven't encountered any problems with this method. I see that something similar was suggested by VinGarcia in a comment to one of the mysqladmin ping answers.
Sometimes the problem with checking the port is that the port may be open, but the database is not ready yet.
Other solutions require that you have mysql or a mysql client installed on your host machine, but you already have it inside the Docker container, so I prefer to use something like this:
Option 1:
while ! docker exec mysql mysqladmin --user=root --password=pass --host "127.0.0.1" ping --silent &> /dev/null ; do
  echo "Waiting for database connection..."
  sleep 2
done
Option 2 (from @VinGarcia):
while ! docker exec container_name mysql --user=root --password=pass -e "status" &> /dev/null ; do
  echo "Waiting for database connection..."
  sleep 2
done
A one-liner using curl, found on all linux distributions:
while ! curl -o - db-host:3306; do sleep 1; done
The following health-check works for all my mysql containers:
db:
  image: mysql:5.7.16
  healthcheck:
    test: ["CMD-SHELL", 'mysql --database=$$MYSQL_DATABASE --password=$$MYSQL_ROOT_PASSWORD --execute="SELECT count(table_name) > 0 FROM information_schema.tables;" --skip-column-names -B']
    interval: 30s
    timeout: 10s
    retries: 4
  extends:
    file: docker-compose-common-config.yml
    service: common_service
I am not sure if anyone has posted this yet; it doesn't look like it, so... there is an option in mysqladmin that features a wait: it handles testing of the connection, retries internally, and returns success upon completion.
sudo docker run --name mysql -e MYSQL_ROOT_PASSWORD=MY_ROOT_PASS -p 3306:3306 -d mysql
mysqladmin ping -h 127.0.0.1 -u root --password=MY_ROOT_PASS --wait=30 && mysql -h 127.0.0.1 -P 3306 -u root --password=MY_ROOT_PASS < MY_SQL_SCRIPT.sql
The important piece is mysqladmin ping -h 127.0.0.1 -u root --password=MY_ROOT_PASS --wait=30 -v, with --wait being the flag that waits until the connection is successful and the number being the number of retry attempts.
Ideally you would run that command from inside the docker container, but I didn't want to modify the original posters command too much.
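For instance, sticking with the container name from the run command above, the same wait can be done inside the container itself, so no mysql client is needed on the host (a sketch):
# block until the server inside the container accepts connections (up to 30 retries)
docker exec mysql mysqladmin ping -u root --password=MY_ROOT_PASS --wait=30 --silent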
Here is how it is used in my Makefile for initialization:
db.initialize: db.wait db.initialize

db.wait:
	docker-compose exec -T db mysqladmin ping -u $(DATABASE_USERNAME) -p$(DATABASE_PASSWORD) --wait=30 --silent

db.initialize:
	docker-compose exec -T db mysql -u $(DATABASE_USERNAME) -p$(DATABASE_PASSWORD) $(DATABASE_NAME) < dev/sql/base_instance.sql
I had the same issue when my Django container tried to connect to the mysql container just after it started. I solved it using vishnubob's wait-for-it.sh script. It's a shell script that waits for a host and port to be ready before continuing. Here is the example I use for my application.
./wait-for-it.sh \
-h $(docker inspect --format '{{ .NetworkSettings.IPAddress }}' $MYSQL_CONTAINER_NAME) \
-p 3306 \
-t 90
With that call I wait a maximum of 90 seconds (it continues as soon as the container is ready) on port 3306 (the default mysql port) and on the host assigned by docker to my MYSQL_CONTAINER_NAME. The script has more options, but for me it worked with these three.
If the docker container waiting for a mysql container is based on a python image (for instance for a Django application), you can use the code below.
Advantages are:
It's not based on wait-for-it.sh, which only waits for the IP and port of mysql to be reachable; that doesn't automatically mean that the mysql initialization has finished.
It's not a shell script based on a mysql or mysqladmin executable that must be present in your container: since your container is based on a python image, that would require installing mysql on top of the image. With the solution below, you use the technology that is already present in the container: pure python.
Code:
import time
import pymysql

def database_not_ready_yet(error, checking_interval_seconds):
    print('Database initialization has not yet finished. Retrying over {0} second(s). The encountered error was: {1}.'
          .format(checking_interval_seconds,
                  repr(error)))
    time.sleep(checking_interval_seconds)

def wait_for_database(host, port, db, user, password, checking_interval_seconds):
    """
    Wait until the database is ready to handle connections.
    This is necessary to ensure that the application docker container
    only starts working after the MySQL database container has finished initializing.
    More info: https://docs.docker.com/compose/startup-order/ and https://docs.docker.com/compose/compose-file/#depends_on .
    """
    print('Waiting until the database is ready to handle connections....')
    database_ready = False
    while not database_ready:
        db_connection = None
        try:
            db_connection = pymysql.connect(host=host,
                                            port=port,
                                            db=db,
                                            user=user,
                                            password=password,
                                            charset='utf8mb4',
                                            connect_timeout=5)
            print('Database connection made.')
            db_connection.ping()
            print('Database ping successful.')
            database_ready = True
            print('The database is ready for handling incoming connections.')
        except pymysql.err.OperationalError as err:
            database_not_ready_yet(err, checking_interval_seconds)
        except pymysql.err.MySQLError as err:
            database_not_ready_yet(err, checking_interval_seconds)
        except Exception as err:
            database_not_ready_yet(err, checking_interval_seconds)
        finally:
            if db_connection is not None and db_connection.open:
                db_connection.close()
Usage:
Add this code into a python file (wait-for-mysql-db.py for instance) inside your application's source code.
Write another python script (startup.py for instance) that first executes the above code, and afterwards starts up your application.
Make sure your application container's Dockerfile packs these two python scripts together with the application's source code into a Docker image.
In your docker-compose file, configure your application container with: command: ["python3", "startup.py"].
Note that this solution is made for a MySQL database. You'll need to adapt it slightly for another database.
I developed a new solution for this issue, based on a different approach. All the approaches I found rely on a script that tries over and over to connect to the database, or tries to establish a TCP connection with the container. The full details can be found in the waitdb repository, but my solution relies on the log retrieved from the container: the script waits until the log fires the message ready for connections. The script can identify whether the container is starting for the first time; in that case it waits until the initial database script has been executed and the database has been restarted, waiting again for a new ready for connections message. I tested this solution on MySQL 5.7 and MySQL 8.0.
The script itself (wait_db.sh):
#!/bin/bash
STRING_CONNECT="mysqld: ready for connections"
findString() {
  ($1 logs -f $4 $5 $6 $7 $8 $9 2>&1 | grep -m $3 "$2" &) | grep -m $3 "$2" > /dev/null
}
echo "Waiting startup..."
findString $1 "$STRING_CONNECT" 1 $2 $3 $4 $5 $6 $7
$1 logs $2 $3 $4 $5 2>&1 | grep -q "Initializing database"
if [ $? -eq 0 ] ; then
  echo "Almost there..."
  findString $1 "$STRING_CONNECT" 2 $2 $3 $4 $5 $6 $7
fi
echo "Server is up!"
The script can be used with Docker Compose or with Docker itself. I hope the examples below make the usage clear:
Example 01: Using with Docker Compose
SERVICE_NAME="mysql" && \
docker-compose up -d $SERVICE_NAME && \
./wait_db.sh docker-compose --no-color $SERVICE_NAME
Example 02: Using with Docker
CONTAINER_NAME="wait-db-test" && \
ISO_NOW=$(date -uIs) && \
docker run --rm --name $CONTAINER_NAME \
-e MYSQL_ROOT_PASSWORD=$ROOT_PASSWORD \
-d mysql:5.7 && \
./wait_db.sh docker --since "$ISO_NOW" $CONTAINER_NAME
Example 03: A full example (the test case)
A full example can be found in the test case of the repository. The test case starts up a new MySQL instance, creates a dummy database, waits until everything has started, and then fires a select to check that everything is fine. After that it restarts the container, waits for it to start again, and then fires a new select to check that it's ready for connections.
Here's how I incorporated Adam's solution into my docker-compose based project:
I created a bash file titled db-ready.sh in my server container folder (the contents of which are copied into my server container):
#!/bin/bash
until nc -z -v -w30 $MYSQL_HOST 3306
do
  echo "Waiting a second until the database is receiving connections..."
  # wait for a second before checking again
  sleep 1
done
I can then run docker-compose run server sh ./db-ready.sh && docker-compose run server yarn run migrate to ensure that when I run my migrate task within my server container, I know the DB will be accepting connections.
I like this approach because the bash file is separate from any command I want to run. I could easily run db-ready.sh before any other DB-using task.
I can recommend using /usr/bin/mysql --user=root --password=root --execute "SHOW DATABASES;" in the healthcheck script instead of mysqladmin ping. This waits for the real initialization, i.e. until the service is ready for client connections.
Example:
docker run -d --name "test-mysql-client" -p 0.0.0.0:3306:3306 -e MYSQL_PASSWORD=password -e MYSQL_USER=user -e MYSQL_ROOT_PASSWORD=root --health-cmd="/usr/bin/mysql --user=root --password=root --execute \"SHOW DATABASE;\"" --health-interval=1s --health-retries=60 --health-timeout=10s -e MYSQL_DATABASE=db mysql:latest```
Combining flamemyst's answer and Nathan Arthur's comment, I believe this is the most convenient approach:
CONTAINER_MYSQL='' # name of the MySQL container
CONTAINER_DB_HOST='127.0.0.1'
CONTAINER_DB_PORT=3306
MYSQL_USER='' # user name if there is, normally 'root'
MYSQL_PWD='' # password you set
is_mysql_alive() {
  docker exec -it ${CONTAINER_MYSQL} \
    mysqladmin ping \
    --user=${MYSQL_USER} \
    --password=${MYSQL_PWD} \
    --host=${CONTAINER_DB_HOST} \
    --port=${CONTAINER_DB_PORT} \
    > /dev/null

  returned_value=$?
  echo ${returned_value}
}

until [ "$(is_mysql_alive)" -eq 0 ]
do
  sleep 2
  echo "Waiting for MySQL to be ready..."
done

anything_else_to_do
Basically, it checks whether mysqladmin inside the MySQL container can reach the server; if it can, MySQL should be up.
Building a bit on Mihai Crăiță's excellent answer above, I added the curl option to enable HTTP/0.9 (which is disabled by default now) and hid the output to reduce log "noise" during startup:
server="MyServerName"
echo "Waiting for MySQL at ${server}"
while ! curl --http0.9 -o - "${server}:3306" &> /dev/null; do sleep 1; done
https://github.com/docker-library/mysql/blob/master/5.7/docker-entrypoint.sh
docker-entrypoint.sh doesn't support merging customized .sql files yet.
I think you can modify docker-entrypoint.sh to merge in your SQL so that it is executed once the mysql instance is ready.
In your ENTRYPOINT script, you have to check whether you have a valid MySQL connection or not.
This solution does not require you to install a MySQL client on the container, and when running the container with php:7.0-fpm, running nc was not an option either, because it would have to be installed as well. Also, checking whether the port is open does not necessarily mean that the service is running and exposed correctly.
So in this solution, I will show you how to run a PHP script that checks whether a MySQL container is able to take connections. If you want to know why I think this is a better approach, check my comment here.
File entrypoint.sh
#!/bin/bash
cat << EOF > /tmp/wait_for_mysql.php
<?php
\$connected = false;
while(!\$connected) {
    try{
        \$dbh = new pdo(
            'mysql:host=mysql:3306;dbname=db_name', 'db_user', 'db_pass',
            array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION)
        );
        \$connected = true;
    }
    catch(PDOException \$ex){
        error_log("Could not connect to MySQL");
        error_log(\$ex->getMessage());
        error_log("Waiting for MySQL Connection.");
        sleep(5);
    }
}
EOF
php /tmp/wait_for_mysql.php
# Rest of entry point bootstrapping
By running this, you are essentially blocking any bootstrapping logic of your container UNTIL you have a valid MySQL Connection.
I use the following code:
export COMPOSE_PROJECT_NAME=web;
export IS_DATA_CONTAINER_EXISTS=$(docker volume ls | grep ${COMPOSE_PROJECT_NAME}_sqldata);
docker-compose up -d;
docker-compose ps;
export NETWORK_GATEWAY=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.Gateway}}{{end}}' ${COMPOSE_PROJECT_NAME}_webserver1_con);

Dockerfile and background running mysql server

I have problems...
Firstly, I have a Dockerfile where I define all the steps, like updating the system, installing mysql, and changing the mysql root password.
Then I set an ENTRYPOINT so that on start my container will run the mysql server.
I have 2 problems:
- When I start the container, it restarts every 10 seconds.
- When I use exec to enter the container, it says: "No docker with such id".
This is my Dockerfile:
# Set the base image
FROM ubuntu:14.04
MAINTAINER redigaffi
RUN apt-get update \
&& apt-get -y install mysql-server \
&& service mysql start \
&& mysqladmin -u root password FEGj5nmKYRha
ENTRYPOINT service mysql start \
&& bash
#VOLUME /root/mysql:/var/lib/mysql:rw Please run -v running this docker since Dockerfile has not access to host files
EXPOSE 3306
I put bash at the end of the entrypoint because without it the container just closes; with bash it remains in the background.
I have tried many commands to run this container:
docker run -d df0bb600c10f /bin/bash # This one closes the container after 2 seconds
docker run -d --restart=always df0bb600c10f /bin/bash # This one remains, but restarts every 10 seconds and I can't access this container using exec.
Please help, what is wrong ?
Thank you!
Try using supervisor. This article shows the steps.
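A minimal sketch of that approach, assuming the Ubuntu 14.04 base from the question and the stock layout of the Ubuntu supervisor package (the paths are the package defaults, not taken from the article):
FROM ubuntu:14.04
RUN apt-get update \
 && apt-get -y install mysql-server supervisor
# register mysqld with supervisord; mysqld_safe stays attached so supervisor can manage it
RUN printf '[program:mysqld]\ncommand=/usr/bin/mysqld_safe\nautorestart=true\n' \
    > /etc/supervisor/conf.d/mysqld.conf
EXPOSE 3306
# -n keeps supervisord itself in the foreground so the container does not exit
CMD ["/usr/bin/supervisord", "-n", "-c", "/etc/supervisor/supervisord.conf"]
With a long-running foreground process as the container's main command, the container no longer exits (and gets restarted) as soon as the shell returns, and docker exec works against it.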

Can I use fig to initialise a persisting database in docker?

I am trying to automate the installation and running of set of linked docker containers using fig. The configuration is composed of a container running RStudio linked to a container running MySQL, such that I can query the MySQL database from RStudio.
On first run, I would like to create the MySQL container from the base MySQL image, and populate it with a user and database. From the command line, something like this:
#Get the latest database file
wget -P /tmp http://ergast.com/downloads/f1db.sql.gz && gunzip -f /tmp/f1db.sql.gz
#Create the database container with user, password and database
docker run --name ergastdb -e MYSQL_USER=ergast -e MYSQL_ROOT_PASSWORD=mrd -e MYSQL_DATABASE=f1db -d mysql
#Populate the database
docker run -it --link=ergastdb:mysql -v /tmp:/tmp/import --rm mysql sh -c 'exec mysql -h$MYSQL_PORT_3306_TCP_ADDR -P$MYSQL_PORT_3306_TCP_PORT -uergast -pmrd f1db < /tmp/import/f1db.sql'
#Fire up RStudio and link to the MySQL db
docker run --name f1djd -p 8788:8787 --link ergastdb:db -d rocker/hadleyverse
If I could get hold of a database image with the data preloaded, I guess that something like the following fig.yml script could link the elements?
gdrive:
  command: echo created
  image: busybox
  volumes:
    - "~/Google Drive/shareddata:/gdrive"
dbdata:
  image: mysql_preloaded
  environment:
    MYSQL_USER=ergast
    MYSQL_ROOT_PASSWORD=mrd
    MYSQL_DATABASE=f1db
rstudio:
  image: rocker/hadleyverse
  links:
    - dbdata:db
  ports:
    - "8788:8787"
  volumes_from:
    - gdrive
My question is, can I use a one-shot fig step to create the dbdata container, then perhaps mount a persistent volume, link to it and initialise the database, presumably as part of an initial fig up. If I then start and stop containers, I don't want to run the db initialisation step again, just link to the data volume container that contains the data I previously installed.
I also notice that the MySQL docker image looks like it will support arbitrary datadir definitions (Update entrypoints to read DATADIR from the MySQL configuration directly instead of assuming /var/lib/docker). As I understand it, the current definition of the MySQL image prevents mounting (and hence persisting) the database contents within the database container. I guess this might make it possible to create a mysql_preloaded image, but I don't think the latest version of the MySQL docker script has been pushed to dockerhub just yet and I can't quite think my way to how fig might then be able to make use of this alternative pathway?
Some options:
Edit the fig.yml to run a custom command that is different than the default image command/entrypoint.
From http://www.fig.sh/yml.html (example):
command: bundle exec thin -p 3000
Start the container locally, modify it and then commit it as a new image.
Modify the MySQL image docker-entrypoint.sh file to do your custom initialization.
https://github.com/docker-library/mysql/blob/567028d4e177238c58760bcd69a8766a8f026e2a/5.7/docker-entrypoint.sh
Couldn't you just roll your own version of the MySQL docker image? The official one from MySQL "upstream" is available at https://github.com/mysql/mysql-docker/blob/mysql-server/5.7/Dockerfile
What if you simply make your own copy of that, remove the VOLUME line (line 11) and then you can
docker build -t my_mysql .
docker run -d --name=empty_db my_mysql ...
# add data to the database running in the container
docker commit empty_db primed_db
docker rm -v empty_db
docker run -d --name=instance1 primed_db
docker run -d --name=instance2 primed_db
which should leave you with two running "identical" but fully isolated instances.
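To verify the isolation you could, for instance, write to one instance and confirm the other is unaffected (a sketch; the root password and database name are hypothetical):
docker exec instance1 mysql -uroot -pMY_PASS -e "CREATE DATABASE scratch;"
# 'scratch' shows up in instance1 only, not in instance2
docker exec instance2 mysql -uroot -pMY_PASS -e "SHOW DATABASES;"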
