Run bash script after MySQL Docker container starts (every time, not just the initial time) - mysql

I am trying to get a bash script to run when my MySQL container starts. Not the initial time when there are no databases to create, but subsequent times (so placing the files in docker-entrypoint-initdb.d will not work).
My objective is to re-build my container with some database upgrade scripts (schema changes, etc). The thought being I deploy the container with the initial scripts and deploy subsequent updates with my database upgrades as the application ages. It seems like this would be an easy task, but I am having trouble.
Most of the things I have tried came from suggestions I found googling. Here are things I have tried with no success:
Modify the entrypoint.sh (and /usr/local/bin/docker-entrypoint.sh) in the Dockerfile build to add in a call to my script.
This does not even seem to be called, which I suspect is a sign, but my database starts (also note it creates my schema fine the first time)
I do this with a RUN sed in my Dockerfile and have confirmed my changes exist after the container starts
Tried running my script on startup by:
- adding a script to /etc/rc.d/rc.local
- adding a cron job that runs at reboot (well, I tried, but the Oracle Linux distro doesn't ship cron)
- modifying /etc/bashrc
- adding a script to /etc/profile.d/
- appending to /etc/profile.d/sh.local
Tried adding a command to my docker-compose.yml, but it said the command wasn't found.
My actual database upgrade script works great when I log in to the container manually and execute it. All of my experiments above have been just touching a file or echoing to a file as a proof of concept. Once I get that working, I'll add in the logic to wait for MySQL to start and then run my actual script.
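A minimal sketch of that wait-then-run logic (assuming the root password is available in the container as MYSQL_ROOT_PASSWORD) would be:
#!/bin/bash
# Wait until mysqld accepts connections, then run the actual upgrade
until mysqladmin ping -h localhost --silent; do sleep 1; done
mysql -uroot -p"$MYSQL_ROOT_PASSWORD" < /path/to/upgrade.sql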
Dockerfile:
FROM mysql:8.0.32
VOLUME /var/lib/mysql
## these are my experiments
RUN sed -i '/main "$@"/a echo "run the script here" > /usr/tmp/XXX' /entrypoint.sh
RUN sed -i '/main "$@"/a echo "run the script here" > /usr/tmp/XXX' /usr/local/bin/docker-entrypoint.sh
RUN echo "touch /usr/tmp/XXX" >> /etc/profile.d/sh.local
RUN sed -i '/doublesourcing/a echo "run the script here" > /usr/tmp/XXX' /etc/bashrc
I build and run it using:
docker build -t mysql-database -f Dockerfile .
docker run -it --rm -d -p 3306:3306 --name database -v ~/Docker-Volume-Share/database:/var/lib/mysql mysql-database
Some other information that may be useful:
- I am using a volume on the host. I've run my experiments with an existing schema as well as by deleting this directory so it starts fresh.
- I am using mysql:8.0.32 as the image (Oracle Linux Server release 8.7).
- Docker version 20.10.22, build 3a2c30b.
- Host OS is macOS 13.2.1.
Thanks in advance for any tips and guidance!

It sounds like you are trying to run a script after the MySQL container has started and the initial setup has been completed. Here are a few suggestions:
1. Use a custom entrypoint script
You can create a custom entrypoint script that wraps the default entrypoint script included in the MySQL container image. In your Dockerfile, copy your custom entrypoint script into the container and set it as the entrypoint. Here's an example:
FROM mysql:8.0.32
COPY custom-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/custom-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/custom-entrypoint.sh"]
# Declaring a new ENTRYPOINT resets the inherited CMD, so restore it
CMD ["mysqld"]
In your custom entrypoint script, start the default entrypoint in the background, wait for the server to accept connections, then check whether the database already exists and run your upgrade script if it does. Here's an example:
#!/bin/bash
set -e
# Run the default entrypoint script (and mysqld) in the background
/usr/local/bin/docker-entrypoint.sh "$@" &
# Wait for MySQL to accept connections
until mysqladmin ping -h localhost --silent; do sleep 1; done
# Check if the database already exists
if mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "USE my_database"; then
  # Run your upgrade script
  /path/to/upgrade-script.sh
fi
# Keep the container attached to the mysqld process
wait
2. Use a Docker Compose file
If you're using Docker Compose, you can override the service's command to start MySQL and then run your script once it is up. Here's an example:
version: '3'
services:
  database:
    image: mysql:8.0.32
    volumes:
      - ~/Docker-Volume-Share/database:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: mypassword
    command: >
      bash -c "
      /usr/local/bin/docker-entrypoint.sh mysqld &
      while ! mysqladmin ping -h localhost --silent; do sleep 1; done;
      /path/to/upgrade-script.sh;
      wait
      "
This command runs the default entrypoint script in the background, waits for MySQL to start, runs your upgrade script, and then waits on the background mysqld process so the container stays up.
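Alternatively, with newer Compose versions that support depends_on conditions, you can leave the database service untouched and run the upgrade from a one-shot companion service. This is only a sketch; the service names and the upgrade-script path are illustrative:
services:
  database:
    image: mysql:8.0.32
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      retries: 10
  migrate:
    image: mysql:8.0.32
    depends_on:
      database:
        condition: service_healthy
    volumes:
      - ./upgrade-script.sh:/upgrade-script.sh
    entrypoint: ["bash", "/upgrade-script.sh"]
The migrate container starts only after the database reports healthy, executes the script (which would connect to the database host over the Compose network), and exits.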
I hope these suggestions help you achieve your goal!

Related

Why can't my dockerfile start the db with "CMD /etc/init.d/mysql start"?

I downloaded a docker image with mariadb and phpmyadmin, then wrote the two dockerfiles below:
# dockerfile A
FROM alfvenjohn/raspbian-buster-mariadb-phpmyadmin
CMD /etc/init.d/mysql start && /etc/init.d/apache2 start
# dockerfile B
FROM alfvenjohn/raspbian-buster-mariadb-phpmyadmin
CMD service mysql start && /usr/sbin/apachectl -D FOREGROUND
dockerfile B worked well,
but dockerfile A failed.
I can build an image from dockerfile A, then spin up a container from it with docker run -it -p 80:80 <img id> bash.
The container comes up successfully, but once inside it I found that the mariadb and apache2 services were not running.
After I execute /etc/init.d/mysql start && /etc/init.d/apache2 start,
mariadb and apache2 work!
I tried to get error messages with docker logs <container id>, but got nothing.
My question is: if I run the docker image without a dockerfile, just by the commands below (the same thing dockerfile A does), the container works well:
$ docker run -it -p 80:80 alfvenjohn/raspbian-buster-mariadb-phpmyadmin bash
$ /etc/init.d/mysql start && /etc/init.d/apache2 start
Why? Doesn't dockerfile A do the same thing as spinning up my container with those commands?
You need to remove the bash at the end of the command; it replaces the command inside your dockerfile.
docker run -d -p 80:80 <img id>
You can use this command to connect inside the container afterward:
docker exec -it <container_id> bash
A Docker image runs a single command only. When that command exits, the container exits too.
In practice this means a container should run a single server-type process (so run MySQL and Apache in separate containers), and that process must run in the foreground (so the lifetime of the container is the lifetime of the server process).
In your setup where you launch an interactive shell instead of the image CMD, you say you can start the servers by running
/etc/init.d/mysql start && /etc/init.d/apache2 start
This is true, and then you get a command prompt back. That means the command completed. If you run this as an image's CMD then "you get a command prompt back" or "the command completed" means "the container will exit".
Generally you want to launch separate database and Web server containers; if you have other application containers you can add those to the mix too. Docker Compose is a common tool for this. You may want to look through Docker's sample applications for some examples, or other SO questions.
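For example, a skeleton docker-compose.yml along those lines (image names and the password are placeholders, not taken from the question):
version: "3"
services:
  db:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: example
  web:
    image: php:apache
    ports:
      - "80:80"
    depends_on:
      - db
Each service runs a single foreground process, so Docker can track and restart them independently.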

Is there a way to create a MySQL docker image with a volume attached that also executes a SQL script?

I am trying to use a MySQL image on Docker with a volume attached; furthermore, I would like to add a SQL script that creates a table if it is not present yet.
That way, if the container is used on another machine, the table will always be present.
My command :
docker run -d -p 3306:3306 --name my-mysql --network sma -v /scripts:/docker-entrypoint-initdb.d/ -v /myvolume/:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=password -e MYSQL_DATABASE=myDB mysql
My situation:
I am able to attach the volume with the -v option (/myvolume/:/var/lib/mysql) during the run, and I am also able to place the script in the init directory (/docker-entrypoint-initdb.d/), but if I do both, only the volume attachment works.
I guess the script is executed (because it is placed in the directory) but then MySQL is overwritten by the attached volume, so the only thing I see is what is present in myvolume.
Is there some way to make this work?
I resolved it by deploying as a swarm stack from a docker-compose file with docker stack deploy -c docker-compose.yml swarm_name.
In the service definition of the docker-compose file I added a command line in order to force it to execute the init script:
command: --init-file /docker-entrypoint-initdb.d/initDb.sql
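In context, the service definition would look roughly like this (a sketch reusing the names and paths from the question). Note that --init-file is a mysqld option that runs the file on every server start, so the script should use CREATE TABLE IF NOT EXISTS:
services:
  my-mysql:
    image: mysql
    volumes:
      - /scripts:/docker-entrypoint-initdb.d/
      - /myvolume/:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: myDB
    command: --init-file /docker-entrypoint-initdb.d/initDb.sql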

Docker integration test with static data at launch

I have been struggling with this issue for a while now. I want to do a performance test using a specific set of data.
To achieve this I am using Docker-compose and I have exported a couple of .sql files.
When I first build and run the container using docker-compose build and docker-compose up, the dataset is fine. Then I run my test (which also inserts data). After running my test I want to perform it again, so I relaunch the container using docker-compose up. But this time, for some reason I don't understand, the data that was inserted the last time (by my test) is still there, so I get different behaviour.
At the moment I have the following Dockerfile:
FROM mysql:5.7
ENV MYSQL_DATABASE=dev_munisense1 \
    MYSQL_ROOT_PASSWORD=pass
EXPOSE 3306
ADD docker/data/ /docker-entrypoint-initdb.d/
I did this because I read that the mysql Docker image runs everything in /docker-entrypoint-initdb.d/. The first time, this works properly.
I have also tried what these posts suggested:
How do i migrate mysql data directory in docker container?
http://txt.fliglio.com/2013/11/creating-a-mysql-docker-container/
How to create populated MySQL Docker Image on build time
How to make a docker image with a populated database for automated tests?
And a couple of identical other posts.
None of them seem to work currently.
How can I make sure the dataset is exactly the same each time I launch the container? Without having to rebuild the image each time (this takes kind of long because of a large dataset).
Thanks in advance
EDIT:
I have also tried running the container with different arguments like:
docker-compose up --force-recreate --build mysql, but without success. The container is rebuilt and restarted, but the db is still affected by my test. Currently the only solution to my problem is to remove the entire container and image.
I managed to fix the issue (with the mysql image) by doing the following:
- Change the mount point of the SQL storage (this is what actually caused the problem). I used the solution suggested here: How to create populated MySQL Docker Image on build time, but I did it by running a sed command: RUN sed -i 's|/var/lib/mysql|/var/lib/mysql2|g' /etc/mysql/my.cnf
- Add my scripts to a folder inside the container
- Run the import.sh script that inserts data using the daemon (using the wait-for-it.sh script)
- Remove the SQL scripts
- Expose the port as usual
The Dockerfile looks like this (the variables are used to select different SQL files; I wanted multiple versions of the image):
FROM mysql:5.5.54
ADD https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh /utils/wait-for-it.sh
COPY docker/import.sh /usr/local/bin/
RUN sed -i 's|/var/lib/mysql|/var/lib/mysql2|g' /etc/mysql/my.cnf
ARG MYSQL_DATABASE
ARG MYSQL_USER
ARG MYSQL_PASSWORD
ARG MYSQL_ROOT_PASSWORD
ARG MYSQL_ALLOW_EMPTY_PASSWORD
ARG DEVICE_INFORMATION
ARG LAST_NODE_STATUS
ARG V_NODE
ARG NETWORKSTATUS_EVENTS
ENV MYSQL_DATABASE=$MYSQL_DATABASE \
    MYSQL_USER=$MYSQL_USER \
    MYSQL_PASSWORD=$MYSQL_PASSWORD \
    MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD \
    DEVICE_INFORMATION=$DEVICE_INFORMATION \
    LAST_NODE_STATUS=$LAST_NODE_STATUS \
    V_NODE=$V_NODE \
    MYSQL_ALLOW_EMPTY_PASSWORD=$MYSQL_ALLOW_EMPTY_PASSWORD
#Set up tables
COPY docker/data/$DEVICE_INFORMATION.sql /usr/local/bin/device_information.sql
COPY docker/data/$NETWORKSTATUS_EVENTS.sql /usr/local/bin/networkstatus_events.sql
COPY docker/data/$LAST_NODE_STATUS.sql /usr/local/bin/last_node_status.sql
COPY docker/data/$V_NODE.sql /usr/local/bin/v_node.sql
RUN chmod 777 /usr/local/bin/import.sh && chmod 777 /utils/wait-for-it.sh && \
/bin/bash /entrypoint.sh mysqld --user='root' & /bin/bash /utils/wait-for-it.sh -t 0 localhost:3306 -- /usr/local/bin/import.sh; exit
RUN rm -f /usr/local/bin/*.sql
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306
CMD ["mysqld"]
the script looks like this:
#!/bin/bash
echo "Going to insert the device information"
mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DATABASE < /usr/local/bin/device_information.sql
echo "Going to insert the last_node_status"
mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DATABASE < /usr/local/bin/last_node_status.sql
echo "Going to insert the v_node"
mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DATABASE < /usr/local/bin/v_node.sql
echo "Going to insert the networkstatus_events"
mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DATABASE < /usr/local/bin/networkstatus_events.sql
echo "Database now has the following tables"
mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DATABASE --execute="SHOW TABLES;"
So now all I have to do to start my performance tests is
#!/usr/bin/env bash
echo "Shutting down previous containers"
docker-compose -f docker-compose.yml down
docker-compose -f docker-compose-test-10k-half.yml down
docker-compose -f docker-compose-test-100k-half.yml down
docker-compose -f docker-compose-test-500k-half.yml down
docker-compose -f docker-compose-test-1m-half.yml down
echo "Launching rabbitmq container"
docker-compose up -d rabbitmq & sh wait-for-it.sh -t 0 -h localhost -p 5672 -- sleep 5;
echo "Going to execute 10k test"
docker-compose -f docker-compose-test-10k-half.yml up -d mysql_10k & sh wait-for-it.sh -t 0 -h localhost -p 3306 -- sleep 5 && ./networkstatus-event-service --env=performance-test --run-once=true;
docker-compose -f docker-compose-test-10k-half.yml stop mysql_10k
followed by a couple more of these lines (slightly different because of the different container names).
Running docker-compose down after your tests will destroy everything associated with your docker-compose.yml.
Docker Compose is a container life cycle manager and by default it tries to keep everything across multiple runs. As Stas Makarov mentions, there is a VOLUME defined in the mysql image that persists the data outside of the container.
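In practical terms, to get a clean dataset on every run you can remove the volumes together with the containers between test runs, so the /docker-entrypoint-initdb.d scripts are replayed against a fresh data directory (a sketch, assuming the image's anonymous volume is what keeps the test data around):
# -v also removes named and anonymous volumes, giving a clean slate
docker-compose down -v
docker-compose up --build -d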

ADD > LOAD .SQL using Docker Automated build and Compose

What is the optimal way to load in a sql dump when using docker-compose + docker automated builds?
I have been ignoring docker-compose for a moment, trying to understand docker and its automated builds first, but have come to realize that I will probably need docker-compose if I want to accomplish my project goal: to use one command and from that have a fully working 3-site Docker cluster:
1xHAProxy
3xUbuntu/wp
3xMysqld
In my Dockerfile I can just include the db.sql from my Github repo like
ADD db.sql /tmp/db.sql
I am failing to find a best practice for how I should load my DB without writing any commands outside of the build.
I want to know your solution to this using a Dockerfile or Compose.
By just executing one of the commands below, a MySQL image (FROM mysql, with ADD db.sql db.sql) should be built/run while loading db.sql into the MySQL db wp:
Dockerfile
$docker run -d user/repo:tag
docker-compose.yml
$docker-compose up
If I am totally on the wrong path here, please give me some references. I could also mention that I am planning to use CoreOS once I feel OK with Docker, so if best practice on a CoreOS > Docker setup is something else, let me know!
There are two options for initializing a SQL file during build or run time:
The first would be to just base your MySQL image on the official image and place your SQL file in /docker-entrypoint-initdb.d (using something like ADD my.sql /docker-entrypoint-initdb.d/ in the Dockerfile). The official image has a fairly complex entrypoint script (https://github.com/docker-library/mysql/blob/master/5.7/docker-entrypoint.sh) which starts MySQL, initializes a username and password, and runs the scripts from the /docker-entrypoint-initdb.d folder.
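For instance, a minimal Dockerfile along these lines (the file name is illustrative):
FROM mysql:5.7
# Runs automatically on first start, when the data directory is still empty
ADD my.sql /docker-entrypoint-initdb.d/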
The other option would be to do something like the answer at https://stackoverflow.com/a/25920875/684908 and just add a command such as:
COPY dump.sql /tmp/
RUN /bin/bash -c "/usr/bin/mysqld_safe &" && \
sleep 5 && \
mysql -u root -e "CREATE DATABASE mydb" && \
mysql -u root mydb < /tmp/dump.sql
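One caveat with this build-time approach: the official mysql image declares VOLUME /var/lib/mysql, so data a RUN step writes to the default datadir is discarded from the final image. A sketch of the usual workaround, the same datadir relocation used in the integration-test answer above:
FROM mysql:5.7
# Point mysqld at a non-volume datadir so data imported at build time survives in the image
RUN sed -i 's|/var/lib/mysql|/var/lib/mysql2|g' /etc/mysql/my.cnf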

Can I use fig to initialise a persisting database in docker?

I am trying to automate the installation and running of set of linked docker containers using fig. The configuration is composed of a container running RStudio linked to a container running MySQL, such that I can query the MySQL database from RStudio.
On first run, I would like to create the MySQL container from the base MySQL image, and populate it with a user and database. From the command line, something like this:
#Get the latest database file
wget -P /tmp http://ergast.com/downloads/f1db.sql.gz && gunzip -f /tmp/f1db.sql.gz
#Create the database container with user, password and database
docker run --name ergastdb -e MYSQL_USER=ergast -e MYSQL_ROOT_PASSWORD=mrd -e MYSQL_DATABASE=f1db -d mysql
#Populate the database
docker run -it --link=ergastdb:mysql -v /tmp:/tmp/import --rm mysql sh -c 'exec mysql -h$MYSQL_PORT_3306_TCP_ADDR -P$MYSQL_PORT_3306_TCP_PORT -uergast -pmrd f1db < /tmp/import/f1db.sql'
#Fire up RStudio and link to the MySQL db
docker run --name f1djd -p 8788:8787 --link ergastdb:db -d rocker/hadleyverse
If I could get hold of a database image with the data preloaded, I guess that something like the following fig.yml script could link the elements?
gdrive:
  command: echo created
  image: busybox
  volumes:
    - "~/Google Drive/shareddata:/gdrive"

dbdata:
  image: mysql_preloaded
  environment:
    - MYSQL_USER=ergast
    - MYSQL_ROOT_PASSWORD=mrd
    - MYSQL_DATABASE=f1db

rstudio:
  image: rocker/hadleyverse
  links:
    - dbdata:db
  ports:
    - "8788:8787"
  volumes_from:
    - gdrive
My question is, can I use a one-shot fig step to create the dbdata container, then perhaps mount a persistent volume, link to it and initialise the database, presumably as part of an initial fig up. If I then start and stop containers, I don't want to run the db initialisation step again, just link to the data volume container that contains the data I previously installed.
I also notice that the MySQL docker image looks like it will support arbitrary datadir definitions (Update entrypoints to read DATADIR from the MySQL configuration directly instead of assuming /var/lib/docker). As I understand it, the current definition of the MySQL image prevents mounting (and hence persisting) the database contents within the database container. I guess this might make it possible to create a mysql_preloaded image, but I don't think the latest version of the MySQL docker script has been pushed to dockerhub just yet and I can't quite think my way to how fig might then be able to make use of this alternative pathway?
Some options:
1. Edit the fig.yml to run a custom command that is different than the default image command/entrypoint. From http://www.fig.sh/yml.html (example):
   command: bundle exec thin -p 3000
2. Start the container locally, modify it and then commit it as a new image.
3. Modify the MySQL image docker-entrypoint.sh file to do your custom initialization:
   https://github.com/docker-library/mysql/blob/567028d4e177238c58760bcd69a8766a8f026e2a/5.7/docker-entrypoint.sh
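Applying the first option to this case, a one-shot loader service in fig.yml might look like this (a sketch mirroring the docker run commands from the question; the container exits once the import finishes):
dbloader:
  image: mysql
  links:
    - dbdata:mysql
  volumes:
    - "/tmp:/tmp/import"
  command: sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uergast -pmrd f1db < /tmp/import/f1db.sql'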
Couldn't you just roll your own version of the MySQL docker image? The official one from MySQL "upstream" is available at https://github.com/mysql/mysql-docker/blob/mysql-server/5.7/Dockerfile
What if you simply make your own copy of that, remove the VOLUME line (line 11) and then you can
docker build -t my_mysql .
docker run -d --name=empty_db my_mysql ...
# add data to the database running in the container
docker commit empty_db primed_db
docker rm -v empty_db
docker run -d --name=instance1 primed_db
docker run -d --name=instance2 primed_db
which should leave you with two running "identical" but fully isolated instances.