Ansible - How to backup all MySQL databases?

I need to take a backup of all existing MySQL databases on my server with Ansible.
I'm aware of the mysql_db module. It takes the names of the databases to operate on one by one, so I would first have to get the list of existing databases before using that module.
Is there any way to backup all MySQL databases at once or to get a list of existing databases with Ansible?

A patch that adds name=all, allowing a user to dump or import all databases at once, was recently merged into devel. It is not available yet in 1.9.1, but it is already shown in this part of the documentation:
# Dumps all databases to hostname.sql
- mysql_db: state=dump name=all target=/tmp/{{ inventory_hostname }}.sql
Hopefully this will soon be available in a stable release.
(Run sudo pip install ansible --upgrade to upgrade.)

The mysql_db module uses the mysqldump executable under the hood, and mysqldump provides an --all-databases switch; the Ansible module simply does not expose an option for it.
I would suggest using the mysqldump executable via the command module for now, and in the meantime filing a feature request on Ansible's GitHub to add support for it.
Something like this should get you going for now:
- name: Dump all MySQL databases to a single file
  command: mysqldump --opt -uroot --all-databases --result-file=/tmp/all-dbs.sql
Adjust the options to mysqldump as desired: http://dev.mysql.com/doc/refman/5.5/en/mysqldump.html
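If you also want the workaround's dump compressed, a shell-module variant along these lines should work (a sketch; the --single-transaction flag, the pipe to gzip, and the output path are my additions, not part of the original answer):
- name: Dump all MySQL databases to a compressed file
  # Piping through gzip requires the shell module rather than command.
  shell: mysqldump --opt -uroot --all-databases --single-transaction | gzip -c > /tmp/all-dbs.sql.gz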
Update Nov 26, 2016:
A patch adding name=all was added to the mysql_db module on May 12, 2015, so the recommended way to dump all databases is:
# Dumps all databases to hostname.sql
- mysql_db: state=dump name=all target=/tmp/{{ inventory_hostname }}.sql
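For completeness, the same task in expanded YAML form with explicit credentials might look like this (a sketch; login_user, login_password, and the vaulted variable name are assumptions, not part of the answer above):
- name: Dump all MySQL databases to one file per host
  mysql_db:
    state: dump
    name: all
    target: "/tmp/{{ inventory_hostname }}.sql"
    login_user: root                                   # assumed account
    login_password: "{{ vault_mysql_root_password }}"  # assumed vaulted variable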

An alternative approach: dump each database into a separate file.
---
# This playbook backs up all MySQL databases into separate files.
- name: backup mysql
  hosts: all   # adjust to your inventory
  vars:
    exclude_db:
      - "Database"            # header line of `show databases` output
      - "information_schema"
      - "performance_schema"
      - "mysql"
  tasks:
    - name: get db names
      shell: 'mysql -u root -p{{ vault_root_passwd }} -e "show databases;" '
      register: dblist
    - name: backup databases
      mysql_db:
        state: dump
        name: "{{ item }}"
        target: "/tmp/{{ item }}.sql"
        login_user: root
        login_password: "{{ vault_root_passwd }}"
      with_items: "{{ dblist.stdout_lines | difference(exclude_db) }}"
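A small refinement, if you prefer not to filter out the "Database" header line: the mysql client's --skip-column-names (-N) option suppresses it, so the first task could be written as below (a sketch reusing the same vault_root_passwd variable; with it, exclude_db only needs the system schemas):
- name: get db names without the column header
  shell: 'mysql -u root -p{{ vault_root_passwd }} -N -e "show databases;"'
  register: dblist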

Related

Ansible Playbook didn't work if I executed it by crontab

It is a super weird issue. I have a simple playbook that executes the mysqldump command to back up MySQL:
tasks:
  - name: Run mysqldump SQL server
    ansible.builtin.shell:
      cmd: mysqldump --max_allowed_packet=512M --set-gtid-purged=OFF -u root -p{{ mysql_root_password }} myDB > /tmp/myDB-{{ ansible_date_time.date|replace('-','') }}.sql
If I run this playbook manually, everything is good: the backup file is 800 MB. But if I run it from a cron job, I lose some data and tables in the DB, and the backup file is only 450 MB:
* 11 * * * ansible-playbook -i ~/my_hosts.ini --vault-password-file ~/ansible_vault_password ~/mysql_backup.yml
Does anyone have the same issue?
I am very new to Ansible, any help is appreciated!

Backup docker volume or only mysqldump

I have a MySQL instance running in a Docker container. I mount a volume at /var/lib/mysql to preserve the data after shutting down the container. I think I have two options to back up my database to my host system:
Back up the complete volume:
docker run --rm --volumes-from db -v {BACKUP_PATH_ON_HOST_SYSTEM}:/backup ubuntu tar cvf /backup/backup.tar /var/lib/mysql
Only back up a mysqldump:
Basically run the above command, but instead of backing up the volume I create a mysqldump, which I copy to /backup.
Which option is better?
I have a similar requirement. In my case, I'm using an old mysql Docker image on purpose like so:
db:
  image: mysql:5.6
  container_name: ${COMPOSE_SITE_NAME}_mysql
  volumes:
    - db_files:/var/lib/mysql
    # Load the initial SQL dump into the DB when it is created.
    # This only runs once if the DB is empty.
    - ${SQL_DUMP_FILE}:/docker-entrypoint-initdb.d/dump.sql
  environment:
    MYSQL_ROOT_PASSWORD: ${WORDPRESS_DB_PASS}
  ...

volumes:
  db_files:
    name: ${COMPOSE_SITE_NAME}_db_files
If the volume is lost, then it can be recreated with a dump file. In my case, I prefer to make a dump file instead of preserving the cacophony of SQL files in that /var/lib/mysql folder.
docker-compose exec db sh -c '\
mysqldump -uroot -p$MYSQL_ROOT_PASSWORD --all-databases --routines --triggers \
' | gzip -c > /path/outside/docker/backup-`date '+%Y-%m-%d'`.sql.gz
This will create a compressed dump file on your host outside Docker due to the stdout redirect (>). I use the sh -c '' so I can reuse the MYSQL_ROOT_PASSWORD env var in the container. Feel free to adjust this to suit your MySQL requirements, like specifying a limited user.
With the default flags, the dump file will have DROP TABLE IF EXISTS statements so you can replace an existing DB without deleting the volume (docker-compose down then docker volume rm ...).
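To tie this back to the Ansible question above, the same container-based dump could be driven from a playbook task roughly like this (a sketch; the compose project path, backup directory, and use of the shell module are assumptions):
- name: Dump all databases from the MySQL container and gzip on the host
  # -T disables pseudo-TTY allocation so the pipe works non-interactively.
  shell: >
    docker-compose exec -T db sh -c
    'mysqldump -uroot -p"$MYSQL_ROOT_PASSWORD" --all-databases --routines --triggers'
    | gzip -c > /backup/all-dbs-{{ ansible_date_time.date }}.sql.gz
  args:
    chdir: /opt/myproject   # assumed directory containing docker-compose.yml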

Docker compose: Enter Password access denied error or invalid file error [duplicate]

How can I initialize a MySQL database with schema in a Docker container?

I am trying to create a container with a MySQL database and add a schema to this database.
My current Dockerfile is:
FROM mysql
MAINTAINER (me) <email>
# Copy the database schema to the /data directory
COPY files/epcis_schema.sql /data/epcis_schema.sql
# Change the working directory
WORKDIR data
CMD mysql -u $MYSQL_USER -p $MYSQL_PASSWORD $MYSQL_DATABASE < epcis_schema.sql
In order to create the container I am following the documentation provided on Docker and executing this command:
docker run --name ${CONTAINER_NAME} -e MYSQL_ROOT_PASSWORD=${DB_ROOT_PASSWORD} -e MYSQL_USER=${DB_USER} -e MYSQL_PASSWORD=${DB_USER_PASSWORD} -e MYSQL_DATABASE=${DB_NAME} -d mvpgomes/epcisdb
But when I execute this command, the container is not created, and the container status shows that the CMD was not executed successfully; in fact, only the mysql command is executed.
Anyway, is there a way to initialize the database with the schema or do I need to perform these operations manually?
I had this same issue where I wanted to initialize my MySQL Docker instance's schema, but I ran into difficulty getting this working after doing some Googling and following others' examples. Here's how I solved it.
1) Dump your MySQL schema to a file.
mysqldump -h <your_mysql_host> -u <user_name> -p --no-data <schema_name> > schema.sql
2) Use the ADD command to add your schema file to the /docker-entrypoint-initdb.d directory in the Docker container. The docker-entrypoint.sh file will run any files in this directory ending with ".sql" against the MySQL database.
Dockerfile:
FROM mysql:5.7.15
MAINTAINER me
ENV MYSQL_DATABASE=<schema_name> \
MYSQL_ROOT_PASSWORD=<password>
ADD schema.sql /docker-entrypoint-initdb.d
EXPOSE 3306
3) Start up the Docker MySQL instance.
docker-compose build
docker-compose up
Thanks to Setting up MySQL and importing dump within Dockerfile for clueing me in on the docker-entrypoint.sh and the fact that it runs both SQL and shell scripts!
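Since the answer above refers to docker-compose but does not show the compose file, a minimal sketch that builds the Dockerfile above could look like this (the service name and host port are assumptions):
version: "3"
services:
  db:
    build: .        # directory containing the Dockerfile and schema.sql
    ports:
      - "3306:3306"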
I am sorry for this super long answer, but you have a little way to go to get where you want. I will say that normally you wouldn't put the storage for the database in the same container as the database itself; you would either mount a host volume so that the data persists on the Docker host, or perhaps use a separate container to hold the data (/var/lib/mysql). Also, I am new to MySQL, so this might not be super efficient. That said...
I think there may be a few issues here. The Dockerfile is used to create an image. You need to execute the build step. At a minimum, from the directory that contains the Dockerfile you would do something like:
docker build .
The Dockerfile describes the image to create. I don't know much about MySQL (I am a Postgres fanboy), but I did a search around the interwebs for 'how do i initialize a mysql docker container'. First I created a new directory to work in (I called it mdir), then I created a files directory in which I deposited an epcis_schema.sql file that creates a database and a single table:
create database test;
use test;
CREATE TABLE testtab
(
id INTEGER AUTO_INCREMENT,
name TEXT,
PRIMARY KEY (id)
) COMMENT='this is my test table';
Then I created a script called init_db in the files directory:
#!/bin/bash
# Initialize MySQL database.
# ADD this file into the container via Dockerfile.
# Assuming you specify a VOLUME ["/var/lib/mysql"] or `-v /var/lib/mysql` on the `docker run` command…
# Once built, do e.g. `docker run your_image /path/to/docker-mysql-initialize.sh`
# Again, make sure MySQL is persisting data outside the container for this to have any effect.
set -e
set -x
mysql_install_db
# Start the MySQL daemon in the background.
/usr/sbin/mysqld &
mysql_pid=$!
until mysqladmin ping >/dev/null 2>&1; do
echo -n "."; sleep 0.2
done
# Permit root login without password from outside container.
mysql -e "GRANT ALL ON *.* TO root@'%' IDENTIFIED BY '' WITH GRANT OPTION"
# create the default database from the ADDed file.
mysql < /tmp/epcis_schema.sql
# Tell the MySQL daemon to shutdown.
mysqladmin shutdown
# Wait for the MySQL daemon to exit.
wait $mysql_pid
# create a tar file with the database as it currently exists
tar czvf default_mysql.tar.gz /var/lib/mysql
# the tarfile contains the initialized state of the database.
# when the container is started, if the database is empty (/var/lib/mysql)
# then it is unpacked from default_mysql.tar.gz from
# the ENTRYPOINT /tmp/run_db script
(most of this script was lifted from here: https://gist.github.com/pda/9697520)
Here is the files/run_db script I created:
# start db
set -e
set -x
# first, if the /var/lib/mysql directory is empty, unpack it from our predefined db
[ "$(ls -A /var/lib/mysql)" ] && echo "Running with existing database in /var/lib/mysql" || ( echo 'Populate initial db'; tar xpzvf default_mysql.tar.gz )
/usr/sbin/mysqld
Finally, the Dockerfile to bind them all:
FROM mysql
MAINTAINER (me) <email>
# Copy the database schema to the /data directory
ADD files/run_db files/init_db files/epcis_schema.sql /tmp/
# init_db will create the default
# database from epcis_schema.sql, then
# stop mysqld, and finally copy the /var/lib/mysql directory
# to default_mysql_db.tar.gz
RUN /tmp/init_db
# run_db starts mysqld, but first it checks
# to see if the /var/lib/mysql directory is empty, if
# it is it is seeded with default_mysql_db.tar.gz before
# the mysql is fired up
ENTRYPOINT "/tmp/run_db"
So, I cd'ed to my mdir directory (which has the Dockerfile along with the files directory). I then ran the command:
docker build --no-cache .
You should see output like this:
Sending build context to Docker daemon 7.168 kB
Sending build context to Docker daemon
Step 0 : FROM mysql
---> 461d07d927e6
Step 1 : MAINTAINER (me) <email>
---> Running in 963e8de55299
---> 2fd67c825c34
Removing intermediate container 963e8de55299
Step 2 : ADD files/run_db files/init_db files/epcis_schema.sql /tmp/
---> 81871189374b
Removing intermediate container 3221afd8695a
Step 3 : RUN /tmp/init_db
---> Running in 8dbdf74b2a79
+ mysql_install_db
2015-03-19 16:40:39 12 [Note] InnoDB: Using atomics to ref count buffer pool pages
...
/var/lib/mysql/ib_logfile0
---> 885ec2f1a7d5
Removing intermediate container 8dbdf74b2a79
Step 4 : ENTRYPOINT "/tmp/run_db"
---> Running in 717ed52ba665
---> 7f6d5215fe8d
Removing intermediate container 717ed52ba665
Successfully built 7f6d5215fe8d
You now have an image '7f6d5215fe8d'. I could run this image:
docker run -d 7f6d5215fe8d
and the container starts; I see an instance string (the container ID):
4b377ac7397ff5880bc9218abe6d7eadd49505d50efb5063d6fab796ee157bd3
I could then 'stop' it, and restart it.
docker stop 4b377
docker start 4b377
If you look at the logs, the first line will contain:
docker logs 4b377
Populate initial db
var/lib/mysql/
...
Then, at the end of the logs:
Running with existing database in /var/lib/mysql
These are the messages from the /tmp/run_db script: the first one indicates that the database was unpacked from the saved (initial) version, the second one indicates that the database was already there, so the existing copy was used.
Here is an ls -lR of the directory structure I described above. Note that init_db and run_db are scripts with the execute bit set:
gregs-air:~ gfausak$ ls -Rl mdir
total 8
-rw-r--r-- 1 gfausak wheel 534 Mar 19 11:13 Dockerfile
drwxr-xr-x 5 gfausak staff 170 Mar 19 11:24 files
mdir/files:
total 24
-rw-r--r-- 1 gfausak staff 126 Mar 19 11:14 epcis_schema.sql
-rwxr-xr-x 1 gfausak staff 1226 Mar 19 11:16 init_db
-rwxr-xr-x 1 gfausak staff 284 Mar 19 11:23 run_db
Another way, based on a merge of several earlier responses here:
docker-compose file:
version: "3"
services:
  db:
    container_name: db
    image: mysql
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=mysql
      - MYSQL_DATABASE=db
    volumes:
      - /home/user/db/mysql/data:/var/lib/mysql
      - /home/user/db/mysql/init:/docker-entrypoint-initdb.d/:ro
where /home/user.. is a shared folder on the host
And in the /home/user/db/mysql/init folder, just drop one SQL file with any name, for example init.sql, containing:
CREATE DATABASE mydb;
GRANT ALL PRIVILEGES ON mydb.* TO 'myuser'@'%' IDENTIFIED BY 'mysql';
GRANT ALL PRIVILEGES ON mydb.* TO 'myuser'@'localhost' IDENTIFIED BY 'mysql';
USE mydb;
CREATE TABLE CONTACTS (
[ ... ]
);
INSERT INTO CONTACTS VALUES ...
[ ... ]
According to the official mysql documentation, you can put more than one SQL file in docker-entrypoint-initdb.d; they are executed in alphabetical order.
Another simple way: use docker-compose with the following lines:
mysql:
  image: mysql:5.7
  volumes:
    - ./database:/tmp/database
  command: mysqld --init-file="/tmp/database/install_db.sql"
Put your database schema into ./database/install_db.sql. Every time the container starts, install_db.sql will be executed.
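A slightly fuller sketch of that fragment, with the pieces it leaves out filled in as assumptions (the root password and the version/services wrapper), might be:
version: "3"
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example   # assumed; not in the original snippet
    volumes:
      - ./database:/tmp/database
    # --init-file runs on every server start, so keep install_db.sql idempotent
    # (e.g. CREATE TABLE IF NOT EXISTS).
    command: mysqld --init-file="/tmp/database/install_db.sql"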
I tried Greg's answer with zero success; I must have done something wrong, since my database had no data after all the steps. I was using MariaDB's latest image, in case that matters.
Then I decided to read the entrypoint for the official MariaDB image, and used that to generate a simple docker-compose file:
database:
  image: mariadb
  ports:
    - 3306:3306
  expose:
    - 3306
  volumes:
    - ./docker/mariadb/data:/var/lib/mysql:rw
    - ./database/schema.sql:/docker-entrypoint-initdb.d/schema.sql:ro
  environment:
    MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
Now I'm able to persist my data AND generate a database with my own schema!
After Aug. 4, 2015, if you are using the official mysql Docker image, you can just ADD/COPY a file into the /docker-entrypoint-initdb.d/ directory and it will be run when the container is initialized. See the GitHub commit https://github.com/docker-library/mysql/commit/14f165596ea8808dfeb2131f092aabe61c967225 if you want to implement it on other container images.
The easiest solution is to use tutum/mysql.
Step 1
docker pull tutum/mysql:5.5
Step 2
docker run -d -p 3306:3306 -v /tmp:/tmp -e STARTUP_SQL="/tmp/to_be_imported.mysql" tutum/mysql:5.5
Step 3
Get the CONTAINER_ID from the command above, then run docker logs to see the generated password information.
docker logs #<CONTAINER_ID>
Since I struggled with this problem recently, I'm adding a docker-compose file that really helped me:
version: '3.5'
services:
  db:
    image: mysql:5.7
    container_name: db-container
    command: mysqld --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    volumes:
      - "./scripts/schema.sql:/docker-entrypoint-initdb.d/1.sql"
      - "./scripts/data.sql:/docker-entrypoint-initdb.d/2.sql"
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: test
      MYSQL_USER: test-user
      MYSQL_PASSWORD: password
    ports:
      - '3306:3306'
    healthcheck:
      test: "/usr/bin/mysql --user=root --password=password --execute \"SHOW DATABASES;\""
      interval: 2s
      timeout: 20s
      retries: 10
You just need to create a scripts folder in the same location as the docker-compose.yml file above.
The scripts folder will have 2 files:
schema.sql: DDL scripts (create table...etc)
data.sql: Insert statements that you want to be executed right after schema creation.
After this, you can run the command below to erase any previous database info (for a fresh start):
docker-compose rm -v -f db && docker-compose up
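As a usage note, newer Compose versions (Docker Compose v2 / the Compose Spec) let another service wait for that healthcheck before starting; a sketch of such an additional entry under services (the service name and image are hypothetical):
  app:
    image: my-app:latest            # hypothetical application image
    depends_on:
      db:
        condition: service_healthy  # start only after the healthcheck passes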
For those who, like me, do not want to create an entrypoint script, you can actually start mysqld at build time and then execute the mysql commands in your Dockerfile like so:
RUN mysqld_safe & until mysqladmin ping; do sleep 1; done && \
mysql -e "CREATE DATABASE somedb;" && \
mysql -e "CREATE USER 'someuser'@'localhost' IDENTIFIED BY 'somepassword';" && \
mysql -e "GRANT ALL PRIVILEGES ON somedb.* TO 'someuser'@'localhost';"
or source a prepopulated sql dump:
COPY dump.sql /SQL
RUN mysqld_safe & until mysqladmin ping; do sleep 1; done && \
mysql -e "SOURCE /SQL;"
RUN mysqladmin shutdown
The key here is to send mysqld_safe to background with the single & sign.
After struggling with this a bit, take a look at the compose file below, which uses a named volume (db-data). It is important to declare the volume again in the top-level volumes section at the end, where it is marked as external.
Everything worked great this way!
version: "3"
services:
  database:
    image: mysql:5.7
    container_name: mysql
    ports:
      - "3306:3306"
    volumes:
      - db-data:/docker-entrypoint-initdb.d
    environment:
      - MYSQL_DATABASE=sample
      - MYSQL_ROOT_PASSWORD=root

volumes:
  db-data:
    external: true
Below is the Dockerfile I used successfully to install XAMPP and create a MariaDB database with the schema, pre-populated with the data used on my local server (users, pics, orders, etc.):
FROM ubuntu:14.04
COPY Ecommerce.sql /root
RUN apt-get update \
&& apt-get install wget -yq \
&& apt-get install -y nano \
&& wget https://www.apachefriends.org/xampp-files/7.1.11/xampp-linux-x64-7.1.11-0-installer.run \
&& mv xampp-linux-x64-7.1.11-0-installer.run /opt/ \
&& cd /opt/ \
&& chmod +x xampp-linux-x64-7.1.11-0-installer.run \
&& printf 'y\n\y\n\r\n\y\n\r\n' | ./xampp-linux-x64-7.1.11-0-installer.run \
&& cd /opt/lampp/bin \
&& /opt/lampp/lampp start \
&& sleep 5s \
&& ./mysql -uroot -e "CREATE DATABASE Ecommerce" \
&& ./mysql -uroot -D Ecommerce < /root/Ecommerce.sql \
&& cd / \
&& /opt/lampp/lampp reload \
&& mkdir opt/lampp/htdocs/Ecommerce
COPY /Ecommerce /opt/lampp/htdocs/Ecommerce
EXPOSE 80

Setting up MySQL and importing dump within Dockerfile

I'm trying to set up a Dockerfile for my LAMP project, but I'm having a few problems when starting MySQL. I have the following lines in my Dockerfile:
VOLUME ["/etc/mysql", "/var/lib/mysql"]
ADD dump.sql /tmp/dump.sql
RUN /usr/bin/mysqld_safe & sleep 5s
RUN mysql -u root -e "CREATE DATABASE mydb"
RUN mysql -u root mydb < /tmp/dump.sql
But I keep getting this error:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)
Any ideas on how to setup database creation and dump import during a Dockerfile build?
The latest version of the official mysql docker image allows you to import data on startup. Here is my docker-compose.yml
data:
  build: docker/data/.

mysql:
  image: mysql
  ports:
    - "3307:3306"
  environment:
    MYSQL_ROOT_PASSWORD: 1234
  volumes:
    - ./docker/data:/docker-entrypoint-initdb.d
  volumes_from:
    - data
Here, I have my data-dump.sql under docker/data, which is relative to the folder docker-compose runs from. I mount that folder onto /docker-entrypoint-initdb.d in the container.
If you are interested in seeing how this works, have a look at their docker-entrypoint.sh on GitHub. They have added this block to allow importing data:
echo
for f in /docker-entrypoint-initdb.d/*; do
  case "$f" in
    *.sh)  echo "$0: running $f"; . "$f" ;;
    *.sql) echo "$0: running $f"; "${mysql[@]}" < "$f" && echo ;;
    *)     echo "$0: ignoring $f" ;;
  esac
  echo
done
An additional note: if you want the data to persist even after the mysql container is stopped and removed, you need a separate data container, as you see in the docker-compose.yml. The contents of the data container's Dockerfile are very simple.
FROM n3ziniuka5/ubuntu-oracle-jdk:14.04-JDK8
VOLUME /var/lib/mysql
CMD ["true"]
The data container doesn't even have to be running for the data to persist.
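On current Docker versions a named volume can play the role of that separate data container; a compose-v2-style sketch of the same mysql service using one (the volume name is an assumption):
version: "2"
services:
  mysql:
    image: mysql
    ports:
      - "3307:3306"
    environment:
      MYSQL_ROOT_PASSWORD: 1234
    volumes:
      - ./docker/data:/docker-entrypoint-initdb.d
      - mysql_data:/var/lib/mysql   # named volume instead of a data container

volumes:
  mysql_data: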
Each RUN instruction in a Dockerfile is executed in a different layer (as explained in the documentation of RUN).
In your Dockerfile, you have three RUN instructions. The problem is that the MySQL server is only started in the first one. In the others, no MySQL server is running, which is why you get the connection error from the mysql client.
To solve this problem you have 2 solutions.
Solution 1: use a one-line RUN
RUN /bin/bash -c "/usr/bin/mysqld_safe --skip-grant-tables &" && \
sleep 5 && \
mysql -u root -e "CREATE DATABASE mydb" && \
mysql -u root mydb < /tmp/dump.sql
Solution 2: use a script
Create an executable script init_db.sh:
#!/bin/bash
/usr/bin/mysqld_safe --skip-grant-tables &
sleep 5
mysql -u root -e "CREATE DATABASE mydb"
mysql -u root mydb < /tmp/dump.sql
Add these lines to your Dockerfile:
ADD init_db.sh /tmp/init_db.sh
RUN /tmp/init_db.sh
What I did was download my SQL dump into a "db-dump" folder and mount it:
mysql:
  image: mysql:5.6
  environment:
    MYSQL_ROOT_PASSWORD: pass
  ports:
    - 3306:3306
  volumes:
    - ./db-dump:/docker-entrypoint-initdb.d
When I run docker-compose up for the first time, the dump is restored in the db.
Here is a working version using v3 of docker-compose.yml. The key is the volumes directive:
mysql:
  image: mysql:5.6
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: root
    MYSQL_USER: theusername
    MYSQL_PASSWORD: thepw
    MYSQL_DATABASE: mydb
  volumes:
    - ./data:/docker-entrypoint-initdb.d
In the directory where I have my docker-compose.yml, I have a data dir that contains the .sql dump files. This is nice because you can have a .sql dump file per table.
I simply run docker-compose up and I'm good to go. Data automatically persists between stops. If you want to remove the data and "suck in" new .sql files, run docker-compose down and then docker-compose up.
If anyone knows how to get the mysql docker to re-process files in /docker-entrypoint-initdb.d without removing the volume, please leave a comment and I will update this answer.
I used the docker-entrypoint-initdb.d approach (thanks to @Kuhess).
But in my case I wanted to create my DB based on some parameters defined in a .env file, so I did the following:
1) First I define a .env file like this in my Docker project's root directory:
MYSQL_DATABASE=my_db_name
MYSQL_USER=user_test
MYSQL_PASSWORD=test
MYSQL_ROOT_PASSWORD=test
MYSQL_PORT=3306
2) Then I define my docker-compose.yml file, using the args directive to define the environment variables, which are set from the .env file:
version: '2'
services:
  ### MySQL Container
  mysql:
    build:
      context: ./mysql
      args:
        - MYSQL_DATABASE=${MYSQL_DATABASE}
        - MYSQL_USER=${MYSQL_USER}
        - MYSQL_PASSWORD=${MYSQL_PASSWORD}
        - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
    ports:
      - "${MYSQL_PORT}:3306"
3) Then I define a mysql folder that contains a Dockerfile. The Dockerfile is this:
FROM mysql:5.7
RUN chown -R mysql:root /var/lib/mysql/
ARG MYSQL_DATABASE
ARG MYSQL_USER
ARG MYSQL_PASSWORD
ARG MYSQL_ROOT_PASSWORD
ENV MYSQL_DATABASE=$MYSQL_DATABASE
ENV MYSQL_USER=$MYSQL_USER
ENV MYSQL_PASSWORD=$MYSQL_PASSWORD
ENV MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD
ADD data.sql /etc/mysql/data.sql
RUN sed -i 's/MYSQL_DATABASE/'$MYSQL_DATABASE'/g' /etc/mysql/data.sql
RUN cp /etc/mysql/data.sql /docker-entrypoint-initdb.d
EXPOSE 3306
4) Now I use mysqldump to dump my DB and put data.sql inside the mysql folder:
mysqldump -h <server name> -u<user> -p <db name> > data.sql
The file is just a normal SQL dump file, but I add 2 lines at the beginning, so the file looks like this:
--
-- Create a database using `MYSQL_DATABASE` placeholder
--
CREATE DATABASE IF NOT EXISTS `MYSQL_DATABASE`;
USE `MYSQL_DATABASE`;
-- Rest of queries
DROP TABLE IF EXISTS `x`;
CREATE TABLE `x` (..)
LOCK TABLES `x` WRITE;
INSERT INTO `x` VALUES ...;
...
...
...
What happens is that the "RUN sed -i 's/MYSQL_DATABASE/'$MYSQL_DATABASE'/g' /etc/mysql/data.sql" command replaces the MYSQL_DATABASE placeholder with the name of the DB that I set in the .env file.
|- docker-compose.yml
|- .env
|- mysql
   |- Dockerfile
   |- data.sql
Now you are ready to build and run your container
Edit: I had misunderstood the question here. The following answer explains how to run SQL commands at container creation time, not at image creation time as desired by the OP.
I'm not quite fond of Kuhess's accepted answer, as the sleep 5 seems a bit hackish to me: it assumes that the MySQL daemon has loaded correctly within that time frame. That's an assumption, not a guarantee. Also, if you use a provided mysql Docker image, the image itself already takes care of starting up the server; I would not interfere with this with a custom /usr/bin/mysqld_safe.
I followed the other answers around here and copied bash and SQL scripts into the folder /docker-entrypoint-initdb.d/ within the Docker container, as this is clearly the way intended by the mysql image provider. Everything in this folder is executed once the DB daemon is ready, hence you should be able to rely on it.
As an addition to the others, since no other answer explicitly mentions this: besides SQL scripts you can also copy bash scripts into that folder, which might give you more control.
This is what I needed, for example, as I also had to import a dump, but the dump alone was not sufficient since it did not specify which database it should be imported into. So in my case I have a script named db_custom_init.sh with this content:
mysql -u root -p$MYSQL_ROOT_PASSWORD -e 'create database my_database_to_import_into'
mysql -u root -p$MYSQL_ROOT_PASSWORD my_database_to_import_into < /home/db_dump.sql
and this Dockerfile copying that script:
FROM mysql/mysql-server:5.5.62
ENV MYSQL_ROOT_PASSWORD=XXXXX
COPY ./db_dump.sql /home/db_dump.sql
COPY ./db_custom_init.sh /docker-entrypoint-initdb.d/
Based on Kuhess's response, but without a hard sleep:
RUN /bin/bash -c "/usr/bin/mysqld_safe --skip-grant-tables &" && \
while ! mysqladmin ping --silent; do sleep 1; echo "wait 1 second"; done && \
mysql -u root -e "CREATE DATABASE mydb" && \
mysql -u root mydb < /tmp/dump.sql
Any file or script added to /docker-entrypoint-initdb.d will be executed when the container starts.
Make sure that you do not add or run, from the Dockerfile, any .sql or .sh file that needs the MySQL service: it will fail and stop the image build, because the MySQL service has not started yet at the point where those files or scripts are called. The best way to add a .sh file is to ADD it to the /docker-entrypoint-initdb.d directory from your Dockerfile.
Working example:
FROM mysql
ADD mysqlcode.sh /docker-entrypoint-initdb.d/mysqlcode.sh
ADD db.sql /home/db.sql
RUN chmod -R 775 /docker-entrypoint-initdb.d
ENV MYSQL_ROOT_PASSWORD mypassword
and mysqlcode.sh will run its commands once the MySQL service is active:
mysqlcode.sh
#!/bin/bash
mysql -u root -pmypassword --execute "CREATE DATABASE IF NOT EXISTS mydatabase;"
mysql -u root -pmypassword mydatabase < /home/db.sql
I have experienced the same problem, but managed to get it working by separating the MySQL start-up commands:
sudo docker build -t mydb_img -f Dockerfile.dev .
sudo docker run --name SomeDB -e MYSQL_ROOT_PASSWORD="WhatEver" -p 3306:3306 -v $(pwd):/app -d mydb_img
Then sleep for 20 seconds before running the MySQL scripts; it works.
sudo docker exec -it SomeDB sh -c yourscript.sh
I can only presume that the MySQL server takes a few seconds to start up before it can accept incoming connections and scripts.