How to run a 'mysql' file?

In CircleCI 1.0, we used to have this code:
services:
  - mysql
database:
  override:
    - mysql -u ubuntu circle_test < scripts/db/mysql_setup.sql
Now we are migrating to 2.0, and MySQL is a Docker image instead of a service:
version: 2
jobs:
  build:
    docker:
      - image: circleci/openjdk:8-jdk
      - image: redis:3.2.11
      - image: donilan/mysql-utf8mb4
We need to prepare our database. How can I execute mysql -u ubuntu circle_test < scripts/db/mysql_setup.sql when MySQL is dockerized?

I ended up connecting to the database directly from our app.
There is another approach, though (one that I was not successful with):
Install the MySQL client: apt-get install mysql-client
Then connect with mysql -h 127.0.0.1 -u root, or maybe over the socket: mysql -u root
If anyone is successful, let me know please!
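For anyone hitting the same wall: one approach that should work in 2.0 is to install the MySQL client in the primary (openjdk) container and wait for the dockerized MySQL to accept connections on 127.0.0.1 before importing. A minimal sketch, assuming the donilan/mysql-utf8mb4 image creates the ubuntu user and the circle_test database (that depends on the image's environment variables), and that the client package is default-mysql-client (older Debian releases call it mysql-client):
- run:
    name: Install MySQL client
    command: sudo apt-get update && sudo apt-get install -y default-mysql-client
- run:
    name: Wait for MySQL and load schema
    command: |
      for i in $(seq 1 30); do
        mysqladmin ping -h 127.0.0.1 --silent && break
        sleep 2
      done
      mysql -h 127.0.0.1 -u ubuntu circle_test < scripts/db/mysql_setup.sql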

Related

CircleCI job creates docker MySQL 8 but nothing can connect

(See UPDATE at end of post for potentially helpful debug info.)
I have a CircleCI job that deploys MySQL 8 via - setup_remote_docker+docker-compose and then attempts to start a Java app to communicate with MySQL 8. Unfortunately, even though docker ps shows the container is up and running, any attempt to communicate with MySQL--either through the Java app or docker exec--fails, saying the container is not running (and Java throws a "Communications Link Failure" exception). It's a bit confusing because the container appears to be up, and the exact same commands work on my local machine.
Here's my CircleCI config.yml:
Build and Test:
  <<: *configure_machine
  steps:
    - *load_repo
    - ... other unrelated stuff ...
    - *load_gradle_wrapper
    - run:
        name: Install Docker Compose
        environment:
          COMPOSE_VERSION: '1.29.2'
        command: |
          curl -L "https://github.com/docker/compose/releases/download/${COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)" -o ~/docker-compose
          chmod +x ~/docker-compose
          sudo mv ~/docker-compose /usr/local/bin/docker-compose
    - setup_remote_docker
    - run:
        name: Start MySQL docker
        command: docker-compose up -d
    - run:
        name: Check Docker MySQL
        command: docker ps
    - run:
        name: Query MySQL #test that fails
        command: docker exec -it mysql8_test_mysql mysql mysql -h 127.0.0.1 --port 3306 -u root -prootpass -e "show databases;"
And here's my docker-compose.yml that is run in one of the steps:
version: "3.1"
services:
# MySQL Dev Image
mysql-migrate:
container_name: mysql8_test_mysql
image: mysql:8.0
command:
mysqld --default-authentication-plugin=mysql_native_password
--character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
--log-bin-trust-function-creators=true
environment:
MYSQL_DATABASE: test_db
MYSQL_ROOT_PASSWORD: rootpass
ports:
- "3306:3306"
volumes:
- "./docker/mysql/data:/var/lib/mysql"
- "./docker/mysql/my.cnf:/etc/mysql/conf.d/my.cnf"
- "./mysql_schema_v1.sql:/docker-entrypoint-initdb.d/mysql_schema_v1.sql"
It's a fairly simple setup and the output from CircleCI is positive until it reaches the docker exec, which I added to test the connection. Here is what the output from CircleCI says per step:
Start MySQL Docker:
#!/bin/bash -eo pipefail
docker-compose up -d
Creating network "project_default" with the default driver
Pulling mysql-migrate (mysql:8.0)...
8.0: Pulling from library/mysql
5158dd02: Pulling fs layer
f6778b18: Pulling fs layer
a6c74a04: Pulling fs layer
4028a805: Pulling fs layer
7163f0f6: Pulling fs layer
cb7f57e0: Pulling fs layer
7a431703: Pulling fs layer
5fe86aaf: Pulling fs layer
add93486: Pulling fs layer
960383f3: Pulling fs layer
80965951: Pulling fs layer
Digest: sha256:b17a66b49277a68066559416cf44a185cfee538d0e16b5624781019bc716c122
Status: Downloaded newer image for mysql:8.0
Creating mysql8_******_mysql ...
Creating mysql8_******_mysql ... done
So we know MySQL 8 was pulled fine (and therefore the previous step worked). Next step is to ask Docker what's running.
Check Docker MySQL:
#!/bin/bash -eo pipefail
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cb6b7941ad65 mysql:8.0 "docker-entrypoint.s…" 1 second ago Up Less than a second 0.0.0.0:3306->3306/tcp, 33060/tcp mysql8_test_mysql
CircleCI received exit code 0
Looks good so far. But now let's actually try to run a command against it via docker exec.
Query MySQL:
#!/bin/bash -eo pipefail
docker exec -it mysql8_test_mysql mysql mysql -h 127.0.0.1 --port 3306 -u root -prootpass -e "show databases;"
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1:3306' (111)
Exited with code exit status 1
CircleCI received exit code 1
So now we can't connect to MySQL even though docker ps showed it up and running. I even tried adding an absurd step to wait in case MySQL needed more time:
- run:
    name: Start MySQL docker
    command: docker-compose up -d
- run:
    name: Check Docker MySQL
    command: docker ps
- run:
    name: Wait Until Ready
    command: sleep 120
- run:
    name: Query MySQL
    command: docker exec -it mysql8_test_mysql mysql mysql -h 127.0.0.1 --port 3306 -u root -prootpass -e "show databases;"
Of course adding a 2 minute wait for MySQL to spin up didn't help. Any ideas as to why this is so difficult in CircleCI?
Thanks in advance.
UPDATE 1: I can successfully start MySQL if I SSH into the job's server and run the same command myself:
docker-compose up
Then in another terminal run this:
docker exec -it mysql8_test_mysql mysql mysql -h localhost --port 3306 -u root -prootpass -e "show databases;"
+--------------------+
| Database |
+--------------------+
| information_schema |
| test_db |
| mysql |
| performance_schema |
| sys |
+--------------------+
So it is possible to start MySQL. It's just not working when run through the job steps.
UPDATE 2: I moved the two-minute wait between docker-compose up -d and docker ps, and now it shows nothing is running. So the container must be starting and then crashing, which is why it's not available moments later.
The cause of the problem was the volumes entry in my docker-compose.yml with this line:
- "./mysql_schema_v1.sql:/docker-entrypoint-initdb.d/mysql_schema_v1.sql"
The container appeared to be up when I checked immediately after docker-compose up -d, but in actuality it would crash seconds later, because CircleCI appears to have an issue with Docker volumes, potentially related to this: https://discuss.circleci.com/t/docker-compose-doesnt-mount-volumes-with-host-files-with-circle-ci/19099.
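Before the fix, a quick way to confirm a crash like this is to dump the container's logs right after bringing it up; a minimal sketch using the container name from the compose file above:
- run:
    name: Show MySQL container logs
    command: docker logs mysql8_test_mysql || true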
To make it work I removed that volume entry and added run commands to copy and import the schema like so:
- run:
    name: Start MySQL docker
    command: docker-compose up -d
# Manually copy schema file instead of using docker-compose volumes (has issues with CircleCI)
- run:
    name: Copy Schema
    command: docker cp mysql_schema_v1.sql mysql8_mobile_mysql:docker-entrypoint-initdb.d/mysql_schema_v1.sql
- run:
    name: Import Schema
    command: docker exec mysql8_mobile_mysql /bin/sh -c 'mysql -u root -prootpass < docker-entrypoint-initdb.d/mysql_schema_v1.sql'
With this new setup I've been able to create the tables and connect to MySQL. However, there appears to be an issue with tests hanging against MySQL, but that might be unrelated. I will follow up with more information, but at least I hope this can help someone else.
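One more note: because the database runs on the remote Docker host (setup_remote_docker), a readiness check has to go through docker exec rather than the local network; a sketch, assuming the container name and root password from the compose file above:
- run:
    name: Wait for MySQL to accept connections
    command: |
      for i in $(seq 1 60); do
        docker exec mysql8_test_mysql mysqladmin ping -uroot -prootpass --silent && exit 0
        sleep 2
      done
      echo "MySQL never became ready" && exit 1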

How to create a testing database under Docker?

In Kubuntu 18, I created a Docker setup for a Laravel 6 app with MySQL defined as:
mysql:
  container_name: "vanilla-crm-db"
  image: mysql:5.7
  environment:
    MYSQL_ROOT_PASSWORD: "MYSQL_ROOT_PASSWORD"
    MYSQL_DATABASE: "vanilla-crm-dev"
    MYSQL_USER: "MYSQL_USER"
    MYSQL_PASSWORD: "MYSQL_PASSWORD"
  ports:
    - "3330:3306"
  volumes:
    - "./docker/mysql/data:/var/lib/mysql"
and it works for me.
I use MySQL Workbench 6.3 for DB access.
Next I am writing HTTP tests, and I need to create a new database for them and load a dump of my database into it.
The name of this database is written in config/database.php under the 'mysql_testing' block.
I open Workbench and try to create the new database: https://prnt.sc/unjie8
But I do not find a "Create database" option; I only see a "Create new schema" option,
and in the SQL statement preview I see the command
CREATE SCHEMA `vanilla-crm-testing` ;
I expected a
CREATE DATABASE ...
command.
Is it the same?
And then this error:
Operation failed: There was an error while applying the SQL script to the database.
Executing:
CREATE SCHEMA `vanilla-crm-testing` ;
ERROR 1044: Access denied for user 'vanilla-crm-usr'@'%' to database 'vanilla-crm-testing'
SQL Statement:
CREATE SCHEMA `vanilla-crm-testing`
What is the valid way to create a testing database?
UPDATED:
I tried to create the new database in the mysql console, like:
mysql
CREATE DATABASE vanilla-crm-testing;
but I got an error in the docker command line:
$ docker-compose exec app bash
root@09649d3a2b81:/app# mysql
bash: mysql: command not found
My app's Dockerfile contains:
FROM php:7.3-apache
...
RUN apt-get update && apt-get install --no-install-recommends -y \
apt-utils ghostscript jq libicu-dev libmagick++-dev libpq-dev libfreetype6-dev libjpeg62-turbo-dev zlib1g-dev libzip-dev git zip && \
docker-php-ext-install intl && \
docker-php-ext-install opcache && \
docker-php-ext-install pdo_mysql && \
and no further mysql-related commands. Are there additional packages I need to install in the Dockerfile to have a mysql console inside the container?
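If the goal is just to have the mysql console inside the app container, installing the client package in the Dockerfile should be enough. A sketch; on recent Debian-based php images the package is default-mysql-client (older releases call it mysql-client):
RUN apt-get update && apt-get install --no-install-recommends -y default-mysql-client && \
    rm -rf /var/lib/apt/lists/*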
UPDATED #2:
I enter bash with the command:
docker-compose exec mysql bash
root@f216ef80c104:/# uname -a
Linux f216ef80c104 4.15.0-118-generic #119-Ubuntu SMP Tue Sep 8 12:30:01 UTC 2020 x86_64 GNU/Linux
where mysql is the service name in docker-compose.yml.
Usually I enter the mysql console with the command:
mysql -u root -h localhost -p
But what is the correct form of this command inside the docker console?
I tried several ways and failed...
UPDATED #3:
I installed DBeaver Version 7.2.1.202009201907, logged into my database, and tried to create a new database for testing. I got an error:
https://prnt.sc/uol0z7
How can I fix it?
Do I have to add some more rights to my mysql container definition?
Thanks!
To answer your question: it seems your main problem is that you are trying to create a new test database while logged in with a non-root user. A non-root user has very limited permissions for such operations, which is why you got that error message and cannot create a new database. To solve this, log in with the root user and make sure your DBeaver config is correct:
host: 127.0.0.1
port: <Exposed MySQL port based on your docker-compose, e.g: 3330>
username: root
password: <MYSQL_ROOT_PASSWORD from your docker-compose>
With the root user, you should be able to create any new database. Another thing: if you want to connect via the mysql CLI, make sure you also provide the correct published port of the docker container. The command should look like this:
mysql -u root -h 127.0.0.1 -P <Exposed MySQL port based on your docker-compose, e.g: 3330> -p
Hope it helps and solves your problem. :)
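As an alternative to switching DBeaver to root, you can create the testing database once as root and grant the existing user rights on it (in MySQL, CREATE SCHEMA is just a synonym for CREATE DATABASE). A sketch using the service, user, and database names from the question:
docker-compose exec mysql mysql -u root -p
-- then, at the mysql prompt:
CREATE DATABASE IF NOT EXISTS `vanilla-crm-testing`;
GRANT ALL PRIVILEGES ON `vanilla-crm-testing`.* TO 'vanilla-crm-usr'@'%';
FLUSH PRIVILEGES;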

Accessing mysql from build commands in bitbucket pipelines

I'm getting ERROR 2002 (HY000): Can't connect to local MySQL when trying to execute a mysql command during my CI process.
Here is my bitbucket-pipelines.yml file
image: theotherperson/php-ci:5.6
pipelines:
  default:
    - step:
        caches:
          - composer
        script:
          - apt-get update && apt-get install -y unzip mysql-client
          - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
          - composer install --no-scripts --no-plugins
          - cp test-assets/vhosts/000-default.conf /etc/apache2/sites-enabled/000-default.conf
          - cp test-assets/hosts/hosts /etc/hosts
          - rm /var/www/html/index.html
          - cp -R $BITBUCKET_CLONE_DIR /var/www/html
          - service apache2 restart
          - mysql -u root -p$MYSQL_ROOT_PASSWORD -e "test < $BITBUCKET_CLONE_DIR/data/test/test.sql"
          - phantomjs --webdriver=4444 &
          - vendor/bin/behat -p test_behat
        services:
          - mysql
definitions:
  services:
    mysql:
      image: mysql
      environment:
        MYSQL_DATABASE: 'test'
        MYSQL_ROOT_PASSWORD: 'mypassword'
And here is the error:
+ mysql -u root -p$MYSQL_ROOT_PASSWORD -e "test < $BITBUCKET_CLONE_DIR/data/test/test.sql"
Enter password: ERROR 2002 (HY000): Can't connect to local MySQL
What do I need to do to be able to access mysql from this command line?
Look at their documentation:
Host name: 127.0.0.1 (avoid using localhost, as some clients will attempt to connect via a local "Unix socket", which will not work in Pipelines)
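Applied to the failing step, the host needs to be 127.0.0.1 and the file import belongs in a shell redirect rather than inside -e; a hedged rewrite of the script line from the question:
- mysql -h 127.0.0.1 -u root -p$MYSQL_ROOT_PASSWORD test < $BITBUCKET_CLONE_DIR/data/test/test.sql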

Adding Flyway to a MySQL Docker Container

I'm building a derivative of this Docker container for MySQL (using it as a starting point): https://github.com/docker-library/mysql
I've amended the Dockerfile to add in Flyway. Everything is set up to edit the config file to connect to the local DB instance, etc. The intent is to call this command from inside the https://github.com/docker-library/mysql/blob/master/5.7/docker-entrypoint.sh file (which runs as the ENTRYPOINT) around line 186:
flyway migrate
I get a connection refused when this is run from inside the shell script:
Flyway 4.1.2 by Boxfuse
ERROR:
Unable to obtain Jdbc connection from DataSource
(jdbc:mysql://localhost:3306/db-name) for user 'root': Could not connect to address=(host=localhost)(port=3306)(type=master) : Connection refused
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL State : 08
Error Code : -1
Message : Could not connect to address=(host=localhost)(port=3306)(type=master) : Connection refused
But, if I remove the command from the shell script, rebuild and log in to the container, and run the same command manually, it works with no problems.
I suspect that there may be some difference in how the script connects to the DB to do its thing (it has a built-in SQL "runner"), but I can't seem to hunt it down. The container restarts the server during the process, which may be the difference here.
Since this container is intended for development, one alternative (a work-around, really) is to use this container's built-in SQL "runner", with the filename format that Flyway expects, and then use Flyway to manage the production DB's versions (a sketch of this follows below).
Thanks in advance for any help.
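(A sketch of the work-around mentioned above: the official image's built-in runner executes anything in /docker-entrypoint-initdb.d at first start, so the migration files can keep Flyway's versioned naming and simply be copied there in the Dockerfile; the file names below are illustrative.)
COPY sql/V1__baseline_schema.sql /docker-entrypoint-initdb.d/
COPY sql/V2__add_users_table.sql /docker-entrypoint-initdb.d/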
I think it's a good approach to start from a ready-made image (at least to begin with).
You can start from the official "mysql" Docker image:
FROM mysql
If you start from a finished image, Docker will only have to rebuild the difference when you create a new version of your image.
As a next step you can install Java and net-tools:
RUN apt-get -y install apt-utils openjdk-8-jdk net-tools
Config mysql
ENV MYSQL_DATABASE=mydb
ENV MYSQL_ROOT_PASSWORD=root
Add flyway
ADD flyway /opt/flyway
Add migrations
ADD sql /opt/flyway/sql
Add config flyway
ADD config /opt/flyway/conf
Add script to start
ADD start /root/start.sh
Check start mysql
RUN netstat -ntlp
Check java version
RUN java -version
Example file: /opt/flyway/conf/flyway.conf
flyway.driver=com.mysql.jdbc.Driver
flyway.url=jdbc:mysql://localhost:3306/mydb
flyway.user=root
flyway.password=root
Example file: start.sh
#!/bin/bash
cd /opt/flyway
flyway migrate
# start.sh could be adapted to run either production or development migrations
Flyway documentation
As a next step, you could also run Flyway as a service. For example:
docker run -it -p 3307:3306 my_docker_flyway /root/start << migration_prod.sh
docker run -it -p 3308:3306 my_docker_flyway /root/start << migration_dev.sh
etc ...
services:
  # Standard MySQL box; we have to add a few tricky options, otherwise logging in from Workbench is hard
  supermonk-mysql:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    environment:
      - MYSQL_ROOT_PASSWORD=P#ssw0rd
      - MYSQL_ROOT_HOST=%
      - MYSQL_DATABASE=test
    ports:
      - "3306:3306"
    healthcheck:
      test: ["CMD-SHELL", "nc -z 127.0.0.1 3306 || exit 1"]
      interval: 1m30s
      timeout: 60s
      retries: 6
  # Flyway is best for MySQL schema migration history.
  supermonk-flyway:
    container_name: supermonk-flyway
    image: boxfuse/flyway
    command: -url=jdbc:mysql://supermonk-mysql:3306/test?verifyServerCertificate=false&useSSL=true -schemas=test -user=root -password=P#ssw0rd migrate
    volumes:
      - "./sql:/flyway/sql"
    depends_on:
      - supermonk-mysql
mkdir ./sql
vi ./sql/V1.1__Init.sql # and paste below
CREATE TABLE IF NOT EXISTS test.USER (
id VARCHAR(64),
fname VARCHAR(256),
lname VARCHAR(256),
CONSTRAINT pk PRIMARY KEY (id));
save and close
docker-compose up -d
wait for 2 minutes
docker-compose run supermonk-flyway
Ref:
https://github.com/supermonk/webapp/tree/branch-1/docker/docker-database
Thanks to the Docker and MySQL communities.
To follow the logs:
docker-compose logs -f

Setting up MySQL and importing dump within Dockerfile

I'm trying to set up a Dockerfile for my LAMP project, but I'm having a few problems when starting MySQL. I have the following lines in my Dockerfile:
VOLUME ["/etc/mysql", "/var/lib/mysql"]
ADD dump.sql /tmp/dump.sql
RUN /usr/bin/mysqld_safe & sleep 5s
RUN mysql -u root -e "CREATE DATABASE mydb"
RUN mysql -u root mydb < /tmp/dump.sql
But I keep getting this error:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)
Any ideas on how to setup database creation and dump import during a Dockerfile build?
The latest version of the official mysql docker image allows you to import data on startup. Here is my docker-compose.yml
data:
  build: docker/data/.
mysql:
  image: mysql
  ports:
    - "3307:3306"
  environment:
    MYSQL_ROOT_PASSWORD: 1234
  volumes:
    - ./docker/data:/docker-entrypoint-initdb.d
  volumes_from:
    - data
Here, I have my data-dump.sql under docker/data which is relative to the folder the docker-compose is running from. I am mounting that sql file into this directory /docker-entrypoint-initdb.d on the container.
If you are interested to see how this works, have a look at their docker-entrypoint.sh in GitHub. They have added this block to allow importing data
echo
for f in /docker-entrypoint-initdb.d/*; do
    case "$f" in
        *.sh)  echo "$0: running $f"; . "$f" ;;
        *.sql) echo "$0: running $f"; "${mysql[@]}" < "$f" && echo ;;
        *)     echo "$0: ignoring $f" ;;
    esac
    echo
done
An additional note: if you want the data to be persisted even after the mysql container is stopped and removed, you need a separate data container, as you see in the docker-compose.yml. The contents of the data container's Dockerfile are very simple.
FROM n3ziniuka5/ubuntu-oracle-jdk:14.04-JDK8
VOLUME /var/lib/mysql
CMD ["true"]
The data container doesn't even have to be running for the data to persist.
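(On newer Compose file versions (2+), a named volume does the same job as the dedicated data container; a sketch, not part of the original setup:)
services:
  mysql:
    image: mysql
    ports:
      - "3307:3306"
    environment:
      MYSQL_ROOT_PASSWORD: 1234
    volumes:
      - ./docker/data:/docker-entrypoint-initdb.d
      - db_data:/var/lib/mysql
volumes:
  db_data: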
Each RUN instruction in a Dockerfile is executed in a different layer (as explained in the documentation of RUN).
In your Dockerfile, you have three RUN instructions. The problem is that the MySQL server is only started in the first one. In the others, no MySQL server is running, which is why you get the connection error from the mysql client.
To solve this problem you have 2 solutions.
Solution 1: use a one-line RUN
RUN /bin/bash -c "/usr/bin/mysqld_safe --skip-grant-tables &" && \
sleep 5 && \
mysql -u root -e "CREATE DATABASE mydb" && \
mysql -u root mydb < /tmp/dump.sql
Solution 2: use a script
Create an executable script init_db.sh:
#!/bin/bash
/usr/bin/mysqld_safe --skip-grant-tables &
sleep 5
mysql -u root -e "CREATE DATABASE mydb"
mysql -u root mydb < /tmp/dump.sql
Add these lines to your Dockerfile:
ADD init_db.sh /tmp/init_db.sh
RUN /tmp/init_db.sh
What I did was put my SQL dump in a "db-dump" folder and mount it:
mysql:
  image: mysql:5.6
  environment:
    MYSQL_ROOT_PASSWORD: pass
  ports:
    - 3306:3306
  volumes:
    - ./db-dump:/docker-entrypoint-initdb.d
When I run docker-compose up for the first time, the dump is restored in the db.
Here is a working version using v3 of docker-compose.yml. The key is the volumes directive:
mysql:
  image: mysql:5.6
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: root
    MYSQL_USER: theusername
    MYSQL_PASSWORD: thepw
    MYSQL_DATABASE: mydb
  volumes:
    - ./data:/docker-entrypoint-initdb.d
In the directory that I have my docker-compose.yml I have a data dir that contains .sql dump files. This is nice because you can have a .sql dump file per table.
I simply run docker-compose up and I'm good to go. Data automatically persists between stops. If you want to remove the data and "suck in" new .sql files, run docker-compose down and then docker-compose up.
If anyone knows how to get the mysql docker to re-process files in /docker-entrypoint-initdb.d without removing the volume, please leave a comment and I will update this answer.
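(One hedged note here: the entrypoint only runs the init scripts when /var/lib/mysql is empty, so a full reset still means removing the volume along with the container; a sketch:)
docker-compose down -v   # -v also removes the volumes
docker-compose up        # init scripts in /docker-entrypoint-initdb.d run again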
I used the docker-entrypoint-initdb.d approach (thanks to @Kuhess).
But in my case I wanted to create my DB based on some parameters defined in a .env file, so I did the following:
1) First I define a .env file, something like this, in my Docker project root directory:
MYSQL_DATABASE=my_db_name
MYSQL_USER=user_test
MYSQL_PASSWORD=test
MYSQL_ROOT_PASSWORD=test
MYSQL_PORT=3306
2) Then I define my docker-compose.yml file, using the args directive to define my environment variables and setting them from the .env file:
version: '2'
services:
  ### MySQL Container
  mysql:
    build:
      context: ./mysql
      args:
        - MYSQL_DATABASE=${MYSQL_DATABASE}
        - MYSQL_USER=${MYSQL_USER}
        - MYSQL_PASSWORD=${MYSQL_PASSWORD}
        - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
    ports:
      - "${MYSQL_PORT}:3306"
3) Then I define a mysql folder that includes a Dockerfile. The Dockerfile is this:
FROM mysql:5.7
RUN chown -R mysql:root /var/lib/mysql/
ARG MYSQL_DATABASE
ARG MYSQL_USER
ARG MYSQL_PASSWORD
ARG MYSQL_ROOT_PASSWORD
ENV MYSQL_DATABASE=$MYSQL_DATABASE
ENV MYSQL_USER=$MYSQL_USER
ENV MYSQL_PASSWORD=$MYSQL_PASSWORD
ENV MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD
ADD data.sql /etc/mysql/data.sql
RUN sed -i 's/MYSQL_DATABASE/'$MYSQL_DATABASE'/g' /etc/mysql/data.sql
RUN cp /etc/mysql/data.sql /docker-entrypoint-initdb.d
EXPOSE 3306
4) Now I use mysqldump to dump my DB and put the resulting data.sql inside the mysql folder:
mysqldump -h <server name> -u<user> -p <db name> > data.sql
The file is just a normal SQL dump file, but I add 2 lines at the beginning so that it looks like this:
--
-- Create a database using `MYSQL_DATABASE` placeholder
--
CREATE DATABASE IF NOT EXISTS `MYSQL_DATABASE`;
USE `MYSQL_DATABASE`;
-- Rest of queries
DROP TABLE IF EXISTS `x`;
CREATE TABLE `x` (..)
LOCK TABLES `x` WRITE;
INSERT INTO `x` VALUES ...;
...
...
...
So what happens is that the "RUN sed -i 's/MYSQL_DATABASE/'$MYSQL_DATABASE'/g' /etc/mysql/data.sql" command replaces the MYSQL_DATABASE placeholder with the name of the DB that I set in the .env file.
|- docker-compose.yml
|- .env
|- mysql
|  |- Dockerfile
|  |- data.sql
Now you are ready to build and run your container
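A minimal way to build and start this layout, assuming the files are arranged as shown:
docker-compose up -d --build
docker-compose logs -f mysql   # watch the init script import data.sql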
Edit: I had misunderstood the question here. My answer below explains how to run SQL commands at container creation time, not at image creation time as desired by the OP.
I'm not quite fond of Kuhess's accepted answer, as the sleep 5 seems a bit hackish to me: it assumes that the mysql daemon has finished loading within that time frame, which is an assumption rather than a guarantee. Also, if you use a provided mysql Docker image, the image itself already takes care of starting up the server; I would not interfere with that by running a custom /usr/bin/mysqld_safe.
I followed the other answers around here and copied bash and SQL scripts into the folder /docker-entrypoint-initdb.d/ within the Docker container, as this is clearly the way intended by the mysql image provider. Everything in this folder is executed once the DB daemon is ready, hence you should be able to rely on it.
As an addition to the others, since no other answer explicitly mentions this: besides SQL scripts you can also copy bash scripts into that folder, which might give you more control.
This is what I needed, for example, as I also had to import a dump, but the dump alone was not sufficient because it did not specify which database it should be imported into. So in my case I have a script named db_custom_init.sh with this content:
mysql -u root -p$MYSQL_ROOT_PASSWORD -e 'create database my_database_to_import_into'
mysql -u root -p$MYSQL_ROOT_PASSWORD my_database_to_import_into < /home/db_dump.sql
and this Dockerfile copying that script:
FROM mysql/mysql-server:5.5.62
ENV MYSQL_ROOT_PASSWORD=XXXXX
COPY ./db_dump.sql /home/db_dump.sql
COPY ./db_custom_init.sh /docker-entrypoint-initdb.d/
Based on Kuhess's response, but without a hard-coded sleep:
RUN /bin/bash -c "/usr/bin/mysqld_safe --skip-grant-tables &" && \
while ! mysqladmin ping --silent; do sleep 1; echo "wait 1 second"; done && \
mysql -u root -e "CREATE DATABASE mydb" && \
mysql -u root mydb < /tmp/dump.sql
Any file or script added to /docker-entrypoint-initdb.d will be executed when the container starts.
Make sure that you do not add or run, from the Dockerfile, any .sql or .sh file that needs the MySQL service: it will fail and stop the image build, because the MySQL service has not started yet when those files or scripts are called. The best way to add a .sh file is to ADD it to the /docker-entrypoint-initdb.d directory from your Dockerfile.
Working example:
FROM mysql
ADD mysqlcode.sh /docker-entrypoint-initdb.d/mysqlcode.sh
ADD db.sql /home/db.sql
RUN chmod -R 775 /docker-entrypoint-initdb.d
ENV MYSQL_ROOT_PASSWORD mypassword
and mysqlcode.sh will run its commands once the MySQL service is active:
mysqlcode.sh
#!/bin/bash
mysql -u root -pmypassword --execute "CREATE DATABASE IF NOT EXISTS mydatabase;"
mysql -u root -pmypassword mydatabase < /home/db.sql
I have experienced the same problem, but managed to get it working by separating the MySQL start-up commands:
sudo docker build -t mydb_img -f Dockerfile.dev .
sudo docker run --name SomeDB -e MYSQL_ROOT_PASSWORD="WhatEver" -p 3306:3306 -v $(pwd):/app -d mydb_img
Then I sleep for 20 seconds before running the MySQL scripts, and it works.
sudo docker exec -it SomeDB sh -c yourscript.sh
I can only presume that the MySQL server takes a few seconds to start up before it can accept incoming connections and scripts.
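(Instead of a fixed sleep, polling the server with mysqladmin ping tends to be more reliable; a sketch using the container name and password from the commands above:)
until sudo docker exec SomeDB mysqladmin ping -uroot -pWhatEver --silent; do
    echo "waiting for MySQL..."
    sleep 2
done
sudo docker exec -it SomeDB sh -c yourscript.sh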