I am building an application that uses Node.js as the backend and MySQL as the database, and currently my steps to bring up the app (without Docker) are:
1. Install Node.js
2. Install MySQL
3. Launch mysqld on port 3306
4. Manually create a MySQL user dedicated to the Node.js backend. This user should have only basic privileges on my desired schema.
5. Run sequelize commands to perform data migration and seeding, using the user created in step 4
6. npm install and npm start to launch Node.js on port 8080
Now I want to dockerize my application, and I already have the following Dockerfile:
#node version: carbon
#app version: 1.0.0
FROM node:8.11.2
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
I have put an init.sql file in the ./docker_db folder, which does the following:
CREATE USER 'app_user'@'%' IDENTIFIED BY 'password';
CREATE SCHEMA `myapp` DEFAULT CHARACTER SET utf8;
GRANT INSERT, CREATE, ALTER, UPDATE, SELECT, REFERENCES ON myapp.*
  TO 'app_user'@'%' IDENTIFIED BY 'password'
  WITH GRANT OPTION;
and the following docker-compose.yaml:
version: '3.6'
services:
  mysql1:
    image: mysql/mysql-server:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    ports:
      - "127.0.0.1:3306:3306"
    volumes:
      - type: bind
        source: ./docker_db
        target: /docker-entrypoint-initdb.d
    expose:
      - "3306"
    networks:
      - app-network
  myapp:
    build:
      context: .
      dockerfile: Dockerfile
    command: npm start
    depends_on:
      - mysql1
    ports:
      - "127.0.0.1:8080:8080"
    expose:
      - "8080"
    links:
      - mysql1
    networks:
      - app-network
    command: ["./wait-for-db.sh"]
networks:
  app-network:
    driver: bridge
where my ./wait-for-db.sh does the following:
#!/bin/bash
# wait until the server accepts connections (note: no space after -p)
until mysql -h mysql1 -u app_user -ppassword -e 'select 1'; do
  echo "still waiting for mysql"; sleep 1
done
# only the final command may use exec, since exec replaces the shell
node ./db/scripts/generateSequelizeCLIConfig.js
node_modules/sequelize-cli/bin/sequelize db:migrate
node_modules/sequelize-cli/bin/sequelize db:seed:all
exec npm start
(BTW, I do want to expose 3306 to the host machine so that I can use Workbench to connect to the MySQL server, and I have successfully connected that way.)
In my sequelize config file I do have:
"username": "app_user",
"password": "password",
"database": "myapp",
"host": "mysql1",
"port": "3306"
With the above settings, I executed docker-compose up, and then I got the following lines:
mysql1_1 | [Entrypoint] MySQL Docker Image 5.7.22-1.1.5
mysql1_1 | [Entrypoint] Initializing database
myapp_1 | standard_init_linux.go:190: exec user process caused "no such file or directory"
myapp_myapp_1 exited with code 1
mysql1_1 | [Entrypoint] Database initialized
mysql1_1 | Warning: Unable to load '/usr/share/zoneinfo/iso3166.tab' as time zone. Skipping it.
mysql1_1 | Warning: Unable to load '/usr/share/zoneinfo/leapseconds' as time zone. Skipping it.
mysql1_1 | Warning: Unable to load '/usr/share/zoneinfo/tzdata.zi' as time zone. Skipping it.
mysql1_1 | Warning: Unable to load '/usr/share/zoneinfo/zone.tab' as time zone. Skipping it.
mysql1_1 | Warning: Unable to load '/usr/share/zoneinfo/zone1970.tab' as time zone. Skipping it.
mysql1_1 |
mysql1_1 | [Entrypoint] running /docker-entrypoint-initdb.d/init.sql
mysql1_1 |
mysql1_1 |
mysql1_1 | [Entrypoint] Server shut down
mysql1_1 |
mysql1_1 | [Entrypoint] MySQL init process done. Ready for start up.
mysql1_1 |
mysql1_1 | [Entrypoint] Starting MySQL 5.7.22-1.1.5
The problems I now face are:
1) The script's execution hangs on the last line, Starting MySQL 5.7.22-1.1.5, and doesn't go anywhere.
2) In the output, the 3rd and 4th lines show an error: exec user process caused "no such file or directory". I don't think it is caused by the commands in wait-for-db.sh, because the problem persists even if I remove the lines after the until command. In fact, I doubt execution ever reaches those lines; it feels like it never gets past the until command.
I think it's really close to the final solution though :)
Use the name of your db service, which is mysql1, as your database host. Docker will resolve it to the actual IP. Also, why do you have FROM mysql:5.7 in your Dockerfile? I don't think it is of any use.
Updated
Alright, it seems like myapp runs the db scripts before the db is ready. See here for the solution: https://docs.docker.com/compose/startup-order/
The problem is probably related to timing. Both containers start at the same time, and your node app will try to connect to mysql almost immediately, while the MySQL server is still starting.
docker-compose doesn't have a built-in mechanism for this, so you will have to build an entrypoint in your node app that first waits for mysql to respond.
So, in your case, the entrypoint would be something like
#!/bin/bash
until mysql -h mysql1 -uapp_user -ppassword -e'select 1'; do echo "still waiting for mysql"; sleep 1; done
exec npm start
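One note on problem 2: the standard_init_linux.go "no such file or directory" error usually means the entrypoint script itself could not be started (bad shebang, CRLF line endings, or a missing executable bit), not that a command inside it failed. A hedged checklist, using the file names above:
dos2unix wait-for-db.sh   # Windows CRLF line endings break the #!/bin/bash shebang lookup
chmod +x wait-for-db.sh   # the script must be executable before it is copied into the image
# the node base image also ships without a mysql client, so the until loop needs one, e.g.:
#   RUN apt-get update && apt-get install -y mysql-client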
(See UPDATE at end of post for potentially helpful debug info.)
I have a CircleCI job that deploys MySQL 8 via setup_remote_docker + docker-compose and then attempts to start a Java app to communicate with MySQL 8. Unfortunately, even though docker ps shows the container is up and running, any attempt to communicate with MySQL, either through the Java app or docker exec, fails, saying the container is not running (and Java throws a "Communications Link Failure" exception). It's a bit confusing because the container appears to be up, and the exact same commands work on my local machine.
Here's my CircleCI config.yml:
Build and Test:
  <<: *configure_machine
  steps:
    - *load_repo
    - ... other unrelated stuff ...
    - *load_gradle_wrapper
    - run:
        name: Install Docker Compose
        environment:
          COMPOSE_VERSION: '1.29.2'
        command: |
          curl -L "https://github.com/docker/compose/releases/download/${COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)" -o ~/docker-compose
          chmod +x ~/docker-compose
          sudo mv ~/docker-compose /usr/local/bin/docker-compose
    - setup_remote_docker
    - run:
        name: Start MySQL docker
        command: docker-compose up -d
    - run:
        name: Check Docker MySQL
        command: docker ps
    - run:
        name: Query MySQL # test that fails
        command: docker exec -it mysql8_test_mysql mysql mysql -h 127.0.0.1 --port 3306 -u root -prootpass -e "show databases;"
And here's my docker-compose.yml that is run in one of the steps:
version: "3.1"
services:
# MySQL Dev Image
mysql-migrate:
container_name: mysql8_test_mysql
image: mysql:8.0
command:
mysqld --default-authentication-plugin=mysql_native_password
--character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
--log-bin-trust-function-creators=true
environment:
MYSQL_DATABASE: test_db
MYSQL_ROOT_PASSWORD: rootpass
ports:
- "3306:3306"
volumes:
- "./docker/mysql/data:/var/lib/mysql"
- "./docker/mysql/my.cnf:/etc/mysql/conf.d/my.cnf"
- "./mysql_schema_v1.sql:/docker-entrypoint-initdb.d/mysql_schema_v1.sql"
It's a fairly simple setup and the output from CircleCI is positive until it reaches the docker exec, which I added to test the connection. Here is what the output from CircleCI says per step:
Start MySQL Docker:
#!/bin/bash -eo pipefail
docker-compose up -d
Creating network "project_default" with the default driver
Pulling mysql-migrate (mysql:8.0)...
8.0: Pulling from library/mysql
5158dd02: Pulling fs layer
f6778b18: Pulling fs layer
a6c74a04: Pulling fs layer
4028a805: Pulling fs layer
7163f0f6: Pulling fs layer
cb7f57e0: Pulling fs layer
7a431703: Pulling fs layer
5fe86aaf: Pulling fs layer
add93486: Pulling fs layer
960383f3: Pulling fs layer
80965951: Pulling fs layer
Digest: sha256:b17a66b49277a68066559416cf44a185cfee538d0e16b5624781019bc716c122
Status: Downloaded newer image for mysql:8.0
Creating mysql8_******_mysql ...
Creating mysql8_******_mysql ... done
So we know MySQL 8 was pulled fine (and therefore the previous step worked). Next step is to ask Docker what's running.
Check Docker MySQL:
#!/bin/bash -eo pipefail
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cb6b7941ad65 mysql:8.0 "docker-entrypoint.s…" 1 second ago Up Less than a second 0.0.0.0:3306->3306/tcp, 33060/tcp mysql8_test_mysql
CircleCI received exit code 0
Looks good so far. But now let's actually try to run a command against it via docker exec.
Query MySQL:
#!/bin/bash -eo pipefail
docker exec -it mysql8_test_mysql mysql mysql -h 127.0.0.1 --port 3306 -u root -prootpass -e "show databases;"
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1:3306' (111)
Exited with code exit status 1
CircleCI received exit code 1
So now we can't connect to MySQL even though docker ps showed it up and running. I even tried adding an absurd step to wait in case MySQL needed more time:
- run:
    name: Start MySQL docker
    command: docker-compose up -d
- run:
    name: Check Docker MySQL
    command: docker ps
- run:
    name: Wait Until Ready
    command: sleep 120
- run:
    name: Query MySQL
    command: docker exec -it mysql8_test_mysql mysql mysql -h 127.0.0.1 --port 3306 -u root -prootpass -e "show databases;"
Of course adding a 2 minute wait for MySQL to spin up didn't help. Any ideas as to why this is so difficult in CircleCI?
Thanks in advance.
UPDATE 1: I can successfully start MySQL if I SSH into the job's server and run the same command myself:
docker-compose up
Then in another terminal run this:
docker exec -it mysql8_test_mysql mysql mysql -h localhost --port 3306 -u root -prootpass -e "show databases;"
+--------------------+
| Database |
+--------------------+
| information_schema |
| test_db |
| mysql |
| performance_schema |
| sys |
+--------------------+
So it is possible to start MySQL. It's just not working when run through the job steps.
UPDATE 2: I moved the two-minute wait between docker-compose up -d and docker ps, and now it shows nothing running. So the container must be starting and then crashing, which is why it's not available moments later.
The cause of the problem was the volumes entry in my docker-compose.yml with this line:
- "./mysql_schema_v1.sql:/docker-entrypoint-initdb.d/mysql_schema_v1.sql"
The container appeared to be up when I checked immediately after docker-compose up -d, but in actuality it would crash seconds later, because CircleCI appears to have an issue with Docker volumes, potentially related to this: https://discuss.circleci.com/t/docker-compose-doesnt-mount-volumes-with-host-files-with-circle-ci/19099.
To make it work I removed that volume entry and added run commands to copy and import the schema like so:
- run:
    name: Start MySQL docker
    command: docker-compose up -d
# Manually copy schema file instead of using docker-compose volumes (has issues with CircleCI)
- run:
    name: Copy Schema
    command: docker cp mysql_schema_v1.sql mysql8_mobile_mysql:docker-entrypoint-initdb.d/mysql_schema_v1.sql
- run:
    name: Import Schema
    command: docker exec mysql8_mobile_mysql /bin/sh -c 'mysql -u root -prootpass < docker-entrypoint-initdb.d/mysql_schema_v1.sql'
With this new setup I've been able to create the tables and connect to MySQL. However, there appears to be an issue with tests against MySQL causing hang-ups, but that might be unrelated. I will follow up with more information, but at least I hope this can help someone else.
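As a follow-up on the test hang-ups: rather than a fixed sleep, a polling loop against the server is a more reliable gate before running tests (a sketch; container name and password as above):
for i in $(seq 1 60); do
  docker exec mysql8_test_mysql mysqladmin ping -h 127.0.0.1 -prootpass --silent && break   # exits 0 once the server answers
  sleep 2
done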
I had a spring-boot project that used the mysql docker image, so I didn't need to download MySQL Workbench. For other reasons I had to start over, so I created a new project that uses the same mysql docker image I previously used.
My docker-compose.yml mysql service looks like this
version: "3.7"
services:
db:
image: mysql:5.7
command: --lower_case_table_names=1
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: farming_db
MYSQL_USER: root
MYSQL_PASSWORD: root
restart: always
volumes:
- "./database/farming_db/:/var/lib/mysql" #local
- farming_db:/var/lib/mysql/data #docker
ports:
- "3306:3306"
container_name: farming_mysql
networks:
- backend-network
When I run
docker-compose up
This is the error:
Attaching to farming_mysql, farming_server_springboot_1
farming_mysql | 2021-03-18 07:03:20+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.7.33-1debian10 started.
farming_mysql | 2021-03-18 07:03:20+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
farming_mysql | 2021-03-18 07:03:20+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.7.33-1debian10 started.
farming_mysql | 2021-03-18 07:03:21+00:00 [Note] [Entrypoint]: Initializing database files
farming_mysql | 2021-03-18T07:03:21.058436Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
farming_mysql | 2021-03-18T07:03:21.063630Z 0 [ERROR] --initialize specified but the data directory has files in it. Aborting.
farming_mysql | 2021-03-18T07:03:21.063710Z 0 [ERROR] Aborting
farming_mysql |
farming_mysql exited with code 1
springboot_1 |
I understand that my data directory is not empty. I am using both "./database/farming_db/:/var/lib/mysql" and "farming_db:/var/lib/mysql/data" as volumes. I think the problem is with the latter, because the former directory is empty. I'm having trouble deleting the contents of the latter because I don't know how to access it.
So this is what I've tried :
I deleted all the containers and then deleted all the volumes with docker volume prune, but it didn't work.
I read that I could run rm -rf /usr/local/var/mysql, but I don't know where to execute this command, since the container won't run properly at all.
I deleted the mysql image and just ran docker-compose up again. This pulls a fresh mysql image, but I still get the same error. I guess the volume directory has nothing to do with the docker image itself.
I deleted the "- farming_db:/var/lib/mysql/data #docker" line from the docker-compose file. But the same error still occurs!
I'm using Windows10.
My questions:
How can I access the directory? I don't know where to run the rm -rf command.
Why does this error still occur even when I remove "- farming_db:/var/lib/mysql/data #docker" from the docker-compose file?
Also, could anyone explain what is going on? I'm new to docker and I don't really understand these volume problems.
Run docker system prune --volumes
This frees up disk space by removing stopped containers, unused networks, dangling images, and unused volumes. Sometimes the mentioned issue can occur due to space limitations.
Generally, I emptied the volume's data directory and changed the MySQL version.
So, in steps:
empty the volume directory's contents
modify the mysql version in docker-compose.yml from 5.7 to 5.7.16
This line indicates that the mysql container is storing its data inside a database directory located next to your docker-compose.yml:
volumes:
  - "./database/farming_db/:/var/lib/mysql" # local
This kind of volume isn't managed by Docker; it's just a directory in your filesystem, which is why docker volume prune doesn't work on it. You can tell because it starts with a (relative or absolute) path.
The other volume, farming_db, is managed by Docker. You can tell because it starts with a simple name. This kind of volume is removed with prune.
So, answering:
In the same directory as your docker-compose.yml, you can remove that database folder.
Because the first volume, the one mounted at /var/lib/mysql, still exists. MySQL keeps all of its files inside this directory, and each child directory is a database.
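Put together, the cleanup would look something like this (a sketch, run from the directory containing docker-compose.yml; on Windows, delete the folder from Explorer or PowerShell instead of rm):
docker-compose down -v        # removes the containers plus Docker-managed volumes such as farming_db
rm -rf ./database/farming_db  # the bind-mounted host directory has to be emptied by hand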
You're just trying to get a container running, and docker-compose hides a lot of details.
This is just a detail, but MYSQL_USER should be different from root.
You can let Docker manage the volume entirely, creating a single volume to hold all the data; here I named it mysql_data:
volumes:
- mysql_data:/var/lib/mysql
Or, you can explore the equivalent docker run command to get used to it:
docker run -d --name mysql \
-e MYSQL_ROOT_PASSWORD=root \
-e MYSQL_DATABASE=farming_db \
-e MYSQL_USER=myuser \
-e MYSQL_PASSWORD=mypass \
-v mysql_data:/var/lib/mysql \
-p 3306:3306 \
mysql:5.7
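To see where Docker actually keeps a managed volume such as mysql_data, docker volume inspect shows the host-side path (volume name as in the command above):
docker volume ls                                               # list Docker-managed volumes
docker volume inspect mysql_data --format '{{ .Mountpoint }}'  # host path holding the volume's data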
As in vencedor's answer, this worked for me. If anyone needs to stay on mysql 5.7, you can add these lines to your db service in docker-compose.yml:
- /etc/group:/etc/group:ro
- /etc/passwd:/etc/passwd:ro
user: "1000:1000"
I used docker-compose to run the mysql image and encountered the same error.
I was using the following volume configuration:
- ./mysql/data:/var/lib/mysql/data
Then I changed it to the following and the error was solved:
- ./mysql:/var/lib/mysql
(The server's data directory is /var/lib/mysql itself, so the volume should be mounted there rather than one level deeper.)
I am attempting to create a docker container using the mysql:5 docker image. Once the MySQL server is up and running I want to create some databases, users and tables.
My Dockerfile looks like this:
FROM mysql:5
# Add db.data with correct permissions
RUN mkdir /server_data
WORKDIR /server_data
ADD --chown="root:root" ./db.data .
# Copy setup directory
COPY ./setup setup
COPY ./config /etc/mysql/conf.d
CMD ["./setup/setup.sh", "mysql", "-u", "root", "<", "./setup/schema.sql"]
My ./setup/setup.sh script looks like this:
#!/bin/bash
# wait-for-mysql.sh
set -e

shift
cmd="$@"

until mysql -uroot -c '\q'; do
  >&2 echo "mysql is unavailable - sleeping"
  sleep 1
done

>&2 echo "mysql is up - executing command"
exec $cmd
My docker-compose.yml looks like this:
version: "3"
services:
db:
build: ./db
volumes:
- data-db:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=password
restart: always
container_name: db
volumes:
data-db:
When I run 'docker-compose up --build' I get the following output;
Building db
Step 1/7 : FROM mysql:5
---> 0d16d0a97dd1
Step 2/7 : RUN mkdir /server_data
---> Using cache
---> 087b5ded3a53
Step 3/7 : WORKDIR /server_data
---> Using cache
---> 5a32ea1b0a49
Step 4/7 : ADD --chown="root:root" ./db.data .
---> Using cache
---> 5d453c52a9f1
Step 5/7 : COPY ./setup setup
---> 9c5359818748
Step 6/7 : COPY ./config /etc/mysql/conf.d
---> b663a380813f
Step 7/7 : CMD ["./setup/setup.sh", "mysql", "-u", "root", "<", "./setup/schema.sql"]
---> Running in 4535b2620141
Removing intermediate container 4535b2620141
---> 2d2fb7e308ad
Successfully built 2d2fb7e308ad
Successfully tagged wasdbsandbox_db:latest
Recreating db ... done
Attaching to db
db | ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
db | mysql is unavailable - sleeping
db | ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
db | mysql is unavailable - sleeping
This goes on interminably until I press Ctrl + c.
If I comment out the CMD line in my Dockerfile the output from running 'docker-compose up --build' is the output of the ENTRYPOINT command that is defined in the official mysql Dockerfile.
Why is mysql never starting when I use my own CMD command?
This is supported already by the official mysql image. No need to make your own custom solution.
Look at the Docker Hub README under "Initializing a fresh instance".
You can see in the official image, under the 5.7 Dockerfile (for example), that it copies in an ENTRYPOINT script. That script doesn't run at build time, but at run time, right before the CMD starts the daemon.
In that existing ENTRYPOINT script you'll see that it will process any files you put in /docker-entrypoint-initdb.d/
So in short, when you start a new container from that existing official image, it will:
start mysqld in local-only mode
create the default user, db, password, etc.
run any scripts you put in /docker-entrypoint-initdb.d/
stop mysqld and hand off to the Dockerfile CMD
run mysqld again via the CMD, now listening on the network
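In other words, the custom CMD can be dropped entirely; mounting (or COPYing) the schema into that directory is enough. A minimal sketch with docker run, assuming the setup/schema.sql and config paths from the question:
docker run -d --name db \
  -e MYSQL_ROOT_PASSWORD=password \
  -v "$PWD/setup/schema.sql:/docker-entrypoint-initdb.d/schema.sql:ro" \
  -v "$PWD/config:/etc/mysql/conf.d:ro" \
  mysql:5
The stock entrypoint then runs schema.sql on the first start, when the data directory is empty.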
I'm currently working on moving our application to Docker. It's a typical app with a backend and a frontend. I don't have any trouble with the front end, but I still can't launch the back end.
I have Docker file for backend:
FROM williamyeh/java8
RUN apt-get -y update && apt-get install -y maven
WORKDIR /explorerbackend
ADD settings.xml /root/.m2/settings.xml
ADD pom.xml /explorerbackend
ADD src /explorerbackend/src
RUN ["mvn", "clean", "install"]
ADD target/explorer-backend-1.0.jar /explorerbackend/app.jar
RUN sh -c 'touch /explorerbackend/app.jar'
ENV JAVA_OPTS=""
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /explorerbackend/app.jar" ]
and Docker file for mysql:
FROM mysql
ADD createDB.sql /docker-entrypoint-initdb.d
The reason I'm using a separate Dockerfile for mysql instead of just using the image in docker-compose is the need to create two databases at startup (otherwise the backend will not launch).
createDB.sql file looks as:
CREATE DATABASE IE;
CREATE DATABASE IE_test;
Now i have docker-compose.yml file which is supposed to start 2 containers and make backend connect to database:
version: "3.0"
services:
database:
environment:
MYSQL_ROOT_PASSWORD: root
build:
context: *PATH_TO_DIR_WITH_DOCKERFILE*
dockerfile: Dockerfile
ports:
- 3306:3306
volumes:
- db_data:/var/lib/mysql
backend:
build:
context: *PATH_TO_DIR_WITH_DOCKERFILE*
dockerfile: Dockerfile
ports:
- 3000:3000
depends_on:
- database
volumes:
db_data:
When I run the command docker-compose up, the database container is up and running while the backend fails:
backend_1 | java.sql.SQLNonTransientConnectionException: Could not create connection to database server. Attempted reconnect 3 times. Giving up.
However, I'm able to log in to the database container, and I do see the databases created:
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| IE |
| IE_test |
| mysql |
| performance_schema |
| sys |
+--------------------+
6 rows in set (0.00 sec)
The only cause I can think of is the backend's yml property file:
app:
  data-base:
    name: IE
    link: database
    port: 3306
  .................
From the frontend container I'm able to ping database (but am I allowed to put just link: database into the property file?):
root#897b187f9042:/frontend# ping database
PING database (172.19.0.2): 56 data bytes
64 bytes from 172.19.0.2: icmp_seq=0 ttl=64 time=0.086 ms
64 bytes from 172.19.0.2: icmp_seq=1 ttl=64 time=0.088 ms
So, I assume it's pingable from the backend container as well; why is it not able to connect to the db server?
MySQL takes a few seconds to start up. In order to confirm this is a race condition, try the following:
$ docker-compose up -d database && sleep 5 && docker-compose up
When/if this confirms the race condition, you can alleviate it with a HEALTHCHECK on your database image.
See: https://github.com/docker-library/healthcheck/tree/master/mysql
Script from above link:
#!/bin/bash
set -eo pipefail

if [ "$MYSQL_RANDOM_ROOT_PASSWORD" ] && [ -z "$MYSQL_USER" ] && [ -z "$MYSQL_PASSWORD" ]; then
    # there's no way we can guess what the random MySQL password was
    echo >&2 'healthcheck error: cannot determine random root password (and MYSQL_USER and MYSQL_PASSWORD were not set)'
    exit 0
fi

host="$(hostname --ip-address || echo '127.0.0.1')"
user="${MYSQL_USER:-root}"
export MYSQL_PWD="${MYSQL_PASSWORD:-$MYSQL_ROOT_PASSWORD}"

args=(
    # force mysql to not use the local "mysqld.sock" (test "external" connectibility)
    -h"$host"
    -u"$user"
    --silent
)

if select="$(echo 'SELECT 1' | mysql "${args[@]}")" && [ "$select" = '1' ]; then
    exit 0
fi

exit 1
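To wire a check like this in without modifying the image, docker run's health flags are the CLI equivalent of a Dockerfile HEALTHCHECK, and the launcher can block on the reported status (a sketch; intervals and names are arbitrary):
docker run -d --name database \
  -e MYSQL_ROOT_PASSWORD=root \
  --health-cmd='mysqladmin ping -h127.0.0.1 -proot --silent' \
  --health-interval=5s --health-retries=12 \
  mysql:5.7
# block until Docker reports the container healthy
until [ "$(docker inspect --format '{{ .State.Health.Status }}' database)" = "healthy" ]; do sleep 2; done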
Eventually, we found the problem, which was a kind of oversight.
The root cause was the backend Dockerfile:
FROM williamyeh/java8
RUN apt-get -y update && apt-get install -y maven
WORKDIR /explorerbackend
ADD settings.xml /root/.m2/settings.xml
ADD pom.xml /explorerbackend
ADD src /explorerbackend/src
RUN ["mvn", "clean", "install"]
ADD target/explorer-backend-1.0.jar /explorerbackend/app.jar
RUN sh -c 'touch /explorerbackend/app.jar'
ENV JAVA_OPTS=""
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /explorerbackend/app.jar" ]
The idea is pretty simple:
1. Take a Java image
2. Install Maven
3. Copy the src folder of my project from the host
4. Build with Maven inside the container
5. Move the jar to the workdir inside the container
6. Launch it
However, step 5 wasn't correct: instead of using the jar that Maven had just built inside the container, I was copying one from my host.
The issue was resolved by simply replacing
ADD target/explorer-backend-1.0.jar /explorerbackend/app.jar
with
RUN cp /explorerbackend/target/explorer-backend-1.0.jar /explorerbackend/app.jar
Thanks Rawcode for looking into it!
I'm building a derivative of this Docker container for mysql (using it as a starting point): https://github.com/docker-library/mysql
I've amended the Dockerfile to add Flyway. Everything is set up to edit the config file to connect to the local DB instance, etc. The intent is to call this command from inside the https://github.com/docker-library/mysql/blob/master/5.7/docker-entrypoint.sh file (which runs as the ENTRYPOINT), around line 186:
flyway migrate
I get a connection refused when this is run from inside the shell script:
Flyway 4.1.2 by Boxfuse
ERROR:
Unable to obtain Jdbc connection from DataSource
(jdbc:mysql://localhost:3306/db-name) for user 'root': Could not connect to address=(host=localhost)(port=3306)(type=master) : Connection refused
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL State : 08
Error Code : -1
Message : Could not connect to address=(host=localhost)(port=3306)(type=master) : Connection refused
But, if I remove the command from the shell script, rebuild, log in to the container, and run the same command manually, it works with no problems.
I suspect that there may be some differences in how the entrypoint connects to the DB to do its thing (it has a built-in SQL "runner"), but I can't seem to hunt it down. The entrypoint restarts the server during initialization, and during that phase mysqld runs without networking, which may be why the TCP connection is refused.
Since this container is intended for development, one alternative (a workaround, really) is to use the container's built-in SQL "runner" with the filename format that Flyway expects, and then use Flyway to manage the production DB's versions.
Thanks in advance for any help.
It's a good approach to start from a ready-made image. You can base your image on the official mysql image:
FROM mysql
If you build on top of a finished image, Docker only has to rebuild the difference when you create a new version of your image.
Next, install Java and net-tools:
RUN apt-get -y install apt-utils openjdk-8-jdk net-tools
Configure mysql:
ENV MYSQL_DATABASE=mydb
ENV MYSQL_ROOT_PASSWORD=root
Add flyway:
ADD flyway /opt/flyway
Add the migrations:
ADD sql /opt/flyway/sql
Add the flyway config:
ADD config /opt/flyway/conf
Add the startup script:
ADD start /root/start.sh
Check that mysql started:
RUN netstat -ntlp
Check the java version:
RUN java -version
Example file: /opt/flyway/conf/flyway.conf
flyway.driver=com.mysql.jdbc.Driver
flyway.url=jdbc:mysql://localhost:3306/mydb
flyway.user=root
flyway.password=root
Example file: start.sh
#!/bin/bash
cd /opt/flyway
flyway migrate
# may change to start.sh to start product migration or development.
Flyway documentation
I mean that, as a next step, you may use flyway as a service.
For example:
docker run -it -p 3307:3306 my_docker_flyway /root/start < migration_prod.sh
docker run -it -p 3308:3306 my_docker_flyway /root/start < migration_dev.sh
etc ...
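For reference, the official Flyway image can also run migrations as a one-off container, without baking Flyway into the database image at all (a sketch; the image name, network mode, and paths are assumptions):
docker run --rm \
  -v "$PWD/sql:/flyway/sql" \
  -v "$PWD/conf:/flyway/conf" \
  --network host \
  flyway/flyway migrate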
services:
  # Standard MySQL box; we have to add a few tricky things, else logging in with Workbench is hard
  supermonk-mysql:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    environment:
      - MYSQL_ROOT_PASSWORD=P@ssw0rd
      - MYSQL_ROOT_HOST=%
      - MYSQL_DATABASE=test
    ports:
      - "3306:3306"
    healthcheck:
      test: ["CMD-SHELL", "nc -z 127.0.0.1 3306 || exit 1"]
      interval: 1m30s
      timeout: 60s
      retries: 6
  # Flyway is best for mysql schema migration history.
  supermonk-flyway:
    container_name: supermonk-flyway
    image: boxfuse/flyway
    command: -url=jdbc:mysql://supermonk-mysql:3306/test?verifyServerCertificate=false&useSSL=true -schemas=test -user=root -password=P@ssw0rd migrate
    volumes:
      - "./sql:/flyway/sql"
    depends_on:
      - supermonk-mysql
mkdir ./sql
vi ./sql/V1.1__Init.sql # and paste below
CREATE TABLE IF NOT EXISTS test.USER (
    id VARCHAR(64),
    fname VARCHAR(256),
    lname VARCHAR(256),
    CONSTRAINT pk PRIMARY KEY (id)
);
save and close
docker-compose up -d
wait for 2 minutes
docker-compose run supermonk-flyway
Ref :
https://github.com/supermonk/webapp/tree/branch-1/docker/docker-database
Thanks to docker community and mysql community
docker-compose logs -f
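Once the migration container has run, a quick way to confirm the result (a hedged check; the history table is flyway_schema_history in recent Flyway versions, schema_version in older ones):
docker-compose exec supermonk-mysql \
  mysql -uroot -p'P@ssw0rd' -e 'SELECT * FROM test.flyway_schema_history\G'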