I am trying to set up a MySQL Docker container and execute an init SQL script. Unfortunately, the SQL script is not executed. What am I doing wrong?
version: '3.3'
services:
  api:
    container_name: 'api'
    build: './api'
    ports:
      - target: 8080
        published: 8888
        protocol: tcp
        mode: host
    volumes:
      - './api:/go/src/app'
    depends_on:
      - 'mysql'
  mysql:
    image: 'mysql:latest'
    container_name: 'mysql'
    volumes:
      - ./db_data:/var/lib/mysql:rw
      - ./database/init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    restart: always
    environment:
      MYSQL_USER: test
      MYSQL_PASSWORD: test
      MYSQL_ROOT_PASSWORD: test
      MYSQL_DATABASE: test
    ports:
      - '3306:3306'
volumes:
  db_data:
I execute the file with docker-compose up -d --build.
The scripts in the docker-entrypoint-initdb.d folder are only run once, while the container is created (instantiated), so you actually have to do a docker-compose down -v to re-activate this for the next run.
If you want to be able to add SQL files at any moment, you can look at this specialized MySQL Docker image: http://ivo2u.nl/o4
Update for the M1 arch:
Here is an almost drop-in replacement in MariaDB: http://ivo2u.nl/V1
Many containerized applications, especially stateful ones, have a way of running init scripts (like the SQL scripts here), and they are supposed to run only once.
And since the applications are stateful, the volumes act as the source of truth for the containers on whether to run the init scripts or not on container restart.
As in your case, deleting the folder used for the bind mount, or using a fresh named volume, should re-run any init scripts present.
These scripts run when you create the container, not every time you start it.
You can run docker-compose up --force-recreate mysql to force those scripts to re-run.
Additionally, if you have a bind mount like ./db_data:/var/lib/mysql:rw, then you also need to remove ./db_data before recreating the container.
I'm not a docker expert, but this worked for me.
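Putting the answers above together, a minimal reset cycle for the compose file in the question could look like this (a sketch; it assumes the ./db_data bind mount from the question and permanently deletes all database data):

# stop and remove the containers (and any named volumes)
docker-compose down -v
# clear the bind-mounted data directory so mysql initializes from scratch
rm -rf ./db_data
# recreate; scripts in docker-entrypoint-initdb.d will run again
docker-compose up -d --build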
I have managed to create a MySQL and PHP container, my scripts execute, and all my tables are there.
However, I have a database called "myDb" and a user called "someuser", and when the database is created, for some reason its name is "somedatabase".
my docker-compose.yaml file:
services:
  mysql:
    image: mysql:latest
    ports:
      - 3307:3306
    environment:
      MYSQL_DATABASE: myDb
      MYSQL_ROOT_PASSWORD: SomeRootPassword1!
      MYSQL_USER: someuser
      MYSQL_PASSWORD: Password1!
    volumes:
      - ./dbScript/winit_Script2.sql:/docker-entrypoint-initdb.d/winit_Script2.sql
      - db_data:/var/lib/mysql
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: dev_pma
    links:
      - mysql
    environment:
      PMA_HOST: mysql
      PMA_PORT: 3307
      PMA_ARBITRARY: 1
    restart: always
    ports:
      - 8183:80
volumes:
  db_data:
  phpAdmin:
  Mysqlworkbench:
What have I done wrong here?
A little edit after the comments:
It would seem that having a volumes section creates named volumes in Docker, and once a volume has been created it gets reused on every subsequent docker-compose up. This was the case for me.
More details in the accepted answer.
The mysql image does not initialize the database if the volume is not clean.
When you stop and start the database from the same compose file, the volume is always the same, since you want the data to persist even after an app restart.
To force re-initialization of the data, you can delete that Docker volume (only if you no longer need that database! this cannot be undone):
First, stop and delete the containers.
Then list and delete the volume that persists the database:
docker volume ls

DRIVER              VOLUME NAME
local               <your-deployment-name>_db_data

docker volume rm <your-deployment-name>_db_data
Then run the docker-compose up command again and you'll be able to find myDb in phpMyAdmin instead of somedatabase.
Edit:
Unless you change the entrypoint yourself and rebuild the image to force it to initialize your DB according to the ENV you're passing even when the volume is not clean, the only option that comes to my mind is to create the new DB manually. Here is the conditional that skips the re-initialization of the DB, and here is the script that is invoked if the volume is clean.
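For the manual route, something like this should do it (a sketch; the service name, credentials, and database name are taken from the compose file above, so adjust as needed):

# open a mysql shell inside the running service and create the missing DB
docker-compose exec mysql mysql -uroot -pSomeRootPassword1! \
  -e "CREATE DATABASE IF NOT EXISTS myDb; GRANT ALL ON myDb.* TO 'someuser'@'%';"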
I am trying to run a one-time command on my application container using the command
docker-compose run --entrypoint="/usr/src/app/migrate.sh" app
app is the name of my service, and that entrypoint script contains the one-time command I'm trying to run.
Here's my docker-compose.yml file:
version: '3'
services:
  app:
    build: .
    # mount the current directory (on the host) to /usr/src/app on the container, any changes in either would be reflected in both the host and the container
    volumes:
      - .:/usr/src/app
    # expose application on localhost:36081
    ports:
      - "36081:36081"
    # application restarts if stops for any reason - required for the container to restart when the application fails to start due to the database containers not being ready
    restart: always
    depends_on:
      - db1
      - db2
    # the environment variables are used in docker/config/env_config.rb to connect to different database containers
    environment:
      MYSQL_DB1_HOST: db1
      MYSQL_DB1_PORT: 3306
      MYSQL_DB2_HOST: db2
      MYSQL_DB2_PORT: 3306
  db1:
    image: mysql/mysql-server:5.7
    environment:
      MYSQL_DATABASE: test1
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    # mount volume of the schema script to /docker-entrypoint-initdb.d to execute the script on startup
    volumes:
      - ./docker/seed/db1:/docker-entrypoint-initdb.d
      - db1-volume:/var/lib/mysql
    restart: always
    # to connect locally from SequelPro
    ports:
      - "1200:3306"
  db2:
    image: mysql/mysql-server:5.7
    environment:
      MYSQL_DATABASE: test2
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    # mount volume of the schema script to /docker-entrypoint-initdb.d to execute the script on startup
    volumes:
      - ./docker/seed/db2:/docker-entrypoint-initdb.d
      - db2-volume:/var/lib/mysql
    restart: always
    # to connect locally from SequelPro
    ports:
      - "1201:3306"
Everything works as expected when I start docker-compose up, but when I invoke docker-compose run, the dependent db1 and db2 containers are up, but they are not initialized with the entrypoint script (as a result, the MySQL database is not created). The volume is attached, though.
How can I ensure that the entrypoint script of the dependent containers is invoked as well?
I have a requirement where I need to wait for a few commands before I seed the data for the database:
I have some Migration scripts that create the schema in the database (this command runs from my app container). After this executes, I want to seed data into the database.
As I read, the docker-entrypoint-initdb.d scripts are executed when the container is initialized. If I mount my seed.sql script there, the data is seeded before the Migrate scripts run. (The Migrate scripts actually drop all tables and create them from scratch.) The seeded data is therefore lost.
How can I achieve this? (I cannot change the Migrate scripts)
Here's my docker-compose.yml file:
version: '3'
services:
  app:
    build: .
    # mount the current directory (on the host) to /usr/src/app on the container, any changes in either would be reflected in both the host and the container
    volumes:
      - .:/usr/src/app
    # expose application on localhost:36081
    ports:
      - "36081:36081"
    # application restarts if stops for any reason - required for the container to restart when the application fails to start due to the database containers not being ready
    restart: always
    environment:
      MIGRATE: Y
      <some env variables here>
  config-dev:
    image: mysql/mysql-server:5.7
    environment:
      MYSQL_DATABASE: config_dev
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    volumes:
      # to persist data
      - config-dev-volume:/var/lib/mysql
    restart: always
    # to connect locally from SequelPro
    ports:
      - "1200:3306"
  <other database containers>
My Dockerfile for the app container has the following ENTRYPOINT:
# start the application
ENTRYPOINT /usr/src/app/docker-entrypoint.sh
Here's the docker-entrypoint.sh file:
#!/bin/bash
if [ "$MIGRATE" = "Y" ]; then
    <command to start migration scripts>
    echo "------------starting application--------------"
    <command to start application>
else
    echo "------------starting application--------------"
    <command to start application>
fi
Edit: Is there a way I can run a script in the config-dev container from the docker-entrypoint.sh file in the app container?
This can be solved in two steps:
You need to wait until your db container is started and ready.
Waiting until it has started can be handled by adding depends_on in the docker-compose file:
version: '3'
services:
  app:
    build: .
    # mount the current directory (on the host) to /usr/src/app on the container, any changes in either would be reflected in both the host and the container
    depends_on:
      - config-dev
      - <other containers (if any)>
    volumes:
      - .:/usr/src/app
    # expose application on localhost:36081
    ports:
      - "36081:36081"
    # application restarts if stops for any reason - required for the container to restart when the application fails to start due to the database containers not being ready
    restart: always
    environment:
      MIGRATE: Y
      <some env variables here>
  config-dev:
    image: mysql/mysql-server:5.7
    environment:
      MYSQL_DATABASE: config_dev
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    volumes:
      # to persist data
      - config-dev-volume:/var/lib/mysql
    restart: always
    # to connect locally from SequelPro
    ports:
      - "1200:3306"
  <other database containers>
Waiting until the db is ready is another matter, because sometimes it takes time for the db process to start listening on the TCP port.
Unfortunately, Docker does not provide a way to hook into container state, but there are many tools and scripts to work around this.
You can go through this to implement the workaround:
https://docs.docker.com/compose/startup-order/
TL;DR
Download https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh into the container, delete the ENTRYPOINT field (not required for your use case), and use the CMD field instead:
CMD ["./wait-for-it.sh", "<db_service_name_as_per_compose_file>:<port>", "--", "/usr/src/app/docker-entrypoint.sh"]
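If the image does not already contain the script, one way to fetch it into the build context before building is the following (a sketch; it assumes curl is available on the host and that the Dockerfile copies the context into /usr/src/app, as the volume mount above suggests):

# download wait-for-it.sh next to the Dockerfile and make it executable
curl -fsSL -o wait-for-it.sh https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh
chmod +x wait-for-it.sh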
Now that this is complete, the next part is to execute your seed.sql script.
That is easy and can be done by adding a line like the following (the MySQL client; fill in your own connection details) to your /usr/src/app/docker-entrypoint.sh script:
mysql -h <host> -u <user> -p<password> <database> < seed.sql
Place the above command after the migrate script in /usr/src/app/docker-entrypoint.sh.
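Put together, the updated entrypoint might look like this (a sketch; the angle-bracket placeholders are from the question, and the mysql host, credentials, and seed path are assumptions based on the config-dev service above):

#!/bin/bash
if [ "$MIGRATE" = "Y" ]; then
    <command to start migration scripts>
    # seed only after the migrate scripts have recreated the schema
    mysql -h config-dev -u <user> -p<password> config_dev < /usr/src/app/seed.sql
    echo "------------starting application--------------"
    <command to start application>
else
    echo "------------starting application--------------"
    <command to start application>
fi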
I'm using the yobasystems/alpine-mariadb Docker image to run an instance for a development environment. I'm mounting the MySQL data directory to a Docker volume, and this has worked in the past. Every so often I lose the data, but not the table structure, and I cannot work out why.
db:
  image: yobasystems/alpine-mariadb
  restart: always
  environment:
    - MYSQL_ROOT_PASSWORD=password
    - MYSQL_DATABASE=database
    - MYSQL_USER=user
    - MYSQL_PASSWORD=password
  ports:
    - "33333:3306"
  volumes:
    - mariadb:/var/lib/mysql
I suspect that in your case the volume is getting removed (maybe via docker-compose down -v or docker-compose rm -v).
Try specifying that the volume is external:
volumes:
  mariadb:
    external: true
From the Docker docs: external: If set to true, specifies that this volume has been created outside of Compose. docker-compose up does not attempt to create it, and raises an error if it doesn't exist.
You may create the volume prior to docker-compose up with docker volume create mariadb
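So the full sequence becomes (a short sketch using the volume name mariadb from the compose file above):

# create the external volume once, up front
docker volume create mariadb
# every later run reuses it; docker-compose never tries to (re)create it
docker-compose up -d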
What I'm trying to do is connect from my Spring Boot app to a MySQL database in Docker, each in their own container.
But I must have something wrong, because I can't do it.
To keep it simple:
application.properties:
# URL for the mysql db
spring.datasource.url=jdbc:mysql://workaround-mysql:3308/workaround?serverTimezone=UTC&max_allowed_packet=15728640
# User name in mysql
spring.datasource.username=springuser
# Password for mysql
spring.datasource.password=admin
#Port at which application runs
server.port=8080
docker-compose for MySQL:
version: '3'
services:
  workaround-mysql:
    container_name: workaround-mysql
    image: mysql
    environment:
      MYSQL_DATABASE: workaround
      MYSQL_USER: springuser
      MYSQL_PASSWORD: admin
      MYSQL_ROOT_PASSWORD: admin
      MYSQL_ROOT_HOST: '%'
    ports:
      - "3308:3306"
    restart: always
So pretty simple, right? I start the database with docker-compose up:
All seems to be working fine so far.
Now that I have the db started, on to the application. This is its docker-compose.yml:
version: '3'
services:
  workaround:
    restart: always
    # will build ./docker/workaround/Dockerfile
    build: ./docker/workaround
    working_dir: /workaround
    volumes:
      - ./:/workaround
      - ~/.m2:/root/.m2
    expose:
      - "8080"
    command: "mvn clean spring-boot:run"
For its Dockerfile I use Linux Alpine and Java.
FROM alpine:3.9
....add java...
RUN apk update
RUN apk add dos2unix --update-cache --repository http://dl-3.alpinelinux.org/alpine/edge/community/ --allow-untrusted
RUN apk add bash
RUN apk add maven
Super simple. Now let's start the application:
Unknown host, so let's try the IP then:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' workaround-mysql
# URL for the mysql db
spring.datasource.url=jdbc:mysql://172.20.0.2:3308/workaround?serverTimezone=UTC&max_allowed_packet=15728640
Now I get a timeout:
As you can see, I get an error. What is wrong with my setup and how do I fix it? I get either an unknown host exception, connection refused, or a connection timeout.
I have tried:
- Using the IP of the container in my application.properties; didn't work
- Different ports for MySQL and the application
- Different images and versions of MySQL
- Having everything in one docker-compose with a wait timer for the database
- A minimal setup with https://github.com/hellokoding/hellokoding-courses/tree/master/docker-examples/dockercompose-springboot-mysql-nginx - it also resulted in a communication link failure; the site was accessible but I doubt the db was connected properly
Notes:
I run this all on one computer. I use port 3308 because I have a local MySQL db at 3306.
Here is docker ps -a:
Edit with @Vusal's answer applied:
The only thing different from the code in the answer is that I wait 30 seconds for the database to be ready:
command: /bin/bash -c "sleep 30;mvn clean spring-boot:run;"
Try this docker-compose.yml:
version: '3'
services:
  workaround-mysql:
    container_name: workaround-mysql
    image: mysql
    environment:
      MYSQL_DATABASE: workaround
      MYSQL_USER: springuser
      MYSQL_PASSWORD: admin
      MYSQL_ROOT_PASSWORD: admin
      MYSQL_ROOT_HOST: '%'
    ports:
      - "3308:3306"
    restart: always
  workaround:
    depends_on:
      - workaround-mysql
    restart: always
    # will build ./docker/workaround/Dockerfile
    build: ./docker/workaround
    working_dir: /workaround
    volumes:
      - ./:/workaround
      - ~/.m2:/root/.m2
    expose:
      - "8080"
    command: "mvn clean spring-boot:run"
And update your application.properties to use the following JDBC connection URL:
spring.datasource.url=jdbc:mysql://workaround-mysql:3306/workaround?serverTimezone=UTC&max_allowed_packet=15728640
It should work when both containers are in the same docker-compose file, because docker-compose creates a default network for the containers, so they can resolve each other by name.
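As a quick sanity check, you could, for instance, ping the database service from the app container (assuming the service names above and that ping is available in the image):

docker-compose exec workaround ping -c 1 workaround-mysql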
What you haven't tried so far is running both containers on the same Docker network.
First, forget about IP addressing; using it should be avoided by all means.
Second, launch both compose instances on the same Docker network.
Third, do not expose ports: inside a bridge network, all ports are accessible to the running containers.
Create a global network:
docker network create foo
Modify both compose files so that they use this network instead of each creating its own:
version: '3.5'
services:
  ....
networks:
  default:
    external: true
    name: foo
Remove the expose directives from the compose files; inside one network, all ports are exposed by default.
Modify the connection strings to use the default port 3306 instead of 3308.
Enjoy.
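In practice the sequence might look like this (a sketch; the two directory names are hypothetical stand-ins for wherever each compose file lives):

# one-time setup of the shared network
docker network create foo
# start the database stack, then the application stack
(cd mysql-stack && docker-compose up -d)
(cd app-stack && docker-compose up -d)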
In order for the service to connect to MySQL through Docker, it has to be on the same network; look into Docker networks.
But as a better solution I would suggest you write a single docker-compose file for MySQL and Spring Boot. The reason is that they will be linked easily when you do that; no other configuration is needed.
version: "3"
services:
  mysql-service:
    image: mysql
    ports:
      - "3306:3306"
    environment:
      - MYSQL_DATABASE=db
      - MYSQL_USER=root
      - MYSQL_PASSWORD=pass
      - MYSQL_ROOT_PASSWORD=pass
  spring-service:
    image: springservce:latest
    ports:
      - "8080:8080"
    depends_on:
      - mysql-service
Before you try to connect to the Docker container, you should stop MySQL on your computer, then go to application.properties and type:
spring.datasource.url=jdbc:mysql://localhost:3306/NAME_OF_YOUR_DB_HERE?useSSL=false&allowPublicKeyRetrieval=true
Regarding localhost: you can also inspect the mysql container, pick its IP address, and use that instead; it is most likely 172.17.0.2. If that does not work, use localhost.