Seeding a database using Docker Mysql COPY fails - mysql

Using docker-compose and a build file, I am unable to COPY files to a mysql container. Here's the compose.yml:
version: '3'
services:
  db:
    #image: mysql:5.7
    build:
      context: .
      dockerfile: Dockerfile-mysql
    environment:
      MYSQL_ROOT_PASSWORD: drupal
      MYSQL_DATABASE: drupal
      MYSQL_USER: drupal
      MYSQL_PASSWORD: drupal
    #volumes:
    #  - /var/lib/mysql
    ports:
      - "3300:3306"
Then the Dockerfile-mysql:
FROM mysql:5.7
COPY ./drupal.sql /docker-entrypoint-initdb.d/
I see no errors, but the file isn't there. The container starts up, MySQL works, all that, but NO FILE! Can someone point out what I've missed? I'm assuming that after the .sql file is copied into that directory, it will be executed as well?
Thanks!

Figured it out. You have to run docker-compose build first to get Docker to execute any COPY instructions. If I understand it correctly, docker-compose build builds the images (running the Dockerfile, including COPY), and docker-compose up creates and launches containers from those images.
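For reference, the resulting workflow is just these commands (assuming the compose file and Dockerfile-mysql above):

```
# Build the image so the Dockerfile's COPY runs and bakes drupal.sql into it
docker-compose build

# Create and start the containers from the freshly built image
docker-compose up -d

# Or do both in one step
docker-compose up -d --build
```

Note that a plain docker-compose up reuses a previously built image, which is why the COPY appeared to do nothing.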

Related

How to get docker compose to read changes to docker-compose.yml?

I have a docker-compose.yml file, and part of it sets up a MySQL docker container.
Below is the part of the file:
version: "2.2"
services:
  mysql:
    image: mysql:5.7
    hostname: mysql
    container_name: mysql
    environment:
      MYSQL_ROOT_PASSWORD: rootpassword
      MYSQL_DATABASE: slurm_acct_db
      MYSQL_USER: slurm
      MYSQL_PASSWORD: password
      MYSQL_ROOT_HOST: "%"
      MYSQL_HOST: "%"
    ports:
      - '3306:3306'
    volumes:
      - var_lib_mysql:/var/lib/mysql
I can log into the MySQL server from the container with:
mysql -u slurm -ppassword
But if I remove the volumes and make changes to docker-compose.yml, e.g. change MYSQL_PASSWORD, it doesn't seem to have any effect and the old password is still used.
It is probably very obvious, but I can't seem to find a way for the changes to take effect. Can someone point me in the right direction?
To have changes to your compose file take effect immediately, you have to run the command:
docker-compose up -d
This recreates any containers whose configuration has changed, applying the update for you.
Alternatively, if you define environment variables outside the compose file, you still need to run docker-compose up again after changing them: Compose reads the variables when it creates the containers, so any change only takes effect on the next run.
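Worth noting for the MYSQL_PASSWORD case specifically: the official mysql image reads the MYSQL_* variables only when it initializes an empty data directory, so changing them has no effect on an existing volume. A sketch of a full reset (destructive: it deletes the database data):

```
docker-compose down -v   # stop containers and remove the named volume
docker-compose up -d     # re-create; MYSQL_* variables are applied afresh
```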
So I found the issue.
I had started this app using docker-compose in one directory, whilst playing around. I then started the project in a different directory, and docker-compose continued to read the .yml file from the first directory, not the current directory I was working in.
Deleted the old directory, and started with docker-compose up -d from the second directory I was currently working in, and now the changes have been read.

How can I import a dump sql file -DOCKER

I'm having trouble importing an .sql dump file with docker-compose.
With docker-entrypoint-initdb.d I should be able to load the .sql file. However, when I run docker-compose up, the sql file is not copied over to the container.
What am I doing wrong in my .yml script?
I have init.sql in the root directory, where my compose file is.
Furthermore, the database shows up in Adminer, but the data (tables, inserts, and more) does not :(
version: '3'
services:
  mysql-dev:
    image: mysql:8.0.2
    #command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: sdaapp
    ports:
      - "3308:3306"
    volumes:
      - "./data:/var/lib/mysql:rw"
      - "./init:/docker-enttrypoint-initdb.d"
  pgdb-dev:
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: root
      POSTGRES_PASSWORD: password
      POSTGRES_DB: sdaapp
  admin:
    build:
      context: .
      dockerfile: Dockerfile
    image: adminer
    restart: always
    ports:
      - 8080:8080
THANKS for your help :)
Since your volume is pointed to ./init folder, you have to put your .sql script inside of it (or change the path of your volume). Also note that there is a typo in your docker-compose.yml file: docker-enttrypoint-initdb.d should be docker-entrypoint-initdb.d
And as pointed out in the MySQL image's documentation, the script is executed only the first time you run the container. So you have to delete the database (remove the data volume) before running the container again in order for the script to execute.
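For intuition, the image's entrypoint walks that directory roughly like the simplified sketch below. This is not the real script (the real one pipes *.sql into the server and runs only against an empty data directory); process_init_files and INITDB_DIR are illustrative stand-ins:

```shell
#!/bin/sh
# Simplified sketch of how the mysql image's entrypoint treats files in
# /docker-entrypoint-initdb.d -- illustrative only, not the real script.
INITDB_DIR="${INITDB_DIR:-/docker-entrypoint-initdb.d}"

process_init_files() {
  for f in "$INITDB_DIR"/*; do
    [ -e "$f" ] || continue            # directory missing or empty: nothing to do
    case "$f" in
      *.sh)     echo "sourcing $f" ;;  # shell scripts are sourced
      *.sql)    echo "running $f" ;;   # plain SQL is piped into mysql
      *.sql.gz) echo "running (gunzipped) $f" ;;
      *)        echo "ignoring $f" ;;  # anything else is skipped
    esac
  done
}
```

This also shows why the misspelled mount target (docker-enttrypoint-initdb.d) fails silently: the files land in a directory the entrypoint never scans, so nothing runs and no error is reported.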

docker-compose mysql init sql is not executed

I am trying to set up a mysql docker container and execute init sql script. Unfortunately the sql script is not executed. What am I doing wrong?
version: '3.3'
services:
  api:
    container_name: 'api'
    build: './api'
    ports:
      - target: 8080
        published: 8888
        protocol: tcp
        mode: host
    volumes:
      - './api:/go/src/app'
    depends_on:
      - 'mysql'
  mysql:
    image: 'mysql:latest'
    container_name: 'mysql'
    volumes:
      - ./db_data:/var/lib/mysql:rw
      - ./database/init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    restart: always
    environment:
      MYSQL_USER: test
      MYSQL_PASSWORD: test
      MYSQL_ROOT_PASSWORD: test
      MYSQL_DATABASE: test
    ports:
      - '3306:3306'
volumes:
  db_data:
I execute the file with docker-compose up -d --build
The docker-entrypoint-initdb.d folder is only processed once, when the container is first created (i.e., when the data directory is initialized), so you actually have to do a docker-compose down -v to re-activate this for the next run.
If you want to be able to add sql files at any moment, have a look at this specialized MySQL docker image: http://ivo2u.nl/o4
Update for M1 arch:
Here is an almost drop-in replacement based on MariaDB: http://ivo2u.nl/V1
Many containerized applications, especially stateful ones, have a way of running init scripts (like the sql scripts here) and they are supposed to run only once.
And since they are stateful, the volumes are a source of truth for the containers on whether to run the init scripts or not on container restart.
Like in your case, deleting the folder used for bind mount or using a new named volume should re-run any init scripts present.
These scripts run when you create the container, not every time you start it.
You can run docker-compose up --force-recreate mysql to force those scripts to re-run.
Additionally, if you have a volume like this ./db_data:/var/lib/mysql:rw, then you also need to remove ./db_data before recreating the container.
I'm not a docker expert, but this worked for me.
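Putting the answers above together, a reset that reliably re-runs init.sql for this particular compose file would look something like this (destructive: it wipes the database data):

```
docker-compose down -v          # remove containers and the named volume
rm -rf ./db_data                # the bind-mounted data dir must be removed by hand
docker-compose up -d --build    # fresh data dir => init.sql runs again
```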

Docker-compose mysql mount volume turns sql into folder

I am trying to use Docker to create a set of containers (wordpress and MySQL) that will help my local development with Wordpress. As we are running a live database, I want to mount a dump.sql file into the Docker mysql container. Below is my .yml file.
version: '2'
services:
  db:
    image: mysql:latest
    volumes:
      - ./data:/docker-entrypoint-initdb.d # ./data holds my dump.sql file
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_PASSWORD: wordpress
    volumes:
      - ./wp-content/themes/portalV3:/var/www/html/wp-content/themes/portalV3
      - ./wp-content/plugins:/var/www/html/wp-content/plugins
      - ./wp-content/uploads:/var/www/html/wp-content/uploads
Everything works, but after ~10 seconds the docker container for mysql crashes. Going through the logs, I get the following error:
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/dump.sql
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR: Can't initialize batch_readline - may be the input source is a directory or a block device.
On closer inspection (attaching to the rebooted mysql container) I see that indeed my dump.sql file wasn't transferred to the container, but a folder with the same name was created in /docker-entrypoint-initdb.d.
Can anyone help me understand how I get docker-compose to copy my dump.sql file and import into the database?
Cheers,
Pieter
The problem with your docker-entrypoint-initdb.d mount is that the source and destination must be of the same type: because your source ./data is a directory and not a file, the destination (/docker-entrypoint-initdb.d) is mounted as a directory too, and vice versa, a file source must map to a file destination.
So either do
volumes:
  - ./data:/docker-entrypoint-initdb.d/
or
volumes:
  - ./data/mydump.sql:/docker-entrypoint-initdb.d/mydump.sql
Yes, that is how you should mount .sql or .sh files, i.e. by adding a volume that maps them into the container's docker-entrypoint-initdb.d folder. But it's raising an error for some strange reason, maybe because the MySQL Docker image version is old.
You could solve this by creating a custom image, i.e.:
Dockerfile
FROM mysql:5.7
COPY init.sql /docker-entrypoint-initdb.d/
It creates an image that also runs the init script when the container first starts.
To use this in a compose file, put your SQL files and Dockerfile in a folder.
database
|---init.sql
|---Dockerfile
docker-compose.yml
version: '3'
services:
  mysqldb:
    image: mysqldb
    build: ./database
    container_name: mysql
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_USER=test
      - MYSQL_PASSWORD=root
      - MYSQL_DATABASE=test
This way, you can configure the environment variables easily.

Docker-compose: Copying files from local env to EC2 instance

Hello, I have a configuration that builds Docker containers for a Flask app and a MySQL instance.
I create a new VM with
docker-machine create -d amazonec2 --....... production
and then (after setting the correct environment)
docker-compose build -> docker-compose up -d
The problem is that all of this happens while my CWD is a local repo with the files I need. It turns out these files are not copied over.
I have looked at docker cp and docker-machine scp, but it seems they do not solve the problem. E.g. with scp I cannot reference the specific container (xow_web_1) I need to copy the repo over to.
Here is the .yml
web:
  restart: always
  volumes:
    - .:/xow
  build: .
  ports:
    - "80:80"
  links:
    - db
  hostname: xowflask
  command: python xow.py
db:
  restart: always
  hostname: xowmysql
  image: mysql:latest
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: somepasswordhere
    MYSQL_DATABASE: somedatabase
data:
  restart: always
  image: mysql:latest
  volumes:
    - /var/lib/mysql
  command: "true"
What would be the most appropriate way to solve this? Is docker-compose the right approach? It looks awesome, but it doesn't seem to solve an issue like this.
The way we solved it in our organization is by using the COPY command to copy all of the data in the folder to the container.
For example, copying all of the files from the current dir to the container /src folder will look like this -
### Copy Code
COPY . /src
It looks like you should add this line to the Dockerfile that the web service builds from (build: .) in your docker-compose configuration.
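For instance, the web service's Dockerfile (not shown in the question) might bake the repo in like this; the python:2.7 base image and paths here are assumptions for illustration:

```
# Hypothetical Dockerfile for the web service (base image assumed)
FROM python:2.7
COPY . /xow
WORKDIR /xow
CMD ["python", "xow.py"]
```

One caveat: the compose file also bind-mounts .:/xow, and since docker-machine runs the daemon on the remote EC2 host, that host path won't exist there. For the deployed environment you would typically drop that volume line and rely on the files baked in by COPY.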