I have one Windows and one Ubuntu 22.04 installation on my machine, plus a shared partition on my hard disk that I can access from both OSes.
On the shared partition I have a Docker setup, and I would like it to store the MySQL data (volume) in a folder inside the host folder (./mysql_data).
The partition is mounted as NTFS, so every file and folder ends up owned by 1000:1000. That does not match the uid/gid the MySQL container uses (999), so I have a hard time getting all the permissions correct. Even when I set user: 1000:1000 on the services, issues keep piling up all the time:
corrupt files
not being able to log in to phpMyAdmin
permission issues
What would be a correct configuration for both the partition mount and the Docker files?
partition mount (/etc/fstab):
UUID=xxx /mnt/share ntfs defaults,uid=1000,gid=999,fmask=0022,dmask=0000 0 0
docker-compose.yml:
version: "3.1"
services:
www:
build:
context: .
dockerfile: Dockerfile.lamp
user: 1000:1000
ports:
- "${WEBSERVER_PORT}:80"
volumes:
- ./www:/var/www/html/
links:
- db
networks:
- default
db:
image: mysql:8.0
user: 1000:1000
ports:
- "3306:3306"
command: --default-authentication-plugin=mysql_native_password
environment:
MYSQL_DATABASE: ${MYSQL_DATABASE}
MYSQL_USER: ${MYSQL_USER}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
MYSQL_ROOT_PASSWORD: ${MYSQL_PASSWORD}
volumes:
- ./sql:/docker-entrypoint-initdb.d
- ./conf:/etc/mysql/conf.d
- mysql_data:/var/lib/mysql
networks:
- default
phpmyadmin:
image: phpmyadmin/phpmyadmin
links:
- db:db
ports:
- ${PHPMYADMIN_PORT}:80
networks:
- default
environment:
MYSQL_USER: ${MYSQL_USER}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
MYSQL_ROOT_PASSWORD: ${MYSQL_PASSWORD}
UPLOAD_LIMIT: 64M
volumes:
mysql_data:
driver: local
driver_opts:
type: 'none'
o: 'bind'
device: './mysql_data'
networks:
default:
Any suggestions on how I could successfully achieve this, given that an NTFS mount fixes the ownership of all files to whatever /etc/fstab specifies (or the default 1000:1000)?
Why is the default 1000:1000?
The reason 1000:1000 is used as the default is to prevent Docker from creating files or folders owned by root. Once something is owned by root, many situations will produce a permission error whenever an unprivileged process tries to create or write a file or folder; when it is owned by 1000:1000, you can easily change the owner to whatever you need.
Also, setting 1000:1000 does not mean that you (or any user) automatically have access to it; normal Linux permission rules still apply. 1000:1000 is simply where the uid:gid range for regular Linux users starts.
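For illustration (not part of the original answer), you can check this on your own machine; on most desktop Linux installs the first regular account created gets exactly these ids:
id
# typical output: uid=1000(youruser) gid=1000(youruser) groups=1000(youruser),...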
How does the mysql image work around this?
For example, here is the mysql:8.0 image (https://github.com/docker-library/mysql/blob/master/8.0/Dockerfile.oracle).
You can see that it creates a mysql:mysql user right at the beginning:
RUN set -eux; \
groupadd --system --gid 999 mysql; \
useradd --system --uid 999 --gid 999 --home-dir /var/lib/mysql --no-create-home mysql
It then changes the owner of these folders to the mysql user (on line 85 of that Dockerfile); this is where the MySQL data is stored:
# ensure these directories exist and have useful permissions
# the rpm package has different opinions on the mode of `/var/run/mysqld`, so this needs to be after install
mkdir -p /var/lib/mysql /var/run/mysqld; \
chown mysql:mysql /var/lib/mysql /var/run/mysqld; \
# ensure that /var/run/mysqld (used for socket and lock files) is writable regardless of the UID our mysqld instance ends up having at runtime
chmod 1777 /var/lib/mysql /var/run/mysqld; \
What should you do?
Change the owner of your volume folder to the user the container runs as; in the mysql container you should change the owner to mysql:mysql (999:999), otherwise mysql won't be able to write.
Do the same for the other containers: change the owner to whichever user they read and write as. Docker Compose cannot mount a volume as a specific user, so you might need to try this: https://stackoverflow.com/a/56990338.
Note that in your case the user: flag is not needed.
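A minimal sketch of that fix, assuming ./mysql_data sits on a Linux-native filesystem where chown actually takes effect (on the NTFS share the owner is dictated by the uid=/gid= mount options instead, so there you would set those to 999 or keep the MySQL data off the NTFS partition):
mkdir -p ./mysql_data
sudo chown -R 999:999 ./mysql_data   # 999 = the mysql user/group inside the mysql:8.0 image
docker-compose up -d db              # no user: override needed on the db service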
Related
I have a mysql docker container that has its data and logs dirs separately mapped to host folders for performance reasons.
I'm using docker-compose to start the container with a group of other related services.
--datadir=/var/lib/mysql/innodb-data
--innodb_log_group_home_dir=/var/lib/mysql/innodb-logs
The container dirs are mapped to the host file system via:
volumes:
- /db/mysql-innodb-data:/var/lib/mysql/innodb-data
- /db/mysql-innodb-logs:/var/lib/mysql/innodb-logs
My problem is that the MySQL container is setting the owner uid to 999.
On the host system this maps to the user 'systemd-coredump'.
Instead I want the container to apply the uid of the host's 'mysql' user.
I've looked at the MySQL docker container and it has the following logic:
docker_create_db_directories() {
local user; user="$(id -u)"
# TODO other directories that are used by default? like /var/lib/mysql-files
# see https://github.com/docker-library/mysql/issues/562
mkdir -p "$DATADIR"
if [ "$user" = "0" ]; then
# this will cause less disk access than `chown -R`
find "$DATADIR" \! -user mysql -exec chown mysql '{}' +
fi
}
We can see that the above script applies the uid the container runs under to the data directory. By default the container runs as root.
Given that root is uid 0, I don't actually see how this code changes the data directory's owner to 999, and as such I suspect this code isn't actually the problem.
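For illustration (not from the original post), the 999 can be confirmed directly; the stock image's mysql user has that uid, and the numeric owner you see on the host simply resolves to whichever local account happens to own uid 999 (here systemd-coredump):
docker run --rm mysql:8.0 id mysql
# uid=999(mysql) gid=999(mysql) groups=999(mysql)
getent passwd 999                    # shows which host account maps to uid 999
ls -ln /db/mysql-innodb-data         # -n prints the raw numeric owner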
So I tried changing the user the container runs as to 'mysql'
mysql:
container_name: mysql
image: mysql:8.0
user: mysql
This changes the container user as expected, but then MySQL couldn't start up, as there are a number of config files it can no longer read now that it isn't running as root.
Here is the full service section from my docker-compose:
mysql:
container_name: mysql
image: mysql:8.0
restart: on-failure
environment:
MYSQL_ROOT_PASSWORD: ${MYSQL_ADMIN_PASSWORD}
MYSQL_DATABASE: ${MYSQL_SCHEMA}
command: >
--user=mysql
--lower-case-table-names=1
--datadir=/var/lib/mysql/innodb-data
--innodb_log_group_home_dir=/var/lib/mysql/innodb-logs
--default-authentication-plugin=mysql_native_password
--max-allowed-packet=512M
--innodb_buffer_pool_instances=${MYSQL_INNODB_BUFFER_POOL_INSTANCES-32}
--innodb_buffer_pool_chunk_size=${MYSQL_INNODB_BUFFER_POOL_CHUNK_SIZE-8M}
--innodb_buffer_pool_size=${MYSQL_INNODB_BUFFER_POOL_SIZE-512M}
--table_open_cache=${MYSQL_TABLE_OPEN_CACHE-512}
--max_connections=${MYSQL_MAX_CONNECTIONS-98}
--innodb_flush_neighbors=0
--innodb_fast_shutdown=2
--innodb_flush_log_at_trx_commit=1
--innodb_flush_method=fsync
--innodb_doublewrite=0
--innodb_use_native_aio=0
--innodb_read_io_threads=10
--innodb_write_io_threads=10
--slow_query_log_file=/tmp/mysql-slow.log --long-query-time=1
--slow_query_log
# mem_limit: ${MYSQL_MEMORY}
volumes:
- /db/mysql-innodb-data:/var/lib/mysql/innodb-data
- /db/mysql-innodb-logs:/var/lib/mysql/innodb-logs
network_mode: "host"
logging:
driver: "journald"
What I want to do is:
Create a MySQL8 docker container
The MySQL container should run a dump file
I was successful in creating the basic container; however, there are several issues:
The password that I added in docker-compose.yml is ignored: when I run
"docker exec -it mysqlDB bash" followed by "mysql -u admin -p" I get Access denied, and the same happens with root
I don't know whether the dump is being used, because I can't access the DB to check
I'm also getting this error:
[ERROR] [MY-000061] [Server] 1105 Input Output error while reading file /docker-entrypoint-initdb.d/, line 0, I/O error code 1
I tried many things for hours and it only got worse, like not running at all.
I always run with: "docker-compose --verbose --log-level DEBUG up"
I always retry with the sequence:
ctrl+c
docker-compose down
docker system prune -a
docker volume prune
After running these prunes I need to run docker-compose up twice, or else I get the error:
"The designated data directory /var/lib/mysql/ is unusable. You can remove all files that the server added to it."
Dockerfile (at /MySQL); there's also a LastDump.sql in this directory
EDIT: Later I deleted this file, and got the same result
FROM mysql:8.0.21
RUN chown -R mysql:root /var/lib/mysql/
ENV MYSQL_DATABASE=Olimpo
ENV MYSQL_USER=admin
ENV MYSQL_PASSWORD=senha
ENV MYSQL_ROOT_PASSWORD=senha
ADD LastDump.sql /etc/mysql/LastDump.sql
RUN sed -i 's/MYSQL_DATABASE/'$MYSQL_DATABASE'/g' /etc/mysql/LastDump.sql
RUN cp /etc/mysql/LastDump.sql /docker-entrypoint-initdb.d
EXPOSE 3306
docker-compose.yml (at /, the main folder)
version: "3.8"
# Define services
services:
# Database Service (Mysql)
mysqldb:
image: mysql:8.0.21
container_name: mysqlDB
command: --default-authentication-plugin=mysql_native_password --init-file /docker-entrypoint-initdb.d/
ports:
- "3307:3306"
restart: always
environment:
MYSQL_DATABASE: Olimpo
MYSQL_USER: admin
MYSQL_PASSWORD: senha
MYSQL_ROOT_PASSWORD: senha
volumes:
- mysql_data:/var/lib/mysql
# next line is commented doesn't run with it
#- ./MySQL/LastDump.sql:/docker-entrypoint-initdb.d
networks:
- backend
# Volumes
volumes:
mysql_data:
driver: local
# Networks to be created to facilitate communication between containers
networks:
backend:
Your problem seems to be the --init-file parameter in the command of your yml file. It worked on my machine when I took it out.
Change from command: --default-authentication-plugin=mysql_native_password --init-file /docker-entrypoint-initdb.d/ to command: --default-authentication-plugin=mysql_native_password
Fixed file is below:
# Define services
services:
# Database Service (Mysql)
mysqldb:
image: mysql:8.0.21
container_name: mysqlDB
command: --default-authentication-plugin=mysql_native_password
ports:
- "3307:3306"
restart: always
environment:
MYSQL_DATABASE: Olimpo
MYSQL_USER: admin
MYSQL_PASSWORD: senha
MYSQL_ROOT_PASSWORD: senha
volumes:
- mysql_data:/var/lib/mysql
# next line is commented doesn't run with it
#- ./MySQL/LastDump.sql:/docker-entrypoint-initdb.d
networks:
- backend
# Volumes
volumes:
mysql_data:
driver: local
# Networks to be created to facilitate communication between containers
networks:
backend:
The MySQL Dockerfile is unnecessary.
Create a folder called mysql-dump with the dump.sql inside.
Add the line "USE db_name;" at the top of dump.sql.
In docker-compose.yml:
remove "--init-file /docker-entrypoint-initdb.d/"
add the line "- ./mysql-dump:/docker-entrypoint-initdb.d" under volumes (see the sketch after this list)
I'm using docker-compose v 1.27 and Docker v 19.03. I have this in my docker-compose.yml file ...
version: '3'
services:
mysql:
restart: always
image: mysql:8.0
cap_add:
- SYS_NICE # CAP_SYS_NICE
environment:
MYSQL_DATABASE: 'directory_data'
# So you don't have to use root, but you can if you like
MYSQL_USER: 'root'
# You can use whatever password you like
MYSQL_PASSWORD: 'password'
# Password for root access
MYSQL_ROOT_PASSWORD: 'password'
MYSQL_ROOT_HOST: '%'
ports:
- "3406:3306"
volumes:
- my-db:/var/lib/mysql
- ./mysql/mysqlconf:/etc/mysql/conf.d
command: ['mysqld', '--character-set-server=utf8mb4', '--collation-server=utf8mb4_unicode_ci']
Note that I have no Dockerfile (I didn't think I needed one). My my.cnf file is below:
davea$ cat mysql/mysqlconf/my.cnf
bind-address = 0.0.0.0
From Docker, how do I set the permissions of the my.cnf file to be read-only? This comes into play on Windows 10, where running "docker-compose up" results in this warning:
mysqld: [Warning] World-writable config file '/etc/mysql/conf.d/my.cnf' is ignored.
Note, this answer -- https://stackoverflow.com/questions/64327260/in-docker-compose-how-do-i-set-perms-on-a-my-cnf-file-if-i-dont-have-a-dockerf, doesn't cut it, because it relies on setting th
I think the underlying problem here is that you are mounting an NTFS directory volume inside an ext filesystem. Below are some possible solutions that may be helpful.
Docker-level: Use Read-only Volume Mounts
You can use a read-only volume mount instead of the default read-write setting.
For example, add :ro (read-only) to the end of the volume specification:
volumes:
- ...
- ./mysql/mysqlconf:/etc/mysql/conf.d:ro
Container-level: chmod the configuration file
If you want to suppress the warning, you can try setting the permissions of the files at run-time to read-only by expanding the command configuration to several commands. I think this is what you are referring to as the client's OS level, though. The mount would not be read-only.
For example:
command: bash -c "
chmod -R 0444 /etc/mysql/conf.d/ &&
mysqld --user=root --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
"
Note that this is incompatible with read-only mounts, as you cannot adjust the permissions since it is a read-only filesystem.
In your docker-compose YAML file, you can make the mounted volume read-only by adding :ro at the end of the volume definition.
version: '3'
services:
mysql:
restart: always
image: mysql:8.0
cap_add:
- SYS_NICE # CAP_SYS_NICE
environment:
MYSQL_DATABASE: 'directory_data'
# So you don't have to use root, but you can if you like
MYSQL_USER: 'root'
# You can use whatever password you like
MYSQL_PASSWORD: 'password'
# Password for root access
MYSQL_ROOT_PASSWORD: 'password'
MYSQL_ROOT_HOST: '%'
ports:
- "3406:3306"
volumes:
- my-db:/var/lib/mysql
- ./mysql/mysqlconf:/etc/mysql/conf.d:ro
command: ['mysqld', '--character-set-server=utf8mb4', '--collation-server=utf8mb4_unicode_ci']
I suggest you set the permissions in a custom entrypoint script. This ensures they are adjusted on every container start, plays nicely with the "official" mysql image (it has custom entry point script support baked in) and does not collide with docker best practices (keeps mysqld running as pid 1).
It's three steps.
Create a script that makes all necessary adjustments and make it executable:
cat <<EOF > ./adjust-permissions.sh
#!/bin/sh
set -ex
chown -R root:root /etc/mysql/conf.d/
chmod -R 0644 /etc/mysql/conf.d/
EOF
chmod +x ./adjust-permissions.sh
You might want to leave out the chown; personally, I like to ensure there are no surprises with mounted files.
Mount it into /docker-entrypoint-initdb.d/ inside the container (see docker-entrypoint.sh):
volumes:
[...]
- ./adjust-permissions.sh:/docker-entrypoint-initdb.d/adjust-permissions.sh
Enjoy.
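A quick sanity check after wiring this up (my own addition; the service name mysql is taken from the compose file above):
docker-compose up -d
docker-compose logs mysql 2>&1 | grep -i world-writable || echo "no world-writable warning"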
I've been making new sites with WordPress & Docker recently and have a reasonable grasp of how it all works, and I'm now looking to move some established sites into Docker.
I've been following this guide:
https://stephenafamo.com/blog/moving-wordpress-docker-container/
I have everything set up as it should be, but when I go to my domain.com:1234 I get the error message 'Error establishing a database connection'. I have changed 'DB_HOST' to 'mysql' in wp-config.php as advised, and all the DB details from the site I'm bringing in are correct.
I have attached to the mysql container and checked that the DB is there with the right user, and also made sure the password is correct via the mysql CLI.
SELinux is set to permissive, and I haven't changed any dir/file ownership or permissions; dirs are all 755 and files 644 as they should be.
Edit: I should mention that database/data and everything under it seems to be owned by user/group 'polkitd input' instead of root.
Docker logs aren't really telling me much either apart from the 500 error messages for the WP container when I browse the site on port 1234 (as expected though).
This is the docker-compose file:
version: '2'
services:
example_db:
image: mysql:latest
container_name: example_db
volumes:
- ./database/data:/var/lib/mysql
- ./database/initdb.d:/docker-entrypoint-initdb.d
restart: always
environment:
MYSQL_ROOT_PASSWORD: password123 # any random string will do
MYSQL_DATABASE: mydomin_db # the name of your mysql database
MYSQL_USER: my domain_me # the name of the database user
MYSQL_PASSWORD: password123 # the password of the mysql user
example:
depends_on:
- example_db
image: wordpress:php7.1 # we're using the image with php7.1
container_name: example
ports:
- "1234:80"
restart: always
links:
- example_db:mysql
volumes:
- ./src:/var/www/html
Suggestions most welcome, as I'm out of ideas!
With the new version of docker-compose it will look like this (if you don't want to use PhpMyAdmin you can leave it out):
version: '3.7'
volumes:
wp-data:
networks:
wp-back:
services:
db:
image: mysql:5.7
volumes:
- wp-data:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: rootPassword
MYSQL_DATABASE: wordpress
MYSQL_USER: wp-user
MYSQL_PASSWORD: wp-pass
ports:
- 8889:3306
networks:
- wp-back
phpmyadmin:
depends_on:
- db
image: phpmyadmin/phpmyadmin
environment:
PMA_HOST: db
MYSQL_USER: wp-user
MYSQL_PASSWORD: wp-pass
MYSQL_ROOT_PASSWORD: rootPassword
ports:
- 3001:80
networks:
- wp-back
wordpress:
depends_on:
- db
image: wordpress:latest
ports:
- 8888:80
- 443:443
environment:
WORDPRESS_DB_HOST: db
WORDPRESS_DB_USER: wp-user
WORDPRESS_DB_PASSWORD: wp-pass
volumes:
- ./wordpress-files:/var/www/html
container_name: wordpress-site
networks:
- wp-back
The database volume is a named volume, wp-data, while the WordPress html is a bind mount to ./wordpress-files in your current directory.
Make sure that the wp-config.php file has the same credentials defined for db_user and db_password as in the docker-compose.yml file. I had a similar problem: I deleted all the files and re-installed, and saw that docker-compose up -d would start everything, but the MySQL settings in wp-config.php were not the same as those defined for Docker. I changed them accordingly and it eventually started working.
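A quick way to compare the two sets of credentials side by side (the wp-config.php path is an assumption based on the compose file above, where ./src is mounted as the WordPress root):
grep -E "DB_(NAME|USER|PASSWORD|HOST)" ./src/wp-config.php
grep -E "MYSQL_(DATABASE|USER|PASSWORD)" docker-compose.yml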
Please take a look at the following compose file. I have tried and tested it, and it works fine.
version: '2'
services:
db:
image: mysql:latest
container_name: db_server
volumes:
- ./database/data:/var/lib/mysql
- ./database/initdb.d:/docker-entrypoint-initdb.d
restart: always
environment:
MYSQL_ROOT_PASSWORD: password123 # any random string will do
MYSQL_DATABASE: udb_test # the name of your mysql database
MYSQL_USER: me_prname # the name of the database user
MYSQL_PASSWORD: password123 # the password of the mysql user
example:
depends_on:
- db
image: wordpress:php7.1 # we're using the image with php7.1
container_name: wp-web
environment:
WORDPRESS_DB_HOST: db:3306
WORDPRESS_DB_USER: me_prname
WORDPRESS_DB_PASSWORD: password123
WORDPRESS_DB_NAME: udb_test
ports:
- "1234:80"
restart: always
volumes:
- ./src:/var/www/html
Let me know if you encounter further issues.
If you want it all in one container you can refer to this repo:
https://github.com/akshayshikre/lamp-alpine/tree/development
Here the lamp-alpine image is used.
Then mysql, php, and apache2 (LAMP stack) are installed, and a local WordPress demo site and DB are copied in for demo purposes.
If you do not want any continuous integration, ignore the .circleci folder.
Check the docker-compose file and Dockerfile; environment variables are in the .env file.
I'll share my approach with you.
Show the running versions, just to check that all is well on your side:
$ docker --version && docker-compose --version
Run the Docker Compose file:
$ docker-compose -f docker-compose.yml up -d
Wait a moment for everything to come up.
Show the running containers; the WordPress container is the one listening on port 8000:
$ docker ps
You will see the name of your WordPress container in the table as follows, assuming you have followed the steps listed on their site:
https://hub.docker.com/_/wordpress
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
xxxxxxxxxxxx wordpress:latest "docker-entrypoint.s…" 8 minutes ago Up 8 minutes 0.0.0.0:8000->80/tcp cms_wordpress_1
xxxxxxxxxxxx mysql:5.7 "docker-entrypoint.s…" 8 minutes ago Up 8 minutes 3306/tcp, 33060/tcp cms_db_1
If you check your browser at the address localhost:8000, you will get the message "error establishing DB connection".
Launch bash inside the WordPress container:
$ docker exec -it cms_wordpress_1 bash
apt update fails as there is no connectivity
$ apt update
Open up a new terminal and show the current firewalld configuration:
$ sudo cat /etc/firewalld/firewalld-workstation.conf | grep 'FirewallBackend'
It is currently set to 'nftables'; set the value to 'iptables':
$ sudo sed -i 's/FirewallBackend=nftables/FirewallBackend=iptables/g' /etc/firewalld/firewalld-workstation.conf
Confirm the new value:
$ sudo cat /etc/firewalld/firewalld-workstation.conf | grep 'FirewallBackend'
Restart the firewalld service to apply the change:
$ sudo systemctl restart firewalld.service
Refresh the running WordPress session in your browser and it should now work.
Good work.
In some cases a probable cause of this issue is that you created volumes using docker-compose up and then, when you ran docker-compose down, expected the volumes to be deleted along with the containers and images; but this is not how it works.
From the docs you can read this:
For data that needs to persist between updates, use host or named volumes.
This implicitly means that named volumes do not get deleted by down. So if you do an up, add a row to a table, then do a down and a subsequent up, you get the same old volume back, and querying the same table gives you the row you created previously!
What does this have to do with the error "Error establishing DB connection", you may ask. To answer that, let's assume one scenario: what if you changed some MySQL passwords in the docker-compose file between running the down command and the second up command?
MySQL keeps its credentials just like any other data in its tables, so when you do the second up, Docker mounts the old volume (the one created by the first up). The old credential information is what MySQL keeps using, and Docker never gets the opportunity to insert your new information (the values you changed in the docker-compose file) into the administration tables. So, obviously, you will be rejected.
The solution thus now would be very simple. To fix it, either do:
docker-compose down -v
to remove the named volumes along with the containers when running down, or do:
docker volume rm [volname]
if you've done the down before, and now you want to delete the named volumes.
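For example, to find the exact volume name before removing it (Compose prefixes it with the project/folder name):
docker volume ls
docker volume rm myproject_mysql_data    # hypothetical name; substitute the one listed above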
If you follow this tutorial, https://stephenafamo.com/blog/moving-wordpress-docker-container/, your site will not work properly, because it doesn't restore the database; you need to manually restore the .sql dump file that exists in the initdb.d dir by using this command:
cat backup.sql | docker exec -i CONTAINER /usr/bin/mysql -u root --password=root DATABASE
I was also stuck on this, and my CSS was not working properly.
Please let me know if you have a new idea.
I'm having trouble importing an .sql dump file with docker-compose. I've followed the docs, which apparently will load the .sql file from docker-entrypoint-initdb.d. However, when I run docker-compose up, the sql file is not copied over to the container.
I've tried stopping the containers with -vf flag, but that didn't work either. Am I doing something wrong in my .yml script?
I have dump.sql in the directory database/db-dump/ in the root where my compose file is.
frontend:
image: myimage
ports:
- "80:80"
links:
- mysql
mysql:
image: mysql
ports:
- "3306:3306"
environment:
MYSQL_ROOT_PASSWORD: rootpass
MYSQL_USER: dbuser
MYSQL_PASSWORD: userpass
MYSQL_DATABASE: myimage_db
volumes:
- ./database/db-dump:/docker-entrypoint-initdb.d
This worked for me,
version: '3.1'
services:
db:
image: mysql
command: --default-authentication-plugin=mysql_native_password
restart: always
volumes:
- ./mysql-dump:/docker-entrypoint-initdb.d
environment:
MYSQL_ROOT_PASSWORD: example
MYSQL_DATABASE: ecommerce
adminer:
image: adminer
restart: always
ports:
- 8080:8080
mysql-dump must be a directory. All the .sql files in the directory will be imported.
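One way to confirm the import actually ran (my addition; the service name db matches the compose file above, and the entrypoint logs every file it executes):
ls mysql-dump/
docker-compose up -d db
docker-compose logs db | grep docker-entrypoint-initdb.d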
After many attempts with the volumes setting, I found a workaround.
I created another image based on mysql with the following in the Dockerfile:
FROM mysql:5.6
ADD dump.sql /docker-entrypoint-initdb.d
Then I removed the volumes entry from the compose file and ran the new image:
frontend:
image: myimage
ports:
- "80:80"
links:
- mysql
mysql:
image: mymysql
ports:
- "3306:3306"
environment:
MYSQL_ROOT_PASSWORD: rootpass
MYSQL_USER: dbuser
MYSQL_PASSWORD: userpass
MYSQL_DATABASE: myimage_db
This way the dump is always copied over and run on startup
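For reference, building and running that custom image might look like this (assuming the Dockerfile above sits next to dump.sql, and mymysql is the tag referenced in the compose file):
docker build -t mymysql .
docker-compose up -d mysql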
This appears on the documentation page of Docker MySQL image: https://hub.docker.com/_/mysql/
Initializing a fresh instance
When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d. Files will be executed in alphabetical order. You can easily populate your mysql services by mounting a SQL dump into that directory and provide custom images with contributed data. SQL files will be imported by default to the database specified by the MYSQL_DATABASE variable.
The MySQL database dump schema.sql resides in the ./mysql-dump directory and creates the tables during the initialization process.
docker-compose.yml:
mysql:
image: mysql:5.7
command: mysqld --user=root
volumes:
- ./mysql-dump:/docker-entrypoint-initdb.d
environment:
MYSQL_DATABASE: ${MYSQL_DATABASE}
MYSQL_USER: ${MYSQL_USER}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
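Since those values come from variable substitution, a matching .env file next to docker-compose.yml might look like this (illustrative values only):
MYSQL_DATABASE=mydb
MYSQL_USER=appuser
MYSQL_PASSWORD=apppass
MYSQL_ROOT_PASSWORD=rootpass
Running docker-compose config afterwards prints the file with the variables substituted, which is a quick way to check that they were picked up.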
I was having a similar issue with MySQL: I would mount a local directory at /configs/mysql/data containing a mydatabasedump.sql file via docker-compose onto the docker-entrypoint-initdb.d volume;
the file would get loaded onto the container but would not execute or populate the database when the container initialized. My initial docker-compose.yml looked like this:
#docker-compose.yml
version: '3'
services:
db:
build: ./build/mysql/ #this is pointing to my Dockerfile
container_name: MYSQL_Database
restart: always
environment:
MYSQL_PORT: 3306
MYSQL_ROOT_PASSWORD: admin
MYSQL_DATABASE: my_app_database
MYSQL_USER: admin
MYSQL_PASSWORD: admin
volumes:
- ./configs/mysql/data:/docker-entrypoint-initdb.d:
I found two working solutions for this problem:
The first came after I logged into the running container and confirmed that the mydatabasedump.sql file was present and executable in the container's docker-entrypoint-initdb.d directory. I created and added
a bash script called dump.sh to my local /configs/mysql/data directory, to be executed after the container was initialized. It contains a single mysql command that imports my_database_dump.sql into my_app_database.
The bash script looks like this:
#!/bin/bash
#dump.sh
mysql -uadmin -padmin my_app_database < my_database_dump.sql
#end of dump.sh
I executed this script via my Dockerfile in the ENTRYPOINT directive like this:
#Dockerfile
FROM mysql:5.5
ENTRYPOINT [ "dump.sh" ]
EXPOSE 80
#end of Dockerfile
After realizing the initial issue was due to the volumes being mounted after the container is built, and therefore not initializing the database with the dump file (or executing any scripts in that directory) at boot time, the second solution was simply to
move the volumes directive in my compose file above the build directive. This worked and allowed me to remove the dump.sh script and the ENTRYPOINT directive from my Dockerfile.
The modified docker-compose.yml looks like this:
#docker-compose.yml
version: '3'
services:
db:
volumes:
- ./configs/mysql/data:/docker-entrypoint-initdb.d
build: ./build/mysql/ #this is pointing to my Dockerfile
container_name: MYSQL_Database
restart: always
environment:
MYSQL_PORT: 3306
MYSQL_ROOT_PASSWORD: admin
MYSQL_DATABASE: my_app_database
MYSQL_USER: admin
MYSQL_PASSWORD: admin
I also had this problem. I mounted a local directory at ./mysql-dump containing an init.sql file via docker-compose onto the docker-entrypoint-initdb.d volume; the file would get loaded onto the container but would not execute or populate the database when the container initialized.
My initial docker-compose.yml looked like this:
mysqld:
image: mysql
container_name: mysqld
volumes:
- ./mysql/data:/var/lib/mysql
- ./mysql/my.cnf:/etc/my.cnf
- ./init:/docker-entrypoint-initdb.d
env_file: .env
restart: always
environment:
- MYSQL_ROOT_PASSWORD=123456
- MYSQL_DATABASE=fendou
command: --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
--default-authentication-plugin=mysql_native_password
but it didn't work for me.
I found another working solution for this problem:
Add an --init-file option (pointing at your init.sql) to the mysql command, and change the above configuration like this:
mysqld:
image: mysql
container_name: mysqld
volumes:
- ./mysql/data:/var/lib/mysql
- ./mysql/my.cnf:/etc/my.cnf
# - ./init:/docker-entrypoint-initdb.d
env_file: .env
restart: always
environment:
- MYSQL_ROOT_PASSWORD=123456
- MYSQL_DATABASE=fendou
command: --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
--default-authentication-plugin=mysql_native_password
--init-file /docker-entrypoint-initdb.d/init.sql #attention here
Hope it helps.
I wanted to keep the original setup of the container, so I tried a restore on the already running container. This seemed to work:
cat dump.sql | docker-compose exec -T db mysql -h localhost -u root -psomewordpress -v
But it was very slow and the verbose output seemed to be buffered, so I tried:
docker-compose cp dump.sql db:/tmp/
docker-compose exec db sh -c "mysql -h localhost -u root -psomewordpress -v < /tmp/dump.sql"
Which at least provided faster feedback.
Might be useful for someone? Looks like it was mainly slow because I had used --skip-extended-insert on the dump; without that option (i.e. with extended inserts) it went a lot faster 🙂
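For comparison, a sketch of re-creating the dump with the default multi-row (extended) INSERTs and piping it straight into the container (the database name wordpress is a placeholder):
mysqldump --extended-insert -u root -p wordpress > dump.sql
cat dump.sql | docker-compose exec -T db mysql -h localhost -u root -psomewordpress wordpress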