My company has been working with local setups of our MySQL database for years. We recently decided to adopt a containerized approach to local development, and we want the database to run in a container as well. The issue is that all of our data is already set up locally, and we want to reuse that same data in the MySQL container. I have tried using volumes to mount the directory storing all the MySQL data into the container, to no avail. Has anyone had success doing this?
db part of docker-compose.yml:
db:
  image: mysql:5.6
  container_name: mysql
  ports:
    - "3306:3306"
  environment:
    MYSQL_ALLOW_EMPTY_PASSWORD: 1
  volumes:
    - /usr/local/mysql/data:/var/lib/mysql
I am able to get MySQL running fine and can connect to it easily from my local machine, but when I connect, none of the local databases that already exist are there. Is there something I'm overlooking?
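Edit: for reference, this is roughly how I've been checking whether the host files actually show up inside the container (container name taken from the compose snippet above):
# list the data directory as the container sees it
docker exec -it mysql ls -la /var/lib/mysql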
@yourknightmares,
So I just ran a test and it worked for me. Here is what I did:
docker-compose.yml
version: "3.9"
services:
  nginx:
    image: nginx:latest
    ports:
      - "9999:9999"
    command: tail -f /dev/null
    volumes:
      - "/etc/nginx/nginx.conf:/opt/nginx/nginx.conf"
On my host machine, I have the file at /etc/nginx/nginx.conf. Then:
$ docker-compose up -d
$ docker exec -it 02ba7032d699 bash
root@02ba7032d699:/# cat /opt/nginx/nginx.conf
#hello
The file was mounted just fine from the host to the container. I would suggest you do the same exercise, just for troubleshooting purposes. Also, have you looked at the container logs with docker logs container_id?
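For example, something along these lines, using the container name from your compose file (adjust as needed):
# check what Docker actually mounted into the container
docker inspect -f '{{ json .Mounts }}' mysql
# and see what the MySQL entrypoint logged on startup
docker logs mysql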
I'm using the yobasystems/alpine-mariadb Docker image to run an instance for a development environment. I'm mounting the MySQL data directory to a Docker volume, and this has worked in the past. Every so often I lose data, but not the table structure, and I cannot work out why.
db:
  image: yobasystems/alpine-mariadb
  restart: always
  environment:
    - MYSQL_ROOT_PASSWORD=password
    - MYSQL_DATABASE=database
    - MYSQL_USER=user
    - MYSQL_PASSWORD=password
  ports:
    - "33333:3306"
  volumes:
    - mariadb:/var/lib/mysql
I suspect that in your case the volume is getting removed (maybe via docker-compose down -v or docker-compose rm -v).
Please specify that the volume is external, like this:
volumes:
  mariadb:
    external: true
From the Docker docs: external: If set to true, specifies that this volume has been created outside of Compose. docker-compose up does not attempt to create it, and raises an error if it doesn’t exist.
You may create the volume prior to docker-compose up with docker volume create mariadb
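A minimal sketch of that flow, assuming the volume name mariadb from the compose file above:
# create the volume once, outside of Compose
docker volume create mariadb
# confirm it exists and see where the data lives on disk
docker volume inspect mariadb
# a plain docker-compose down now leaves the external volume in place
docker-compose down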
I've been making new sites with WordPress & Docker recently and have a reasonable grasp of how it all works, and I'm now looking to move some established sites into Docker.
I've been following this guide:
https://stephenafamo.com/blog/moving-wordpress-docker-container/
I have everything set up as it should be, but when I go to my domain.com:1234 I get the error message 'Error establishing a database connection'. I have changed 'DB_HOST' to 'mysql' in wp-config.php as advised, and all the DB details from the site I'm bringing in are correct.
I have attached to the MySQL container and checked that the DB is there with the right user, and also made sure the password is correct via the mysql CLI.
SELinux is set to permissive and I haven't changed any dir/file ownership or permissions; dirs are all 755 and files 644 as they should be.
Edit: I should mention that database/data and everything under that seem to be owned by user/group 'polkitd input' instead of root.
Docker logs aren't really telling me much either apart from the 500 error messages for the WP container when I browse the site on port 1234 (as expected though).
This is the docker-compose file:
version: '2'
services:
  example_db:
    image: mysql:latest
    container_name: example_db
    volumes:
      - ./database/data:/var/lib/mysql
      - ./database/initdb.d:/docker-entrypoint-initdb.d
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: password123 # any random string will do
      MYSQL_DATABASE: mydomin_db # the name of your mysql database
      MYSQL_USER: my domain_me # the name of the database user
      MYSQL_PASSWORD: password123 # the password of the mysql user

  example:
    depends_on:
      - example_db
    image: wordpress:php7.1 # we're using the image with php7.1
    container_name: example
    ports:
      - "1234:80"
    restart: always
    links:
      - example_db:mysql
    volumes:
      - ./src:/var/www/html
Suggestions most welcome as I'm out of ideas!
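Edit 2: for reference, this is roughly how I verified the database and user from inside the MySQL container (names and password taken from the compose file above):
# list databases and users from the mysql CLI inside the db container
docker exec -it example_db mysql -uroot -ppassword123 -e "SHOW DATABASES; SELECT user, host FROM mysql.user;"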
With the new version of docker-compose it will look like this (if you don't want to use PhpMyAdmin you can leave it out):
version: '3.7'

volumes:
  wp-data:

networks:
  wp-back:

services:
  db:
    image: mysql:5.7
    volumes:
      - wp-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: rootPassword
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wp-user
      MYSQL_PASSWORD: wp-pass
    ports:
      - 8889:3306
    networks:
      - wp-back

  phpmyadmin:
    depends_on:
      - db
    image: phpmyadmin/phpmyadmin
    environment:
      PMA_HOST: db
      MYSQL_USER: wp-user
      MYSQL_PASSWORD: wp-pass
      MYSQL_ROOT_PASSWORD: rootPassword
    ports:
      - 3001:80
    networks:
      - wp-back

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - 8888:80
      - 443:443
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wp-user
      WORDPRESS_DB_PASSWORD: wp-pass
    volumes:
      - ./wordpress-files:/var/www/html
    container_name: wordpress-site
    networks:
      - wp-back
The database volume is a named volume, wp-data, while the WordPress html is a bind mount to ./wordpress-files in your current directory.
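If you want to double-check where the data actually ends up with this setup, something like this should show it (the volume name gets prefixed with the compose project name, usually your directory name):
# named volumes created by compose are listed here
docker volume ls
# the Mountpoint field shows where Docker stores the wp-data volume on the host
docker volume inspect <project>_wp-data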
Make sure that the wp-config.php file has the same credentials defined for DB_USER and DB_PASSWORD as in the docker-compose.yml file. I had a similar problem: I deleted all the files and re-installed, and saw that docker-compose up -d would start everything, but the MySQL settings in wp-config.php were not the same as the ones defined in Docker. Once I changed them accordingly, it eventually started working.
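A quick way to compare the two, assuming the WordPress files are bind-mounted at ./src as in the compose file above:
# show the DB settings WordPress is actually using
grep -E "DB_(NAME|USER|PASSWORD|HOST)" ./src/wp-config.php
# show the corresponding values in the compose file
grep -E "MYSQL_(DATABASE|USER|PASSWORD)|WORDPRESS_DB" docker-compose.yml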
Please take a look at the following compose file. I have tried and tested it, and it works fine.
version: '2'
services:
  db:
    image: mysql:latest
    container_name: db_server
    volumes:
      - ./database/data:/var/lib/mysql
      - ./database/initdb.d:/docker-entrypoint-initdb.d
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: password123 # any random string will do
      MYSQL_DATABASE: udb_test # the name of your mysql database
      MYSQL_USER: me_prname # the name of the database user
      MYSQL_PASSWORD: password123 # the password of the mysql user

  example:
    depends_on:
      - db
    image: wordpress:php7.1 # we're using the image with php7.1
    container_name: wp-web
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: me_prname
      WORDPRESS_DB_PASSWORD: password123
      WORDPRESS_DB_NAME: udb_test
    ports:
      - "1234:80"
    restart: always
    volumes:
      - ./src:/var/www/html
Let me know if you encounter further issues.
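A short usage sketch for the compose file above:
# start the stack and follow the database logs; scripts in ./database/initdb.d
# only run when ./database/data starts out empty (a fresh initialisation)
docker-compose up -d
docker-compose logs -f db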
If you want it all in one container, you can refer to this repo:
https://github.com/akshayshikre/lamp-alpine/tree/development
It uses the lamp-alpine image as a base.
MySQL, PHP, and Apache2 (the LAMP stack) are installed, and a local WordPress demo site and database are copied in for demo purposes.
If you do not want any continuous integration, ignore the .circleci folder.
Check the docker-compose file and Dockerfile; environment variables are in the .env file.
I'll share my approach with you.
Show the running versions, just to check that all is well on your side:
$ docker --version && docker-compose --version
Run the Docker Compose file:
$ docker-compose -f docker-compose.yml up -d
After a short wait, show the running containers; the WordPress container is listening on port 8000:
$ docker ps
You will see the name of your WordPress container in the table as follows, if you have followed the steps listed on their site:
https://hub.docker.com/_/wordpress
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
xxxxxxxxxxxx wordpress:latest "docker-entrypoint.s…" 8 minutes ago Up 8 minutes 0.0.0.0:8000->80/tcp cms_wordpress_1
xxxxxxxxxxxx mysql:5.7 "docker-entrypoint.s…" 8 minutes ago Up 8 minutes 3306/tcp, 33060/tcp cms_db_1
If you check your browser at the address localhost:8000,
you will get the message "error establishing DB connection".
Launch bash inside the WordPress container:
$ docker exec -it cms_wordpress_1 bash
apt update fails, as there is no connectivity:
$ apt update
Open up a new terminal and show the current firewalld configuration:
$ sudo cat /etc/firewalld/firewalld-workstation.conf | grep 'FirewallBackend'
It is currently set to 'nftables'. Set the value to 'iptables':
$ sudo sed -i 's/FirewallBackend=nftables/FirewallBackend=iptables/g' /etc/firewalld/firewalld-workstation.conf
Confirm the new value:
$ sudo cat /etc/firewalld/firewalld-workstation.conf | grep 'FirewallBackend'
Restart the firewalld service to apply the change:
$ sudo systemctl restart firewalld.service
Refresh the running WordPress session in your browser, and that's it. Good work.
In some cases, a probable cause of this issue is that you created volumes with docker-compose up and then, when you ran docker-compose down, expected the volumes to be deleted along with the containers, but this is not how it works.
From the docs you can read this:
For data that needs to persist between updates, use host or named volumes.
This implicitly means that named volumes will not get deleted by down. So if you do an up, add a row to a table, and then do a subsequent down, the next up will reuse the same old volume, and querying the same table will give you the same row you created previously.
What does this have to do with the error "Error establishing DB connection", you may ask. To answer, consider one scenario: what if you changed some MySQL passwords in the compose file between running the down command and the second up command?
MySQL keeps its credentials just like any other data in its tables, so when you do the second up, Docker reuses the old volume (the one created by the first up). The old credentials will therefore be used by MySQL, and the entrypoint never gets the chance to insert your new values (the ones you changed in the compose file) into the administration tables. So, obviously, your connection will be rejected.
The solution is therefore very simple. To fix it, either run:
docker-compose down -v
to remove the named volumes along with the containers when running the down, or run:
docker volume rm [volname]
if you've done the down before, and now you want to delete the named volumes.
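A minimal sketch of the whole sequence described above (service and volume names are whatever your compose file defines):
docker-compose up -d      # first up creates the named volume and seeds the MySQL credentials
docker-compose down       # containers go away, the named volume stays
# ...edit the MYSQL_* values in docker-compose.yml...
docker-compose up -d      # the old volume is reused, so the new credentials are ignored
docker-compose down -v    # removes the named volume so the next up re-initialises MySQL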
If you follow this tutorial, https://stephenafamo.com/blog/moving-wordpress-docker-container/, your site will not work properly, because it doesn't restore the database. You need to manually restore the .sql dump file that exists in the initdb.d dir by using this command:
cat backup.sql | docker exec -i CONTAINER /usr/bin/mysql -u root --password=root DATABASE
I was also stuck on this, and my CSS was not working properly.
Please let me know if you have any new ideas.
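For completeness, the matching dump command I use to create backup.sql in the first place looks roughly like this (same container name and credentials placeholders as above):
docker exec CONTAINER /usr/bin/mysqldump -u root --password=root DATABASE > backup.sql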
I'm very new to Docker and after reading about data volumes I'm still somewhat confused by the behaviour I'm seeing.
In my compose file I had an entry for mysql like this:
db:
  image: mysql
  restart: always
  volumes:
    - ./database:/var/lib/mysql
  environment:
    MYSQL_ROOT_PASSWORD: p4ssw0rd!
  networks:
    - back
This mapped the ./database directory to /var/lib/mysql. The database files were created and I could start WordPress, install it, and add a post. The problem is it never persisted any created data. If I restarted Docker and executed:
docker-compose up -d
The database was empty.
Changing this to:
db:
  image: mysql
  restart: always
  volumes:
    - db_data:/var/lib/mysql
  environment:
    MYSQL_ROOT_PASSWORD: p4ssw0rd!
  networks:
    - back
And adding in a volume like this:
volumes:
  db_data:
This persists the data in the Docker data volume, and restarting works. Any data created during the last run is still present.
How would I get this to work using the host mapped directory?
Am I right in thinking the second example using volumes is the way to go?
Docker volumes on Windows work a bit differently than on Linux. Basically, on Windows, Docker runs a VM and the Docker daemon is set up inside that VM. So it seems to you that you run Docker commands locally on Windows, but the actual work happens in the background inside the VM.
docker run -v d:/data:/data alpine ls /data
First you need to share the D: drive in the Docker settings. You can find a detailed article explaining the steps for doing so here:
https://rominirani.com/docker-on-windows-mounting-host-directories-d96f3f056a2c
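Once the drive is shared, a host bind mount in docker-compose can point at the Windows path directly. A minimal sketch; the d:/data/mysql path is just an example, not something from the original post:
db:
  image: mysql
  restart: always
  volumes:
    - d:/data/mysql:/var/lib/mysql
  environment:
    MYSQL_ROOT_PASSWORD: p4ssw0rd!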
Hello, I have a configuration that builds Docker containers for a Flask app and a MySQL instance.
I create a new VM with
docker-machine create -d amazonec2 --....... production
and then (after setting the correct environment)
docker-compose build -> docker-compose up -d
The problem is that all of this happens while the CWD is a local repo with the files I need. It turns out these files are not copied over.
I have looked at docker cp and docker scp, but it seems they do not solve the problem. E.g. with scp I cannot reference the specific container I need to copy the repo into (xow_web_1).
Here is the .yml
web:
  restart: always
  volumes:
    - .:/xow
  build: .
  ports:
    - "80:80"
  links:
    - db
  hostname: xowflask
  command: python xow.py

db:
  restart: always
  hostname: xowmysql
  image: mysql:latest
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: somepasswordhere
    MYSQL_DATABASE: somedatabase

data:
  restart: always
  image: mysql:latest
  volumes:
    - /var/lib/mysql
  command: "true"
What would be the most appropriate way to solve this? Is docker-compose the right approach? It looks awesome, but it doesn't seem to solve an issue like this.
The way we solved it in our organization is by using the COPY instruction to copy all of the files in the folder into the image.
For example, copying all of the files from the current dir into the /src folder of the image looks like this:
### Copy Code
COPY . /src
It looks like you should add this line to the Dockerfile that the web service builds from (the build: . entry in your docker-compose configuration).
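A minimal Dockerfile sketch to show where that line would go; the base image, the requirements.txt step, and the entrypoint are assumptions based on the python xow.py command in the compose file, not something from the original post:
FROM python:2.7
WORKDIR /xow
# bake the repo files into the image at build time instead of relying on the bind mount
COPY . /xow
# install dependencies (assuming the repo contains a requirements.txt)
RUN pip install -r requirements.txt
CMD ["python", "xow.py"]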