My Docker WordPress container somehow cannot connect to my database container. I tried to pass the credentials through the environment key.
I'm using external volumes that store the data from my previous WordPress build as well as the data from the database.
docker-compose.yml
version: '3'
services:
  db:
    image: mysql:5.7
    container_name: db
    restart: unless-stopped
    env_file: .env
    environment:
      - MYSQL_DATABASE=wordpress_oxygen
    volumes:
      - wordpress_db-data:/var/lib/mysql
    networks:
      - conturas-network
  wordpress:
    depends_on:
      - db
    image: wordpress:5.6.0-fpm-alpine
    container_name: wordpress
    restart: unless-stopped
    env_file: .env
    environment:
      - WORDPRESS_DB_HOST=db:3306
      - WORDPRESS_DB_USER=$MYSQL_USER
      - WORDPRESS_DB_PASSWORD=$MYSQL_PASSWORD
      - WORDPRESS_DB_NAME=wordpress_oxygen
    volumes:
      - wordpress_data:/var/www/html
    networks:
      - conturas-network
  webserver:
    depends_on:
      - wordpress
    image: nginx:1.19.6-alpine
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
    volumes:
      - wordpress_data:/var/www/html
      - ./nginx-conf:/etc/nginx/conf.d
      - certbot-etc:/etc/letsencrypt
    networks:
      - conturas-network
  certbot:
    depends_on:
      - webserver
    image: certbot/certbot
    container_name: certbot
    volumes:
      - certbot-etc:/etc/letsencrypt
      - wordpress_data:/var/www/html
    command: certonly --webroot --webroot-path=/var/www/html --email username@xyz.io --agree-tos --no-eff-email --force-renewal -d xyz.io -d www.xyz.io
volumes:
  certbot-etc:
  wordpress_data:
    external: true
  wordpress_db-data:
    external: true
networks:
  conturas-network:
    driver: bridge
Error logs from the db container
...
2020-12-27T15:53:26.593191Z 2 [Note] Access denied for user 'wordpress_oxygen'@'172.30.0.3' (using password: YES)
Thanks for helping!
The very last line of the logs gives the important hint about the underlying issue:
[Note] Access denied for user 'wordpress_oxygen'@'172.30.0.3' (using password: YES)
It means that the database credentials used to connect to the database are incorrect - i.e. the username/password combination.
But since this is logged by the database it means WordPress is actually able to connect to the database - i.e. your docker networks are set up correctly.
You should now verify that the values of the configured credentials (WORDPRESS_DB_USER, WORDPRESS_DB_PASSWORD etc.) are actually correct, i.e. that they are the same as the MYSQL_USER and MYSQL_PASSWORD used at the time the database was initialized. Note that the environment variables MYSQL_USER, MYSQL_PASSWORD etc. are only used when the database volume is empty and needs to be initialized; see Environment Variables in the image description:
Do note that none of the variables below will have any effect if you start the container with a data directory that already contains a database: any pre-existing database will always be left untouched on container startup.
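A quick way to check whether the credentials WordPress uses actually exist in the database is to try logging in with them directly. A minimal sketch, assuming the container name db from the compose file above and the user name wordpress_oxygen from the error log (you will be prompted for the password WordPress is configured with):
docker exec -it db mysql -u wordpress_oxygen -p wordpress_oxygen
If this login fails with the same Access denied error, the credentials stored in the volume really do differ from those in your .env file.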
You may also try to re-initialize the database by deleting the volume and starting a fresh instance of the service.
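For example, a sketch assuming the external volume name wordpress_db-data from the compose file above (warning: this deletes all existing database data):
docker-compose down
docker volume rm wordpress_db-data
docker volume create wordpress_db-data
docker-compose up -d
Since the volume is declared external you have to recreate it yourself; because it is now empty, the MySQL container will re-initialize it on startup using the MYSQL_* environment variables.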
Also note that environment and env_file are two different ways of specifying environment variables for the service, and mixing the two is bad practice since it can lead to unexpected behavior.
If you want to import values from your .env file for variable substitution, you don't need to "import" it with env_file, since it is loaded automatically by docker-compose! I.e. your current configuration does not do what you probably think it does.
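For illustration, a .env file like the following (placeholder values), placed next to the docker-compose.yml, is picked up automatically for variable substitution, so $MYSQL_USER and $MYSQL_PASSWORD in the compose file resolve without any env_file: entry:
MYSQL_USER=wordpress_oxygen
MYSQL_PASSWORD=changeme
The env_file: key, by contrast, injects these variables into the container's environment at runtime, which is a different mechanism.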
What I did to fix the issue
I removed the environment keys from the whole compose file.
Why did I do this?
I realized, with the help of @acran's answer, that the volume I passed into the docker-compose file was already a ready-to-use copy of my initial WordPress build/installation, and the same goes for the MySQL database. (This means all credentials were already stored inside each volume.) Because of that I was not able to pass environment variables to the composition; to be more precise, you can pass environment variables to a build, but they would simply have no effect on the finished container.
You can only set these environment variables at the initial build.
Result
version: '3.3'
services:
  db:
    image: mysql:5.7
    container_name: db
    restart: unless-stopped
    volumes:
      - wordpress_db-data:/var/lib/mysql
    networks:
      - my-network
  wordpress:
    depends_on:
      - db
    image: wordpress:5.6.0-fpm-alpine
    container_name: wordpress
    restart: unless-stopped
    volumes:
      - wordpress_data:/var/www/html
    networks:
      - my-network
  webserver:
    depends_on:
      - wordpress
    image: nginx:1.19.6-alpine
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - wordpress_data:/var/www/html
      - ./nginx-conf:/etc/nginx/conf.d
      - certbot-etc:/etc/letsencrypt
    networks:
      - my-network
  certbot:
    depends_on:
      - webserver
    image: certbot/certbot
    container_name: certbot
    volumes:
      - certbot-etc:/etc/letsencrypt
      - wordpress_data:/var/www/html
    command: certonly --webroot --webroot-path=/var/www/html --email my.name@xyz.io --agree-tos --no-eff-email --expand --noninteractive -d xyz.io -d www.xyz.io -d dev.xyz.io
volumes:
  certbot-etc:
  wordpress_data:
    external: true
  wordpress_db-data:
    external: true
networks:
  my-network:
    driver: bridge
Related
My mysql container in Docker keeps turning itself on and off (via the restart policy I gave it) after I attempted the docker-compose up --build command.
My docker-compose setup contains several containers, including an apache, a php, and a mysql with a volume to persist the data of a database.
I ran into this problem while carrying out various tests: I had just created the volume and wanted to check that the database was not losing the data inside it.
After various docker-compose down and docker-compose up commands, the mysql container stopped working, and did not recover even after further docker-compose down and docker-compose build --no-cache commands.
Below I added my docker-compose.yml; all image versions and database settings are taken from the various Dockerfiles and environment variables.
version: "3.2"
services:
php:
build:
context: './php/'
args:
PHP_VERSION: ${PHP_VERSION}
networks:
- backend
volumes:
- ${PROJECT_ROOT}/:/var/www/html/
container_name: php
apache:
build:
context: './apache/'
args:
APACHE_VERSION: ${APACHE_VERSION}
depends_on:
- php
- mysql
networks:
- frontend
- backend
ports:
- "80:80"
volumes:
- ${PROJECT_ROOT}/:/var/www/html/
container_name: apache
mysql:
image: mysql:${MYSQL_VERSION:-latest}
restart: always
ports:
- "3306:3306"
volumes:
- data:/var/lib/mysql
networks:
- backend
environment:
MYSQL_ROOT_PASSWORD: "${DB_ROOT_PASSWORD}"
MYSQL_DATABASE: "${DB_NAME}"
MYSQL_USER: "${DB_USERNAME}"
MYSQL_PASSWORD: "${DB_PASSWORD}"
container_name: mysql networks:
frontend:
backend:
volumes:
data:
Can anyone describe what I can do in this case in order not to lose the progress made so far within the database? From a little research, it seems the volume and the mysql image may have been corrupted.
I'm trying to create a docker-compose.yml file that will bring up JIRA and MySQL. Here's my file:
version: '3'
services:
  jira:
    depends_on:
      - mysql
    container_name: jira
    restart: always
    networks:
      - jiranet
    build:
      context: .
      dockerfile: Dockerfile.jira
    environment:
      - ATL_DB_TYPE=mysql
      - ATL_DB_DRIVER=com.mysql.cj.jdbc.Driver
      - ATL_JDBC_URL=jdbc:mysql://mysql:3306/jiradb
      - ATL_JDBC_USER=jira
      - ATL_JDBC_PASSWORD=jellyfish
    ports:
      - 8080:8080
    volumes:
      - jira-data:/var/atlassian-data/jira
  mysql:
    container_name: mysql
    restart: always
    image: mysql:5.7
    networks:
      - jiranet
    environment:
      - MYSQL_ROOT_PASSWORD=ChangeMe!
      - MYSQL_DATABASE=jiradb
      - MYSQL_USER=jira
      - MYSQL_PASSWORD=jellyfish
    command: [mysqld, --character-set-server=utf8, --collation-server=utf8_bin, --default-storage-engine=INNODB, --max_allowed_packet=256M, --innodb_log_file_size=2GB, --transaction-isolation=READ-COMMITTED, --binlog_format=row]
    volumes:
      - mysql-data:/var/lib/mysql
networks:
  jiranet: {}
volumes:
  jira-data:
  mysql-data:
Unfortunately, I'm getting JIRA startup errors when it tries to initialize the database, of the form:
CREATE command denied to user 'jira'@'172.22.0.3' for table 'jiraaction'
I'm guessing it's because the mysql container is creating user jira, but only allowing it to connect from localhost, while the JIRA container is being seen as coming from an external IP.
Any ideas on how I can get the jiradb database in mysql to be accessible by the JIRA container by user jira?
I figured out the problem -- I was missing an environment variable in the jira container:
ATL_DB_SCHEMA_NAME=jiradb
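In other words, the jira service's environment block just gains one more entry (the other values unchanged):
environment:
  - ATL_JDBC_USER=jira
  - ATL_JDBC_PASSWORD=jellyfish
  - ATL_DB_SCHEMA_NAME=jiradb   # the missing variable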
After that, things worked fine!
I have an existing project (API - portal - MySQL).
I have used docker-compose without a Dockerfile.
I publish the API and portal, put them all in a folder, and then run docker-compose up.
I can reach the API and read the local values in it,
but if I try to reach MySQL through the API using Postman it is not working, even when I open the frontend website.
ConnectionString:
"ConnectionString": "server=xmysql;port=4406;Database=sbs_hani;User ID=hani;Password=123456; persistsecurityinfo=True;Charset=utf8; TreatTinyAsBoolean=false;"
This is my docker-compose file :
version: '3'
services:
  xmysql:
    container_name: xmysql
    hostname: xmysql
    image: mysql:latest
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: "123456"
      MYSQL_DATABASE: "sbs_hani"
      MYSQL_USER: "hani"
      MYSQL_PASSWORD: "123456"
    ports:
      - "3306:4406"
    networks:
      - xnetwork
    volumes:
      - data-volume:/var/lib/mysql
      - ./hanimysql/sbs_hani.sql:/docker-entrypoint-initdb.d/sbs_hani.sql
  xapi:
    container_name: xapi
    hostname: xapi
    image: microsoft/dotnet:latest
    # restart: always
    tty: true
    command: ["dotnet", "/var/lib/volhaniapi/hani.APIs.dll"]
    ports:
      - "8081:80"
      - "8444:443"
    networks:
      - xnetwork
    links:
      - xmysql:xmysql
    depends_on:
      - xmysql
    volumes:
      - ./haniapi/:/var/lib/volhaniapi/
  xportal:
    container_name: xportal
    hostname: xportal
    image: microsoft/dotnet:latest
    # restart: always
    tty: true
    command: ["dotnet", "/var/lib/volhaniportal/hani.Portal.dll"]
    ports:
      - "8083:80"
      - "8446:443"
    networks:
      - xnetwork
    links:
      - xmysql:xmysql
    depends_on:
      - xmysql
    volumes:
      - ./haniportal/:/var/lib/volhaniportal/
  xfront:
    container_name: xfront
    hostname: xfront
    image: nginx:stable-alpine
    # restart: always
    ports:
      - "8082:80"
      - "4445:443"
    networks:
      - xnetwork
    links:
      - xapi:xapi
    depends_on:
      - xapi
    volumes:
      - ./hanifront/:/usr/share/nginx/html
volumes:
  data-volume: {}
  # xvolmysql:
  #   driver: "local"
  # xvolmongo:
  #   driver: "local"
  # xvolrabbitmq:
  #   driver: "local"
  # xvolstarapi:
  #   driver: "local"
networks:
  xnetwork:
    driver: bridge
When you make a connection from one Docker container to another, you always connect to the port the service inside the container is actually listening on. Any ports: mappings are ignored (and in fact you don't need ports: if you don't want the service to be accessible from outside Docker container space).
In your example, you need to change the port number in the connection string to the default MySQL port 3306.
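With that change, the connection string from the question would become (everything else unchanged):
"ConnectionString": "server=xmysql;port=3306;Database=sbs_hani;User ID=hani;Password=123456; persistsecurityinfo=True;Charset=utf8; TreatTinyAsBoolean=false;"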
(Consider removing all of the container_name:, hostname:, networks:, and links: blocks in the file. You should have an equivalent container stack with the same functionality; the two most observable differences are that if you directly use docker commands then the container names will be prefixed with the directory names, and the Docker-internal network will be named default. You can still use the service block names like xmysql as host names.)
I'm trying to run both a Spring Boot app and MySQL in separate Docker containers, and I'm having trouble debugging issues because I can't see any logs. When I run docker-compose up I see the startup logs (the Spring Boot banner) and see the app start, but after that there is no more logging. I'm getting a 404 when hitting one of my endpoints, but I can't debug it without seeing the logs.
docker-compose.yml:
version: "3.3"
services:
database:
build:
context: ./database
image: pensionator_db
# set default mysql root password, change as needed
environment:
MYSQL_USER: pensionatoruser
MYSQL_DATABASE: pensionatordb
# Expose port 3306 to host. Not for the application but
# handy to inspect the database from the host machine.
ports:
- "3306:3306"
restart: always
appserver:
build:
context: .
dockerfile: app/src/main/docker/Dockerfile
image: pensionator_app
# mount point for application in tomcat
# open ports for tomcat and remote debugging
ports:
- "8080:8080"
- "8000:8000"
restart: always
How do I get logging to work?
There was nothing wrong with the logging; the issue was with my docker-compose.yml file. I needed to link the database correctly.
docker-compose.yml:
version: '3'
services:
  database:
    image: mysql:5.7
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
      MYSQL_USER: root
      MYSQL_DATABASE: pensionator
    ports:
      - '3307:3306'
    restart: always
  appserver:
    build:
      context: .
      dockerfile: src/main/docker/Dockerfile
    depends_on:
      - database
    image: pensionator_app
    environment:
      SPRING_DATASOURCE_URL: 'jdbc:mysql://database:3306/pensionator'
    links:
      - database
    ports:
      - '8080:8080'
      - '8000:8000'
    restart: always
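As a side note, even without changing the compose file you can tail a single service's output with docker-compose, which helps when one container's logs drown out another's:
docker-compose logs -f appserver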
I'm running a Docker server on DigitalOcean. There I have two containers, Node.js and MySQL. The MySQL container has port 3306 open.
When trying to access MySQL from Node.js via the Docker server IP + port, I get Error: connect ETIMEDOUT.
When I run the same Node.js Docker setup on my local computer it works fine. Is there something I'm missing?
Here is the Node.js docker-compose.yml:
version: '2'
services:
  test-web-install:
    image: example-nodejs:latest
    working_dir: /home/app
    volumes:
      - ./:/home/app
    command: sh -c 'nodemon'
    environment:
      - NODE_ENV=development
      - DB_HOST=192.168.11.207 # or public ip in internet
      - DB_PORT=3036
      - DB_PASSWORD=root
      - DB_USER=root
      - DB_DATABASE=root
    ports:
      - "3000:3000"
Here is the docker-compose.yml for MySQL:
mysql:
  container_name: flask_mysql
  restart: always
  image: mysql:5.6
  environment:
    MYSQL_ROOT_PASSWORD: 'root' # TODO: Change this
    MYSQL_USER: 'root'
    MYSQL_PASS: 'root'
    MYSQL_DATABASE: 'root'
  volumes:
    - ./data:/var/lib/mysql
  ports:
    - "3036:3306"
  restart: always
I'll modify this answer as we advance. Following your comments, and since I cannot access your environment, let's try to solve this incrementally:
Let's make the db visible to the node.js server.
See how it works, and then probably dive into the environment's networking configuration.
There are two ways to solve the first (and maybe the second) problem, as I see it, without being able to touch your environment:
The first one will ensure that the server sees the database; but if you then still cannot connect to the db from outside, it seems to be a firewall/droplet networking configuration issue, and you can try the second way (not likely to change anything, but it's good to try). This assumes you use the same docker-compose file and the same custom bridge network:
version: '2'
services:
  test-web-install:
    image: example-nodejs:latest
    working_dir: /home/app
    volumes:
      - ./:/home/app
    command: sh -c 'nodemon'
    environment:
      - NODE_ENV=development
      - DB_HOST=mysql
      - DB_PORT=3306 # container-to-container: use the port mysql listens on, not the published one
      - DB_PASSWORD=root
      - DB_USER=root
      - DB_DATABASE=root
    ports:
      - "3000:3000"
    networks:
      inner:
        aliases:
          - server
  mysql:
    container_name: flask_mysql
    restart: always
    image: mysql:5.6
    environment:
      MYSQL_ROOT_PASSWORD: 'root' # TODO: Change this
      MYSQL_USER: 'root'
      MYSQL_PASS: 'root'
      MYSQL_DATABASE: 'root'
    volumes:
      - ./data:/var/lib/mysql
    ports:
      - "<externalEnvIp>:3036:3306"
    restart: always
    networks:
      inner:
        aliases:
          - mysql
networks:
  inner:
    driver: bridge
    driver_opts:
      com.docker.network.enable_ipv6: "true"
      com.docker.network.bridge.enable_ip_masquerade: "true"
    ipam:
      driver: default
      config:
        - subnet: 172.16.100.0/24
          gateway: 172.16.100.1
Option 2:
version: '2'
services:
  test-web-install:
    image: example-nodejs:latest
    working_dir: /home/app
    volumes:
      - ./:/home/app
    command: sh -c 'nodemon'
    environment:
      - NODE_ENV=development
      - DB_HOST=mysql
      - DB_PORT=3036
      - DB_PASSWORD=root
      - DB_USER=root
      - DB_DATABASE=root
    ports:
      - "3000:3000"
    network_mode: "host"
  mysql:
    container_name: flask_mysql
    restart: always
    image: mysql:5.6
    environment:
      MYSQL_ROOT_PASSWORD: 'root' # TODO: Change this
      MYSQL_USER: 'root'
      MYSQL_PASS: 'root'
      MYSQL_DATABASE: 'root'
    volumes:
      - ./data:/var/lib/mysql
    ports:
      - "3036:3306"
    restart: always
    network_mode: "host"
A more precise solution (to find the root of the problem) would involve digging deep into your environment's network configuration, Docker networking settings, etc., but those solutions may help and fix your problem for now.
Please post the results after you try this.
The docker networking doesn't allow you to go from inside a container back out to the host IP to connect to a port exposed by another container. I haven't dug into this enough to see if that's due to the iptables rules or perhaps something inside of docker-proxy. Either way, it's never been worth investigating since container-to-container networking is a built-in feature of docker.
To use docker's networking, the containers need to be on the same docker network, and you reference them by their container name in DNS. With docker-compose, normally you can use the service name in place of the container name (e.g. test-web-install and mysql from your examples) since compose creates an alias for these. However, since you've overridden the container name for mysql, use your flask_mysql container name instead.
In your scenario, since you've split up the startup with two separate docker-compose.yml files, you'll be on separate networks created by compose. You have two options to resolve this:
Merge the two into a single docker-compose.yml (BlackStork gave an example of this).
Use an externally defined network that you create in advance.
To do the latter, first create your network:
docker network create dbnet
Then update your docker-compose.yml for the app to look like:
version: '2'
networks:
  dbnet:
    external: true
services:
  test-web-install:
    image: example-nodejs:latest
    working_dir: /home/app
    volumes:
      - ./:/home/app
    command: sh -c 'nodemon'
    environment:
      - NODE_ENV=development
      - DB_HOST=flask_mysql
      - DB_PORT=3306 # use the container's listening port, not the published 3036
      - DB_PASSWORD=root
      - DB_USER=root
      - DB_DATABASE=root
    ports:
      - "3000:3000"
    networks:
      - dbnet
And the docker-compose.yml for mysql:
version: '2'
networks:
  dbnet:
    external: true
services:
  mysql:
    container_name: flask_mysql
    restart: always
    image: mysql:5.6
    environment:
      MYSQL_ROOT_PASSWORD: 'root' # TODO: Change this
      MYSQL_USER: 'root'
      MYSQL_PASS: 'root'
      MYSQL_DATABASE: 'root'
    volumes:
      - ./data:/var/lib/mysql
    ports:
      - "3036:3306"
    restart: always
    networks:
      - dbnet
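To verify the wiring afterwards, you can check that both containers are attached to the network and that the container name resolves. A sketch assuming the service and container names above (getent may not exist in every image):
docker network inspect dbnet
docker-compose exec test-web-install getent hosts flask_mysql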