docker wordpress container can't connect to database container - mysql

I've been using Docker to build WordPress apps for a few days. I got some working, but now I can't figure out why the WordPress container won't connect to the database container.
I've reduced the failing configuration to the simplest one possible.
Right now I have the following docker-compose.yml file:
wordpress:
  image: wordpress
  links:
    - db:mysql
  ports:
    - 8080:80

db:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: example
which is an exact copy of this official example:
https://hub.docker.com/_/wordpress/
(scroll down to "... via docker-compose").
If I run docker-compose up with this file, I get the following relevant log entries:
Creating miqueladell_db_1
Creating miqueladell_wordpress_1
Attaching to miqueladell_db_1, miqueladell_wordpress_1
db_1 | Initializing database
…lots of initialization…
wordpress_1 | Warning: mysqli::mysqli(): (HY000/2002): Connection refused in - on line 10
…this goes on for a while: db_1 says it's initializing and wordpress_1 says it can't connect, and then…
db_1 | MySQL init process done. Ready for start up.
…some more database messages…
db_1 | 2016-01-12 14:34:46 139698309449664 [Note] mysqld: ready for connections.
wordpress_1 | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.5. Set the 'ServerName' directive globally to suppress this message
wordpress_1 | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.5. Set the 'ServerName' directive globally to suppress this message
wordpress_1 | [Tue Jan 12 14:34:47.180996 2016] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.10 (Debian) PHP/5.6.17 configured -- resuming normal operations
wordpress_1 | [Tue Jan 12 14:34:47.181253 2016] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
…at that moment, if I browse to the wordpress endpoint I get…
wordpress_1 | 192.168.99.1 - - [12/Jan/2016:14:34:47 +0000] "GET / HTTP/1.1" 500 586 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36"
And an "Error establishing a database connection" on the front end.
I've pasted the full log here in case someone wants to take a look: http://pastebin.com/Z9U2iMsH
I've had this environment running before, and I'm fairly sure that, if not with this particular example, I have been able to run the containers and connect to the database with some of the examples I tried (without luck) today. So I guess there is something wrong with my environment, but I don't know how to debug it.
I've removed all containers and all images, re-downloaded the images and rebuilt the containers. I even tested everything in an empty folder with a newly created docker-compose.yml file.
In fact, since the logs suggested that the wordpress process might be exhausting its retries, I even restarted the wordpress container while the database container was already up, and the result is the same.
Just in case it's relevant: I'm running all of this locally on a Mac using the Docker Quickstart Terminal as described here:
https://docs.docker.com/mac/step_one/
and docker -v says:
Docker version 1.9.1, build a34a1d5
EDIT: I just tried using:
image: wordpress:4.4
instead of no tag (which pulls latest, 4.4.1 at the time of writing) and it works. So it seems to be a bug introduced in 4.4.1.
I've opened an issue here:
https://github.com/docker-library/wordpress/issues/120
I'll keep the question open just in case, but it seems quite clear that it is a bug.

It was a bug in version 4.4.1 of the wordpress image.
I've opened an issue here https://github.com/docker-library/wordpress/issues/120 and it's solved now.
Thanks all!

Related

phpmyadmin docker site can't be reached

This is a very frustrating error. I hope you can suggest something.
I removed all images and all containers, and ran docker system prune.
Then I ran the following command. I know I am not specifying the MySQL host and password, but that shouldn't matter: it should still show me the main phpMyAdmin page where I can log in, and only then complain that it can't connect to MySQL.
sudo docker run --name adminphp1 -d -p 8000:80 phpmyadmin/phpmyadmin
After running this command, the container list shows the following:
389268e87d4b phpmyadmin/phpmyadmin "/run.sh supervisord…" 2 minutes ago Up 2 minutes 9000/tcp, 0.0.0.0:8000->80/tcp adminphp1
Where does 9000/tcp come from?
After running docker logs adminphp1, it shows the following:
Complete! phpMyAdmin has been successfully copied to /var/www/html
/usr/lib/python2.7/site-packages/supervisor/options.py:461: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
'Supervisord is running as root and it is searching '
2019-04-11 15:15:09,745 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
2019-04-11 15:15:09,746 INFO Included extra file "/etc/supervisor.d/nginx.ini" during parsing
2019-04-11 15:15:09,746 INFO Included extra file "/etc/supervisor.d/php.ini" during parsing
2019-04-11 15:15:09,756 INFO RPC interface 'supervisor' initialized
2019-04-11 15:15:09,756 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2019-04-11 15:15:09,756 INFO supervisord started with pid 1
2019-04-11 15:15:10,760 INFO spawned: 'php-fpm' with pid 21
2019-04-11 15:15:10,762 INFO spawned: 'nginx' with pid 22
[11-Apr-2019 15:15:10] NOTICE: fpm is running, pid 21
[11-Apr-2019 15:15:10] NOTICE: ready to handle connections
2019-04-11 15:15:11,826 INFO success: php-fpm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-04-11 15:15:11,827 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
Then I try to access it at website.com:8000 and, after thinking for some time, the browser shows me "site can't be reached".
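A couple of quick checks that narrow this down (purely diagnostics, not a fix; adminphp1 and port 8000 come from the docker run command above):
# confirm which ports are actually published for the container
sudo docker port adminphp1
# request the page from the server itself, bypassing any firewall or DNS in front of it
curl -v http://localhost:8000/
If the local curl succeeds, the container is fine and the problem lies between the browser and the host.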
I would just appreciate anything you can suggest.

building a docker with mysql and nodeJS

I am building an application that uses NodeJS as the backend and MySQL as the database, and currently my steps to bring up the app (without Docker) are:
1) Install NodeJS
2) Install MySQL
3) Launch mysqld on port 3306
4) Manually create a MySQL user dedicated to the NodeJS backend. This user should have only basic privileges, and only on my desired schema.
5) Run sequelize commands to perform data migration and seeding using the user created in 4)
6) npm install and npm start to launch NodeJS on port 8080
Now I want to dockerize my application, and I already have the following Dockerfile:
#node version: carbon
#app version: 1.0.0
FROM node:8.11.2
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
I have put an init.sql file in the ./docker_db folder which does the following:
CREATE USER 'app_user'@'%' IDENTIFIED BY 'password';
CREATE SCHEMA `myapp` DEFAULT CHARACTER SET utf8;
GRANT INSERT, CREATE, ALTER, UPDATE, SELECT, REFERENCES ON myapp.*
    TO 'app_user'@'%' IDENTIFIED BY 'password'
    WITH GRANT OPTION;
and the following docker-compose.yaml:
version: '3.6'
services:
  mysql1:
    image: mysql/mysql-server:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    ports:
      - "127.0.0.1:3306:3306"
    volumes:
      - type: bind
        source: ./docker_db
        target: /docker-entrypoint-initdb.d
    expose:
      - "3306"
    networks:
      - app-network
  myapp:
    build:
      context: .
      dockerfile: Dockerfile
    command: npm start
    depends_on:
      - mysql1
    ports:
      - "127.0.0.1:8080:8080"
    expose:
      - "8080"
    links:
      - mysql1
    networks:
      - app-network
    command: ["./wait-for-db.sh"]
networks:
  app-network:
    driver: bridge
where my ./wait-for-db.sh does the following:
#!/bin/bash
until mysql -h mysql1 -u app_user -p password -e 'select 1'; do
echo "still waiting for mysql"; sleep 1; done
exec node ./db/scripts/generateSequelizeCLIConfig.js
exec node_modules/sequelize-cli/bin/sequelize db:migrate
exec node_modules/sequelize-cli/bin/sequelize db:seed:all
exec npm start
(BTW I do want to expose 3306 to the host machine so that I can use Workbench to connect to the MySQL server, which I have successfully connected to.)
In my sequelize config file I do have:
"username": "app_user",
"password": "password",
"database": "myapp",
"host": "mysql1",
"port": "3306"
With the above settings, I executed docker-compose up, and then I got the following lines:
mysql1_1 | [Entrypoint] MySQL Docker Image 5.7.22-1.1.5
mysql1_1 | [Entrypoint] Initializing database
myapp_1 | standard_init_linux.go:190: exec user process caused "no such file or directory"
myapp_myapp_1 exited with code 1
mysql1_1 | [Entrypoint] Database initialized
mysql1_1 | Warning: Unable to load '/usr/share/zoneinfo/iso3166.tab' as time zone. Skipping it.
mysql1_1 | Warning: Unable to load '/usr/share/zoneinfo/leapseconds' as time zone. Skipping it.
mysql1_1 | Warning: Unable to load '/usr/share/zoneinfo/tzdata.zi' as time zone. Skipping it.
mysql1_1 | Warning: Unable to load '/usr/share/zoneinfo/zone.tab' as time zone. Skipping it.
mysql1_1 | Warning: Unable to load '/usr/share/zoneinfo/zone1970.tab' as time zone. Skipping it.
mysql1_1 |
mysql1_1 | [Entrypoint] running /docker-entrypoint-initdb.d/init.sql
mysql1_1 |
mysql1_1 |
mysql1_1 | [Entrypoint] Server shut down
mysql1_1 |
mysql1_1 | [Entrypoint] MySQL init process done. Ready for start up.
mysql1_1 |
mysql1_1 | [Entrypoint] Starting MySQL 5.7.22-1.1.5
The problems I now face are:
1) The script's execution hangs on the last line, Starting MySQL 5.7.22-1.1.5, and doesn't go anywhere.
2) In the output, the 3rd and 4th lines show an error about exec user process caused "no such file or directory". I don't think it is caused by the commands in wait-for-db.sh, because if I remove the lines after the until command the problem still persists. In fact, I doubt the execution ever reaches those lines; it feels like it is still stuck inside the until loop.
I think it's really close to the final solution though :)
Use the name of your db service, which is mysql1, as your database host. Docker will resolve it to the actual IP. Also, why do you have FROM mysql:5.7 in your Dockerfile? I don't think it is of any use.
Updated
Alright, it seems like myapp runs the db scripts before the db is ready. See here for a solution: https://docs.docker.com/compose/startup-order/
The problem is probably related to timing. Both containers will start at the same time and your node-app will try to connect to mysql almost immediately, while the MySQL server is still starting.
docker-compose doesn't have any built-in mechanism for this, so you will have to build an entrypoint in your node app that first waits for mysql to respond.
So, in your case, the entrypoint would be something like
#!/bin/bash
until mysql -h mysql1 -uapp_user -ppassword -e'select 1'; do echo "still waiting for mysql"; sleep 1; done
exec npm start
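On the separate standard_init_linux.go "no such file or directory" error: that message usually means the kernel could not execute ./wait-for-db.sh itself, most commonly because the script was saved with Windows (CRLF) line endings or is missing the executable bit. A hedged guess at a fix, assuming the script sits next to the compose file:
# strip carriage returns in case the file has CRLF line endings
sed -i 's/\r$//' wait-for-db.sh
# make sure the script is executable inside the image
chmod +x wait-for-db.sh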

mysql and gunicorn open connections at the same port

SOME BACKGROUND:
I have created a Django app and I am at the point where I want to deploy it. I have looked at multiple options, including mod_wsgi, but since the new macOS update came out I cannot install mod_wsgi because I do not have apxs or apxs2 on my computer. (There is some discussion on the web about the rights to write to files; if you know more and would like to explain, please do.)
However, I looked into other options to try to deploy the app and I want to use Heroku. I have followed the dev guide for Django deployment until I reached the part where I test using "heroku local web".
THE ISSUE
The problem stems from the fact that the local MySQL server uses the same port that gunicorn is also trying to use. I have found similar posts on Stack Overflow about 'connections in use', but none show how to change the port for gunicorn. I have found some open ports on my localhost, but every time I try to change the MySQL port to one of those, the connection times out. Therefore, I would like to know how to change the port gunicorn binds to so that it does not try to use the same default port as MySQL, which is 3306.
I was serving the Django project with the development server it came with, and the database I am using locally is MySQL. I am trying to get it running locally with gunicorn and Heroku now, because I feel that if this goes right locally it will probably go right when I attempt to put the project online.
ERROR GIVEN
10:38:52 PM web.1 | [2017-01-08 22:38:52 -0500] [83200] [ERROR] Connection in use: ('0.0.0.0', 3306)
10:38:52 PM web.1 | [2017-01-08 22:38:52 -0500] [83200] [ERROR] Retrying in 1 second.
10:38:53 PM web.1 | [2017-01-08 22:38:53 -0500] [83200] [ERROR] Connection in use: ('0.0.0.0', 3306)
10:38:53 PM web.1 | [2017-01-08 22:38:53 -0500] [83200] [ERROR] Retrying in 1 second.
10:38:54 PM web.1 | [2017-01-08 22:38:54 -0500] [83200] [ERROR] Connection in use: ('0.0.0.0', 3306)
10:38:54 PM web.1 | [2017-01-08 22:38:54 -0500] [83200] [ERROR] Retrying in 1 second.
MY PROCFILE
web: gunicorn project_name.wsgi.application --log-file -
Gunicorn does start when I stop the MySQL server, but then I get an exception since the project cannot connect to the database.
--Thank you
You can specify the port for Gunicorn as follows -
gunicorn --bind 127.0.0.1:8000
So basically the complete command would be
gunicorn --bind 127.0.0.1:8000 myproject.wsgi:application
You can change 8000 to any of your desired port number.
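Applied to the Procfile from the question, that would look roughly like this (project_name is a placeholder; on Heroku itself you would normally bind to the $PORT it assigns rather than a hard-coded port):
web: gunicorn --bind 0.0.0.0:$PORT project_name.wsgi:application --log-file -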
To install mod_wsgi on Mac OS X see:
https://pypi.python.org/pypi/mod_wsgi
All you need to do is pip install mod_wsgi.
You can then use mod_wsgi-express on the command line to run it on an unprivileged port, with all configuration done for you.
Or, you can integrate it with an existing Apache installation and configure it manually by running mod_wsgi-express module-config, taking what it outputs, and adding it to the main Apache configuration for the system. Then add your specific WSGI application configuration to the Apache configuration file as well.
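As a rough sketch of the first option (the wsgi.py path assumes the standard Django project layout, and 8001 is an arbitrary unprivileged port):
pip install mod_wsgi
mod_wsgi-express start-server project_name/wsgi.py --port 8001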

Permission denied when mounting Docker volume in OSX

I'm at my wit's end with this, so hopefully you folks can help me. In OSX 10.11.2 with docker-machine, I've got a docker-compose file that should build a local Dockerfile and attach a MySQL container to it. The MySQL container should mount a local folder where I'm storing my database data, so if the container or VM comes down, I can just restart it without data loss. Problem is, when I run it, it throws a permissions error:
db_1 | 2015-12-23 19:17:59 7facaa89b740 InnoDB: Operating system error number 13 in a file operation.
db_1 | InnoDB: The error means mysqld does not have the access rights to
db_1 | InnoDB: the directory.
I've tried every permutation I can think of to get this to work. I was reading around and it may have something to do with how docker-machine handles permissions with OSX, but the documentation for docker-machine says that it mounts the /Users folder, so that shouldn't be an issue.
Here's the docker-compose.yml:
web:
  build: .
  ports:
    - "3000:3000"
  links:
    - db
db:
  image: mysql:5.6
  ports:
    - "3306:3306"
  volumes:
    - /Users/me/Development/mysql-data:/var/lib/mysql
  environment:
    MYSQL_ROOT_PASSWORD: mypass
Any ideas? I can't help but think it's something really simple. Any help would be most appreciated!
Edit:
Host - drwxr-xr-x 7 me staff 238 Dec 23 12:10 mysql-data/
VM - drwxr-xr-x 1 docker staff 238 Dec 23 20:10 mysql-data/
As to the container, it won't run with the volume mounted. Without the -v mount, it is:
Container - drwxr-xr-x 4 mysql mysql 4096 Dec 24 00:37 mysql
The issue comes from the user IDs used by Mac and Linux respectively: the IDs don't line up, so the mysql user inside the container isn't allowed to write to the folder mounted from the Mac.
The way I worked around all the permissions craziness in my mac + docker-machine setup is to use this Dockerfile
FROM mysql:5.6
RUN usermod -u 1000 mysql
RUN mkdir -p /var/run/mysqld
RUN chmod -R 777 /var/run/mysqld
Instead of the plain MySQL 5.6 image.
The last two lines are necessary because changing the user ID of the mysql user messes up the built-in permissions of that image, so you need the 777 permissions to make it run here :/
I know this is a little hacky, but so far the best solution I know to the permissions issue here.
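To wire it in, the db service of the compose file above would build from a directory containing that Dockerfile instead of pulling the stock image (./mysql is just an example path):
db:
  build: ./mysql
  ports:
    - "3306:3306"
  volumes:
    - /Users/me/Development/mysql-data:/var/lib/mysql
  environment:
    MYSQL_ROOT_PASSWORD: mypass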
Try to use the latest Docker for Mac instead of Docker Toolbox. Docker for Mac no longer uses VirtualBox, but rather HyperKit, a lightweight OS X virtualization solution built on top of Hypervisor.framework in OS X 10.10 Yosemite and higher.
I also suggest completely removing Docker Toolbox (they can co-exist): https://github.com/docker/toolbox/blob/master/osx/uninstall.sh
With Docker for Mac you don't have to use permission hacks; it just works like it would on a Linux build.

ERR_CONTENT_LENGTH_MISMATCH on nginx and proxy on Chrome when loading large files

I'm getting the following error on my chrome console:
GET http://localhost/grunt/vendor/angular/angular.js net::ERR_CONTENT_LENGTH_MISMATCH
This only happens when simultaneous requests are shot towards nginx, e.g. when the browser's cache is empty and the whole app loads. Loading the resource above as a single request succeeds.
Here are the headers of this request, copied from Chrome:
Remote Address:127.0.0.1:80
Request URL:http://localhost/grunt/vendor/angular/angular.js
Request Method:GET
Status Code:200 OK
Request Headers
Accept:*/*
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8,de;q=0.6,pl;q=0.4,es;q=0.2,he;q=0.2,gl;q=0.2
Cache-Control:no-cache
Connection:keep-alive
Cookie:gs_u_GSN-265185-D=1783247335:2567:5000:1377697930719
Host:localhost
Pragma:no-cache
Referer:http://localhost/grunt/
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.122 Safari/537.36
Response Headers
Accept-Ranges:bytes
Cache-Control:public, max-age=0
Connection:keep-alive
Content-Length:873444
Content-Type:application/javascript
Date:Tue, 23 Sep 2014 11:08:19 GMT
ETag:"873444-1411465226000"
Last-Modified:Tue, 23 Sep 2014 09:40:26 GMT
Server:nginx/1.6.0
the real size of the file:
$ ll vendor/angular/angular.js
-rw-rw-r-- 1 xxxx staff 873444 Aug 30 07:21 vendor/angular/angular.js
As you can see, the Content-Length and the real size of the file are the same, so that's weird.
And the nginx configuration for this proxy:
location /grunt/ {
    proxy_pass http://localhost:9000/;
}
Any ideas?
Thanks
EDIT: found more info in the error log:
2014/09/23 13:08:19 [crit] 15435#0: *8 open() "/usr/local/var/run/nginx/proxy_temp/1/00/0000000001" failed (13: Permission denied) while reading upstream, client: 127.0.0.1, server: localhost, request: "GET /grunt/vendor/angular/angular.js HTTP/1.1", upstream: "http://127.0.0.1:9000/vendor/angular/angular.js", host: "localhost", referrer: "http://localhost/grunt/"
Adding the following line to the nginx config was the only thing that fixed the net::ERR_CONTENT_LENGTH_MISMATCH error for me:
proxy_buffering off;
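In terms of the location block from the question, that means something like this (putting it in the http or server block also works):
location /grunt/ {
    proxy_pass http://localhost:9000/;
    proxy_buffering off;
}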
It seems that under pressure, nginx tried to pull angular.js from its cache and couldn't due to permission issues. Here's what solved this issue:
root@amac-2:/usr/local/var/run/nginx $ chown -R _www:admin proxy_temp
_www:admin might be different in your case, depending on which user owns the nginx process. See more information on ServerFault:
https://serverfault.com/questions/534497/why-do-nginx-process-run-with-user-nobody
I tried all of the above and still couldn't get it to work, even after resorting to chmod 777. The only thing that solved it for me was to disable buffering to temp files entirely:
proxy_max_temp_file_size 0;
Whilst not a fix and no good for production use, this was OK for me since I'm only using nginx as part of a local development setup.
For me the remedy was these two settings.
In the file
/etc/nginx/nginx.conf
add
proxy_max_temp_file_size 0;
proxy_buffering off;
between the lines client_max_body_size 128M; and server_names_hash_bucket_size 256;:
http {
    client_max_body_size 128M;
    proxy_max_temp_file_size 0;
    proxy_buffering off;
    server_names_hash_bucket_size 256;
    # ... rest of the http block
}
ps aux | grep "nginx: worker process"
After executing the above command you'll see the user that nginx is running as, e.g.:
www-data 25356 0.0 0.0 68576 4800 ? S 12:45 0:00 nginx: worker process
www-data 25357 0.0 0.0 68912 5060 ? S 12:45 0:00 nginx: worker process
Now you have to run the command below to give it permission:
chown -R www-data:www-data /var/lib/nginx/
Hope it will work
For us, it turned out that our server's rather small root partition (i.e. /) was full.
It had mountains of logs and files from users in /home. Moving all that cruft out to another mounted drive solved things.
Just wanted to share as this can be another cause of the problem.
If somebody ran nginx as a different user in the past, ownership of the cache folder may be wrong. I got:
/var/cache/nginx# LANG=C ls -l proxy_temp/
total 40
drwx------ 18 nginx nginx 4096 Jul 14 2016 0
drwx------ 19 nginx nginx 4096 Jul 14 2016 1
drwx------ 19 nginx nginx 4096 Jul 14 2016 2
drwx------ 19 nginx nginx 4096 Jul 14 2016 3
drwx------ 19 nginx nginx 4096 Jul 14 2016 4
drwx------ 19 nginx nginx 4096 Jul 14 2016 5
drwx------ 19 nginx nginx 4096 Jul 14 2016 6
drwx------ 18 nginx nginx 4096 Jul 14 2016 7
drwx------ 18 nginx nginx 4096 Jul 14 2016 8
drwx------ 18 nginx nginx 4096 Jul 14 2016 9
while nginx was running as www-data. So the solution is to change ownership of nginx’s cache directory to the user nginx is running under. In the present case
/var/cache/nginx# chown -R www-data:www-data *
or, even simpler
# rm -r /var/cache/nginx/*
What worked for me was to change the proxy_temp_path to a folder with read/write permissions (777):
location / {
    proxy_temp_path /data/tmp;
}
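The directory has to exist and be writable by the worker user before nginx can use it; something along these lines, where www-data is an assumption and should be whatever user your workers run as:
mkdir -p /data/tmp
chown www-data:www-data /data/tmp
chmod 777 /data/tmp   # matches the wide-open permissions described above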
I had the same issue.
Increasing the free space of the partition where nginx is installed solved the issue.
For macOS with nginx installed with homebrew, I used the following steps to track down and fix the issue.
Run nginx -h to find your error log location. Look for the following line:
-e filename : set error log file (default: /opt/homebrew/var/log/nginx/error.log)
Take your error log path and tail it to see what error it's reporting when you try to load the page.
tail -f /opt/homebrew/var/log/nginx/error.log
From that I saw that one of the lines showed a permission denied error:
open() "/opt/homebrew/var/run/nginx/proxy_temp/9/01/0000000019" failed (13: Permission denied) while reading upstream
Which means that your cached directories have incorrect permissions for the nginx user.
Stop nginx
brew services stop nginx
Delete all the temp folders (location from the permission error log line)
sudo rm -rf /opt/homebrew/var/run/nginx/*
Start nginx again
brew services start nginx
After doing this, nginx will recreate the temp folders with the correct permissions. At this point you should be good to try reloading the page that was failing before.
When I tried the aforementioned solutions, they didn't fix the issue. I also changed the permissions to allow writing to the location, but it didn't work. Then I realized I had done something wrong there. For the path to store the file, I had something like
"/storage" + fileName + ".csv"
I was testing in a Windows environment and it was working great. But later, when we moved the application to a Linux environment, it stopped working. So I had to change it to
"./storage" + fileName + ".csv"
For me, the solution was:
sudo chown -R nginx:nginx /var/cache/nginx/fastcgi_temp/
For anyone using HAProxy as a proxy and getting these exact same symptoms, increasing the timeout values resolved the issue for me:
timeout connect 5000
timeout client 50000
timeout server 50000
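Those directives normally live in the defaults section (or a specific frontend/backend) of haproxy.cfg, roughly:
defaults
    timeout connect 5000
    timeout client  50000
    timeout server  50000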
The only thing that helped me was the following settings in the nginx site .conf file:
proxy_read_timeout 720s;
proxy_connect_timeout 720s;
proxy_send_timeout 720s;
For me it was the same error, except on a different folder, /var/lib/nginx/.
I changed the owner to nginx with
chown -R nginx:nginx /var/lib/nginx/
That did not work.
Then I checked who owned the nginx worker process with
ps aux | grep nginx
It was running as nginx, but when I looked through the nginx.conf file I found that the user was nginx but it did not have any group. So I added the nginx group to the nginx user; it turned out like this:
user nginx nginx;
Now I rebooted the system and the issue was fixed. I suppose I could have just used
chown -R nginx /var/lib/nginx/
and that may have worked as well. So if anyone is facing this issue: first go into /var/log/nginx and check where the permission error occurred.