Should the file ".XE.created" be there? - oracle-cloud-infrastructure

The Docker container "container-registry.oracle.com/database/express:21.3.0-xe" didn't run the post-setup scripts. The post-startup scripts work fine.
docker-compose.yml
version: '3.3'
services:
  db:
    image: container-registry.oracle.com/database/express:21.3.0-xe
    container_name: db
    ports:
      - "1521:1521"
      - "5500:5500"
    environment:
      - ORACLE_PWD=oracle
    volumes:
      - ./oracle/script:/opt/oracle/scripts/setup
The reason could be that the file .XE.created already exists:
sh-4.2$ ls -la /opt/oracle/oradata
total 32
drwxr-xr-x 1 oracle oinstall 4096 Mar 30 10:47 .
drwxr-xr-x 1 oracle oinstall 4096 Mar 30 10:38 ..
-rw-r--r-- 1 oracle oinstall 0 Mar 30 10:47 .XE.created
drwxr-x--- 1 oracle oinstall 4096 Mar 30 10:47 XE
drwxr-xr-x 1 oracle oinstall 4096 Mar 30 10:47 dbconfig
This in combination with https://github.com/oracle/docker-images/blob/8acbbd735e9a7a93a6bec66c1d10b564a25634b4/OracleDatabase/SingleInstance/dockerfiles/21.3.0/runOracle.sh#L236 leads to the behaviour that post-setup is never called.
Is there a way to fix this?
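The marker-file logic that causes this can be sketched as a tiny shell script (a simplified illustration of the check described above, not the actual runOracle.sh; the messages and the temp directory are made up):

```shell
#!/bin/sh
# Simplified sketch of the ".XE.created" guard: setup scripts run only when
# the marker file is absent, and the marker is present afterwards.
ORADATA="$(mktemp -d)"   # stands in for /opt/oracle/oradata

run_setup_if_first_boot() {
  if [ -f "$ORADATA/.XE.created" ]; then
    echo "marker found: skipping setup scripts"
  else
    echo "first boot: running setup scripts"
    touch "$ORADATA/.XE.created"
  fi
}

run_setup_if_first_boot   # prints "first boot: running setup scripts"
run_setup_if_first_boot   # prints "marker found: skipping setup scripts"
```

So if the intent is to have the setup scripts run once more, one option (at your own risk, since the image treats the marker as "database already created") is to delete .XE.created from the oradata volume before restarting the container, or to start from a fresh volume.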

Related

GitHub actions $GITHUB_WORKSPACE envar lists empty dir why?

I'm trying to print the contents of the current working directory $PWD or $GITHUB_WORKSPACE with my debug job. My expectation is to see the directory's contents. Unfortunately, it returns nothing.
Results
Run ls -la $PWD
total 8
drwxr-xr-x 2 runner docker 4096 Apr 19 19:42 .
drwxr-xr-x 3 runner docker 4096 Apr 19 19:42 ..
total 8
drwxr-xr-x 2 runner docker 4096 Apr 19 19:42 .
drwxr-xr-x 3 runner docker 4096 Apr 19 19:42 ..
.github/workflows/debug.yaml
name: Debug
on: [push, workflow_dispatch]
jobs:
  debug:
    runs-on: ubuntu-latest
    steps:
      - name: Print current working dir
        run: |
          ls -la $PWD
          ls -la $GITHUB_WORKSPACE
I figured it out. Add uses: actions/checkout@v2 as the first step in your workflow.
name: Debug
on: [push, workflow_dispatch]
jobs:
  debug:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Print current working dir
        run: |
          ls -la $PWD
          ls -la $GITHUB_WORKSPACE

docker - mysql Version - How to update?

I have no knowledge of Docker and its inner workings.
I have a Docker image of an application which support has installed on my Linux desktop.
The current version of MySQL, when queried through the Docker prompt, is 5.5.6.
I updated MySQL on my desktop to 5.7.x, but inside the Docker prompt it's still showing 5.5.6.
Can anyone help me out?
Output --
dockerdev@localhost:~$ ps -aef | grep mysql
root 31 28 0 14:45 ? 00:00:00 runsv mysql
root 40 34 0 14:45 ? 00:00:00 /bin/sh /usr/bin/mysqld_safe
mysql 471 40 0 14:45 ? 00:00:00 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --user=mysql --log-error=/var/log/mysql/error.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306
dockerdev@localhost:~$ mysql -V
mysql Ver 14.14 Distrib 5.5.62, for debian-linux-gnu (x86_64) using readline 6.3
I run the docker container
docker-compose -f ./application.yml up -d
application.yml
---
myapp:
  image: docker.xyz.com/myapp:latest
  container_name: myapp
  hostname: localhost
  ports:
    - "80:80"     # Apache
    - "8000:8000" # Tomcat
    - "3306:3306" # Mysql
  environment: ....
  links: ....
  volumes:
    # MySQL Data
    - /home/kunal/perfios/containers/kubera/perfios/mysql-data:/var/lib/mysql/
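Upgrading MySQL on the host cannot change what runs inside the container: the container ships its own MySQL. The usual fix is to point the compose file at an image that bundles the newer MySQL and recreate the container, keeping the data volume. A sketch, assuming support publishes such a tag (myapp:mysql57 here is hypothetical):

```yaml
---
myapp:
  # hypothetical tag bundling MySQL 5.7; ask support for the real one
  image: docker.xyz.com/myapp:mysql57
  container_name: myapp
  ports:
    - "3306:3306" # Mysql
  volumes:
    # same host data directory, so existing data is kept
    - /home/kunal/perfios/containers/kubera/perfios/mysql-data:/var/lib/mysql/
```

After editing, docker-compose -f ./application.yml up -d pulls the new image and recreates the container. Back up the data directory first; a 5.5 to 5.7 jump normally needs mysql_upgrade and officially goes one release series at a time (5.5 to 5.6 to 5.7).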

Unable to start mysql with docker on Ubuntu 16.04

I am unable to start MySQL with Docker on Ubuntu. I get the following error:
db_1_cc1214d5085c | ERROR: mysqld failed while attempting to check config
db_1_cc1214d5085c | command was: "mysqld --verbose --help"
db_1_cc1214d5085c |
db_1_cc1214d5085c | mysqld: error while loading shared libraries: libpthread.so.0: cannot stat shared object: Permission denied
Content of docker compose file:
version: '2.4'
services:
  db:
    image: mysql:5.7
    ports:
      - "32000:3306"
    environment:
      MYSQL_ROOT_PASSWORD: root
    # restart: always
    volumes:
      - ./data/db:/var/lib/mysql
Docker details:
Client:
  Version:       18.09.0
  API version:   1.39
  Go version:    go1.10.4
  Git commit:    4d60db4
  Built:         Wed Nov 7 00:48:57 2018
  OS/Arch:       linux/amd64
  Experimental:  false
Server: Docker Engine - Community
  Engine:
    Version:       18.09.0
    API version:   1.39 (minimum version 1.12)
    Go version:    go1.10.4
    Git commit:    4d60db4
    Built:         Wed Nov 7 00:16:44 2018
    OS/Arch:       linux/amd64
    Experimental:  false
Also worth noting: there is a non-dockerized version of MySQL installed and running on this server. Any help will be appreciated.
To start the mysql service you'll need something like this in your docker-compose file:
version: '3'
services:
  <service-name>:
    image: mysql:5.7
    container_name: <container-name>
    ports:
      - "<host-port>:<container-port>"
    environment:
      - MYSQL_ROOT_PASSWORD=<root-password>
      - MYSQL_DATABASE=<database-name>
    volumes:
      - <host-dir>:/var/lib/mysql
    networks: ['stack']
networks:
  stack:
    driver: bridge
Make sure the user running the docker-compose up command has permission on <host-dir>.
The networks section is used when multiple services need to connect to the database: they should all join the same network, which is stack in this example.
It looks like a permission problem on your host.
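As a sketch of the host-side fix (assuming the bind-mount source ./data/db from the compose file above; a temp directory stands in for it here so the commands are safe to run anywhere):

```shell
#!/bin/sh
# Stand-in for the host directory bound to /var/lib/mysql (./data/db above).
DATA_DIR="$(mktemp -d)/data/db"
mkdir -p "$DATA_DIR"

# Ensure the invoking user can read, write and traverse the data directory.
chmod -R u+rwX "$DATA_DIR"
ls -ld "$DATA_DIR"
```

Given that a non-dockerized MySQL also runs on this server, it may also be worth checking security profiles such as AppArmor, which can block the containerized mysqld from loading shared libraries.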

Kubernetes: Error when creating a StatefulSet with a MySQL container

Good morning,
I'm very new to Docker and Kubernetes, and I do not really know where to start looking for help. I created a database container with Docker and I want to manage and scale it with Kubernetes. I started by installing minikube on my machine, and tried to create first a Deployment and then a StatefulSet for a database container. But I have a problem with the StatefulSet when creating a Pod with a database (mariadb or mysql). When I use a Deployment the Pods load and work fine. However, the same Pods do not work in a StatefulSet, returning errors asking for the MYSQL constants. This is the Deployment, created with the command kubectl create -f deployment.yaml:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mydb-deployment
spec:
  template:
    metadata:
      labels:
        name: mydb-pod
    spec:
      containers:
        - name: mydb
          image: ignasiet/aravomysql
          ports:
            - containerPort: 3306
And when listing the deployments: kubectl get Deployments:
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
mydb-deployment 1 1 1 1 2m
And the pods: kubectl get pods:
NAME READY STATUS RESTARTS AGE
mydb-deployment-59c867c49d-4rslh 1/1 Running 0 50s
But since I want to create a persistent database, I try to create a statefulSet object with the same container, and a persistent volume.
Thus, when creating the following StatefulSet with kubectl create -f statefulset.yaml:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: statefulset-mydb
spec:
  serviceName: mydb-pod
  template:
    metadata:
      labels:
        name: mydb-pod
    spec:
      containers:
        - name: aravo-database
          image: ignasiet/aravomysql
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: volume-mydb
              mountPath: /var/lib/mysql
      volumes:
        - name: volume-mydb
          persistentVolumeClaim:
            claimName: config-mydb
With the service kubectl create -f service-db.yaml:
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  type: ClusterIP
  ports:
    - port: 3306
  selector:
    name: mydb-pod
And the permission file kubectl create -f permissions.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: config-mydb
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
The pods do not work. They give an error:
NAME READY STATUS RESTARTS AGE
statefulset-mydb-0 0/1 CrashLoopBackOff 1 37s
And when analyzing the logs kubectl logs statefulset-mydb-0:
error: database is uninitialized and password option is not specified
You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD
How is it possible that it asks for these variables when the container already has an initialization script and works perfectly? And why does it ask only when launched as a StatefulSet, and not as a Deployment?
Thanks in advance.
I pulled your image ignasiet/aravomysql to try to figure out what went wrong. As it turns out, your image already has an initialized MySQL data directory at /var/lib/mysql:
$ docker run -it --rm --entrypoint=sh ignasiet/aravomysql:latest
# ls -al /var/lib/mysql
total 110616
drwxr-xr-x 1 mysql mysql 240 Nov 7 13:19 .
drwxr-xr-x 1 root root 52 Oct 29 18:19 ..
-rw-rw---- 1 root root 16384 Oct 29 18:18 aria_log.00000001
-rw-rw---- 1 root root 52 Oct 29 18:18 aria_log_control
-rw-rw---- 1 root root 1014 Oct 29 18:18 ib_buffer_pool
-rw-rw---- 1 root root 50331648 Oct 29 18:18 ib_logfile0
-rw-rw---- 1 root root 50331648 Oct 29 18:18 ib_logfile1
-rw-rw---- 1 root root 12582912 Oct 29 18:18 ibdata1
-rw-rw---- 1 root root 0 Oct 29 18:18 multi-master.info
drwx------ 1 root root 2696 Nov 7 13:19 mysql
drwx------ 1 root root 12 Nov 7 13:19 performance_schema
drwx------ 1 root root 48 Nov 7 13:19 yypy
However, when mounting a PersistentVolume or just a simple Docker volume to /var/lib/mysql, it's initially empty and therefore the script thinks your database is uninitialized. You can reproduce this issue with:
$ docker run -it --rm --mount type=tmpfs,destination=/var/lib/mysql ignasiet/aravomysql:latest
error: database is uninitialized and password option is not specified
You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD
If you have a bunch of scripts you need to run to initialize the database, you have two options:
Create a Dockerfile based on the mysql Dockerfile, and add shell scripts or SQL scripts to /docker-entrypoint-initdb.d. More details available here under "Initializing a fresh instance".
Use the initContainers property in the PodTemplateSpec, something like:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: statefulset-mydb
spec:
  serviceName: mydb-pod
  template:
    metadata:
      labels:
        name: mydb-pod
    spec:
      containers:
        - name: aravo-database
          image: ignasiet/aravomysql
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: volume-mydb
              mountPath: /var/lib/mysql
      initContainers:
        - name: aravo-database-init
          command:
            - /script/to/initialize/database
          image: ignasiet/aravomysql
          volumeMounts:
            - name: volume-mydb
              mountPath: /var/lib/mysql
      volumes:
        - name: volume-mydb
          persistentVolumeClaim:
            claimName: config-mydb
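The first option can be sketched like this (the ./initdb/ directory and its contents are hypothetical; any *.sh, *.sql or *.sql.gz files copied there run once, when the data directory is empty):

```dockerfile
# Build on the official mysql image and ship the init scripts inside it.
FROM mysql:5.7
COPY ./initdb/ /docker-entrypoint-initdb.d/
```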
The issue you are facing is not specific to StatefulSets. It is because of the persistent volume. If you use a StatefulSet without the persistent volume, you will not face this problem; conversely, if you use a Deployment with a persistent volume, you will face this issue.
Why? Ok, let me explain.
Setting one of the environment variables MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD or MYSQL_RANDOM_ROOT_PASSWORD is mandatory for creating a new database. Read the Environment Variables part here.
But if you initialize the database from a script, you are not required to provide them. Look at this line of docker-entrypoint.sh here. It checks whether there is already a database in the /var/lib/mysql directory. If there is none, it will try to create one, and if you don't provide any of the specified environment variables you get the error you are seeing. But if it finds an existing database there, it will not try to create one and you will not see the error.
Now, the question is: you have already initialized the database, so why is it still complaining about the environment variables?
Here the persistent volume comes into play. As you have mounted the persistent volume at the /var/lib/mysql directory, this directory now points to your persistent volume, which is currently empty. So when your container runs the docker-entrypoint.sh script, it does not find any database in /var/lib/mysql, because the directory points to the empty persistent volume instead of the original /var/lib/mysql directory of your Docker image, which had the initialized database. So it will try to create a new database and will complain because you haven't provided the MYSQL_ROOT_PASSWORD environment variable.
When you don't use any persistent volume, your /var/lib/mysql directory points to the original directory, which contains the initialized database. So you don't see the error then.
Then, how can you initialize the MySQL database properly?
To initialize MySQL from a script, you just need to put the script into /docker-entrypoint-initdb.d. Use a vanilla mysql image, put your initialization script into a volume, then mount the volume at the /docker-entrypoint-initdb.d directory. MySQL will be initialized.
Check this answer for details on how to initialize from script: https://stackoverflow.com/a/45682775/7695859
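A minimal sketch of that pod spec, assuming the init script is shipped in a ConfigMap named mydb-initdb (a hypothetical name) and mounted at /docker-entrypoint-initdb.d:

```yaml
spec:
  containers:
    - name: mydb
      image: mysql:5.7
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: "example"   # sketch only; prefer a Secret
      volumeMounts:
        - name: volume-mydb
          mountPath: /var/lib/mysql
        - name: initdb
          mountPath: /docker-entrypoint-initdb.d
  volumes:
    - name: initdb
      configMap:
        name: mydb-initdb
    - name: volume-mydb
      persistentVolumeClaim:
        claimName: config-mydb
```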

php_network_getaddresses: getaddrinfo failed error in Docker's adminer

I have a problem with access to adminer in my Docker container with a Laravel 5/MySQL app. I got the error:
SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: Name does not resolve
My docker-compose.yml :
version: '3'
services:
  votes_app:
    build:
      context: ./web
      dockerfile: Dockerfile.yml
    container_name: votes_app_container
    environment:
      - APACHE_RUN_USER=#1000
    volumes:
      - ${APP_PATH_HOST}:${APP_PTH_CONTAINER}
    ports:
      - 8081:80
    working_dir: ${APP_PTH_CONTAINER}
  votes_db:
    image: mysql:5.6.41
    container_name: votes_db_container
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 1
    volumes:
      - ${DB_PATH_HOST}:/var/lib/mysql
  votes_adminer:
    image: adminer
    container_name: votes_adminer_container
    restart: always
    ports:
      - 8082:8080
    links:
      - votes_db
  votes_composer:
    image: composer:1.6
    container_name: votes_composer_container
    volumes:
      - ${APP_PATH_HOST}:${APP_PTH_CONTAINER}
    working_dir: ${APP_PTH_CONTAINER}
    command: composer install --ignore-platform-reqs
I have different ports for the app and db containers.
Here https://hub.docker.com/_/adminer/ I found:
Usage with external server: You can specify the default host with the ADMINER_DEFAULT_SERVER environment variable. This is useful if you are connecting to an external server or a docker container named something other than the default db.
docker run -p 8080:8080 -e ADMINER_DEFAULT_SERVER=mysql adminer
In the console of my app I ran the command
$ docker run -p 8089:8080 -e ADMINER_DEFAULT_SERVER=votes_db adminer
with a port unused in my apps, but this command was not successful anyway, as I got the same error when trying to log in to adminer: https://imgur.com/a/4HCdC1W.
Which is the right way?
MODIFIED BLOCK # 2:
In my docker-compose.yml :
version: '3'
services:
  votes_app:
    build:
      context: ./web
      dockerfile: Dockerfile.yml
    container_name: votes_app_container
    environment:
      - APACHE_RUN_USER=#1000
    volumes:
      - ${APP_PATH_HOST}:${APP_PTH_CONTAINER}
    ports:
      - 8081:80
    working_dir: ${APP_PTH_CONTAINER}
  db:
    image: mysql:5.6.41
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 1
    volumes:
      - ${DB_PATH_HOST}:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8082:8080
    links:
      - db
  votes_composer:
    image: composer:1.6
    container_name: votes_composer_container
    volumes:
      - ${APP_PATH_HOST}:${APP_PTH_CONTAINER}
    working_dir: ${APP_PTH_CONTAINER}
    command: composer install --ignore-platform-reqs
I rebuilt the app but I failed to log in to adminer: https://imgur.com/a/JWVGfBA
I ran in the console of my OS, pointing to another unused port, 8089:
$ docker run -p 8089:8080 -e ADMINER_DEFAULT_SERVER=db adminer
PHP 7.2.11 Development Server started at Thu Nov 1 07:00:46 2018
[Thu Nov 1 07:01:11 2018] ::ffff:172.17.0.1:34048 [200]: /
[Thu Nov 1 07:01:20 2018] ::ffff:172.17.0.1:34052 [302]: /
[Thu Nov 1 07:01:21 2018] ::ffff:172.17.0.1:34060 [403]: /?server=db&username=root
But again there was an error logging in to adminer on port 8089, and the error message was different:
https://imgur.com/a/a8qM4bt
What is wrong?
MODIFIED BLOCK # 3:
I suppose so, as after I rebuilt the container I entered the box and saw "root" in the console output:
$ docker-compose exec votes_app bash
root@a4aa907373f5:/var/www/html# ls -la
total 1063
drwxrwxrwx 1 root root 4096 Oct 27 12:01 .
drwxr-xr-x 1 root root 4096 Oct 16 00:11 ..
-rwxrwxrwx 1 root root 234 Oct 13 07:15 .editorconfig
-rwxrwxrwx 1 root root 1029 Oct 31 06:10 .env
-rwxrwxrwx 1 root root 651 Oct 13 07:15 .env.example
drwxrwxrwx 1 root root 4096 Nov 1 11:10 .git
-rwxrwxrwx 1 root root 111 Oct 13 07:15 .gitattributes
-rwxrwxrwx 1 root root 294 Oct 13 07:15 .gitignore
-rwxrwxrwx 1 root root 4356 Oct 13 07:15 1.txt
drwxrwxrwx 1 root root 0 Oct 13 07:15 __DOCS
drwxrwxrwx 1 root root 0 Oct 13 07:15 __SQL
drwxrwxrwx 1 root root 4096 Oct 13 07:15 app
-rwxrwxrwx 1 root root 1686 Oct 13 07:15 artisan
drwxrwxrwx 1 root root 0 Oct 13 07:15 bootstrap
-rwxrwxrwx 1 root root 2408 Oct 13 07:15 composer.json
-rwxrwxrwx 1 root root 200799 Oct 13 07:15 composer.lock
drwxrwxrwx 1 root root 4096 Oct 13 07:15 config
drwxrwxrwx 1 root root 4096 Oct 13 07:15 database
-rwxrwxrwx 1 root root 52218 Oct 17 05:25 db_1_err.txt
-rwxrwxrwx 1 root root 482562 Oct 13 07:15 package-lock.json
-rwxrwxrwx 1 root root 1168 Oct 13 07:15 package.json
-rwxrwxrwx 1 root root 1246 Oct 13 07:15 phpunit.xml
drwxrwxrwx 1 root root 4096 Oct 13 07:15 public
-rwxrwxrwx 1 root root 66 Oct 13 07:15 readme.txt
drwxrwxrwx 1 root root 0 Oct 13 07:15 resources
drwxrwxrwx 1 root root 4096 Oct 13 07:15 routes
-rwxrwxrwx 1 root root 563 Oct 13 07:15 server.php
drwxrwxrwx 1 root root 4096 Oct 13 07:15 storage
drwxrwxrwx 1 root root 0 Oct 13 07:15 tests
drwxrwxrwx 1 root root 8192 Nov 1 13:05 vendor
-rwxrwxrwx 1 root root 1439 Oct 13 07:15 webpack.mix.js
-rwxrwxrwx 1 root root 261143 Oct 13 07:15 yarn.lock
root@a4aa907373f5:/var/www/html# echo $USER
root@a4aa907373f5:/var/www/html# uname -a
Linux a4aa907373f5 4.15.0-36-generic #39-Ubuntu SMP Mon Sep 24 16:19:09 UTC 2018 x86_64 GNU/Linux
Could this be an issue anyway?
MODIFIED BLOCK # 4
I remade this Docker setup: I set default names for the containers (I suppose the custom names caused some confusion) and set image: composer:1.8, the latest version.
So in my docker-compose.yml :
version: '3.1'
services:
  web:
    build:
      context: ./web
      dockerfile: Dockerfile.yml
    environment:
      - APACHE_RUN_USER=#1000
    volumes:
      - ${APP_PATH_HOST}:${APP_PTH_CONTAINER}
    ports:
      - 8081:80
    working_dir: ${APP_PTH_CONTAINER}
  db:
    image: mysql:5.6.41
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 1
    volumes:
      - ${DB_PATH_HOST}:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8082:8080
    links:
      - db
  composer:
    image: composer:1.8
    volumes:
      - ${APP_PATH_HOST}:${APP_PTH_CONTAINER}
    working_dir: ${APP_PTH_CONTAINER}
    command: composer install --ignore-platform-reqs
and in web/Dockerfile.yml:
FROM php:7.2-apache
RUN apt-get update -y && apt-get install -y libpng-dev nano
RUN docker-php-ext-install \
        pdo_mysql \
    && a2enmod \
        rewrite
But anyway, after rebuilding the project and connecting to adminer at the URL
http://127.0.0.1:8082
I got the error:
SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: Try again
P.S.:
I have another Laravel 5.0 / php:5.6 / composer:1.6 / mcrypt-installed Docker project on the same local server on my laptop, which works fine for me: I can open adminer and log in to the db from that app.
This docker project has files:
docker-compose.yml:
version: '3.1'
services:
  web:
    build:
      context: ./web
      dockerfile: Dockerfile.yml
    environment:
      - APACHE_RUN_USER=#1000
    volumes:
      - ${APP_PATH_HOST}:${APP_PTH_CONTAINER}
    ports:
      - 8085:80
    working_dir: ${APP_PTH_CONTAINER}
  db:
    image: mysql:5.5.62
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 1
    volumes:
      - ${DB_PATH_HOST}:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8086:8080
    links:
      - db
  composer:
    image: composer:1.6
    volumes:
      - ${APP_PATH_HOST}:${APP_PTH_CONTAINER}
    working_dir: ${APP_PTH_CONTAINER}
    command: composer install --ignore-platform-reqs
and Dockerfile.yml:
FROM php:5.6-apache
RUN apt-get update -y && apt-get install -y libpng-dev nano libmcrypt-dev
RUN docker-php-ext-install \
        pdo_mysql \
        mcrypt \
    && a2enmod \
        rewrite
Is this issue PHP 7.2-specific (like some packages missing)?
MODIFIED BLOCK # 5:
With this defined:
phpmyadmin:
  depends_on:
    - db
  image: phpmyadmin/phpmyadmin
  restart: always
  ports:
    - 8082:8080
  environment:
    PMA_HOST: db
    MYSQL_ROOT_PASSWORD: 1
Opening http://127.0.0.1:8082/ I got an error in the browser:
This site can’t be reached The webpage at http://127.0.0.1:8082/ might be temporarily down or it may have moved permanently to a new web address.
ERR_SOCKET_NOT_CONNECTED
While trying the app URL http://127.0.0.1:8081/public/ I got the error:
SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution
MODIFIED BLOCK # 6:
I redid it with phpmyadmin in docker-compose.yml:
version: '3.1'
services:
  # docker run -p 8089:8080 -e ADMINER_DEFAULT_SERVER=db adminer
  web:
    # env_file:
    #   - ./mysql.env
    build:
      context: ./web
      dockerfile: Dockerfile.yml
    environment:
      - APACHE_RUN_USER=#1000
    volumes:
      - ${APP_PATH_HOST}:${APP_PTH_CONTAINER}
    ports:
      - 8081:80
    working_dir: ${APP_PTH_CONTAINER}
  db:
    image: mysql:5.6.41
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 1
    volumes:
      - ${DB_PATH_HOST}:/var/lib/mysql
  phpmyadmin:
    depends_on:
      - db
    image: phpmyadmin/phpmyadmin
    restart: always
    ports:
      - 8082:80
    environment:
      PMA_HOST: db
      MYSQL_ROOT_PASSWORD: 1
  composer:
    image: composer:1.8
    volumes:
      - ${APP_PATH_HOST}:${APP_PTH_CONTAINER}
    working_dir: ${APP_PTH_CONTAINER}
    command: composer install --ignore-platform-reqs
but trying to log in to phpMyAdmin at
http://127.0.0.1:8082
I got the same error: https://imgur.com/a/cGeudI6
Also, here are my ports:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
471de34926b9 phpmyadmin/phpmyadmin "/run.sh supervisord…" 41 minutes ago Up 41 minutes 9000/tcp, 0.0.0.0:8082->80/tcp votes_docker_phpmyadmin_1
226fcdbeeb25 mysql:5.6.41 "docker-entrypoint.s…" 41 minutes ago Restarting (1) 49 seconds ago votes_docker_db_1
1cb1efb10561 votes_docker_web "docker-php-entrypoi…" 41 minutes ago Up 41 minutes 0.0.0.0:8081->80/tcp votes_docker_web_1
d6718cd16256 adminer "entrypoint.sh docke…" 13 hours ago Up About an hour 0.0.0.0:8088->8080/tcp ads_docker_adminer_1
1928a54e1d66 mysql:5.5.62 "docker-entrypoint.s…" 13 hours ago Up About an hour 3306/tcp ads_docker_db_1
e43b2a1e9cc7 adminer "entrypoint.sh docke…" 6 days ago Up About an hour 0.0.0.0:8086->8080/tcp youtubeapi_demo_adminer_1
47a034fca5a2 mysql:5.5.62 "docker-entrypoint.s…" 6 days ago Up About an hour 3306/tcp youtubeapi_demo_db_1
3dcc1a4ce8f0 adminer "entrypoint.sh docke…" 6 weeks ago Up About an hour 0.0.0.0:8083->8080/tcp lprods_adminer_container
933d9fffaf76 postgres:9.6.10-alpine "docker-entrypoint.s…" 6 weeks ago Up About an hour 0.0.0.0:5433->5432/tcp lprods_db_container
 
MODIFIED BLOCK # 7
I am not sure what debugging info I can provide, but it seems the logging has some warnings. Are they critical?
What additional debugging info can I provide?
serge@serge:/mnt/_work_sdb8/wwwroot/lar/DockerApps/votes_docker$ docker-compose up -d --build
Creating network "votes_docker_default" with the default driver
Building web
Step 1/3 : FROM php:7.2-apache
---> cf1a377ba77f
Step 2/3 : RUN apt-get update -y && apt-get install -y libpng-dev nano
---> Using cache
---> 2c4bce73e8cc
Step 3/3 : RUN docker-php-ext-install pdo_mysql && a2enmod rewrite
---> Using cache
---> 241c9bf59ac0
Successfully built 241c9bf59ac0
Successfully tagged votes_docker_web:latest
Creating votes_docker_composer_1 ... done
Creating votes_docker_web_1 ... done
Creating votes_docker_db_1 ... done
Creating votes_docker_phpmyadmin_1 ... done
serge@serge:/mnt/_work_sdb8/wwwroot/lar/DockerApps/votes_docker$ clear
serge@serge:/mnt/_work_sdb8/wwwroot/lar/DockerApps/votes_docker$ docker logs --tail=20 votes_docker_web_1
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.20.0.4. Set the 'ServerName' directive globally to suppress this message
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.20.0.4. Set the 'ServerName' directive globally to suppress this message
[Wed Dec 26 12:26:34.113194 2018] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.25 (Debian) PHP/7.2.11 configured -- resuming normal operations
[Wed Dec 26 12:26:34.113247 2018] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
serge@serge:/mnt/_work_sdb8/wwwroot/lar/DockerApps/votes_docker$ docker logs --tail=20 votes_docker_db_1
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'INNODB_CMPMEM'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'INNODB_CMP_RESET'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'INNODB_CMP'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'INNODB_LOCKS'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'INNODB_TRX'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'BLACKHOLE'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'ARCHIVE'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'MRG_MYISAM'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'MyISAM'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'MEMORY'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'CSV'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'sha256_password'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'mysql_old_password'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'mysql_native_password'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'binlog'
2018-12-26 12:26:43 1 [Note] mysqld: Shutdown complete
serge@serge:/mnt/_work_sdb8/wwwroot/lar/DockerApps/votes_docker$ docker logs --tail=20 votes_docker_composer_1
> #php artisan package:discover
Discovered Package: aloha/twilio
Discovered Package: barryvdh/laravel-debugbar
Discovered Package: beyondcode/laravel-dump-server
Discovered Package: cviebrock/eloquent-sluggable
Discovered Package: davejamesmiller/laravel-breadcrumbs
Discovered Package: fideloper/proxy
Discovered Package: intervention/image
Discovered Package: itsgoingd/clockwork
Discovered Package: jrean/laravel-user-verification
Discovered Package: laravel/tinker
Discovered Package: laravelcollective/html
Discovered Package: mews/captcha
Discovered Package: nesbot/carbon
Discovered Package: nunomaduro/collision
Discovered Package: proengsoft/laravel-jsvalidation
Discovered Package: rap2hpoutre/laravel-log-viewer
Discovered Package: themsaid/laravel-mail-preview
Discovered Package: yajra/laravel-datatables-oracle
Package manifest generated successfully.
serge@serge:/mnt/_work_sdb8/wwwroot/lar/DockerApps/votes_docker$ docker logs --tail=20 votes_docker_phpmyadmin_1
phpMyAdmin not found in /var/www/html - copying now...
Complete! phpMyAdmin has been successfully copied to /var/www/html
/usr/lib/python2.7/site-packages/supervisor/options.py:461: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory);
you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
'Supervisord is running as root and it is searching '
2018-12-26 12:26:35,973 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
2018-12-26 12:26:35,973 INFO Included extra file "/etc/supervisor.d/nginx.ini" during parsing
2018-12-26 12:26:35,973 INFO Included extra file "/etc/supervisor.d/php.ini" during parsing
2018-12-26 12:26:35,984 INFO RPC interface 'supervisor' initialized
2018-12-26 12:26:35,984 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2018-12-26 12:26:35,984 INFO supervisord started with pid 1
2018-12-26 12:26:36,986 INFO spawned: 'php-fpm' with pid 23
2018-12-26 12:26:36,988 INFO spawned: 'nginx' with pid 24
[26-Dec-2018 12:26:37] NOTICE: fpm is running, pid 23
[26-Dec-2018 12:26:37] NOTICE: ready to handle connections
2018-12-26 12:26:38,094 INFO success: php-fpm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-12-26 12:26:38,095 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
What is wrong?
Thanks!
I was having the same issue; then I found that the default value in the Adminer application for the server address is 'db', which didn't match the service name of my MySQL container.
Try with phpMyAdmin :)
version: '3.2'
services:
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: myUserPass
      MYSQL_DATABASE: mydb
      MYSQL_USER: myUser
      MYSQL_PASSWORD: myUser
  phpmyadmin:
    depends_on:
      - db
    image: phpmyadmin/phpmyadmin
    restart: always
    ports:
      - 8088:80
    environment:
      PMA_HOST: db
      MYSQL_ROOT_PASSWORD: myUserPass
You can read about it at
https://hub.docker.com/_/adminer/
Example
version: '3.1'
services:
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
  db:
    image: mysql:5.6
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
The problem with your setup is the environment variable DB_PATH_HOST. You have set up everything fine in your compose file, but you are supposed to define the environment variable DB_PATH_HOST before running docker-compose. Since the environment variable is not defined, it throws an error. See this for more details on environment variables and their precedence in Docker.
So what you should have done is: before starting the Docker container, define the environment variable, either by defining it in the compose file, by exporting it as a shell variable before running docker-compose, by using an env file, or by using the ENV instruction in a Dockerfile. (These are all the possible ways of defining environment variables; the method that comes first takes priority. Refer to this for more info.)
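For example, by exporting it as a shell variable before running compose (a sketch; /tmp/mysql is an example path only):

```shell
#!/bin/sh
# Define the variable compose will substitute into ${DB_PATH_HOST}.
export DB_PATH_HOST=/tmp/mysql
mkdir -p "$DB_PATH_HOST"
# docker-compose up -d   # compose now sees DB_PATH_HOST in its environment
echo "DB_PATH_HOST=$DB_PATH_HOST"
```

A .env file placed next to docker-compose.yml with the same DB_PATH_HOST=... line works the same way, since compose reads it automatically.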
So the proper docker-compose.yml file should be as follows.
version: '3.2'
services:
  db:
    image: mysql:5.6.41
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 1
      DB_PATH_HOST: /tmp/mysql  # this is the host location where mysql data will be stored.
    volumes:
      - ${DB_PATH_HOST}:/var/lib/mysql
  phpmyadmin:
    depends_on:
      - db
    image: phpmyadmin/phpmyadmin
    restart: always
    ports:
      - 8082:80
    environment:
      PMA_HOST: db
      MYSQL_ROOT_PASSWORD: 1
Now coming to the next point: I see from your discussions that you concluded that removing volumes from the db container fixed your problem. But actually it didn't. How?
First let me explain why a volume is used here. The data generated by mysql should be stored somewhere. Docker by default runs containers in non-persistent mode, which means all the data generated by a running Docker container is erased when that container is brought down/killed. So in order to persist (store/save) data we use volumes. There are different types of volumes used in Docker; I encourage you to read the Storage documentation of Docker for more details. The type of volume used here is a bind mount: you bind a host directory to a Docker directory, and Docker stores all the data directly on the host machine, so that even if the container is brought down the data is still preserved.
Hence, if you don't use volumes with mysql, all the db changes, irrespective of whatever you do, will be lost whenever the container is stopped.
Bonus points:
By default the MySQL container doesn't allow remote connections. So if you want to access mysql from anywhere else apart from phpmyadmin, you have to allow remote connections.
Since we are preserving the data here, the root password will be set only the first time you start the mysql container. From then on, the root password environment variable will be ignored.
If you log into Docker containers using docker exec, mostly you will see that you are root. That's because whenever you create a Docker container from a Dockerfile, using either docker build or docker-compose build, Docker will run everything as the root user unless the Dockerfile contains an instruction to create and use a new user.
Whenever you run the above compose file, you can see that the ownership of the mysql data location changes. That's because whenever you mount a host directory into Docker, Docker changes the file permissions according to the user and group in that container's Dockerfile definition. Here mysql defines a user and group called mysql with uid and gid 999, hence /tmp/mysql will have 999:999 as its ownership. If these ids are mapped to another user account on your system, you will see those names instead of the ids when you run ls -al on the host machine. If the ids are not mapped, you will see the ids directly.
I've used /tmp/mysql as the mysql data directory as an example. Please don't use it for real data, as the contents of /tmp are removed whenever the system restarts.
The question has already been answered, but I'm adding my solution to a similar problem here for reference.
Adding a links parameter to the phpmyadmin/adminer service block of my docker-compose solved it for me, assuming the service name of the database block is in fact db, as in the examples in the answers above. This link makes it possible to use 'db' as the host in the phpmyadmin login interface, and it will connect.
links:
  - db:db
Changing the service name of the mysql image to db made the difference for me.
You can read about it at https://hub.docker.com/_/adminer/
services:
  db:
    image: mysql:5.6