Why is my application not connecting to MySQL?

I ran docker-compose, but my application won't connect to the database. What could be the problem?
The application itself starts fine, but as soon as I turn it into a container, it fails to connect to the database and a lot of errors occur.
docker-compose:
version: '3.7'
services:
  db:
    image: mysql:8.0.17
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 123test321
    ports:
      - 3306:3306
    networks:
      - employee-mysql
    volumes:
      - /home/alexey/temp/mysql01:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 3380:8080
  webapp:
    build:
      context: .
    restart: always
    ports:
      - 80:8080
    networks:
      - employee-mysql
    depends_on:
      - db
networks:
  employee-mysql:
This is the Dockerfile:
FROM anapsix/alpine-java:8_jdk
FROM openjdk:8
ADD target/webapp-1.0.1.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
This is my log:
Starting work_db_1 ... done
Starting work_adminer_1 ... done
Starting work_webapp_1 ... done
Attaching to work_db_1, work_adminer_1, work_webapp_1
db_1 | 2019-09-09T17:29:20.983653Z 0 [Warning] [MY-011070] [Server] 'Disabling symbolic links using --skip-symbolic-links (or equivalent) is the default. Consider not using this option as it' is deprecated and will be removed in a future release.
db_1 | 2019-09-09T17:29:20.983965Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.17) starting as process 1
db_1 | 2019-09-09T17:29:22.697164Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
db_1 | 2019-09-09T17:29:22.704622Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
db_1 | 2019-09-09T17:29:22.807613Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.17' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.
db_1 | 2019-09-09T17:29:23.019957Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Socket: '/var/run/mysqld/mysqlx.sock' bind-address: '::' port: 33060
adminer_1 | PHP 7.3.9 Development Server started at Mon Sep 9 17:29:21 2019
webapp_1 | [INFO ] 2019-09-09 17:29:30.149 [main] ApplicationKt - Starting ApplicationKt v1.0.1 on d52aa0a7ff7e with PID 1 (/app.jar started by root in /)
webapp_1 | [INFO ] 2019-09-09 17:29:30.184 [main] ApplicationKt - No active profile set, falling back to default profiles: default
webapp_1 | [INFO ] 2019-09-09 17:29:38.634 [main] RepositoryConfigurationDelegate - Bootstrapping Spring Data repositories in DEFAULT mode.
webapp_1 | [INFO ] 2019-09-09 17:29:39.562 [main] RepositoryConfigurationDelegate - Finished Spring Data repository scanning in 894ms. Found 19 repository interfaces.
webapp_1 | [INFO ] 2019-09-09 17:29:42.756 [main] PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration' of type [org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration$$EnhancerBySpringCGLIB$$e8bed525] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
webapp_1 | [INFO ] 2019-09-09 17:29:43.087 [main] PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.security.config.annotation.configuration.ObjectPostProcessorConfiguration' of type [org.springframework.security.config.annotation.configuration.ObjectPostProcessorConfiguration$$EnhancerBySpringCGLIB$$816c9d5f] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
webapp_1 | [INFO ] 2019-09-09 17:29:43.130 [main] PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'objectPostProcessor' of type [org.springframework.security.config.annotation.configuration.AutowireBeanFactoryObjectPostProcessor] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
webapp_1 | [INFO ] 2019-09-09 17:29:43.139 [main] PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.security.access.expression.method.DefaultMethodSecurityExpressionHandler#7d61eb55' of type [org.springframework.security.access.expression.method.DefaultMethodSecurityExpressionHandler] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
webapp_1 | [INFO ] 2019-09-09 17:29:43.159 [main] PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.security.config.annotation.method.configuration.GlobalMethodSecurityConfiguration' of type [org.springframework.security.config.annotation.method.configuration.GlobalMethodSecurityConfiguration$$EnhancerBySpringCGLIB$$a6414011] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
webapp_1 | [INFO ] 2019-09-09 17:29:43.202 [main] PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.security.config.annotation.method.configuration.Jsr250MetadataSourceConfiguration' of type [org.springframework.security.config.annotation.method.configuration.Jsr250MetadataSourceConfiguration$$EnhancerBySpringCGLIB$$cb965827] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
webapp_1 | [INFO ] 2019-09-09 17:29:43.237 [main] PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'jsr250MethodSecurityMetadataSource' of type [org.springframework.security.access.annotation.Jsr250MethodSecurityMetadataSource] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
webapp_1 | [INFO ] 2019-09-09 17:29:43.244 [main] PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'methodSecurityMetadataSource' of type [org.springframework.security.access.method.DelegatingMethodSecurityMetadataSource] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
webapp_1 | [INFO ] 2019-09-09 17:29:45.329 [main] HikariDataSource - HikariPool-1 - Starting...
webapp_1 | [ERROR] 2019-09-09 17:29:46.161 [main] HikariPool - HikariPool-1 - Exception during pool initialization.
webapp_1 | com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
webapp_1 |
webapp_1 | The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
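A note that may be relevant to this failure mode: "Communications link failure ... 0 milliseconds ago" at startup usually means the app either resolved the wrong host or tried to connect before mysqld finished initializing. `depends_on` only controls start order; it does not wait for MySQL to be ready. One way to hedge against that race, as a sketch rather than the asker's actual configuration (the password is copied from the compose file above, and the exact healthcheck command may need adjusting for the image):

```yaml
services:
  db:
    image: mysql:8.0.17
    # a healthcheck lets `docker-compose ps` (and compose versions that
    # support depends_on conditions) see when mysqld is actually ready
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "127.0.0.1", "-p123test321"]
      interval: 5s
      timeout: 3s
      retries: 10
  webapp:
    depends_on:
      - db
    # with classic v3 compose files depends_on does not wait for health,
    # so the app should retry the connection (or use a wait-for-it style
    # entrypoint); restart: always also papers over the initial race
    restart: always
```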

All right.
version: '3.7'
services:
  db:
    image: mysql:8.0.17
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: ****
    ports:
      - 3306:3306
    volumes:
      - /home/alexey/temp/mysql01:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 3380:8080
  webapp:
    build:
      context: .
    environment:
      CONTEXT_PATH: /a-test
      DB_HOST: db
      DB_PORT: 3306
      DB_NAME: ****
      DB_USER: ****
      DB_PASSWORD: ****
      MAIL_LOGIN: ****
      MAIL_PASSWORD: ****
    ports:
      - 80:8080
    depends_on:
      - db
And the Dockerfile:
FROM anapsix/alpine-java:8_jdk
FROM openjdk:8
ADD target/webapp-*.jar webapp.jar
EXPOSE 8080
ENTRYPOINT java -jar webapp.jar
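For context, a Spring Boot application can pick up those compose environment variables through property placeholders. A minimal sketch, assuming the app uses the standard Spring datasource properties and that the variable names match the compose file above (this is not the asker's actual properties file):

```properties
# src/main/resources/application.properties (hypothetical)
# DB_HOST defaults to localhost outside Docker; inside the compose
# network it is set to the service name "db"
spring.datasource.url=jdbc:mysql://${DB_HOST:localhost}:${DB_PORT:3306}/${DB_NAME}
spring.datasource.username=${DB_USER}
spring.datasource.password=${DB_PASSWORD}
server.servlet.context-path=${CONTEXT_PATH:/}
```

The key point is that inside the compose network the hostname is the service name db, not localhost or 127.0.0.1.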

Related

Scala app in docker compose - cannot connect to mysql

I have seen multiple other questions on similar topics, but none of them provides a solution that solves my problem. I have, to the best of my ability, checked to make sure that I have tried solutions provided in other similar questions, but none of them appear to work (or I am reading them incorrectly).
I have a small Scala app to ingest data into a MySQL database. With both the app and the DB running locally, everything works fine. When trying to run using docker compose, I can't connect to the database.
Setup:
Docker compose file:
version: '1'
services:
  ingester:
    restart: always
    build:
      context: ./data_ingester
      dockerfile: Dockerfile
    container_name: data_ingester
    links:
      - database
    depends_on:
      - database
  grafana:
    restart: always
    image: grafana/grafana
    container_name: grafana_display
    volumes:
      - ./grafana_display/grafana-storage:/var/lib/grafana
    ports:
      - 3000:3000
    links:
      - database
    depends_on:
      - database
  database:
    restart: always
    image: mysql:5.7
    container_name: database
    ports:
      - 3306:3306
    volumes:
      - ./mysql_db_setup/db_setup.sql:/docker-entrypoint-initdb.d/db_setup.sql
    environment:
      MYSQL_ROOT_PASSWORD: pwd
      MYSQL_DATABASE: db
      MYSQL_USER: mysql_user
      MYSQL_PASSWORD: mysql_pw
volumes:
  ingester:
  grafana:
  database:
Starting all this using docker-compose, I am then able to use docker exec to get a bash shell in the database-container and add some sample data to a table.
Following that, I go to the web-ui exposed by the grafana-container and set up a MySQL datasource by specifying host:port as "database:3306" (i.e. the name of the database-container as host with the appropriate port).
This works, I can connect to the MySQL db from the grafana-container and visualise the sample data I inserted, so the database seems reachable and all that.
In my Scala program, which is run using sbt in the ingester-container, I am now setting up a connection as follows:
Class.forName("com.mysql.jdbc.Driver")
ConnectionPool.singleton(
  "jdbc:mysql://database:3306?characterEncoding=UTF-8",
  "mysql_user",
  "mysql_pw"
)
Reading all the other similar questions, I figured this is the right way. Given that I can reach the database from the grafana-container when specifying the connection in this way also makes me believe this is how it should be specified. However, when trying to run this, the scala-program dies with the following error:
data_ingester | [info] welcome to sbt 1.6.1 (Oracle Corporation Java 11.0.14)
data_ingester | [info] loading project definition from /DataIngester/project
data_ingester | [info] loading settings for project dataingester from build.sbt ...
data_ingester | [info] set current project to DataIngester (in build file:/DataIngester/)
data_ingester | [info] running Main
data_ingester | 22:57:35.987 [sbt-bg-threads-1] DEBUG scalikejdbc.ConnectionPool$ - Registered connection pool : ConnectionPool(url:jdbc:mysql://database:3306?characterEncoding=UTF-8, user:mysql_user) using factory : <default>
data_ingester | 22:57:35.993 [sbt-bg-threads-1] DEBUG scalikejdbc.ConnectionPool$ - Registered singleton connection pool : ConnectionPool(url:jdbc:mysql://database:3306?characterEncoding=UTF-8, user:mysql_user)
data_ingester | [error] java.net.ConnectException: Connection refused (Connection refused)
data_ingester | [error] at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
data_ingester | [error] at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412)
data_ingester | [error] at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255)
data_ingester | [error] at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237)
data_ingester | [error] at java.base/java.net.Socket.connect(Socket.java:609)
data_ingester | [error] at java.base/sun.net.NetworkClient.doConnect(NetworkClient.java:177)
data_ingester | [error] at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:474)
data_ingester | [error] at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:569)
data_ingester | [error] at java.base/sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
data_ingester | [error] at java.base/sun.net.www.http.HttpClient.New(HttpClient.java:341)
data_ingester | [error] at java.base/sun.net.www.http.HttpClient.New(HttpClient.java:362)
data_ingester | [error] at java.base/sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1253)
data_ingester | [error] at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1187)
data_ingester | [error] at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1081)
data_ingester | [error] at java.base/sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:1015)
data_ingester | [error] at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1592)
data_ingester | [error] at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1520)
data_ingester | [error] at java.base/java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:527)
data_ingester | [error] at scalaj.http.HttpRequest.doConnection(Http.scala:367)
data_ingester | [error] at scalaj.http.HttpRequest.exec(Http.scala:343)
data_ingester | [error] at scalaj.http.HttpRequest.asString(Http.scala:492)
data_ingester | [error] at
data_ingester | [error] stack trace is suppressed; run 'last Compile / run' for the full output
data_ingester | [error] (Compile / run) java.net.ConnectException: Connection refused (Connection refused)
data_ingester | [error] Total time: 122 s (02:02), completed Feb 10, 2022, 10:57:36 PM
The most common errors seem to be that you have specified the host incorrectly or not exposed the right ports, but since I can connect to the MySQL-container from the grafana-container I am inclined to believe both the host and ports are set correctly, so I am not sure what I am overlooking.
If you have made it through this far, thank you for reading, and any help is very appreciated!
(FYI: setting up a MySQL db on localhost and running the Scala program locally without Docker, just connecting to "localhost:3306", works fine, so the connection code itself seems to be alright; something only breaks when running it using docker compose.)

Error while Dockerizing a CRUD RESTful API with Go, MySQL

I'm getting an issue while running docker compose up. You can see the docker-compose.yaml file that I wrote for running the web API with a MySQL db. I've also attached the error I'm currently facing. It would be a great help if you can help me in this regard. Thanks.
version: '3'
services:
  app:
    container_name: web_api
    build: .
    ports:
      - 8080:5000
    restart: on-failure
    volumes:
      - api:/usr/src/app/
    depends_on:
      - webapi-mysql
    networks:
      - webapi
  webapi-mysql:
    image: mysql:8.0.26
    container_name: db_mysql
    ports:
      - 3308:3306
    environment:
      - MYSQL_ROOT_HOST=${DB_HOST}
      - MYSQL_USER=${DB_USER}
      - MYSQL_PASSWORD=${DB_PASSWORD}
      - MYSQL_DATABASE=${DB_NAME}
      - MYSQL_ROOT_PASSWORD=${DB_PASSWORD}
    volumes:
      - database_mysql:/var/lib/mysql
    networks:
      - webapi
volumes:
  api:
  database_mysql:
# Networks to be created to facilitate communication between containers
networks:
  webapi:
    driver: bridge
The error coming up in my CLI is as follows:
Attaching to db_mysql, web_api
db_mysql | 2021-11-08 13:20:28+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.26-1debian10 started.
db_mysql | 2021-11-08 13:20:28+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
db_mysql | 2021-11-08 13:20:28+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.26-1debian10 started.
db_mysql | 2021-11-08 13:20:28+00:00 [ERROR] [Entrypoint]: MYSQL_USER="root", MYSQL_USER and MYSQL_PASSWORD are for configuring a regular user and cannot be used for the root user
db_mysql | Remove MYSQL_USER="root" and use one of the following to control the root user password:
db_mysql | - MYSQL_ROOT_PASSWORD
db_mysql | - MYSQL_ALLOW_EMPTY_PASSWORD
db_mysql | - MYSQL_RANDOM_ROOT_PASSWORD
web_api | 2021/11/08 13:20:31 dial tcp 127.0.0.1:3306: connect: connection refused
db_mysql exited with code 1
web_api exited with code 1
You should change the db configuration in your app to connect to webapi-mysql instead of 127.0.0.1. Every container gets its own unique address, and Docker creates a DNS entry for webapi-mysql that resolves to the correct IP address at that time.

error during mysql deployment in kubernetes for FireFly III installation

I followed https://docs.firefly-iii.org/installation/k8n
and did some extra customization to the PVCs to make them bind to my NFS PVs.
Everything was going smoothly so far, but the mysql pod won't come up and gets into a CrashLoopBackOff.
Describing the mysql pod:
~/kubernetes$ kubectl describe pod firefly-iii-mysql-67bfb68cf9-6gm9l
Name: firefly-iii-mysql-67bfb68cf9-6gm9l
Namespace: default
Priority: 0
Node: ommitted
Start Time: Fri, 30 Oct 2020 15:09:29 +0000
Labels: app=firefly-iii
pod-template-hash=67bfb68cf9
tier=mysql
Annotations: <none>
Status: Running
IP: 10.32.0.4
IPs:
IP: 10.32.0.4
Controlled By: ReplicaSet/firefly-iii-mysql-67bfb68cf9
Containers:
mysql:
Container ID: docker://7c0e5920d4e1cb3ce98308e0c02d4a98bc9926b828c50496b8d8b3486245dcb9
Image: mysql:5.6
Image ID: docker-pullable://mysql@sha256:8875725ff152f77e47a563661ea010b4ca9cea42d9dda897fb565ca224e83de2
Port: 3306/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 141
Started: Fri, 30 Oct 2020 15:25:48 +0000
Finished: Fri, 30 Oct 2020 15:25:50 +0000
Ready: False
Restart Count: 8
Environment:
MYSQL_ROOT_PASSWORD: <set to the key 'db_password' in secret 'firefly-iii-secrets-kkcdcb696c'> Optional: false
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-k9zt4 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim
ReadOnly: false
default-token-k9zt4:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-k9zt4
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 18m (x2 over 18m) default-scheduler persistentvolumeclaim "mysql-pv-claim" not found
Normal Scheduled 18m default-scheduler Successfully assigned default/firefly-iii-mysql-67bfb68cf9-6gm9l to domm-lnx-02
Normal Pulled 16m (x5 over 18m) kubelet Container image "mysql:5.6" already present on machine
Normal Created 16m (x5 over 18m) kubelet Created container mysql
Normal Started 16m (x5 over 18m) kubelet Started container mysql
Warning BackOff 3m16s (x73 over 18m) kubelet Back-off restarting failed container
From the logs I get this:
~/kubernetes$ kubectl logs firefly-iii-mysql-67bfb68cf9-6gm9l
2020-10-30 15:25:48+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.6.50-1debian9 started.
2020-10-30 15:25:49+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2020-10-30 15:25:49+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.6.50-1debian9 started.
2020-10-30 15:25:49+00:00 [Note] [Entrypoint]: Initializing database files
2020-10-30 15:25:50 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2020-10-30 15:25:50 0 [Note] Ignoring --secure-file-priv value as server is running with --bootstrap.
2020-10-30 15:25:50 0 [Note] /usr/sbin/mysqld (mysqld 5.6.50) starting as process 52 ...
2020-10-30 15:25:50 52 [Warning] Can't create test file /var/lib/mysql/firefly-iii-mysql-67bfb68cf9-6gm9l.lower-test
2020-10-30 15:25:50 52 [Warning] Can't create test file /var/lib/mysql/firefly-iii-mysql-67bfb68cf9-6gm9l.lower-test
/usr/sbin/mysqld: Can't change dir to '/var/lib/mysql/' (Errcode: 13 - Permission denied)
2020-10-30 15:25:50 52 [ERROR] Aborting
2020-10-30 15:25:50 52 [Note] Binlog end
2020-10-30 15:25:50 52 [Note] /usr/sbin/mysqld: Shutdown complete
Pod creation YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: firefly-iii-mysql
  labels:
    app: firefly-iii
spec:
  selector:
    matchLabels:
      app: firefly-iii
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: firefly-iii
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: firefly-iii-secrets
              key: db_password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
I had some trouble with my PVCs at first, but that seems to be resolved now. The firefly-iii main pod seems to be working just fine, with a similar PV/PVC setup. I had to recreate the whole deployment a couple of times, but now I am stuck with this permission error. I am not sure how to fix it, and I could not find much useful information about this anywhere...
I hope someone here can point me in the right direction...
Figured it out. The problem was with the NFS permissions. I had to set them to "map all users to admin" on the NFS host. Now my deployment can actually edit files there... which is something MySQL likes ;)
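For readers who would rather not open up the NFS export, an alternative sometimes used for this "Can't change dir to '/var/lib/mysql/' (Errcode: 13)" symptom is a pod-level securityContext. A hedged sketch, not part of the original FireFly III manifests; the UID 999 is an assumption based on the mysql user in the official image and should be verified against the image in use:

```yaml
spec:
  template:
    spec:
      # fsGroup asks kubelet to set group ownership of mounted volumes;
      # 999 is assumed to be the mysql group in the official image
      securityContext:
        fsGroup: 999
      containers:
      - image: mysql:5.6
        name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
```

A caveat: whether fsGroup is actually applied depends on the volume plugin, and NFS volumes often ignore it, which is consistent with the NFS-side permission fix being what worked here.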

Unable to link mysql docker container with spring boot application - Communications link failure

I am new to Docker. I am using a Spring Boot microservice that runs well on my local machine. Now I need to create a docker image for my application. It has a dependency on MySQL server. I am using docker-compose to create my containers. I am getting a communications link failure error while running my custom image (the Spring Boot application). The MySQL image runs well independently.
My yml file :
version: '2'
services:
  mysql-dev:
    image: mysql:5.7
    container_name: mysql-dev
    environment:
      MYSQL_ROOT_PASSWORD: "password"
      MYSQL_DATABASE: "onlinetutorialspoint"
    networks:
      - my_mysql_net
    ports:
      - 3306:3308
  spring_boot_db_service:
    depends_on:
      - mysql-dev
    image: spring_boot_db_service
    ports:
      - 8181:8181
    links:
      - mysql-dev:mysql
    networks:
      - my_mysql_net
networks:
  my_mysql_net:
    driver: bridge
application properties file :
db.driver: com.mysql.jdbc.Driver
spring.datasource.url = jdbc:mysql://mysql-dev:3308/onlinetutorialspoint?useSSL=false
spring.datasource.username = root
spring.datasource.password = password
Full Error Message :
spring_boot_db_service_1 | 2019-01-28 13:34:06.955 INFO 1 --- [main] org.hibernate.cfg.Environment : HHH000206: hibernate.properties not found
spring_boot_db_service_1 | 2019-01-28 13:34:07.000 INFO 1 --- [main] o.hibernate.annotations.common.Version : HCANN000001: Hibernate Commons Annotations {5.0.1.Final}
spring_boot_db_service_1 | 2019-01-28 13:34:08.430 WARN 1 --- [main] o.h.e.j.e.i.JdbcEnvironmentInitiator : HHH000342: Could not obtain connection to query metadata : Communications link failure
spring_boot_db_service_1 |
spring_boot_db_service_1 | The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
spring_boot_db_service_1 | 2019-01-28 13:34:08.443 INFO 1 --- [main] org.hibernate.dialect.Dialect : HHH000400: Using dialect: org.hibernate.dialect.MySQL5Dialect
spring_boot_db_service_1 | 2019-01-28 13:34:08.459 INFO 1 --- [main] o.h.e.j.e.i.LobCreatorBuilderImpl : HHH000422: Disabling contextual LOB creation as connection was null
spring_boot_db_service_1 | 2019-01-28 13:34:08.921 WARN 1 --- [main] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'dbServiceImpl': Unsatisfied dependency expressed through field 'dbServiceDao'; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'dbServiceDaoImpl': Unsatisfied dependency expressed through field 'sessionFactory'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'sessionFactory' defined in class path resource [com/htc/dbservice/configuration/DBConfiguration.class]: Invocation of init method failed; nested exception is org.hibernate.MappingException: Could not get constructor for org.hibernate.persister.entity.SingleTableEntityPersister
spring_boot_db_service_1 | 2019-01-28 13:34:08.923 WARN 1 --- [main] o.s.b.f.support.DisposableBeanAdapter : Invocation of destroy method 'close' failed on bean with name 'eurekaRegistration': org.springframework.beans.factory.BeanCreationNotAllowedException: Error creating bean with name 'org.springframework.cloud.netflix.eureka.EurekaClientAutoConfiguration$RefreshableEurekaClientConfiguration': Singleton bean creation not allowed while singletons of this factory are in destruction (Do not request a bean from a BeanFactory in a destroy method implementation!)
spring_boot_db_service_1 | 2019-01-28 13:34:08.926 INFO 1 --- [main] o.apache.catalina.core.StandardService : Stopping service [Tomcat]
docker_spring_boot_db_service_1 exited with code 1
Did you change the default MySQL port? If not, you are pointing at the wrong port number.
When you map a port to publish the service externally, the order is
<host-port>:<container-port>
services:
  mysql-dev:
    image: mysql:5.7
    container_name: mysql-dev
    environment:
      MYSQL_ROOT_PASSWORD: "password"
      MYSQL_DATABASE: "onlinetutorialspoint"
    networks:
      - my_mysql_net
    ports:
      - 3308:3306
Also, since docker-compose connects both services to the same network, you can connect directly to the container without publishing the MySQL port to the external network.
Try changing your Spring app to point at mysql-dev:3306, or connect to the spring_boot_db_service container using
docker-compose exec spring_boot_db_service bash
and make a connectivity test to the database container.
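That connectivity test could look something like the following transcript sketch; the exact tools available (nc, getent, telnet) vary by base image, so treat the commands as illustrative:

```shell
# open a shell in the app container
docker-compose exec spring_boot_db_service bash

# inside the container: check that the compose service name resolves
getent hosts mysql-dev

# and that the MySQL container port is reachable
nc -zv mysql-dev 3306
```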

php_network_getaddresses: getaddrinfo failed error in Docker's adminer

I have a problem accessing adminer in my docker container with a Laravel 5/MySQL app. I get the error:
SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: Name does not resolve
My docker-compose.yml :
version: '3'
services:
  votes_app:
    build:
      context: ./web
      dockerfile: Dockerfile.yml
    container_name: votes_app_container
    environment:
      - APACHE_RUN_USER=#1000
    volumes:
      - ${APP_PATH_HOST}:${APP_PTH_CONTAINER}
    ports:
      - 8081:80
    working_dir: ${APP_PTH_CONTAINER}
  votes_db:
    image: mysql:5.6.41
    container_name: votes_db_container
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 1
    volumes:
      - ${DB_PATH_HOST}:/var/lib/mysql
  votes_adminer:
    image: adminer
    container_name: votes_adminer_container
    restart: always
    ports:
      - 8082:8080
    links:
      - votes_db
  votes_composer:
    image: composer:1.6
    container_name: votes_composer_container
    volumes:
      - ${APP_PATH_HOST}:${APP_PTH_CONTAINER}
    working_dir: ${APP_PTH_CONTAINER}
    command: composer install --ignore-platform-reqs
I have different ports for the app and db containers.
Here https://hub.docker.com/_/adminer/ I found:
Usage with external server: You can specify the default host with the ADMINER_DEFAULT_SERVER environment variable. This is useful if you are connecting to an external server or a docker container named something other than the default db.
docker run -p 8080:8080 -e ADMINER_DEFAULT_SERVER=mysql adminer
In the console of my app I ran the command:
$ docker run -p 8089:8080 -e ADMINER_DEFAULT_SERVER=votes_db adminer
pointing at a port unused by my apps, but this command was not successful anyway, as I got the same error when trying to log in to adminer: https://imgur.com/a/4HCdC1W.
Which is the right way?
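One thing worth noting about the quoted usage: a separate docker run starts adminer outside the compose project's network, so it cannot resolve compose service names like votes_db. The environment variable can instead be set on the compose service itself, so adminer stays on the same network as the database; a hedged sketch against the file above:

```yaml
votes_adminer:
  image: adminer
  restart: always
  environment:
    # pre-fills the "Server" field of the login form with the compose
    # service name of the database container
    ADMINER_DEFAULT_SERVER: votes_db
  ports:
    - 8082:8080
```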
MODIFIED BLOCK # 2:
In my docker-compose.yml :
version: '3'
services:
  votes_app:
    build:
      context: ./web
      dockerfile: Dockerfile.yml
    container_name: votes_app_container
    environment:
      - APACHE_RUN_USER=#1000
    volumes:
      - ${APP_PATH_HOST}:${APP_PTH_CONTAINER}
    ports:
      - 8081:80
    working_dir: ${APP_PTH_CONTAINER}
  db:
    image: mysql:5.6.41
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 1
    volumes:
      - ${DB_PATH_HOST}:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8082:8080
    links:
      - db
  votes_composer:
    image: composer:1.6
    container_name: votes_composer_container
    volumes:
      - ${APP_PATH_HOST}:${APP_PTH_CONTAINER}
    working_dir: ${APP_PTH_CONTAINER}
    command: composer install --ignore-platform-reqs
I rebuilt the app, but I failed to log in to adminer: https://imgur.com/a/JWVGfBA
In the console of my OS I ran, pointing at another unused port (8089):
$ docker run -p 8089:8080 -e ADMINER_DEFAULT_SERVER=db adminer
PHP 7.2.11 Development Server started at Thu Nov 1 07:00:46 2018
[Thu Nov 1 07:01:11 2018] ::ffff:172.17.0.1:34048 [200]: /
[Thu Nov 1 07:01:20 2018] ::ffff:172.17.0.1:34052 [302]: /
[Thu Nov 1 07:01:21 2018] ::ffff:172.17.0.1:34060 [403]: /?server=db&username=root
But again there was an error logging in to adminer on port 8089, though the error message was different:
https://imgur.com/a/a8qM4bt
What is wrong?
MODIFIED BLOCK # 3:
I suppose yes, as after I rebuilt the container I entered the box and saw "root" in the console output:
$ docker-compose exec votes_app bash
root@a4aa907373f5:/var/www/html# ls -la
total 1063
drwxrwxrwx 1 root root 4096 Oct 27 12:01 .
drwxr-xr-x 1 root root 4096 Oct 16 00:11 ..
-rwxrwxrwx 1 root root 234 Oct 13 07:15 .editorconfig
-rwxrwxrwx 1 root root 1029 Oct 31 06:10 .env
-rwxrwxrwx 1 root root 651 Oct 13 07:15 .env.example
drwxrwxrwx 1 root root 4096 Nov 1 11:10 .git
-rwxrwxrwx 1 root root 111 Oct 13 07:15 .gitattributes
-rwxrwxrwx 1 root root 294 Oct 13 07:15 .gitignore
-rwxrwxrwx 1 root root 4356 Oct 13 07:15 1.txt
drwxrwxrwx 1 root root 0 Oct 13 07:15 __DOCS
drwxrwxrwx 1 root root 0 Oct 13 07:15 __SQL
drwxrwxrwx 1 root root 4096 Oct 13 07:15 app
-rwxrwxrwx 1 root root 1686 Oct 13 07:15 artisan
drwxrwxrwx 1 root root 0 Oct 13 07:15 bootstrap
-rwxrwxrwx 1 root root 2408 Oct 13 07:15 composer.json
-rwxrwxrwx 1 root root 200799 Oct 13 07:15 composer.lock
drwxrwxrwx 1 root root 4096 Oct 13 07:15 config
drwxrwxrwx 1 root root 4096 Oct 13 07:15 database
-rwxrwxrwx 1 root root 52218 Oct 17 05:25 db_1_err.txt
-rwxrwxrwx 1 root root 482562 Oct 13 07:15 package-lock.json
-rwxrwxrwx 1 root root 1168 Oct 13 07:15 package.json
-rwxrwxrwx 1 root root 1246 Oct 13 07:15 phpunit.xml
drwxrwxrwx 1 root root 4096 Oct 13 07:15 public
-rwxrwxrwx 1 root root 66 Oct 13 07:15 readme.txt
drwxrwxrwx 1 root root 0 Oct 13 07:15 resources
drwxrwxrwx 1 root root 4096 Oct 13 07:15 routes
-rwxrwxrwx 1 root root 563 Oct 13 07:15 server.php
drwxrwxrwx 1 root root 4096 Oct 13 07:15 storage
drwxrwxrwx 1 root root 0 Oct 13 07:15 tests
drwxrwxrwx 1 root root 8192 Nov 1 13:05 vendor
-rwxrwxrwx 1 root root 1439 Oct 13 07:15 webpack.mix.js
-rwxrwxrwx 1 root root 261143 Oct 13 07:15 yarn.lock
root@a4aa907373f5:/var/www/html# echo $USER
root@a4aa907373f5:/var/www/html# uname -a
Linux a4aa907373f5 4.15.0-36-generic #39-Ubuntu SMP Mon Sep 24 16:19:09 UTC 2018 x86_64 GNU/Linux
Can it be an issue anyway?
MODIFIED BLOCK # 4
I remade this Docker setup: I set default container names (I suppose the custom names were raising some confusion) and set image: composer:1.8, the latest version.
So in my docker-compose.yml :
version: '3.1'
services:
  web:
    build:
      context: ./web
      dockerfile: Dockerfile.yml
    environment:
      - APACHE_RUN_USER=#1000
    volumes:
      - ${APP_PATH_HOST}:${APP_PTH_CONTAINER}
    ports:
      - 8081:80
    working_dir: ${APP_PTH_CONTAINER}
  db:
    image: mysql:5.6.41
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 1
    volumes:
      - ${DB_PATH_HOST}:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8082:8080
    links:
      - db
  composer:
    image: composer:1.8
    volumes:
      - ${APP_PATH_HOST}:${APP_PTH_CONTAINER}
    working_dir: ${APP_PTH_CONTAINER}
    command: composer install --ignore-platform-reqs
and in web/Dockerfile.yml :
FROM php:7.2-apache
RUN apt-get update -y && apt-get install -y libpng-dev nano
RUN docker-php-ext-install \
    pdo_mysql \
    && a2enmod \
    rewrite
But anyway, after rebuilding the project and connecting to adminer at the URL
http://127.0.0.1:8082
I got the error:
SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: Try again
P.S.:
I have another Laravel 5.0 / php:5.6 / composer:1.6 (with mcrypt installed) Docker project on the same local server on my laptop, which works OK for me: I can enter adminer and log in to the db from that app.
This docker project has files:
docker-compose.yml:
version: '3.1'
services:
  web:
    build:
      context: ./web
      dockerfile: Dockerfile.yml
    environment:
      - APACHE_RUN_USER=#1000
    volumes:
      - ${APP_PATH_HOST}:${APP_PTH_CONTAINER}
    ports:
      - 8085:80
    working_dir: ${APP_PTH_CONTAINER}
  db:
    image: mysql:5.5.62
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 1
    volumes:
      - ${DB_PATH_HOST}:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8086:8080
    links:
      - db
  composer:
    image: composer:1.6
    volumes:
      - ${APP_PATH_HOST}:${APP_PTH_CONTAINER}
    working_dir: ${APP_PTH_CONTAINER}
    command: composer install --ignore-platform-reqs
and Dockerfile.yml :
FROM php:5.6-apache
RUN apt-get update -y && apt-get install -y libpng-dev nano libmcrypt-dev
RUN docker-php-ext-install \
        pdo_mysql \
        mcrypt \
    && a2enmod \
        rewrite
Is this issue due to some PHP 7.2-specific behavior (like some missing packages)?
MODIFIED BLOCK # 5:
With the following defined:
phpmyadmin:
  depends_on:
    - db
  image: phpmyadmin/phpmyadmin
  restart: always
  ports:
    - 8082:8080
  environment:
    PMA_HOST: db
    MYSQL_ROOT_PASSWORD: 1
Opening http://127.0.0.1:8082/ I got an error in the browser:
This site can’t be reached The webpage at http://127.0.0.1:8082/ might be temporarily down or it may have moved permanently to a new web address.
ERR_SOCKET_NOT_CONNECTED
While trying the app URL http://127.0.0.1:8081/public/ I got the error:
SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution
MODIFIED BLOCK # 6:
I remade docker-compose.yml with phpMyAdmin:
version: '3.1'
services:
  # docker run -p 8089:8080 -e ADMINER_DEFAULT_SERVER=db adminer
  web:
    # env_file:
    #   - ./mysql.env
    build:
      context: ./web
      dockerfile: Dockerfile.yml
    environment:
      - APACHE_RUN_USER=#1000
    volumes:
      - ${APP_PATH_HOST}:${APP_PTH_CONTAINER}
    ports:
      - 8081:80
    working_dir: ${APP_PTH_CONTAINER}
  db:
    image: mysql:5.6.41
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 1
    volumes:
      - ${DB_PATH_HOST}:/var/lib/mysql
  phpmyadmin:
    depends_on:
      - db
    image: phpmyadmin/phpmyadmin
    restart: always
    ports:
      - 8082:80
    environment:
      PMA_HOST: db
      MYSQL_ROOT_PASSWORD: 1
  composer:
    image: composer:1.8
    volumes:
      - ${APP_PATH_HOST}:${APP_PTH_CONTAINER}
    working_dir: ${APP_PTH_CONTAINER}
    command: composer install --ignore-platform-reqs
but trying to log in to phpMyAdmin at
http://127.0.0.1:8082
I got the same error: https://imgur.com/a/cGeudI6
Also, I have these ports:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
471de34926b9 phpmyadmin/phpmyadmin "/run.sh supervisord…" 41 minutes ago Up 41 minutes 9000/tcp, 0.0.0.0:8082->80/tcp votes_docker_phpmyadmin_1
226fcdbeeb25 mysql:5.6.41 "docker-entrypoint.s…" 41 minutes ago Restarting (1) 49 seconds ago votes_docker_db_1
1cb1efb10561 votes_docker_web "docker-php-entrypoi…" 41 minutes ago Up 41 minutes 0.0.0.0:8081->80/tcp votes_docker_web_1
d6718cd16256 adminer "entrypoint.sh docke…" 13 hours ago Up About an hour 0.0.0.0:8088->8080/tcp ads_docker_adminer_1
1928a54e1d66 mysql:5.5.62 "docker-entrypoint.s…" 13 hours ago Up About an hour 3306/tcp ads_docker_db_1
e43b2a1e9cc7 adminer "entrypoint.sh docke…" 6 days ago Up About an hour 0.0.0.0:8086->8080/tcp youtubeapi_demo_adminer_1
47a034fca5a2 mysql:5.5.62 "docker-entrypoint.s…" 6 days ago Up About an hour 3306/tcp youtubeapi_demo_db_1
3dcc1a4ce8f0 adminer "entrypoint.sh docke…" 6 weeks ago Up About an hour 0.0.0.0:8083->8080/tcp lprods_adminer_container
933d9fffaf76 postgres:9.6.10-alpine "docker-entrypoint.s…" 6 weeks ago Up About an hour 0.0.0.0:5433->5432/tcp lprods_db_container
 
MODIFIED BLOCK # 7 
I am not sure which debugging info I can provide, but it seems the logging has some warnings. Are they critical?
What additional debugging info can I provide?
serge@serge:/mnt/_work_sdb8/wwwroot/lar/DockerApps/votes_docker$ docker-compose up -d --build
Creating network "votes_docker_default" with the default driver
Building web
Step 1/3 : FROM php:7.2-apache
---> cf1a377ba77f
Step 2/3 : RUN apt-get update -y && apt-get install -y libpng-dev nano
---> Using cache
---> 2c4bce73e8cc
Step 3/3 : RUN docker-php-ext-install pdo_mysql && a2enmod rewrite
---> Using cache
---> 241c9bf59ac0
Successfully built 241c9bf59ac0
Successfully tagged votes_docker_web:latest
Creating votes_docker_composer_1 ... done
Creating votes_docker_web_1 ... done
Creating votes_docker_db_1 ... done
Creating votes_docker_phpmyadmin_1 ... done
serge@serge:/mnt/_work_sdb8/wwwroot/lar/DockerApps/votes_docker$ clear
serge@serge:/mnt/_work_sdb8/wwwroot/lar/DockerApps/votes_docker$ docker logs --tail=20 votes_docker_web_1
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.20.0.4. Set the 'ServerName' directive globally to suppress this message
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.20.0.4. Set the 'ServerName' directive globally to suppress this message
[Wed Dec 26 12:26:34.113194 2018] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.25 (Debian) PHP/7.2.11 configured -- resuming normal operations
[Wed Dec 26 12:26:34.113247 2018] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
serge@serge:/mnt/_work_sdb8/wwwroot/lar/DockerApps/votes_docker$ docker logs --tail=20 votes_docker_db_1
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'INNODB_CMPMEM'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'INNODB_CMP_RESET'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'INNODB_CMP'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'INNODB_LOCKS'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'INNODB_TRX'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'BLACKHOLE'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'ARCHIVE'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'MRG_MYISAM'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'MyISAM'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'MEMORY'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'CSV'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'sha256_password'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'mysql_old_password'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'mysql_native_password'
2018-12-26 12:26:43 1 [Note] Shutting down plugin 'binlog'
2018-12-26 12:26:43 1 [Note] mysqld: Shutdown complete
serge@serge:/mnt/_work_sdb8/wwwroot/lar/DockerApps/votes_docker$ docker logs --tail=20 votes_docker_composer_1
> @php artisan package:discover
Discovered Package: aloha/twilio
Discovered Package: barryvdh/laravel-debugbar
Discovered Package: beyondcode/laravel-dump-server
Discovered Package: cviebrock/eloquent-sluggable
Discovered Package: davejamesmiller/laravel-breadcrumbs
Discovered Package: fideloper/proxy
Discovered Package: intervention/image
Discovered Package: itsgoingd/clockwork
Discovered Package: jrean/laravel-user-verification
Discovered Package: laravel/tinker
Discovered Package: laravelcollective/html
Discovered Package: mews/captcha
Discovered Package: nesbot/carbon
Discovered Package: nunomaduro/collision
Discovered Package: proengsoft/laravel-jsvalidation
Discovered Package: rap2hpoutre/laravel-log-viewer
Discovered Package: themsaid/laravel-mail-preview
Discovered Package: yajra/laravel-datatables-oracle
Package manifest generated successfully.
serge@serge:/mnt/_work_sdb8/wwwroot/lar/DockerApps/votes_docker$ docker logs --tail=20 votes_docker_phpmyadmin_1
phpMyAdmin not found in /var/www/html - copying now...
Complete! phpMyAdmin has been successfully copied to /var/www/html
/usr/lib/python2.7/site-packages/supervisor/options.py:461: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory);
you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
'Supervisord is running as root and it is searching '
2018-12-26 12:26:35,973 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
2018-12-26 12:26:35,973 INFO Included extra file "/etc/supervisor.d/nginx.ini" during parsing
2018-12-26 12:26:35,973 INFO Included extra file "/etc/supervisor.d/php.ini" during parsing
2018-12-26 12:26:35,984 INFO RPC interface 'supervisor' initialized
2018-12-26 12:26:35,984 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2018-12-26 12:26:35,984 INFO supervisord started with pid 1
2018-12-26 12:26:36,986 INFO spawned: 'php-fpm' with pid 23
2018-12-26 12:26:36,988 INFO spawned: 'nginx' with pid 24
[26-Dec-2018 12:26:37] NOTICE: fpm is running, pid 23
[26-Dec-2018 12:26:37] NOTICE: ready to handle connections
2018-12-26 12:26:38,094 INFO success: php-fpm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-12-26 12:26:38,095 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
What is wrong?
Thanks!
I was having the same issue; then I found that the default value for the server address in the Adminer application is 'db', which didn't match the service name of my MySQL container.
Try with phpMyAdmin :)
version: '3.2'
services:
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: myUserPass
      MYSQL_DATABASE: mydb
      MYSQL_USER: myUser
      MYSQL_PASSWORD: myUser
  phpmyadmin:
    depends_on:
      - db
    image: phpmyadmin/phpmyadmin
    restart: always
    ports:
      - 8088:80
    environment:
      PMA_HOST: db
      MYSQL_ROOT_PASSWORD: myUserPass
You can read about it at
https://hub.docker.com/_/adminer/
Example
version: '3.1'
services:
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
  db:
    image: mysql:5.6
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
The problem with your setup is the environment variable DB_PATH_HOST. You have set up everything fine in your compose file, but before running docker-compose you are supposed to define the environment variable DB_PATH_HOST. Since the environment variable is not defined, it throws an error. See this for more details on environment variables and their precedence in Docker.
So what you should have done is: before starting the Docker containers, define the environment variable, either by defining it in the compose file, by exporting it in the shell as a shell variable before running docker-compose, by using an env file, or by using the ENV instruction in the Dockerfile. (These are all the possible ways of defining environment variables, listed so that the method that comes first takes priority; refer to this for more info.)
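To make the failure mode concrete: docker-compose substitutes an unset variable with an empty string, so the volume spec silently degenerates to ":/var/lib/mysql". A minimal shell sketch of the same expansion (the path /tmp/mysql is just an example value):

```shell
# With DB_PATH_HOST unset, the host half of the volume spec disappears:
unset DB_PATH_HOST
echo "volume spec: ${DB_PATH_HOST}:/var/lib/mysql"
# prints: volume spec: :/var/lib/mysql

# The fix: define it before running docker-compose (or put it in .env),
# and optionally fail fast when it is missing:
export DB_PATH_HOST=/tmp/mysql
: "${DB_PATH_HOST:?DB_PATH_HOST must be set before running docker-compose}"
echo "volume spec: ${DB_PATH_HOST}:/var/lib/mysql"
# prints: volume spec: /tmp/mysql:/var/lib/mysql
```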
So the proper docker-compose.yml file should be as follows.
version: '3.2'
services:
  db:
    image: mysql:5.6.41
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 1
      DB_PATH_HOST: /tmp/mysql  # the host location where mysql data will be stored
    volumes:
      - ${DB_PATH_HOST}:/var/lib/mysql
  phpmyadmin:
    depends_on:
      - db
    image: phpmyadmin/phpmyadmin
    restart: always
    ports:
      - 8082:80
    environment:
      PMA_HOST: db
      MYSQL_ROOT_PASSWORD: 1
Now coming to the next point: I see from your discussion that you concluded that removing volumes from the db container fixed your problem. But actually it didn't. How?
First let me explain why a volume is used here. The data generated by MySQL should be stored somewhere. Docker by default runs containers in non-persistent mode, which means all the data generated by a running Docker container will be erased when that container is brought down/killed. So in order to persist (store/save) data we use volumes. There are different types of volumes used in Docker; I encourage you to read the Storage documentation of Docker for more details. The type of volume used here is a bind mount: you bind a host directory to a container directory, and Docker stores all the data directly on the host machine, so that even if the container is brought down the data is still preserved.
Hence, if you don't use volumes with MySQL, all the db changes, irrespective of whatever you do, will be lost whenever the container is stopped.
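To make the bind-mount behaviour concrete, a small sketch (the docker command is illustrative only and not executed here; /tmp/mysql-demo is a made-up path):

```shell
# The host side of a bind mount is an ordinary directory; whatever a
# container writes into it stays on the host:
mkdir -p /tmp/mysql-demo
echo "data written via the mount" > /tmp/mysql-demo/marker

# Illustrative only (not run here): a container mounting that directory
# reads and writes the very same files:
#   docker run --rm -v /tmp/mysql-demo:/var/lib/mysql mysql:5.6.41

# The file survives any container that used the mount:
cat /tmp/mysql-demo/marker
# prints: data written via the mount
```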
Bonus Points:
By default the MySQL container doesn't allow remote connections. So if you want to access MySQL from anywhere else apart from phpMyAdmin, you have to allow remote connections.
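For reference, a hedged sketch of what allowing remote root connections could look like (it assumes the root password 1 from the compose file; '%' matches any host, including other containers on the compose network):

```shell
# MySQL 5.6-style grant allowing root to connect from any host:
SQL="GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '1'; FLUSH PRIVILEGES;"

# Illustrative only (not run here): execute it inside the db service:
#   docker-compose exec db mysql -uroot -p1 -e "$SQL"
echo "$SQL"
```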
Since we are preserving the data here, the root password will be set only the first time you start the MySQL container. From then on, the root password environment variable will be ignored.
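The mechanism behind this is the image's entrypoint script: it only initializes the data directory (and applies MYSQL_ROOT_PASSWORD) when that directory is empty. A simplified sketch of that check (/tmp/mysql-demo-init is a made-up path):

```shell
# Simplified version of the check done by the mysql image's
# docker-entrypoint.sh on every start:
DATADIR=/tmp/mysql-demo-init
mkdir -p "$DATADIR"
if [ -z "$(ls -A "$DATADIR")" ]; then
    echo "empty datadir: initialize and apply MYSQL_ROOT_PASSWORD"
else
    echo "existing datadir: skip init; MYSQL_ROOT_PASSWORD is ignored"
fi
```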
If you log into Docker containers using docker exec, you will mostly see that you are root. That's because whenever you build a container image with a Dockerfile, using either docker build or docker-compose build, Docker runs everything as the root user unless the Dockerfile contains an instruction to create and use a new user.
Now whenever you run the above compose file, you can see that the ownership of the MySQL data location is changed. That's because whenever you mount a host directory into Docker, Docker changes the file ownership according to the user and group defined in that container's Dockerfile. Here MySQL has defined a user and group called mysql, and the uid and gid are 999; hence /tmp/mysql will have 999:999 as its ownership. If these ids are mapped to another user account on your system, you will see those names instead of the ids whenever you do ls -al on the host machine. If the ids are not mapped, you will see the ids directly.
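A quick way to inspect that ownership numerically on the host (GNU stat; /tmp/mysql-demo-owner is a made-up path — after the mysql container has initialized a real data directory, the same command would print 999:999):

```shell
# Print the numeric uid:gid of the bind-mounted directory:
mkdir -p /tmp/mysql-demo-owner
stat -c '%u:%g' /tmp/mysql-demo-owner
```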
I've used /tmp/mysql as the MySQL data directory as an example. Please don't use the same path, as the data in /tmp is removed whenever there is a system restart.
The question has already been answered, but I'm adding my solution to a similar problem here for reference.
Adding a 'links' parameter to the phpmyadmin/adminer service block in my docker-compose file solved it for me, on the assumption that the service name of the database block is in fact db, as in the examples in the answers above. This link makes it possible to use 'db' as the host in the phpMyAdmin login interface, and it will connect.
links:
  - db:db
Changing the container name for the mysql image to db made the difference for me.
You can read about it at https://hub.docker.com/_/adminer/
services:
  db:
    image: mysql:5.6