Connecting to MySQL 5.6 inside Docker For Desktop/Kubernetes: ERROR 1130 (HY000): Host 'xx.xx.xx.xx' is not allowed to connect to this MySQL server

I'm following these instructions (page 181) to create a persistent volume and claim, a MySQL replica set, and a service. I specify MySQL v5.6 in the YAML file for the replica set.
After viewing the log for the pod, it looks like it was successful. So then I run:
kubectl run -it --rm --image=mysql --restart=Never mysql-client -- bash
mysql -h mysql -p 3306 -u root
It prompts me for the password and then I get this error:
ERROR 1130 (HY000): Host '10.1.0.17' is not allowed to connect to this MySQL server
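Note: in the client command above, lowercase -p is the password flag (which is why the password prompt appeared), while the port flag is uppercase -P, i.e. mysql -h mysql -P 3306 -u root -p. ERROR 1130 itself, though, is about host-based account permissions rather than the port.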
Apparently MySQL does not allow remote connections by default, and the usual advice is to change the configuration files; I don't know how to do that inside a YAML file. Below is my YAML (and, after it, a sketch of one possible approach). How do I change it to allow remote connections?
Thanks
Siegfried
cat <<END-OF-FILE | kubectl apply -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: mysql
  # labels so that we can bind a Service to this Pod
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: tododata
        image: mysql:5.6
        resources:
          requests:
            cpu: 1
            memory: 2Gi
        env:
        # Environment variables are not a best practice for security,
        # but we're using them here for brevity in the example.
        # See Chapter 11 for better options.
        - name: MYSQL_ROOT_PASSWORD
          value: some-password-here
        livenessProbe:
          tcpSocket:
            port: 3306
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: tododata
          # /var/lib/mysql is where MySQL stores its databases
          mountPath: "/var/lib/mysql"
      volumes:
      - name: tododata
        persistentVolumeClaim:
          claimName: tododata
END-OF-FILE
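One approach (a sketch, not from the book; the names mysql-init and allow-remote-root.sql are made up): the stock mysql image runs any SQL placed in /docker-entrypoint-initdb.d during first-time initialization, so a ConfigMap can carry a grant that opens up remote access. Note it only runs when the data directory is empty, so an already-initialized persistent volume would need to be recreated first.

apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-init
data:
  allow-remote-root.sql: |
    -- MySQL 5.6: GRANT implicitly creates the account if it does not exist
    GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'some-password-here' WITH GRANT OPTION;
    FLUSH PRIVILEGES;

Then, in the pod spec above, add a volumeMounts entry with mountPath: /docker-entrypoint-initdb.d and a matching configMap volume named mysql-init.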
Sat Oct 24 2020 3PM EDT Update: Try Bitnami MySQL
I like Ben's idea of using Bitnami MySQL because then I don't have to create my own custom Docker image. However, when using Bitnami and trying to connect to the MySQL server I get
ERROR 2003 (HY000): Can't connect to MySQL server on 'my-release-mysql.default.svc.cluster.local' (111)
This happens after I successfully get a bash shell with this command:
kubectl run my-release-mysql-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mysql:8.0.22-debian-10-r0 --namespace default --command -- bash
Then, as per the instructions, I do this and get the HY000 error above.
mysql -h my-release-mysql.default.svc.cluster.local -uroot -p
Wed Nov 04 2020 Update:
Thanks Ben. Yes, I had already tried that on Oct 24 (approx), and when I do a kubectl describe pod I get: mysqladmin: connect to server at 'localhost' failed error: 'Can't connect to local MySQL server through socket '/opt/bitnami/mysql/tmp/mysql.sock' (2)' Check that mysqld is running and that the socket: '/opt/bitnami/mysql/tmp/mysql.sock' exists!
Of course, when I run the mysql client as described in the nicely generated instructions, the client cannot connect because mysqld has died.
This is after having deleted the pvcs and stss and doing helm delete my-release prior to reinstalling via helm.
Unfortunately, when I tried this the first time (a couple of weeks ago) I did not set the root password and used the default generated password, and I think it is still trying to use that.
This did work on Azure Kubernetes after creating a fresh Azure Kubernetes cluster. How can I reset the Kubernetes cluster in my Docker for Desktop on Windows? I tried Google searching with no luck so far.
Thanks
Siegfried

After a lot of help from the Bitnami folks, I learned that the spinning disks on my 4-year-old notebook computer are kinda slow (now why this is a problem with Bitnami MySQL and not Bitnami PostgreSQL is a mystery).
This works for me:
helm install my-mysql bitnami/mysql \
--set image.debug=true \
--set primary.persistence.enabled=false,secondary.persistence.enabled=false \
--set primary.readinessProbe.enabled=false,primary.livenessProbe.enabled=false \
--set secondary.readinessProbe.enabled=false,secondary.livenessProbe.enabled=false
This turns off the persistent volumes, so the data is lost when the pod dies.
Yes, this is useful for me for development purposes, and no one should be using Docker For Desktop/Kubernetes for production anyway... I just need to populate a tiny database and test my queries, and if I need to repopulate the database every time I reboot, that is not a big problem.
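To confirm the pod actually stays up with the probes disabled, the standard checks are enough:

kubectl get pods -w
helm status my-mysql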
So maybe I need to get a new notebook computer? The price of notebook computers with 4TB of spinning disk space has gone up in the last couple of years... and I cannot find SSD drives of that size, so even if I purchased a new replacement with spinning disks I might have the same problem? Hmm...
Thanks everyone for your help!
Siegfried

This appears to work just fine for me on Windows. Complete the following steps:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-release --set root.password=awesomePassword bitnami/mysql
This is all you need to run the mysql instance. It makes a few services and a statefulset. Then, to connect to it, you either:
Run the client from another kubernetes container. Without this, you will not find the DNS record for my-release-mysql.default.svc.cluster.local:
kubectl run my-release-mysql-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mysql:8.0.22-debian-10-r0 --namespace default --command -- bash
mysql -h my-release-mysql.default.svc.cluster.local -uroot -p my_database
For the password, it should be 'awesomePassword'.
Or port-forward the service to your local machine:
kubectl port-forward svc/my-release-mysql 3306:3306
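With the port-forward running, a local client can then connect as if the server were on the host:

mysql -h 127.0.0.1 -P 3306 -u root -p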
As a note, a bitnami container will have issues if you kill it and restart it with only your helm commands and the password is not set. The persistent volume claim will usually stick around, so you would need to set the password to the old password. If you do not specify the password, you can get it by running the commands bitnami tells you about:
NAME: my-release
LAST DEPLOYED: Thu Oct 29 20:39:23 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Please be patient while the chart is being deployed

Tip:
  Watch the deployment status using the command: kubectl get pods -w --namespace default

Services:
  echo Master: my-release-mysql.default.svc.cluster.local:3306
  echo Slave:  my-release-mysql-slave.default.svc.cluster.local:3306

Administrator credentials:
  echo Username: root
  echo Password : $(kubectl get secret --namespace default my-release-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode)

To connect to your database:
  Run a pod that you can use as a client:
      kubectl run my-release-mysql-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mysql:8.0.22-debian-10-r0 --namespace default --command -- bash
  To connect to master service (read/write):
      mysql -h my-release-mysql.default.svc.cluster.local -uroot -p my_database
  To connect to slave service (read-only):
      mysql -h my-release-mysql-slave.default.svc.cluster.local -uroot -p my_database

To upgrade this helm chart:
  Obtain the password as described on the 'Administrator credentials' section and set the 'root.password' parameter as shown below:
      ROOT_PASSWORD=$(kubectl get secret --namespace default my-release-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode)
      helm upgrade my-release bitnami/mysql --set root.password=$ROOT_PASSWORD

Related

CircleCI job creates docker MySQL 8 but nothing can connect

(See UPDATE at end of post for potentially helpful debug info.)
I have a CircleCI job that deploys MySQL 8 via setup_remote_docker + docker-compose and then attempts to start a Java app to communicate with MySQL 8. Unfortunately, even though docker ps shows the container is up and running, any attempt to communicate with MySQL (either through the Java app or docker exec) fails, saying the container is not running (and Java throws a "Communications Link Failure" exception). It's a bit confusing because the container appears to be up, and the exact same commands work on my local machine.
Here's my CircleCI config.yml:
Build and Test:
  <<: *configure_machine
  steps:
    - *load_repo
    - ... other unrelated stuff ...
    - *load_gradle_wrapper
    - run:
        name: Install Docker Compose
        environment:
          COMPOSE_VERSION: '1.29.2'
        command: |
          curl -L "https://github.com/docker/compose/releases/download/${COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)" -o ~/docker-compose
          chmod +x ~/docker-compose
          sudo mv ~/docker-compose /usr/local/bin/docker-compose
    - setup_remote_docker
    - run:
        name: Start MySQL docker
        command: docker-compose up -d
    - run:
        name: Check Docker MySQL
        command: docker ps
    - run:
        name: Query MySQL   # test that fails
        command: docker exec -it mysql8_test_mysql mysql mysql -h 127.0.0.1 --port 3306 -u root -prootpass -e "show databases;"
And here's my docker-compose.yml that is run in one of the steps:
version: "3.1"
services:
# MySQL Dev Image
mysql-migrate:
container_name: mysql8_test_mysql
image: mysql:8.0
command:
mysqld --default-authentication-plugin=mysql_native_password
--character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
--log-bin-trust-function-creators=true
environment:
MYSQL_DATABASE: test_db
MYSQL_ROOT_PASSWORD: rootpass
ports:
- "3306:3306"
volumes:
- "./docker/mysql/data:/var/lib/mysql"
- "./docker/mysql/my.cnf:/etc/mysql/conf.d/my.cnf"
- "./mysql_schema_v1.sql:/docker-entrypoint-initdb.d/mysql_schema_v1.sql"
It's a fairly simple setup and the output from CircleCI is positive until it reaches the docker exec, which I added to test the connection. Here is what the output from CircleCI says per step:
Start MySQL Docker:
#!/bin/bash -eo pipefail
docker-compose up -d
Creating network "project_default" with the default driver
Pulling mysql-migrate (mysql:8.0)...
8.0: Pulling from library/mysql
5158dd02: Pulling fs layer
f6778b18: Pulling fs layer
a6c74a04: Pulling fs layer
4028a805: Pulling fs layer
7163f0f6: Pulling fs layer
cb7f57e0: Pulling fs layer
7a431703: Pulling fs layer
5fe86aaf: Pulling fs layer
add93486: Pulling fs layer
960383f3: Pulling fs layer
80965951: Pulling fs layer
Digest: sha256:b17a66b49277a68066559416cf44a185cfee538d0e16b5624781019bc716c122
Status: Downloaded newer image for mysql:8.0
Creating mysql8_******_mysql ...
Creating mysql8_******_mysql ... done
So we know MySQL 8 was pulled fine (and therefore the previous step worked). Next step is to ask Docker what's running.
Check Docker MySQL:
#!/bin/bash -eo pipefail
docker ps
CONTAINER ID   IMAGE       COMMAND                  CREATED        STATUS                  PORTS                               NAMES
cb6b7941ad65   mysql:8.0   "docker-entrypoint.s…"   1 second ago   Up Less than a second   0.0.0.0:3306->3306/tcp, 33060/tcp   mysql8_test_mysql
CircleCI received exit code 0
Looks good so far. But now let's actually try to run a command against it via docker exec.
Query MySQL:
#!/bin/bash -eo pipefail
docker exec -it mysql8_test_mysql mysql mysql -h 127.0.0.1 --port 3306 -u root -prootpass -e "show databases;"
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1:3306' (111)
Exited with code exit status 1
CircleCI received exit code 1
So now we can't connect to MySQL even though docker ps showed it up and running. I even tried adding an absurd step to wait in case MySQL needed more time:
- run:
    name: Start MySQL docker
    command: docker-compose up -d
- run:
    name: Check Docker MySQL
    command: docker ps
- run:
    name: Wait Until Ready
    command: sleep 120
- run:
    name: Query MySQL
    command: docker exec -it mysql8_test_mysql mysql mysql -h 127.0.0.1 --port 3306 -u root -prootpass -e "show databases;"
Of course adding a 2 minute wait for MySQL to spin up didn't help. Any ideas as to why this is so difficult in CircleCI?
Thanks in advance.
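For reference, a polling wait is usually sturdier than a fixed sleep; a sketch follows (reusing the container name and root password from the compose file above), though as Update 2 below shows, no amount of waiting helps when the container itself is crashing:

- run:
    name: Wait Until Ready
    command: |
      # poll the server inside the container instead of sleeping blindly
      for i in $(seq 1 60); do
        docker exec mysql8_test_mysql mysqladmin ping -h 127.0.0.1 -u root -prootpass --silent && exit 0
        sleep 2
      done
      echo "MySQL never became ready"; exit 1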
UPDATE 1: I can successfully start MySQL if I SSH into the job's server and run the same command myself:
docker-compose up
Then in another terminal run this:
docker exec -it mysql8_test_mysql mysql mysql -h localhost --port 3306 -u root -prootpass -e "show databases;"
+--------------------+
| Database |
+--------------------+
| information_schema |
| test_db |
| mysql |
| performance_schema |
| sys |
+--------------------+
So it is possible to start MySQL. It's just not working right when run through the job steps.
UPDATE 2: I moved the two-minute wait between docker-compose up -d and docker ps, and now it shows nothing is running. So the container must be starting and then crashing, and that's why it's not available moments later.
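A quick way to confirm that kind of crash: docker ps only lists running containers, so check the exited ones and their logs.

docker ps -a                    # exited containers appear here with their status
docker logs mysql8_test_mysql   # the entrypoint's output usually says why it died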
The cause of the problem was the volumes entry in my docker-compose.yml with this line:
- "./mysql_schema_v1.sql:/docker-entrypoint-initdb.d/mysql_schema_v1.sql"
The container appeared to be up when I checked immediately after docker-compose up -d, but in actuality it would crash seconds later, because CircleCI appears to have an issue with Docker volumes, potentially related to this: https://discuss.circleci.com/t/docker-compose-doesnt-mount-volumes-with-host-files-with-circle-ci/19099
To make it work I removed that volume entry and added run commands to copy and import the schema like so:
- run:
    name: Start MySQL docker
    command: docker-compose up -d
# Manually copy schema file instead of using docker-compose volumes (has issues with CircleCI)
- run:
    name: Copy Schema
    command: docker cp mysql_schema_v1.sql mysql8_mobile_mysql:/docker-entrypoint-initdb.d/mysql_schema_v1.sql
- run:
    name: Import Schema
    command: docker exec mysql8_mobile_mysql /bin/sh -c 'mysql -u root -prootpass < /docker-entrypoint-initdb.d/mysql_schema_v1.sql'
With this new setup I've been able to create the tables and connect to MySQL. However, there appears to be an issue running tests against MySQL that causes hangups, but that might be unrelated. I will follow up with more information, but at least I hope this can help someone else.

Docker - Securely set MySQL/MariaDB root password at build stage

I'm trying to create a Docker build/compose where I can securely set the root password for my MariaDB server at build/run time, rather than having to do it manually in the shell through docker exec. I want it to be a completely hands-off build.
I have tried multiple ways of getting this to work, including BuildKit secrets, but am trying to avoid using Swarm if possible. I read that it was possible to do with docker compose, so I have written a YAML for it; however, it does not seem to be working.
The compose seems to work fine; however, when I try to update my database from a dump (this exec is just for testing, so it's fine that it isn't hands-off) using this command:
docker exec -i my_db_containter mysql -uroot -pmypassword < dbserver/sqlconfig/db_dump.sql
I get this error:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
Can anyone spot what I might be doing wrong here, or perhaps suggest an alternative solution for setting the server root password in this manner?
--
EDIT: After doing some more reading, it seems like even this method isn't that secure, as it just mounts a read-only file in the container? Does anyone have any suggestions as to how I can automatically and securely set MYSQL_ROOT_PASSWORD, ideally without Swarm? (See also the sketch after the compose file below.) If Swarm really is the only option then I guess I can look into it.
--
Here is what I have so far:
docker-compose.yaml:
version: '3.9'
services:
  db:
    build:
      context: "./dbserver"
    container_name: 'my_db_container'
    environment:
      MYSQL_DATABASE: 'my_db'
      MYSQL_ROOT_PASSWORD: /run/secrets/dbrootpass
    networks:
      my_net:
        ipv4_address: 203.0.113.88
    secrets:
      - dbrootpass
networks:
  my_net:
    ipam:
      driver: default
      config:
        - subnet: "203.0.113.0/24"
secrets:
  dbrootpass:
    file: ./rootpass

rootpass:
mypassword
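One thing stands out (assuming the image built from ./dbserver is based on the official mysql or mariadb image): MYSQL_ROOT_PASSWORD is taken as the literal password, so the compose file above sets root's password to the string "/run/secrets/dbrootpass", which would explain the ERROR 1045. Those images also document a MYSQL_ROOT_PASSWORD_FILE variant that reads the value from a file, which pairs naturally with compose secrets. A minimal sketch of just the changed part:

services:
  db:
    build:
      context: "./dbserver"
    environment:
      MYSQL_DATABASE: 'my_db'
      # the entrypoint reads the root password from this secret file at startup
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/dbrootpass
    secrets:
      - dbrootpass
secrets:
  dbrootpass:
    file: ./rootpass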
Create secrets:
$ read -p "Enter variable for MARIADB_ROOT_PASSWORD : " token && echo -n "$token" | podman secret create "MARIADB_PASSWORD" -
Enter variable for MARIADB_ROOT_PASSWORD : whynot
7f3b681f9a05729ad5b6af9d5
$ podman run --secret=MARIADB_ROOT_PASSWORD,type=env --secret=MARIADB_PASSWORD,type=env --env MARIADB_USER=bob mariadb:10.5
Inside container:
$ podman exec -ti funny_cohen bash
root@2583f8620571:/# ps -ef
UID          PID    PPID  C STIME TTY          TIME CMD
mysql          1       0  0 03:31 ?        00:00:00 mysqld
root         143       0  0 03:32 pts/0    00:00:00 bash
root         146     143  0 03:32 pts/0    00:00:00 ps -ef
root@2583f8620571:/# printenv
GPG_KEYS=177F4010FE56CA3336300305F1656F24C74CD1D8
PWD=/
MARIADB_USER=bob
container=podman
HOME=/root
MARIADB_VERSION=1:10.5.10+maria~focal
GOSU_VERSION=1.12
TERM=xterm
MARIADB_MAJOR=10.5
SHLVL=1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
_=/usr/bin/printenv
The secret does show up in podman inspect funny_cohen, however.

With bash on Windows 10, why does it execute my $() as separate commands?

I'm learning how to work with Docker and Minikube on a Windows 10 Home computer. I've installed the needed software OK: Docker, minikube, kubectl, and a recent version of MySQL, properly pathed so its CLI can be used. I'm using either the Bash console provided by Git or the one provided by Cygwin. Both seem to produce the same (bad) results.
I start Docker, and install the MySQL service. The kubectl get all shows everything running OK.
Per the programming book I'm working through, I want to try accessing MySQL through this command:
mysql -h $(minikube service mysql-svc --format "{{.IP}}") -P $(minikube service mysql-svc --format "{{.Port}}") -u root -p
The result should be the MySQL CLI prompt, like mysql> . Instead I get this behavior:
A popup window stating "Windows cannot find '192.168.99.101'. Make sure you typed the name correctly, and then try again."
The console text:
The system cannot find the file 192.168.99.101.
*
X open url failed: 192.168.99.101: exit status 1
*
* minikube is exiting due to an error. If the above message is not useful, open an issue:
- (URL for sending an error message)
A popup window stating "Windows cannot find '31067'. Make sure you typed the name correctly, and then try again."
The console text:
mysql: [ERROR] Unknown suffix '|' used for variable 'port' (the value shown is the border fence that surrounds the output of minikube service mysql-svc)
mysql: [ERROR] (path to mysql.exe): Error while setting value '|-----|--- (etc)' to 'port'
The expected behavior is to insert an IP and Port into the mysql command line, then firing a command like mysql -h http://192.168.99.101 -P 31067 -u root -p .
I think the problem is with using the Bash console in a Windows environment. Any explanation is appreciated.
Thanks,
Jerome.
UPDATE ON 8/7/2020:
I'm asked to more thoroughly document my issue. Here we go.
Here is what Docker knows:
$ docker images
REPOSITORY                                                        TAG            IMAGE ID       CREATED         SIZE
logicaltiger/cloudnative-statelessness-posts                      latest         3a3c66daf7f3   5 days ago      139MB
logicaltiger/cloudnative-statelessness-connections                latest         d060e9857f49   5 days ago      139MB
logicaltiger/cloudnative-statelessness-connectionposts-stateful   latest         ce33f0966380   5 days ago      123MB
openjdk                                                           8-jdk-alpine   a3562aa0b991   15 months ago   105MB
mysql                                                             8.0.12         ee1e8adfcefb   22 months ago   484MB
Here is my reconfiguring of minikube. Other posters suggested that minikube runs poorly unless given plenty of resources.
$ minikube delete
* Deleting "minikube" in virtualbox ...
* Removed all traces of the "minikube" cluster.
$ minikube start --cpus=4 --memory=4096
* minikube v1.12.1 on Microsoft Windows 10 Home 10.0.18363 Build 18363
* Automatically selected the virtualbox driver
* Starting control plane node minikube in cluster minikube
* Creating virtualbox VM (CPUs=4, Memory=4096MB, Disk=20000MB) ...
* Found network options:
- NO_PROXY=192.168.99.100
- no_proxy=192.168.99.100
* Preparing Kubernetes v1.18.3 on Docker 19.03.12 ...
- env NO_PROXY=192.168.99.100
* Verifying Kubernetes components...
* Enabled addons: default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "minikube"
After starting mysql from its yaml file I have it running:
$ kubectl get all
NAME                         READY   STATUS    RESTARTS   AGE
pod/mysql-7dbfd4dbc4-b2tmm   1/1     Running   0          2m55s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP          6m43s
service/mysql-svc    NodePort    10.102.7.119   <none>        3306:32235/TCP   2m55s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mysql   1/1     1            1           2m55s

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/mysql-7dbfd4dbc4   1         1         1       2m55s
Now get the mysql-svc URL:
$ minikube service mysql-svc --url
http://192.168.99.102:32235
Try to run the book example. Again I get the two popup windows and what is shown below in the terminal. I omit the popup window text here...
$ mysql -h $(minikube service mysql-svc --format "{{.IP}}") -P $(minikube service mysql-svc --format "{{.Port}}") -u root -p
The system cannot find the file 192.168.99.102.
*
X open url failed: 192.168.99.102: exit status 1
*
* minikube is exiting due to an error. If the above message is not useful, open an issue:
  - https://github.com/kubernetes/minikube/issues/new/choose
The system cannot find the file 32235.
*
X open url failed: 32235: exit status 1
*
* minikube is exiting due to an error. If the above message is not useful, open an issue:
  - https://github.com/kubernetes/minikube/issues/new/choose
mysql: [ERROR] Unknown suffix '|' used for variable 'port' (value '|-----------|-----------|-------------|-------|')
mysql: [ERROR] C:\Program Files\MySQL\MySQL Server 8.0\bin\mysql.exe: Error while setting value '|-----------|-----------|-------------|-------|' to 'port'
I'm asked what happens if I put in the address directly. From above, that address was http://192.168.99.102:32235.
$ mysql -h http://192.168.99.102 -P 32235 -u root -p
Enter password: **********
ERROR 2005 (HY000): Unknown MySQL server host 'http://192.168.99.102' (0)
When directly entering the IP and port, the MySQL server IS reached (see the "Enter password:" prompt) but the request is refused. I'm thinking that I don't know how to make MySQL use the HTTP request.
But are the two problems, MySQL not knowing what to do with the address and the failure of the indirect method (minikube service mysql-svc ...), related?
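A note on ERROR 2005: the mysql client's -h option expects a bare hostname or IP, not a URL, and the password prompt appears before any connection attempt, so it doesn't by itself prove the server was reached. Dropping the scheme gets past that particular error:

mysql -h 192.168.99.102 -P 32235 -u root -p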
Continuing, I edit my cookbook-deployment-posts.yaml file with the MySQL address:
kind: Service
apiVersion: v1
metadata:
  name: posts-svc
spec:
  selector:
    app: posts
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 8080
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: posts
  labels:
    app: posts
spec:
  replicas: 1
  selector:
    matchLabels:
      app: posts
  template:
    metadata:
      labels:
        app: posts
    spec:
      containers:
      - name: posts
        image: cdavisafc/cloudnative-statelessness-posts
        env:
        - name: INSTANCE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: PORT
          value: "8080"
        - name: SPRING_APPLICATION_JSON
          value: '{"spring":{"datasource":{"url":"jdbc:mysql://192.168.99.102:32235/cookbook"}}}'
Going to kubectl get all, the posts-svc continually starts, errors out, and reboots. I don't know what is wrong...
Jerome.
I now see a number of things going wrong.
First, I keep thinking that the mysql call is somehow related to the MySQL installed on my PC. It never is. Just because I manually create a cookbook database on my PC doesn't mean that the textbook example running through Docker / Minikube ever references it.
Second, the textbook is missing the --url from its minikube requests. Here is what is happening.
> kubectl create -f mysql-deployment.yaml
> minikube service mysql-svc --url
http://192.168.99.102:31030
> minikube service mysql-svc
This opens the web browser to show the service at 192.168.99.102:31030. The mysql service doesn't render a web page, but that doesn't matter to this example. The console then shows the details of the service (namespace, name, target port, url) in an ASCII box.
minikube service mysql-svc --format "{{.IP}}"
This wants to open the web browser to show the service at http://192.168.99.102, with an implied port of 80. But there is nothing there, and Windows complains with a popup box. The console then complains about not opening that URL.
minikube service mysql-svc --format "{{.Port}}"
This wants to open the web browser to show the service at 31030, which isn't a valid URL. Complaints, complaints.
What I really wanted all along is to add the --url to the minikube bits:
mysql -h $(minikube service mysql-svc --format "{{.IP}}" --url) -P $(minikube service mysql-svc --format "{{.Port}}" --url) -u root -p
This connects to the managed mysql in the console, yielding the mysql> prompt. Now I can run 'create database cookbook;'.
Solved!

Adding Flyway to a MySQL Docker Container

I'm building a derivative of this Docker container for mysql (using it as a starting point): https://github.com/docker-library/mysql
I've amended the Dockerfile to add in Flyway. Everything is set up to edit the config file to connect to the local DB instance, etc. The intent is to call this command from inside the https://github.com/docker-library/mysql/blob/master/5.7/docker-entrypoint.sh file (which runs as the ENTRYPOINT), around line 186:
flyway migrate
I get a connection refused when this is run from inside the shell script:
Flyway 4.1.2 by Boxfuse
ERROR:
Unable to obtain Jdbc connection from DataSource
(jdbc:mysql://localhost:3306/db-name) for user 'root': Could not connect to address=(host=localhost)(port=3306)(type=master) : Connection refused
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL State : 08
Error Code : -1
Message : Could not connect to address=(host=localhost)(port=3306)(type=master) : Connection refused
But if I remove the command from the shell script, rebuild, log in to the container, and run the same command manually, it works with no problems.
I suspect that there may be some differences in how the script connects to the DB to do its thing (it has a built-in SQL "runner"), but I can't seem to hunt it down. The container restarts the server during the process, which may be the difference here.
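An educated guess, not verified against this exact script: during first-time initialization the official image's entrypoint starts a temporary mysqld that doesn't listen on TCP, which would explain why a JDBC connection to localhost:3306 is refused from inside the script yet works later from a shell. A hedged workaround sketch is to poll until the server accepts TCP connections before migrating:

#!/bin/bash
# wait until mysqld answers on TCP before running migrations
until mysqladmin ping -h 127.0.0.1 --silent; do
  sleep 2
done
cd /opt/flyway
flyway migrate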
Since this container is intended for development, one alternative (a work-around, really) is to use the container's built-in SQL "runner", using the filename format that Flyway expects, then use Flyway to manage the production DB's versions.
Thanks in advance for any help.
It's a good approach to start from a ready-made image.
You can base your image on the official "mysql" image:
FROM mysql
If you start from a finished image, then when you create a new version of your Docker image, Docker only has to build the difference.
Next you can install Java and net-tools:
RUN apt-get -y install apt-utils openjdk-8-jdk net-tools
Configure mysql:
ENV MYSQL_DATABASE=mydb
ENV MYSQL_ROOT_PASSWORD=root
Add flyway:
ADD flyway /opt/flyway
Add the migrations:
ADD sql /opt/flyway/sql
Add the flyway config:
ADD config /opt/flyway/conf
Add the startup script:
ADD start /root/start.sh
Check that mysql is listening:
RUN netstat -ntlp
Check the java version:
RUN java -version
Example file: /opt/flyway/conf/flyway.conf
flyway.driver=com.mysql.jdbc.Driver
flyway.url=jdbc:mysql://localhost:3306/mydb
flyway.user=root
flyway.password=root
Example file: start.sh
#!/bin/bash
cd /opt/flyway
flyway migrate
# may be changed to run production or development migrations
Flyway documentation
As a next step you could use flyway as a service, for example:
docker run -it -p 3307:3306 my_docker_flyway /root/start << migration_prod.sh
docker run -it -p 3308:3306 my_docker_flyway /root/start << migration_dev.sh
etc.
services:
  # Standard MySQL box; we have to add a few tricky settings, else logging in from Workbench is hard
  supermonk-mysql:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    environment:
      - MYSQL_ROOT_PASSWORD=P#ssw0rd
      - MYSQL_ROOT_HOST=%
      - MYSQL_DATABASE=test
    ports:
      - "3306:3306"
    healthcheck:
      test: ["CMD-SHELL", "nc -z 127.0.0.1 3306 || exit 1"]
      interval: 1m30s
      timeout: 60s
      retries: 6

  # Flyway is best for mysql schema migration history.
  supermonk-flyway:
    container_name: supermonk-flyway
    image: boxfuse/flyway
    command: -url=jdbc:mysql://supermonk-mysql:3306/test?verifyServerCertificate=false&useSSL=true -schemas=test -user=root -password=P#ssw0rd migrate
    volumes:
      - "./sql:/flyway/sql"
    depends_on:
      - supermonk-mysql
mkdir ./sql
vi ./sql/V1.1__Init.sql   # and paste in the below
CREATE TABLE IF NOT EXISTS test.USER (
  id VARCHAR(64),
  fname VARCHAR(256),
  lname VARCHAR(256),
  CONSTRAINT pk PRIMARY KEY (id));
Save and close, then:
docker-compose up -d
Wait for 2 minutes, then:
docker-compose run supermonk-flyway
Ref:
https://github.com/supermonk/webapp/tree/branch-1/docker/docker-database
Thanks to the docker community and mysql community.
To follow the logs:
docker-compose logs -f

unable to connect to dockerized mysql container locally

I am still a beginner with docker, trying to use docker to help with my development prototyping. My environment is a Mac using boot2docker; versions as below:
Client version: 1.3.1
Client API version: 1.15
Go version (client): go1.3.3
Git commit (client): 4e9bbfa
OS/Arch (client): darwin/amd64
Server version: 1.3.2
Server API version: 1.15
Go version (server): go1.3.3
Git commit (server): 39fa2fa
I ran the command as below:
docker run --name mymysql -e MYSQL_ROOT_PASSWORD=mypw -e MYSQL_DATABASE=bullshit -d mysql -p 3306:3306
docker start mymysql
I can see the process running as below:
CONTAINER ID   IMAGE     COMMAND                CREATED         STATUS         PORTS      NAMES
22d3f780c270   mysql:5   "/entrypoint.sh -p 3   2 minutes ago   Up 2 seconds   3306/tcp   mymysql
However I still could not connect to the mysql instance running in docker. I tried connecting to the IP retrieved by:
$ boot2docker ip
The VM's Host only interface IP address is: 192.168.59.103
Please give me a pointer on how to solve this issue. I went through the tutorial but I am not sure what went wrong.
The command you used should give an error. The syntax for docker run is as follows:
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
You have to submit the options to docker run before specifying the image used (mysql in your case) and, if needed, the command and possible argument(s) to that command.
Not specifying a command will run the image's default command.
Before running the container again, you should stop and remove the old one:
docker kill mymysql
docker rm mymysql
And, following your example, you should run:
docker run --name mymysql -e MYSQL_ROOT_PASSWORD=mypw -e MYSQL_DATABASE=bullshit -p 3306:3306 -d mysql
As you manually set a port mapping from the container's port 3306 to the same port of your boot2docker VM, you should be able to access MySQL using the IP of the boot2docker instance (typically 192.168.59.103), connecting to port 3306.
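So, from the Mac host, something like this should reach the containerized server:

mysql -h 192.168.59.103 -P 3306 -u root -p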