Using the Rancher GUI, I'm trying to set up Nextcloud with a MySQL database workload on my AKS cluster. I have already defined the admin user and password in the environment variables, so why do I get this error on the create-admin page?
Error while trying to create admin user: Failed to connect to the
database: An exception occurred in driver: SQLSTATE[HY000] [2054] The
server requested authentication method unknown to the client
I have entered the username and password correctly multiple times.
Below are my configurations for the database and Nextcloud so far.
database workload:
Name: nextdb
Docker image: mysql
port: not set
I have the following variables:
MYSQL_ROOT_PASSWORD=rootpassX
MYSQL_DATABASE=nextDB
MYSQL_USER=nextcloud
MYSQL_PASSWORD=passX
volumes configuration:
Volume Type: Bind-Mount
Volume Name: nextdb
Path on the Node : /nextdb
The Path on the Node must be: a directory or create.
Mount Point: /var/lib/mysql
nextcloud workload:
Name: nextcloud
Docker Image: nextcloud
Port Mapping:
Port Name : nextcloud80
Publish the container port: 80
Protocol: TCP
As a: Layer-4 load balancer
On listening port: 80
Environment variables:
MYSQL_DATABASE=nextDB
MYSQL_USER=nextcloud
MYSQL_PASSWORD=passX
MYSQL_HOST=nextdb
NEXTCLOUD_ADMIN_USER=admin
NEXTCLOUD_ADMIN_PASSWORD=adminPass
NEXTCLOUD_DATA_DIR=/var/www/html/nextcloud
Volumes:
Volume 1:
name: nextcloud
Volume Type: Bind-Mount
Path on the Node: /nextcloud
The Path on the Node must be: a directory or create.
Mount Point: /var/www/html
Volume 2
name: nextdb
Volume Type: Bind-Mount
Path on the Node: /nextdatabase
The Path on the Node must be: a directory or create.
Mount Point: /var/lib/mysql
What are the problems with my configurations?
Starting with version 8.0, MySQL changed the default authentication plugin for client connections (from mysql_native_password to caching_sha2_password), which many older clients do not support. To revert to the older authentication method, you need to explicitly specify mysql_native_password as the default.
If you are able to update your DB service in Rancher to pass the container argument --default-authentication-plugin=mysql_native_password, that should revert MySQL to the older auth method.
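For instance, if you were running the same image directly with docker run, the argument would go after the image name, where it is passed through to mysqld (a sketch; the names and passwords are the ones from the question's configuration):

docker run -d --name nextdb \
  -e MYSQL_ROOT_PASSWORD=rootpassX \
  -e MYSQL_DATABASE=nextDB \
  -e MYSQL_USER=nextcloud \
  -e MYSQL_PASSWORD=passX \
  mysql --default-authentication-plugin=mysql_native_password

In the Rancher workload form, this is typically entered in the container's command/arguments field.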
Alternatively, depending on the MySQL image you are using, you can build a new Docker image from that base which replaces /etc/mysql/my.cnf inside the container. Inspect /etc/mysql/my.cnf before replacing it: if there are any !includedir directives in the config file, you can instead place your supplemental configuration into one of the included folders, using whatever filename you choose.
The supplemental configuration should look like this:
[mysqld]
default_authentication_plugin=mysql_native_password
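For instance, the official mysql image loads any .cnf file from /etc/mysql/conf.d via such an !includedir directive, so a derived image only needs to copy the supplemental file in (a sketch; the file name native-auth.cnf is arbitrary):

FROM mysql
# Picked up through the !includedir /etc/mysql/conf.d directive of the base image
COPY native-auth.cnf /etc/mysql/conf.d/native-auth.cnf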
Related
I am using Kubernetes with Helm 3.
I need to create a Kubernetes pod with a MySQL database, with:
database name: my_database
user: root
password: 12345
port: 3306
The steps:
creating chart by:
helm create test
after the chart is created, edit the Chart.yaml file in the test folder, adding a dependencies section.
apiVersion: v2
name: test3
description: A Helm chart for Kubernetes
version: 0.1.0
appVersion: "1.16.0"
dependencies:
  - name: mysql
    version: 8.8.23
    repository: "https://charts.bitnami.com/bitnami"
run:
helm dependencies build test
After that there is a compressed .tgz file.
So I extracted it, and inside there is a tar file; I extracted that too, and kept only the final extracted folder.
I presume this isn't the best approach to changing parameters in the Bitnami chart's yaml (and likewise for using the security.yaml); I would like to know a better approach too.
I need to change the user and password and the link to the database, so I changed values.yaml directly (any better approach?) for the values auth.rootPassword and auth.database.
The next steps:
helm dependencies build test
helm install test ./test --namespace test --create-namespace
after that there are two pods created.
I could check it by:
kubectl get pods -n test
and I see two pods running (maybe replication).
one of the pods is test-mysql-0 (the other has a random suffix).
run:
kubectl exec --stdin --tty test-mysql-0 --namespace test -- /bin/sh
That entered the pod.
run:
mysql -uroot -p12345
and then:
show databases;
That showed all the databases, including the created database my_database, successfully.
When I tried opening the MySQL database from MySQL Workbench and testing the connection (same user root, same password, port 3306, host localhost), the Test Connection button in the database properties returned 'failed to connect to database'.
Why can't I connect properly from MySQL Workbench, while inside the pod itself there is no particular problem?
Is there any better approach than extracting the tgz file as I described above, and can I pass the user and password in a better way (some secured yaml)?
(Right now it is only the root password.)
Thanks.
It sounds like you're trying to set parameters in the dependent chart (please correct me if I'm wrong).
If so, all you need to do is add another section to your chart's values.yaml:
name-of-dependency:
  user-name: ABC
  password: abcdef
the "name-of-dependency" is specified in your Chart.yaml file when you declare your chart. For example, here's my redis dependency from one of my own charts
dependencies:
  - name: redis
    repository: https://charts.bitnami.com/bitnami/
    version: x.x.x
Then when I install the chart, I can override the redis chart's settings by doing this in my own chart's values.yaml
redis:
  architecture: standalone
  auth:
    password: "secret-password-here"
I'm learning how to work with Docker and Minikube on a Windows 10 Home computer. I've installed the needed software OK: Docker, minikube, kubectl, and a recent version of MySQL, properly pathed so its CLI can be used. I'm using either the Bash console provided by Git or the one provided by Cygwin; both seem to produce the same (bad) results.
I start Docker and install the MySQL service. kubectl get all shows everything running OK.
Per the programming book I'm working through, I want to try accessing MySQL through this command:
mysql -h $(minikube service mysql-svc --format "{{.IP}}") -P $(minikube service mysql-svc --format "{{.Port}}") -u root -p
The result should be the MySQL CLI prompt, like mysql>. Instead I get this behavior:
A popup window stating "Windows cannot find '192.168.99.101'. Make sure you typed the name correctly, and then try again."
The console text:
The system cannot find the file 192.168.99.101.
*
X open url failed: 192.168.99.101: exit status 1
*
* minikube is exiting due to an error. If the above message is not useful, open an issue:
- (URL for sending an error message)
A popup window stating "Windows cannot find '31067'. Make sure you typed the name correctly, and then try again."
The console text:
mysql: [ERROR] Unknown suffix '|' used for variable 'port' (the value is the ASCII border that surrounds the output of minikube service mysql-svc)
mysql: [ERROR] (path to mysql.exe): Error while setting value '|-----|--- (etc)' to 'port'
The expected behavior is to insert an IP and Port into the mysql command line, then firing a command like mysql -h http://192.168.99.101 -P 31067 -u root -p .
I think that the problem is with the using the Bash console in a Windows environment. Any explanation is appreciated.
Thanks,
Jerome.
UPDATE ON 8/7/2020:
I'm asked to more thoroughly document my issue. Here we go.
Here is what Docker knows:
$ docker images
REPOSITORY                                                        TAG            IMAGE ID       CREATED         SIZE
logicaltiger/cloudnative-statelessness-posts                      latest         3a3c66daf7f3   5 days ago      139MB
logicaltiger/cloudnative-statelessness-connections                latest         d060e9857f49   5 days ago      139MB
logicaltiger/cloudnative-statelessness-connectionposts-stateful   latest         ce33f0966380   5 days ago      123MB
openjdk                                                           8-jdk-alpine   a3562aa0b991   15 months ago   105MB
mysql                                                             8.0.12         ee1e8adfcefb   22 months ago   484MB
Here is my reconfiguring of minikube. Other posters suggested that minikube runs poorly unless given plenty of resources.
$ minikube delete
* Deleting "minikube" in virtualbox ...
* Removed all traces of the "minikube" cluster.
$ minikube start --cpus=4 --memory=4096
* minikube v1.12.1 on Microsoft Windows 10 Home 10.0.18363 Build 18363
* Automatically selected the virtualbox driver
* Starting control plane node minikube in cluster minikube
* Creating virtualbox VM (CPUs=4, Memory=4096MB, Disk=20000MB) ...
* Found network options:
- NO_PROXY=192.168.99.100
- no_proxy=192.168.99.100
* Preparing Kubernetes v1.18.3 on Docker 19.03.12 ...
- env NO_PROXY=192.168.99.100
* Verifying Kubernetes components...
* Enabled addons: default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "minikube"
After starting mysql from its yaml file I have it running:
$ kubectl get all
NAME                         READY   STATUS    RESTARTS   AGE
pod/mysql-7dbfd4dbc4-b2tmm   1/1     Running   0          2m55s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP          6m43s
service/mysql-svc    NodePort    10.102.7.119   <none>        3306:32235/TCP   2m55s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mysql   1/1     1            1           2m55s

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/mysql-7dbfd4dbc4   1         1         1       2m55s
Now get the mysql-svc URL:
$ minikube service mysql-svc --url
http://192.168.99.102:32235
Try to run the book example. Again I get the two popup windows and what is shown below in the terminal. I omit the popup window text here...
$ mysql -h $(minikube service mysql-svc --format "{{.IP}}") -P $(minikube service mysql-svc --format "{{.Port}}") -u root -p
The system cannot find the file 192.168.99.102.
*
X open url failed: 192.168.99.102: exit status 1
*
* minikube is exiting due to an error. If the above message is not useful, open an issue:
  - https://github.com/kubernetes/minikube/issues/new/choose
The system cannot find the file 32235.
*
X open url failed: 32235: exit status 1
*
* minikube is exiting due to an error. If the above message is not useful, open an issue:
  - https://github.com/kubernetes/minikube/issues/new/choose
mysql: [ERROR] Unknown suffix '|' used for variable 'port' (value '|-----------|-----------|-------------|-------|')
mysql: [ERROR] C:\Program Files\MySQL\MySQL Server 8.0\bin\mysql.exe: Error while setting value '|-----------|-----------|-------------|-------|' to 'port'
I'm asked what happens if I put in the IP directly. From above, that URL was http://192.168.99.102:32235
$ mysql -h http://192.168.99.102 -P 32235 -u root -p
Enter password: **********
ERROR 2005 (HY000): Unknown MySQL server host 'http://192.168.99.102' (0)
When directly entering the IP and Port, the MySQL server IS reached (see the "Enter password:" prompt) but the request is refused. I'm thinking that I don't know how to make MySQL use the HTTP request.
But are the MySQL error about the URL and the failure of the indirect method (minikube service mysql-svc ...) related?
Continuing, I edit my cookbook-deployment-posts.yaml file with the MySQL address:
kind: Service
apiVersion: v1
metadata:
  name: posts-svc
spec:
  selector:
    app: posts
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 8080
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: posts
  labels:
    app: posts
spec:
  replicas: 1
  selector:
    matchLabels:
      app: posts
  template:
    metadata:
      labels:
        app: posts
    spec:
      containers:
        - name: posts
          image: cdavisafc/cloudnative-statelessness-posts
          env:
            - name: INSTANCE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: PORT
              value: "8080"
            - name: SPRING_APPLICATION_JSON
              value: '{"spring":{"datasource":{"url":"jdbc:mysql://192.168.99.102:32235/cookbook"}}}'
Going to kubectl get all, the posts-svc continually starts, errors out and reboots. Don't know what is wrong...
Jerome.
I now see a number of things that went wrong.
First, I kept thinking that the mysql call was somehow related to the MySQL installed on my PC. It never is. Just because I manually created a cookbook database on my PC instance doesn't mean that the textbook example running through Docker / Minikube ever references it.
Second, the textbook is missing the --url flag on its minikube commands. Here is what is happening.
> kubectl create -f mysql-deployment.yaml
> minikube service mysql-svc --url
http://192.168.99.102:31030
> minikube service mysql-svc
This opens the web browser to show the service at 192.168.99.102:31030. The mysql service doesn't render a web page, but that doesn't matter to this example. The console then shows the details of the service (namespace, name, target port, url) in an ASCII box.
minikube service mysql-svc --format "{{.IP}}"
This wants to open the web browser to show the service at http://192.168.99.102, with an implied port of 80. But there is nothing there, and Windows complains at a popup box. The console then complains about not opening that url.
minikube service mysql-svc --format "{{.Port}}"
This wants to open the web browser to show the service at 31030, which isn't a valid URL. Complaints, complaints.
What I really wanted all along is to add the --url to the minikube bits:
mysql -h $(minikube service mysql-svc --format "{{.IP}}" --url) -P $(minikube service mysql-svc --format "{{.Port}}" --url) -u root -p
This connects to the managed mysql in the console, yielding the mysql> prompt. Now I can run 'create database cookbook;'.
Solved!
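As an aside, since minikube service mysql-svc --url prints a single http://host:port line, the same thing can be done with plain shell parameter expansion (a sketch; the variable names are mine):

URL=$(minikube service mysql-svc --url)    # e.g. http://192.168.99.102:31030
HOSTPORT=${URL#http://}                    # strip the scheme
mysql -h "${HOSTPORT%%:*}" -P "${HOSTPORT##*:}" -u root -p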
My new responsibility is porting our project to Docker. This means local code on each developer machine with test data on a staging server. At the moment the code lives on the same server as the database and thus uses localhost (127.0.0.1) to connect to it. The Docker image currently deploys and can run unit tests, which succeed in cases where no DB is required.
I've tried using the answers provided here: https://github.com/phpmyadmin/docker/issues/99
which failed at the time; a variety of different attempts eventually led to trying to create SSH tunnels from inside the container (How do I complete this SSH tunnel from local development docker to staging database). I've returned to trying to use the service, as the other options seem even more complicated or unreliable.
I've returned to using the kingsquare image that allows tunnelling, but I don't know what ${SSH_AUTH_SOCK} is or how to use it. I've tried pointing it at an SSH key, but that (probably obviously) fails.
I've included the whole docker-compose.yml, as an earlier mistake that I had not noticed was not including a network reference for my existing docker service (app).
version: '3'
services:
  tunnels:
    image: kingsquare/tunnel
    volumes:
      - '${SSH_AUTH_SOCK}:/ssh-agent'
    command: '*:3306:localhost:3306 -vvv user@[myserver -> the IP of the machine hosting the DB?] -i /.ssh/openssh_ironman_justin -p 2302'
    networks:
      mynetwork:
        aliases:
          - remoteserver
  app:
    build:
      context: .
      dockerfile: .docker/Dockerfile
      args:
        APP_PATH: ${APP_PATH}
    image: laravel-docker
    env_file: .env
    ports:
      - 8080:80
      # We need to expose 443 port for SSL certification.
      - "443:443"
    volumes:
      - .:/var/www/jumbledown
    networks:
      - mynetwork
networks:
  mynetwork:
    driver: bridge
In the .env file, every developer has the following, which I need to change once the SSH tunnel is completed so that it uses the tunnel-DB combination:
DB_HOST=127.0.0.1 # As per answer, this will change to the IP address of the server containing the database. I'll leave the current localhost reference rather than displaying the IP address of the machine.
DB_PORT=3306
DB_DATABASE=[central database or sharded version for testing data changes]
DB_USERNAME=[username]
DB_PASSWORD=[password]
I'd like to be be able to get the code in the app container able to use the database on the remote server, with as little post-deployment complication as possible.
Update
I resolved a port issue.
Update 2.5
If I use
command: '*:3306:localhost:3306 -vvv [username]@[IP of DB host] -i [location on my PC of key file]/openssh_dev -p 2302'
then it does establish a connection, but it gets turned down with:
tunnels_1 | debug1: Trying private key: /.ssh/openssh_ironman_justin
tunnels_1 | ###########################################################
tunnels_1 | # WARNING: UNPROTECTED PRIVATE KEY FILE! #
tunnels_1 | ###########################################################
tunnels_1 | Permissions 0755 for '/.ssh/openssh_dev ' are too open.
tunnels_1 | It is required that your private key files are NOT accessible by others.
tunnels_1 | This private key will be ignored.
But how do I change the permissions of a mounted file? Can it be done via Dockerfile, or must it already be present before that starts?
But how do I change the permissions of a mounted file? Can it be done
via Dockerfile, or must it already be present before that starts?
The Dockerfile is used to create the image. A container based on that image mounts the directory from your host machine and keeps the same host permissions.
You can change the permissions of the file on your host; Docker will use the same permissions inside the container.
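So the permissions are fixed on the host, before the container starts; a minimal sketch (the key path is the placeholder from your compose command):

# SSH refuses private keys readable by group/others, so tighten them on the host
chmod 600 [location on my PC of key file]/openssh_dev
docker-compose up -d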
For your Docker container, 127.0.0.1 is its own localhost. To make a service on the host reachable from the container, it must listen on 0.0.0.0 rather than only on 127.0.0.1. On the other hand, if you want to connect to a remote host, then it'll be your-host-ip-or-domain.com.
Summary of the question: how can we make the FIWARE IdM Keyrock and the FIWARE Authzforce set up the AZF domains properly, i.e. without getting an "AZF domain not created for application XYZ" response?
I'm trying to configure a server with FIWARE Orion, FIWARE PepProxy Wilma, FIWARE IdM Keyrock, FIWARE Authzforce properly.
I arrived at the point where the first three components work properly and interact with each other, but now I'm trying to add authorization and I get the following error:
AZF domain not created for application.
I've already tried all the solutions presented at the following links, but none of them works:
https://fiware-pep-proxy.readthedocs.io/en/latest/user_guide/#level-2-basic-authorization
https://www.youtube.com/watch?v=coxFQEY0_So
How to configure the Fiware PEP WILMA proxy to use a Keyrock and Orion instance on my own servers
Fiware IDM+AuthZForce+PEP-Proxy-Wilma
Fiware - how to connect PEP proxy to Orion and configure both with HTTPS?
Fiware AuthZForce error: "AZF domain not created for application"
AuthZForce Security Level 2: Basic Authorization error "AZF domain not created for application"
https://www.slideshare.net/daltoncezane/integrating-fiware-orion-keyrock-and-wilma
“AZF domain not created for application” AuthZforce
Fiware AuthZForce error: "AZF domain not created for application"
Fiware suitable Components
https://www.slideshare.net/FI-WARE/adding-identity-management-and-access-control-to-your-app-70523086
The official documentation is not usable because it refers to a (maybe) old Python version of the IdM.
In the following you can find the instructions to reproduce my scenario:
Install Orion by using the Docker container
Create a directory on your system on which to work (for example, /home/fiware-orion-docker).
Create a new file called docker-compose.yml inside your directory with the following contents:
mongo:
  image: mongo:3.4
  command: --nojournal
orion:
  image: fiware/orion
  links:
    - mongo
  ports:
    - "1026:1026"
  command: -dbhost mongo -logLevel DEBUG
  dns:
    - 208.67.222.222
    - 208.67.220.220
PAY ATTENTION: without the DNS entries it will never send notifications!!!
PAY ATTENTION 2 (source): connections from Docker containers get routed into the (iptables) FORWARD chain, and this needs to be configured to allow connections through it. The default is to DROP the connections, so if you use a firewall you have to change it:
sudo nano /etc/default/ufw
Set DEFAULT_FORWARD_POLICY to "ACCEPT".
DEFAULT_FORWARD_POLICY="ACCEPT"
Save the file.
Reload ufw
sudo ufw reload
Within the directory you created, type the following command in the command line: sudo docker-compose up -d.
After a few seconds you should have your Context Broker running and listening on port 1026.
Check that everything works with
curl localhost:1026/version
Install FIWARE IdM Keyrock (used for authentication over the Orion Context Broker):
https://github.com/ging/fiware-idm
WARNING -1: if the next command doesn't work, first add the Docker repository:
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu artful stable"
WARNING 0: if you have a firewall, DISABLE IT, otherwise docker-compose will not work
sudo apt-get install docker-compose
mkdir fiware-idm
cd fiware-idm
create docker-compose.yml
nano docker-compose.yml
version: "3.5"
services:
keyrock:
image: fiware/idm:7.6.0
container_name: fiware-keyrock
hostname: keyrock
networks:
default:
ipv4_address: 172.18.1.5
depends_on:
- mysql-db
ports:
- "3000:3000"
environment:
- DEBUG=idm:*
- IDM_DB_HOST=mysql-db
- IDM_HOST=http://localhost:3000
- IDM_PORT=3000
# Development use only
# Use Docker Secrets for Sensitive Data
- IDM_DB_PASS=secret
- IDM_DB_USER=root
- IDM_ADMIN_USER=admin
- IDM_ADMIN_EMAIL=admin#test.com
- IDM_ADMIN_PASS=1234
mysql-db:
restart: always
image: mysql:5.7
hostname: mysql-db
container_name: db-mysql
expose:
- "3306"
ports:
- "3306:3306"
networks:
default:
ipv4_address: 172.18.1.6
environment:
# Development use only
# Use Docker Secrets for Sensitive Data
- "MYSQL_ROOT_PASSWORD=secret"
- "MYSQL_ROOT_HOST=172.18.1.5"
volumes:
- mysql-db:/var/lib/mysql
networks:
default:
ipam:
config:
- subnet: 172.18.1.0/24
volumes:
mysql-db: ~
sudo docker-compose up -d (this will automatically download the two images and run the IdM Keyrock service; -d runs it in the background)
Now you should be able to access the Identity Management tool through the website http://localhost:3000
username: admin@test.com
password: 1234
Register a new user and enable it through the interface
Then use the GUI to:
Create an "Organization" (e.g., ORGANIZ1)
Create an "application"
Step 1:
Name: Orion Idm
Description: Orion Idm
URL: http://localhost
Callback URL: http://localhost
Grant Type: Authorization Code, Implicit, Resource Owner Password, Client Credentials, Refresh Token
Provider: newuser
Step 2: leave empty
Step 3: choose "Provider"
Step 4:
click on "OAuth2 Credentials" and take notes of "Client ID" (94480bc9-43e8-4c15-ad45-0bb227e42e63) and "Client Secret" (4f6ye5y7-b90d-473a-3rr7-ea2f6dd43246)
Click on "PEP Proxy" and then on "Register a new PEP Proxy"
take notes of "Application Id" (94480bc9-43e8-4c15-ad45-0bb227e42e63), "Pep Proxy Username" (pep_proxy_dad356d2-dasa-4f95-a9hf-9ab06tccf929), and "Pep Proxy Password" (pep_proxy_a33667ec-57y1-498k-85aa-ef77ue5f6234)
Click on "Authorize" (Users) and authorize all the existing users with both roles (Purchaser and Provider for all the options)
Click on "Authorize" (Organizations) and authorize all the existing organizations with both roles (Purchaser and Provider for all the options)
Install the FIWARE Authzforce
sudo docker pull authzforce/server:latest (latest was 8.1.0 at the time of writing)
sudo docker run -d -p 8085:8080 --name authzforce_server authzforce/server
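As a quick sanity check that the server is up, you can list the existing domains (this assumes the default authzforce-ce context path of the AuthZForce CE server):

curl http://localhost:8085/authzforce-ce/domains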
Install the FIWARE PEP Proxy Wilma (used to enable https and authentication for Orion):
git clone https://github.com/ging/fiware-pep-proxy.git
cd fiware-pep-proxy
cp config.js.template config.js
nano config.js
var config = {};

// Used only if https is disabled
config.pep_port = 5056;

config.https = undefined;

config.idm = {
  host: 'localhost',
  port: 3000,
  ssl: false
};

config.app = {
  host: 'localhost',
  port: '1026',
  ssl: false // Use true if the app server listens in https
};

config.response_type = 'code';

// Credentials obtained when registering PEP Proxy in app_id in Account Portal
config.pep = {
  app_id: '91180bc9-43e8-4c14-ad45-0bb117e42e63',
  username: 'pep_proxy_dad356d2-dasa-4f95-a9hf-9ab06tccf929',
  password: 'pep_proxy_a33667ec-57y1-498k-85aa-ef77ue5f6234',
  trusted_apps: []
};

config.authorization = {
  enabled: true,
  pdp: 'authzforce', // idm|authzforce
  azf: {
    protocol: 'http',
    host: 'localhost',
    port: 8085,
    custom_policy: undefined // use undefined to default to policy checks (HTTP verb + path)
  }
};

// in seconds
config.cache_time = 300;

// list of paths that will not check authentication/authorization
// example: ['/public/*', '/static/css/']
config.public_paths = [];
config.magic_key = undefined;

module.exports = config;
install all the dependencies
npm install
run the proxy
sudo node server
Create a user role:
Reconnect to the IdM http://localhost:3000:
click on your application
click on Manage rules at the top of the page
click on the + button near Roles
Name: "trial"
Save
click on the + button near Permission
Permission Name: trial1
Description: trial1
HTTP action: GET
Resource: version
Save
come back to the application
Click on "Authorize" near "Authorized users"
Assign the "trial" role to your user
Now use PostMan to get a token:
connect to localhost:3000/oauth2/token and send the following parameters
Body:
username:
password:
grant_type: password
Header:
Content-Type: application/x-www-form-urlencoded
Authorization: BASIC
take note of the obtained access_token
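For reference, the same token request as a curl command (a sketch; the Basic value is the base64 encoding of "Client ID:Client Secret" from the OAuth2 Credentials step, and the user credentials are placeholders):

curl -X POST http://localhost:3000/oauth2/token \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -H "Authorization: Basic <base64 of client_id:client_secret>" \
  -d "grant_type=password&username=<user>&password=<password>"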
Try to connect to Orion through http://localhost:5056/version with the following parameters:
Header:
X-auth-token:
You will obtain the following response:
AZF domain not created for application 91180bc9-43e8-4c14-ad45-0bb117e42e63
You appear to have a timing issue with your local set-up. More specifically, it appears that docker-compose on your machine is not waiting for Keyrock to be available before the PEP Proxy times out.
There are multiple strategies for dealing with such issues, such as adding a wait in the start-up entrypoint, adding a restart policy within the docker-compose file, amending the infrastructure, or using some third-party script. A good list of strategies can be found in the answer here.
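A minimal sketch of the restart-policy option, assuming the PEP proxy is run from the same docker-compose file as Keyrock (the service name and image are illustrative):

pep-proxy:
  image: fiware/pep-proxy
  # depends_on only orders startup; the restart policy retries until Keyrock answers
  restart: on-failure
  depends_on:
    - keyrock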
I am working with PhpStorm 2018.3.4, Docker, MySQL and Ubuntu.
I tried unsuccessfully to configure MySQL with the Docker container network_mysql.
First, I tried one configuration, which gave me an error; then I tried another, which gave a different one. (The configurations and errors were screenshots, which are not included here.)
Am I missing something? Is there another place where I must configure something?
The docker ps and docker network ls output were also screenshots (not included here).
For the command docker inspect network_mysql, here is a link to the description:
https://pastebin.com/9LmeAkc8
Here is a docker-compose.yml configuration :
https://pastebin.com/DB4Eye4y
I tried to put - "3306:3306" in addition to the wex_server_proxy section with no avail.
The file to modify was this one :
https://pastebin.com/TPBQNCDZ
I added the ports section, opening port 3306, and then it worked.
Solution
I notice that you are not mapping the mysql container port out to the host. If you did, you would see this in the docker ps output:
... 0.0.0.0:3306->3306/tcp network_mysql
The container network_mysql is attached to a bridge-type network called tmp_wex_net. This means that the container is not accessible from the host by its container name.
It appears that you are using a docker-compose.yml definition for the stack. In order to be able to access the container from the host, you need to use the ports section of your compose definition for this container:
services:
  mysql:
    ...
    ports:
      - "3306:3306"
    ...
If you are starting it with docker run, then you can accomplish the same thing with:
docker run -p 3306:3306 --name network_mysql --network="tmp_wex_net" -d mysql
And then use localhost as the host name in your connection settings in PhpStorm, like this:
Host: localhost
Port: 3306
Database: network
The problem
The reason that you are not able to connect is that the host name network_mysql, which you specify in the connection settings, does not resolve to any host your machine knows of.
The container name of a Docker container is not a DNS name that the Docker host can resolve.
If you have not specified any network for your mysql container, then it is connected to the default bridge network. And if you have created a new network without specifying the type, it will also default to the bridge driver.
In order to access the container from the host, you need to either:
Connect the container to the host network
Or, from a container on a bridge network, map the port to the host as suggested in the solution above. You can then address the mapped port on that container with localhost:<portnum> from the host machine.
For everyone who has set up mysql manually in a Docker image:
chances are you must configure mysql to accept incoming connections also on the network interface which Docker creates/uses to communicate with the host (along with port forwarding).
In my case, I added the following to the end of /etc/mysql/my.cnf:
[mysqld]
bind-address = 0.0.0.0
With that, mysql listens on all network interfaces.
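A restart of the server is needed for the new bind-address to take effect; inside the container that is typically something like (the exact command depends on the image's distribution):

service mysql restart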
My solution:
I forwarded ports from localhost to remote with ssh -R 3306:localhost:3306 root@remote_host_ip and the connection was successful.