I am trying to use my Scala-Akka application with my MySQL database in two separate Docker containers. I found out that Docker allows developers to link their application to their database with the --link flag. In the Dockerfiles I used to create my images, I have added EXPOSE 3306 8080.
And this is how I run the containers:
docker run -d -p 3306:3306 --name mysql centos6mysql
docker run -d -p 8080:8080 --name scalaapp --link mysql:db centos6scala
After running the containers, I used docker ps and I am able to see the active containers. However, it seems like the application container is not using the database from the MySQL container. Does anyone know what's wrong?
Linking in Docker allows network connections to be made between containers. Docker defines environment variables in the linked container for the URL, IP, port, and protocol. Their names are based on the alias you give the linked container. For instance, linking a database container with the alias db might produce:
DB_NAME=/web2/db
DB_PORT=tcp://172.17.0.5:5432
DB_PORT_5432_TCP=tcp://172.17.0.5:5432
DB_PORT_5432_TCP_PROTO=tcp
DB_PORT_5432_TCP_PORT=5432
DB_PORT_5432_TCP_ADDR=172.17.0.5
You can use these environment variables to set up your Akka app container to connect to your DB container. However, you must configure the app container to do so yourself; Docker will not make the connection for you automatically.
So, somewhere in your app, you will need to pass these values to your startup script, something that might look like:
./restcore -Ddb.default.db="jdbc:mysql://${DB_PORT_3306_TCP_ADDR}:${DB_PORT_3306_TCP_PORT}"
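To check which variables the link actually created, you can list the environment inside the running app container; this is a quick sketch using the container name from your docker run command:
# prints the DB_* variables injected by --link mysql:db
docker exec scalaapp env | grep DB_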
This might sound like a repeated question, but it is not; it feels like a strange bug. Let me quickly explain my setup:
A simple Spring Boot application that runs fine locally; the JDBC connection string in the application.properties file is as follows.
spring.datasource.url=jdbc:mysql://minesql:3306/datamachine?serverTimezone=UTC
spring.datasource.username=root
spring.datasource.password=****
The running Docker instances are:
I copied the WAR file (with the help of the docker cp command) to the Alpine container (named unix) and am running it in interactive mode to test; it throws an exception because it is unable to ping the MySQL server. I am certain that the database configurations are fine and am clueless why the Spring Boot app is failing to connect to the MySQL container instance. Note that the MySQL container does have the "datamachine" database, created manually.
This is the error reported:
Please help me understand what I am missing here or what is going wrong.
Just in case you wish to know, this is how I started these containers.
For mysql:
docker run -d --name minesql -e MYSQL_ROOT_PASSWORD=**** -p 3306:3306 mysql
I run the Java app from the Alpine container, and this is how I start Alpine:
docker run -it --name unix alpine
Interactive mode gives me a shell prompt to run the Spring Boot WAR file (I run the WAR after installing Java 8 in Alpine).
You have two Docker containers which are running and connected via the default bridge network. From the Docker bridge documentation:
Containers on the default bridge network can only access each other by
IP addresses, unless you use the --link option, which is considered
legacy. On a user-defined bridge network, containers can resolve each
other by name or alias.
If you need the second container to be able to resolve the name minesql from the inside, you need to create a user-defined bridge network and connect the containers to it.
Create a new network using
docker network create my-net
And connect your containers to it as described in the Docker documentation; a sketch is shown below.
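For example, using the container names from your question (the MySQL password placeholder is copied from your command):
# attach the already-running containers to the new network
docker network connect my-net minesql
docker network connect my-net unix

# or (after removing the old containers) start them directly on that network
docker run -d --name minesql --network my-net -e MYSQL_ROOT_PASSWORD=**** -p 3306:3306 mysql
docker run -it --name unix --network my-net alpine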
The other alternative is to use docker-compose and avoid the manual creation of bridge networks for name resolution. For a production environment, that would be ideal.
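A minimal docker-compose.yml sketch for this setup (the app service is only illustrative; in practice you would build an image containing Java 8 and your WAR, and the MySQL settings mirror the docker run command above):
version: "3"
services:
  minesql:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: "****"
  app:
    # placeholder image; replace with an image that contains Java 8 and the WAR
    image: my-springboot-app
    depends_on:
      - minesql
Compose puts both services on a shared default network, so the app can reach the database with the hostname minesql and the existing spring.datasource.url works unchanged.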
I know there have been many similar questions, but none of them are what I want. I'm following this because I specifically need 5.5, at least for now. My java project (which accesses mysql) is in a container I built with
docker build -t projectname-testing .
The Dockerfile is pretty standard: it just copies over a built tarball and extracts it to a specific folder. The CMD is a shell script, run_dev_server.sh, that just launches the server with dev configurations rather than production ones.
I created a percona docker container with the command given in the link with
docker run --name projectname-mysql-server -e MYSQL_ROOT_PASSWORD="" -d percona:5.5
So now, the way I see it, I just need to link the two as mentioned in the link:
docker run -p 3306:3306 --name projectname-local --link projectname-mysql-server projectname-testing
Which gives me
docker: Error response from daemon: Cannot link to a non running container: /projectname-mysql-server AS /projectname-local/projectname-mysql-server.
ERRO[0000] error getting events from daemon: net/http: request canceled
Which isn't very helpful and doesn't tell me what happened. Am I understanding this process wrong? What should I be doing?
First of all, I would recommend using the official Percona docker image from Docker Hub, instead of building your own image. The official image has a 5.5 version; https://hub.docker.com/_/percona/
You can extend this image if you need specific changes (such as a custom configuration), for example;
FROM percona:5.5
COPY my-config.cnf /etc/mysql/conf.d/
Important: I notice you are publishing port 3306 (-p 3306:3306). Publishing a port makes it publicly accessible on the host's network interface. You should only do this if you have external software that needs to connect to the database. If only your application needs access to the database, publishing the port is not needed, because containers can connect with each other through the Docker container-to-container network, which is "private" and not reachable from outside the host.
The --link option on the default network is a legacy option that is still around for backward compatibility, but should not be used for most situations. The --link option has a number of limitations;
legacy links are not dynamic; it's not possible to replace a linked container without re-creating all containers linked to that container
restarting a linked container can break the link, with no option to re-establish a link
legacy links are uni-directional
environment variables are shared between containers, which can easily lead to leaking credentials (for example) to other containers.
Docker 1.9 introduced custom docker networks, which allow containers to discover and connect to each other by name on the same network.
A simple example;
create a network for your application;
docker network create mynet
create a database container, and attach it to the network; there is no need to publish its ports for other containers to connect to it. (I'm using an nginx image here, just to illustrate the concept);
docker run -d --name db --network mynet nginx:alpine
create an "application" container and attach it to the same network; doing so
allows it to communicate with the db container over that network;
docker run -dit --name app --network mynet alpine sh
The application container can now connect to the db container, using its name as hostname (db); to illustrate this, open a shell in the app container, install curl and connect to http://db:80;
docker exec -it app sh
/ # apk add --no-cache curl
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/community/x86_64/APKINDEX.tar.gz
(1/4) Installing ca-certificates (20161130-r1)
(2/4) Installing libssh2 (1.7.0-r2)
(3/4) Installing libcurl (7.52.1-r3)
(4/4) Installing curl (7.52.1-r3)
Executing busybox-1.25.1-r0.trigger
Executing ca-certificates-20161130-r1.trigger
OK: 5 MiB in 15 packages
/ # curl http://db:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
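Applied to your own setup, an equivalent sketch would be (reusing the container and image names from your question; mynet is an assumed network name):
docker network create mynet
docker run -d --name projectname-mysql-server --network mynet -e MYSQL_ROOT_PASSWORD="" percona:5.5
docker run -d --name projectname-local --network mynet projectname-testing
Your application can then reach the database at the hostname projectname-mysql-server on port 3306, without publishing 3306 on the host.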
You can read more about networks (also how to dynamically attach and detach a container from a network) in the ["docker container networking" section of the documentation](https://docs.docker.com/engine/userguide/networking/)
I'm trying to create a mini-demo with Docker using MySQL and phpMyAdmin, and I'm trying to make the two Docker containers communicate with each other without using the --link flag, since this has been flagged as "legacy" by Docker (https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/#/connect-with-the-linking-system).
I managed to do this with docker-compose using the networks section, but I want to implement the same scenario using plain Dockerfiles and running the two containers from the command prompt.
Here are the two dockerfiles I created:
Dockerfile for mysql
FROM mysql:5.7
ENV MYSQL_ROOT_PASSWORD=12345678
ENV MYSQL_DATABASE=mysql
ENV MYSQL_USER=user
ENV MYSQL_PASSWORD=12345678
Dockerfile for pma
FROM phpmyadmin/phpmyadmin:4.6
ENV PMA_HOST=mysql
ENV MYSQL_ROOT_PASSWORD=12345678
The Docker images are created correctly using docker build, and these are the commands that I use to run the two containers:
mysql:
docker run -d --name mysql sebastian/db-mysql
pma:
docker run -d -p 7777:80 --name pma sebastian/db-pma
When I try to connect to phpMyAdmin using username root and password 12345678, I get the following error:
mysqli_real_connect(): (HY000/2005): Unknown MySQL server host 'mysql' (-2)
I'm sure I'm missing something when spinning up the two containers, and I cannot fully understand how the two containers are supposed to communicate and/or how pma will find the host mysql (the name I defined when running the MySQL container).
Is Docker supposed to allow communication between the two containers?
How should containers find each other by name rather than by IP address?
P.S. I'm using Docker Toolbox on Windows 10 (maybe that is the real problem :D)
The problem:
You are not specifying any network in your docker run commands, so your containers will use the default bridge. The default bridge does not give you internal DNS, but containers on that network can communicate via IP addresses.
Follow these steps:
First create a user-defined network:
docker network create <yournetworkname>
Now run containers using the network we just created:
docker run -d --name mysql --network <yournetworkname> sebastian/db-mysql
docker run -d -p 7777:80 --name pma --network <yournetworkname> sebastian/db-pma
User-defined networks provide connectivity and internal DNS to the containers on the same network by default. For example, you can ping mysql from pma with:
ping mysql
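To verify that both containers joined the same network, you can inspect it (using the placeholder network name from above); the Containers section of the output lists every attached container:
# lists the network's settings and the containers attached to it
docker network inspect <yournetworkname>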
I have set up a Docker container with MySQL that exposes 3306.
I've specified a database user and password, created a test DB, and granted the privileges to the new user.
From another container I want to access this DB.
So I set up a new container with a simple PHP script that creates a new table in this DB.
I know that the MySQL container's IP is 172.17.0.2, so:
$mysqli = new mysqli("172.17.0.2", "mattia", "prova", "prova");
Then, using mysqli, I create the new table and everything works fine.
But I think that connecting to a container using its IP address is not good.
Is there another way to specify the DB host? I tried the hostname of the MySQL container but it doesn't work.
The --link flag is considered a legacy feature; you should use user-defined networks instead.
You can run both containers on the same network:
docker run -d --name php_container --network my_network my_php_image
docker run -d --name mysql_container --network my_network my_mysql_image
Every container on that network will be able to communicate with the others, using the container name as the hostname.
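Note that the network has to exist before the containers can join it; create it once (my_network matches the name used above):
docker network create my_network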
You need to link your Docker containers together with the --link flag in the docker run command, or by using the links feature in docker-compose. For instance:
docker run -d --name app-container-name --link mysql-container-name app-image-name
This way, Docker will add the IP address of the MySQL container to the /etc/hosts file of your application container.
For a complete guide, refer to:
MySQL Docker Containers: Understanding the basics
In your docker-compose.yml file, add a links property to your webserver service:
https://docs.docker.com/compose/networking/#links
Then, in your connection, the host parameter's value is your database service name:
$mysqli = new mysqli("database", "mattia", "prova", "prova");
If you are using docker-compose, then the database will be accessible under the service name.
version: "3.9"
services:
web:
build: .
ports:
- "8000:8000"
db:
image: postgres
ports:
- "8001:5432"
Then the database is accessible using: postgres://db:5432.
Here the service name also acts as the hostname on the internal network.
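One common pattern is to hand the connection string to the web service through an environment variable; the variable name and database name below are illustrative:
services:
  web:
    build: .
    environment:
      DATABASE_URL: postgres://db:5432/mydb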
Quote from docker docs:
When you run docker-compose up, the following happens:
A network called myapp_default is created.
A container is created using web’s configuration. It joins the network myapp_default under the name web.
A container is created using db’s configuration. It joins the network myapp_default under the name db.
Source:
https://docs.docker.com/compose/networking/
My web application needs both a MySQL and a Redis server to function properly. I am able to link the MySQL container to the app using the --link flag (mysql is the name of the MySQL container, set using the --name flag):
sudo docker run --link mysql:amq -d -p 13310 hitesh/image node app
Now I am not sure how to attach Redis to this container. Should it be done via the same MySQL image (if yes, how would the two ports, i.e. 3306 and 6379, be exposed?), or should I make another container for Redis and link it to my Node.js app (I'm not sure whether that is possible)?
You should have 3 containers
your app
your mysql
your redis
then expose your mysql port and redis port on the relevant containers.
Then, when you run your app container, just link both the mysql and redis containers to your app, so something like:
sudo docker run -d --link mysql:mysql --link redis:redis ....
Now your app container will have environment variables for your other two databases
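Put together, a sketch might look like this (the container and image names are taken from, or assumed to match, the question):
# start the two data stores first, giving them the names used in the links
# (the MYSQL_ROOT_PASSWORD value here is only illustrative)
sudo docker run -d --name mysql -e MYSQL_ROOT_PASSWORD=secret mysql
sudo docker run -d --name redis redis

# then link both into the app container
sudo docker run -d -p 13310 --link mysql:mysql --link redis:redis hitesh/image node app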
Also, if you want to expose two ports, then in your Dockerfile just do EXPOSE port1 port2
e.g. EXPOSE 22 80
Then you'll get environment variables for both exposed ports. But I'd recommend that you don't have a single container that runs both MySQL and Redis. Separate your concerns :)