How to securely expose the API address for IPFS Cluster services?

I implemented the following from the docs and it all works, but the API access is bound to 0.0.0.0, which is a security hole allowing people from outside the network to connect and add files. I want to create a private network and secure it by allowing API access only from localhost, or from a known server. But when I do that, I find the peers themselves no longer connect. Is there a solution for this?
version: '3.4'

# This is an example docker-compose file to quickly test an IPFS Cluster
# with multiple peers on a contained environment.
# It runs 3 cluster peers (cluster0, cluster1...) attached to go-ipfs daemons
# (ipfs0, ipfs1...) using the CRDT consensus component. Cluster peers
# autodiscover themselves using mDNS on the docker internal network.
#
# To interact with the cluster use "ipfs-cluster-ctl" (the cluster0 API port is
# exposed to the localhost). You can also "docker exec -ti cluster0 sh" and run
# it from the container. "ipfs-cluster-ctl peers ls" should show all 3 peers a few
# seconds after start.
#
# For persistence, a "compose" folder is created and used to store configurations
# and states. This can be used to edit configurations in subsequent runs. It looks
# as follows:
#
# compose/
# |-- cluster0
# |-- cluster1
# |-- ...
# |-- ipfs0
# |-- ipfs1
# |-- ...
#
# During the first start, default configurations are created for all peers.

services:

##################################################################################
## Cluster PEER 0 ################################################################
##################################################################################

  ipfs0:
    container_name: ipfs0
    image: ipfs/go-ipfs:release
#    ports:
#      - "4001:4001" # ipfs swarm - expose if needed/wanted
#      - "5001:5001" # ipfs api - expose if needed/wanted
#      - "8080:8080" # ipfs gateway - expose if needed/wanted
    volumes:
      - ./compose/ipfs0:/data/ipfs

  cluster0:
    container_name: cluster0
    image: ipfs/ipfs-cluster:latest
    depends_on:
      - ipfs0
    environment:
      CLUSTER_PEERNAME: cluster0
      CLUSTER_SECRET: ${CLUSTER_SECRET} # From shell variable if set
      CLUSTER_IPFSHTTP_NODEMULTIADDRESS: /dns4/ipfs0/tcp/5001
      CLUSTER_CRDT_TRUSTEDPEERS: '*' # Trust all peers in Cluster
      CLUSTER_RESTAPI_HTTPLISTENMULTIADDRESS: /ip4/0.0.0.0/tcp/9094 # Expose API
      CLUSTER_MONITORPINGINTERVAL: 2s # Speed up peer discovery
    ports:
      # Open API port (allows ipfs-cluster-ctl usage on host)
      - "127.0.0.1:9094:9094"
      # The cluster swarm port would need to be exposed if this container
      # was to connect to cluster peers on other hosts.
      # But this is just a testing cluster.
      # - "9096:9096" # Cluster swarm endpoint
    volumes:
      - ./compose/cluster0:/data/ipfs-cluster

##################################################################################
## Cluster PEER 1 ################################################################
##################################################################################

# See Cluster PEER 0 for comments (all removed here and below)
  ipfs1:
    container_name: ipfs1
    image: ipfs/go-ipfs:release
    volumes:
      - ./compose/ipfs1:/data/ipfs

  cluster1:
    container_name: cluster1
    image: ipfs/ipfs-cluster:latest
    depends_on:
      - ipfs1
    environment:
      CLUSTER_PEERNAME: cluster1
      CLUSTER_SECRET: ${CLUSTER_SECRET}
      CLUSTER_IPFSHTTP_NODEMULTIADDRESS: /dns4/ipfs1/tcp/5001
      CLUSTER_CRDT_TRUSTEDPEERS: '*'
      CLUSTER_MONITORPINGINTERVAL: 2s # Speed up peer discovery
    volumes:
      - ./compose/cluster1:/data/ipfs-cluster

##################################################################################
## Cluster PEER 2 ################################################################
##################################################################################

# See Cluster PEER 0 for comments (all removed here and below)
  ipfs2:
    container_name: ipfs2
    image: ipfs/go-ipfs:release
    volumes:
      - ./compose/ipfs2:/data/ipfs

  cluster2:
    container_name: cluster2
    image: ipfs/ipfs-cluster:latest
    depends_on:
      - ipfs2
    environment:
      CLUSTER_PEERNAME: cluster2
      CLUSTER_SECRET: ${CLUSTER_SECRET}
      CLUSTER_IPFSHTTP_NODEMULTIADDRESS: /dns4/ipfs2/tcp/5001
      CLUSTER_CRDT_TRUSTEDPEERS: '*'
      CLUSTER_MONITORPINGINTERVAL: 2s # Speed up peer discovery
    volumes:
      - ./compose/cluster2:/data/ipfs-cluster

# For adding more peers, copy PEER 2 and rename things to ipfs3, cluster3.
# Keep bootstrapping to cluster0.

First you need to create the private network in IPFS; this allows your IPFS nodes to connect only to other IPFS nodes that have the same swarm key.
In your ipfs0, ipfs1 and ipfs2 services, you need to add two new environment variables and a new volume:
ipfs0:
  container_name: ipfs0
  image: ipfs/go-ipfs:release
#  ports:
#    - "4001:4001" # ipfs swarm - expose if needed/wanted
#    - "5001:5001" # ipfs api - expose if needed/wanted
#    - "8080:8080" # ipfs gateway - expose if needed/wanted
  environment:
    - LIBP2P_FORCE_PNET=1
    - IPFS_SWARM_KEY_FILE=/data/ipfs/swarm.key
  volumes:
    - ./compose/ipfs0:/data/ipfs
    - ./swarm.key:/data/ipfs/swarm.key
To generate the swarm.key check this link. The swarm.key must be in your IPFS root path (by default ~/.ipfs; inside the container the IPFS path is /data/ipfs). This swarm.key must be the same for all IPFS nodes.
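If that link ever goes stale: swarm.key is a small three-line text file (the libp2p private-network key header followed by 32 random bytes as hex). A commonly used shell sketch, using the same random-hex approach as the cluster secret below; treat it as illustrative rather than an official tool:
# Write the PSK header, then 32 random bytes encoded as hex
echo "/key/swarm/psk/1.0.0/" > swarm.key
echo "/base16/" >> swarm.key
od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n' >> swarm.key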
For IPFS Cluster you already have it right; you can generate your cluster secret with this command:
export CLUSTER_SECRET=$(od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n')
I recommend adding files through the IPFS Cluster REST API. Check this link to configure IPFS Cluster and make uploads more secure (using the API secret key), or you can simply restrict the IPFS Cluster API to localhost:
ports:
  - "127.0.0.1:9094:9094" # Only open port 9094 on localhost

Related

Dockerize adonis.js + mysql

I'm trying to dockerize an existing adonis.js app and MySQL through docker-compose.
Here is my Dockerfile
FROM node:12.18.2-alpine3.9
ENV HOME=/app
RUN mkdir /app
COPY package.json $HOME
WORKDIR $HOME
RUN npm i -g @adonisjs/cli && npm install
CMD ["npm", "start"]
And here is my docker-compose.yml file
version: '3'
services:
  adonis-mysql:
    image: mysql:5.7
    ports:
      - '3307:3306'
    volumes:
      - $PWD/data:/var/lib/mysql
    environment:
      MYSQL_USER: ${DB_USER}
      MYSQL_DATABASE: ${DB_DATABASE}
      MYSQL_PASSWORD: ${DB_PASSWORD}
      MYSQL_ALLOW_EMPTY_PASSWORD: ${DB_ALLOW_EMPTY_PASSWORD}
    networks:
      - api-network
  adonis-api:
    container_name: "${APP_NAME}-api"
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - "3333:3333"
    depends_on:
      - adonis-mysql
    networks:
      - api-network
networks:
  api-network:
When running docker-compose up everything goes smoothly and the adonis-api container says the app is running, but I am unable to reach it; I always get:
This site can’t be reached
127.0.0.1 refused to connect.
or
This site can’t be reached
The connection was reset.
I tried different docker-compose settings and different Dockerfiles; almost always everything starts OK, but I'm just unable to access the server.
I also tried different IPs and ports, but still nothing.
Container logs:
testProject-api |
testProject-api | > adonis-fullstack-app@4.1.0 start /app
testProject-api | > node server.js
testProject-api |
adonis-mysql_1 | 2020-07-09T09:56:35.960082Z 1 [Warning] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.
testProject-api | info: serving app on http://127.0.0.1:80
docker ps
dan#dan-Nitro-AN515-54:~/Documents/Tests/testProject$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
45f3dd21ef93 testproject_adonis-api "docker-entrypoint.s…" 20 seconds ago Up 19 seconds 0.0.0.0:3333->3333/tcp testProject-api
7b40bc7c75c3 mysql:5.7 "docker-entrypoint.s…" 2 minutes ago Up 20 seconds 33060/tcp, 0.0.0.0:3307->3306/tcp testproject_adonis-mysql_1
There are two things that jump out in this setup.
First of all, when the container startup prints:
info: serving app on http://127.0.0.1:80
That's usually an indication of a configuration issue that will make the process inaccessible. In Docker each container has its own localhost interface, so a process that's "listening on 127.0.0.1" will only be reachable from the container-private localhost interface, but not from other containers or the host (regardless of what ports: options you have). You generally need to set processes to "bind" or "listen" to the special 0.0.0.0 all-interfaces address.
Within Adonis it looks like this is controlled by the $HOST environment variable; the Adonis templates set this to 127.0.0.1. Adonis documents itself as using the dotenv library, and that in turn gives precedence to environment variables over the .env file, so it should be enough to set an environment variable HOST=0.0.0.0.
(None of the previous paragraph is discussed in the Adonis documentation!)
The second thing from that error message is that the second number in ports: needs to match the port number the container process is using. The Adonis templates all seem to default this to port 3333 but that startup message says port 80, so you need to change your ports: to be port 80 on the right-hand side. You can pick any port you want for the left-hand side.
Adding in some routine cleanups, you could replace your docker-compose.yml service block with:
adonis-api:
  build: .              # context directory only; use the default Dockerfile
  environment:
    - HOST=0.0.0.0      # listen on all interfaces
  ports:
    - "3333:80"         # right-hand side matches the actual listener message
  depends_on:
    - adonis-mysql
  # Use the "default" network (also delete the other networks: blocks in the file)
  # Use the Compose default container name
  # Use the code from the Docker image; don't overwrite it with volumes
  # (and don't tell Docker to use arbitrarily old node_modules)
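To confirm the fix, a quick smoke test from the host; the exact log line shown is an assumption based on what setting HOST=0.0.0.0 should produce:
# Rebuild and restart just the API service
docker-compose up -d --build adonis-api
# Logs should now read something like: serving app on http://0.0.0.0:80
docker-compose logs adonis-api
# And the published port should answer
curl -i http://localhost:3333/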

Execute SQL script on docker compose

I have a project that runs when ./entrypoint.sh or docker-compose up is run from the root directory of the project and generates the Swagger API interface, but the API calls return an empty response with no data.
If I run it with MySQL on localhost without Docker, it works perfectly fine. How do I load the data?
entrypoint.sh
#!/bin/bash
docker network create turingmysql
docker container run -p 3306:3306 --name mysqldb --network turingmysql -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=tshirtshop -d mysql:5.7
docker-compose build
docker-compose up
Dockerfile
FROM mysql:5.7
ADD ./database/tshirtshop.sql /docker-entrypoint-initdb.d
#### Stage 1: Build the application
FROM openjdk:8-jdk-alpine as build
# Set the current working directory inside the image
WORKDIR /app
# Copy maven executable to the image
COPY mvnw .
COPY .mvn .mvn
# Copy the pom.xml file
COPY pom.xml .
# Build all the dependencies in preparation to go offline.
# This is a separate step so the dependencies will be cached unless
# the pom.xml file has changed.
RUN ./mvnw dependency:go-offline -B
# Copy the project source
COPY src src
# Package the application
RUN ./mvnw package -DskipTests
RUN mkdir -p target/dependency && (cd target/dependency; jar -xf ../*.jar)
#### Stage 2: A minimal docker image with command to run the app
FROM openjdk:8-jre-alpine
ARG DEPENDENCY=/app/target/dependency
# Copy project dependencies from the build stage
COPY --from=build ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=build ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=build ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","com.turing.ecommerce.TuringApplication"]
docker-compose.yml
version: '3.7'

# Define services
services:
  # App backend service
  app-server:
    # Configuration for building the docker image for the backend service
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:8080" # Forward the exposed port 8080 on the container to port 8080 on the host machine
    restart: always
    depends_on:
      - mysqldb # This service depends on mysql. Start that first.
    environment: # Pass environment variables to the service
      SPRING_DATASOURCE_URL: jdbc:mysql://mysqldb:3306/tshirtshop?useSSL=false&useLegacyDatetimeCode=false&serverTimezone=UTC
      SPRING_DATASOURCE_USERNAME: root
      SPRING_DATASOURCE_PASSWORD: root
    networks: # Networks to join (Services on the same network can communicate with each other using their name)
      - turingmysql

  # Database Service (Mysql)
  mysqldb:
    image: mysql:5.7
    ports:
      - "3306:3306"
    restart: always
    environment:
      MYSQL_DATABASE: tshirtshop
      MYSQL_USER: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - db-data:/var/lib/mysql
    networks:
      - turingmysql

# Volumes
volumes:
  db-data:

# Networks to be created to facilitate communication between containers
networks:
  turingmysql:
Do you have two Dockerfiles? It looks like you built your own MySQL image.
Otherwise, these lines shouldn't be part of your Java multi-stage build:
FROM mysql:5.7
ADD ./database/tshirtshop.sql /docker-entrypoint-initdb.d
Assuming that you did build a separate image for MySQL: in the docker-compose file you're not using it, as you're still referring to image: mysql:5.7.
Rather than building your own image, you should mount the SQL script into the stock one.
For example:
mysqldb:
  image: mysql:5.7
  ...
  volumes:
    - db-data:/var/lib/mysql
    - ./database/tshirtshop.sql:/docker-entrypoint-initdb.d/0_init.sql
Then, forget the Java service for a minute and use MySQL Workbench or the mysql CLI to verify that the data is actually there. Once you do, then start up the API.
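For example, with the mysql CLI inside the container (service name and credentials taken from the compose file above):
# List the tables the init script should have created
docker-compose exec mysqldb mysql -uroot -proot tshirtshop -e 'SHOW TABLES;'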
If you are already copying the SQL script into the image at build time, you do not need to mount it again in docker-compose. And since you have docker-compose, you do not need the bash script: the single command docker-compose up --build will do the job.
So modify your docker-compose file to match your Dockerfile.
Dockerfile
FROM mysql
ADD init.sql /docker-entrypoint-initdb.d
docker-compose.yml
version: '3.7'
services:
  # App backend service
  app-server:
    # Configuration for building the docker image for the backend service
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:8080" # Forward the exposed port 8080 on the container to port 8080 on the host machine
    restart: always
    depends_on:
      - mysqldb # This service depends on mysql. Start that first.
    environment: # Pass environment variables to the service
      SPRING_DATASOURCE_URL: jdbc:mysql://mysqldb:3306/tshirtshop?useSSL=false&useLegacyDatetimeCode=false&serverTimezone=UTC
      SPRING_DATASOURCE_USERNAME: root
      SPRING_DATASOURCE_PASSWORD: root
    networks: # Networks to join (Services on the same network can communicate with each other using container name)
      - turingmysql

  # Database Service (Mysql), named to match depends_on and the JDBC URL above
  mysqldb:
    build: .
    environment: # Must match the Spring datasource settings above
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: tshirtshop
    volumes:
      - db-data:/var/lib/mysql
    networks:
      - turingmysql
    tty: true

# Volumes
volumes:
  db-data:

# Networks to be created to facilitate communication between containers
networks:
  turingmysql:
Now just run:
docker-compose up --build
This builds the images and starts the containers; you no longer need to mount the host init script, as it is already baked into the Docker image.
Your application will then be able to reach the database at jdbc:mysql://mysqldb:3306/tshirtshop, since both containers are on the same network and can refer to each other by service name.
Thank you cricket_007 and Adii for the responses; they pointed me in the right direction. I want to document my experience and how the issue was resolved. I am new to dockerization and was learning by practice, so for anyone new to it and hitting the same issues with Spring Boot, MySQL and Docker, this should help.
First, my entrypoint.sh changed as below. The docker-compose down is for restarts.
#!/bin/bash
docker-compose down -v
docker-compose up --build
Second, I had to use an existing mysql image instead of building one.
version: '3.7'

# Define services
services:
  # App backend service
  app-server:
    # Configuration for building the docker image for the backend service
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:8080" # Forward the exposed port 8080 on the container to port 8080 on the host machine
    restart: always
    depends_on:
      - mysql # This service depends on mysql. Start that first.
    environment: # Pass environment variables to the service
      SPRING_DATASOURCE_URL: jdbc:mysql://mysql:3306/tshirtshop?useSSL=false&allowPublicKeyRetrieval=true&useLegacyDatetimeCode=false&serverTimezone=UTC
      SPRING_DATASOURCE_USERNAME: turing
      SPRING_DATASOURCE_PASSWORD: pass
    networks: # Networks to join (Services on the same network can communicate with each other using their name)
      - turingmysql

  # Database Service (Mysql)
  mysql:
    image: mysql/mysql-server
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: tshirtshop
      MYSQL_USER: turing
      MYSQL_PASSWORD: pass
    volumes:
      - db-data:/var/lib/mysql
      - ./database:/docker-entrypoint-initdb.d
    tty: true
    networks: # Networks to join (Services on the same network can communicate with each other using their name)
      - turingmysql

# Volumes
volumes:
  db-data:

# Networks to be created to facilitate communication between containers
networks:
  turingmysql:
    driver: bridge
I needed to specify that the network is a bridge. My SQL file was mounted from a folder relative to docker-compose.yml. I also had to add allowPublicKeyRetrieval=true to my JDBC URL, and I created a dedicated user to access the tshirtshop database.
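A quick way to confirm the init script actually ran, using the service name and credentials from the compose file above:
# Should list the tables created by the scripts in ./database
docker-compose exec mysql mysql -uturing -ppass tshirtshop -e 'SHOW TABLES;'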
And here is the Dockerfile.
#### Stage 1: Build the application
FROM openjdk:8-jdk-alpine as build
# Set the current working directory inside the image
WORKDIR /app
# Copy maven executable to the image
COPY mvnw .
COPY .mvn .mvn
# Copy the pom.xml file
COPY pom.xml .
# Build all the dependencies in preparation to go offline.
# This is a separate step so the dependencies will be cached unless
# the pom.xml file has changed.
RUN ./mvnw dependency:go-offline -B
# Copy the project source
COPY src src
# Package the application
RUN ./mvnw package -DskipTests
RUN mkdir -p target/dependency && (cd target/dependency; jar -xf ../*.jar)
#### Stage 2: A minimal docker image with command to run the app
FROM openjdk:8-jre-alpine
ARG DEPENDENCY=/app/target/dependency
# Copy project dependencies from the build stage
COPY --from=build ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=build ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=build ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","com.turing.ecommerce.TuringApplication"]
To run, execute ./entrypoint.sh from the root directory of the project (on a Mac), and the rest is history.

what are backend and frontend in traefik.toml

While reading the Traefik documentation I was confused by the configuration skeleton mentioned there:
traefik.toml:
defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
  # ...
  [entryPoints.https]
  # ...

[file]
# rules
[backends]
  [backends.backend1]
  # ...
  [backends.backend2]
  # ...
[frontends]
  [frontends.frontend1]
  # ...
  [frontends.frontend2]
  # ...
  [frontends.frontend3]
  # ...

# HTTPS certificate
[[tls]]
# ...
[[tls]]
# ...
What is the reason behind dividing the rules section of the configuration file into two different sub-sections, backends and frontends?
Without the division into backends and frontends, I would not have been able to connect multiple services to the same backend and thus get load-balancing, even though I configured multiple services.
version: '3.2'
services:
  minio1:
    image: minio/minio:RELEASE.2018-11-30T03-56-59Z
    hostname: minio1
    volumes:
      - minio1-data:/export
    ports:
      - target: 9000
        mode: host
    networks:
      - minio_distributed
      - webgateway
    deploy:
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
      labels:
        - traefik.enable=true
        - traefik.docker.network=webgateway
        - traefik.backend=minio
        - traefik.frontend.rule=Host:minio.mycooldomain.com
        - traefik.port=9000
      placement:
        constraints:
          - node.labels.minio1==true
    command: server http://minio1/export http://minio2/export http://minio3/export http://minio4/export
    secrets:
      - secret_key
      - access_key
  minio2:
    image: minio/minio:RELEASE.2018-11-30T03-56-59Z
    hostname: minio2
    volumes:
      - minio2-data:/export
    ports:
      - target: 9000
        mode: host
    networks:
      - minio_distributed
      - webgateway
    deploy:
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
      labels:
        - traefik.enable=true
        - traefik.docker.network=webgateway
        - traefik.backend=minio
        - traefik.frontend.rule=Host:minio.mycooldomain.com
        - traefik.port=9000
      placement:
        constraints:
          - node.labels.minio2==true
    command: server http://minio1/export http://minio2/export http://minio3/export http://minio4/export
    secrets:
      - secret_key
      - access_key
volumes:
  minio1-data:
  minio2-data:
  minio3-data:
  minio4-data:
networks:
  minio_distributed:
    driver: overlay
  webgateway:
    external: true
secrets:
  secret_key:
    external: true
  access_key:
    external: true
That's an example from me, where the services "minio1" and "minio2" are reachable through the same domain. Normally, as soon as I have different services, each one automatically gets its own backend, and I would have had to give each service its own domain; only with a single service scaled up would the additional containers be reachable on the same domain.
I hope I was able to explain it a bit from my own experience. :)
Note that I actually have 4 minio services; I just cut the config short here.
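The same sharing is easy to see with Traefik's v1 file provider; a minimal sketch (names and URLs are illustrative) that makes the split explicit — one backend pooling two servers, one frontend routing a host rule to that pool:
# Illustrative rules file for the [file] provider (Traefik v1 syntax)
cat > rules.toml <<'EOF'
# One backend, two servers -> requests are load-balanced across both
[backends.minio.servers.minio1]
url = "http://minio1:9000"
[backends.minio.servers.minio2]
url = "http://minio2:9000"

# One frontend mapping the domain onto that shared backend
[frontends.minio]
backend = "minio"
[frontends.minio.routes.host]
rule = "Host:minio.mycooldomain.com"
EOF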

How to specify a docker container database to an app running in docker with docker-compose.yml?

Context
There is this docker-compose.yml:
version: '3'
services:
  mediawiki:
    image: mediawiki
    restart: always
    ports:
      - 8080:80
    links:
      - database
    volumes:
      - /var/www/html/images
      # After initial setup, download LocalSettings.php to the same directory as
      # this yaml and uncomment the following line and use compose to restart
      # the mediawiki service
      # - ./LocalSettings.php:/var/www/html/LocalSettings.php
  database:
    image: mariadb
    restart: always
    environment:
      # see https://phabricator.wikimedia.org/source/mediawiki/browse/master/includes/DefaultSettings.php
      MYSQL_DATABASE: my_wiki
      MYSQL_USER: wikiuser
      MYSQL_PASSWORD: example
      MYSQL_RANDOM_ROOT_PASSWORD: 'yes'
When I run docker ps I get:
89db8794029a mysql:latest "docker-entrypoint..." ... 0.0.0.0:8083->3306/tcp some-mysql
This is a running MySQL docker container.
Question
How can I modify the docker-compose.yml so that the database points to the mysql docker container (89db8794029a) that is already running?
You don't have to add the database service to the yml file.
In order for the mediawiki service to connect to the some-mysql container, the mediawiki container needs to be on the same network as the some-mysql container.
Assuming that the mediawiki container is already up:
First, find out which network some-mysql uses:
docker network ls
I'm guessing it would be 'some-mysql_default'.
To connect mediawiki to some-mysql:
docker network connect some-mysql_default mediawiki
Now use 'some-mysql' as the database hostname in the mediawiki config.
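To verify the hostname resolves from inside the mediawiki container (the container name mediawiki is an assumption; adjust to yours):
# Should print the some-mysql container's IP on the shared network
docker exec mediawiki getent hosts some-mysql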
OR
Or use a yml file that automatically connects to the mysql network:
version: '3'
services:
  mediawiki:
    image: mediawiki
    restart: always
    ports:
      - 8080:80
    # No database service here; the external some-mysql container is
    # reached via its network, declared below
    volumes:
      - /var/www/html/images
      # After initial setup, download LocalSettings.php to the same directory as
      # this yaml and uncomment the following line and use compose to restart
      # the mediawiki service
      # - ./LocalSettings.php:/var/www/html/LocalSettings.php
    networks:
      - default
      - some-mysql_default
networks:
  default: # this network
    driver: bridge
  some-mysql_default: # external network
    external: true

Mysql socket is missing in my homestead docker container

I have this issue with my app
SQLSTATE[HY000] [2002] Can't connect to local MySQL server through
socket '/var/run/mysqld/mysqld.sock'(2)
I'm using 2 docker containers to host my Laravel 4.2 app (I think :o) from this build.
Here is my docker-compose.yml
web:
  image: shincoder/homestead:php5.6
  restart: always
  ports:
    - "8000:80"     # web
    - "2222:22"     # ssh
    - "35729:35729" # live reload
    - "9876:9876"   # karma server
  volumes:
    - ~/.composer:/home/homestead/.composer # composer caching
    - ~/.gitconfig:/home/homestead/.gitconfig # Git configuration ( access alias && config )
    - ~/.ssh:/home/homestead/.ssh # Ssh keys for easy deployment inside the container
    - ~/apps:/apps # all apps
    - ~/apps/volumes/nginx/sites-available:/etc/nginx/sites-available # nginx sites ( in case you recreate the container )
    - ~/apps/volumes/nginx/sites-enabled:/etc/nginx/sites-enabled # nginx sites ( in case you recreate the container )
  links:
    - mariadb

mariadb:
  image: tutum/mariadb
  restart: always
  ports:
    - "33060:3306"
  environment:
    MARIADB_USER: admin # cannot be changed ( for info. only )
    MARIADB_PASS: root
  volumes:
    - ~/apps/volumes/mysql:/var/lib/mysql # database files
The first one is homesteaddocker_web_1 with PHP 5.6, and the second one is homesteaddocker_mariadb_1.
I've searched for the MySQL socket in homesteaddocker_web_1 and it's not there, but I found it in the second container (homesteaddocker_mariadb_1).
So how can I fix this, please?
Docker containers are networked with a bridge.
For one container to talk to another, you need to link them:
https://docs.docker.com/engine/userguide/networking/
Since the mysqld.sock file only exists inside the mariadb container, your app should connect over TCP to the linked hostname (mariadb) rather than to a local socket.
If your container needs to access network resources on your local network, you may want to use --network='host' in your docker run command.
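A quick way to verify that the TCP connection works, assuming the mysql client is available inside the web container (credentials taken from the compose file above):
# Connect to the linked "mariadb" host over TCP instead of the local socket
docker exec -it homesteaddocker_web_1 mysql -h mariadb -P 3306 -u admin -proot -e 'SELECT 1;'
If that succeeds, point your Laravel database config's host at mariadb and remove any unix_socket setting.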