What are backends and frontends in the traefik.toml configuration?

While reading the Traefik documentation I was confused when I came across the configuration skeleton shown there:
traefik.toml:
defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
  # ...
  [entryPoints.https]
  # ...

[file]

# rules
[backends]
  [backends.backend1]
  # ...
  [backends.backend2]
  # ...

[frontends]
  [frontends.frontend1]
  # ...
  [frontends.frontend2]
  # ...
  [frontends.frontend3]
  # ...

# HTTPS certificate
[[tls]]
# ...
[[tls]]
# ...
What is the reason behind dividing the rules section of the configuration file into two different sub-sections, backends and frontends?

Without the division into backends and frontends, I would not have been able to connect multiple services to the same backend and, as such, get load balancing even though I configured multiple distinct services.
version: '3.2'

services:
  minio1:
    image: minio/minio:RELEASE.2018-11-30T03-56-59Z
    hostname: minio1
    volumes:
      - minio1-data:/export
    ports:
      - target: 9000
        mode: host
    networks:
      - minio_distributed
      - webgateway
    deploy:
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
      labels:
        - traefik.enable=true
        - traefik.docker.network=webgateway
        - traefik.backend=minio
        - traefik.frontend.rule=Host:minio.mycooldomain.com
        - traefik.port=9000
      placement:
        constraints:
          - node.labels.minio1==true
    command: server http://minio1/export http://minio2/export http://minio3/export http://minio4/export
    secrets:
      - secret_key
      - access_key

  minio2:
    image: minio/minio:RELEASE.2018-11-30T03-56-59Z
    hostname: minio2
    volumes:
      - minio2-data:/export
    ports:
      - target: 9000
        mode: host
    networks:
      - minio_distributed
      - webgateway
    deploy:
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
      labels:
        - traefik.enable=true
        - traefik.docker.network=webgateway
        - traefik.backend=minio
        - traefik.frontend.rule=Host:minio.mycooldomain.com
        - traefik.port=9000
      placement:
        constraints:
          - node.labels.minio2==true
    command: server http://minio1/export http://minio2/export http://minio3/export http://minio4/export
    secrets:
      - secret_key
      - access_key

volumes:
  minio1-data:
  minio2-data:
  minio3-data:
  minio4-data:

networks:
  minio_distributed:
    driver: overlay
  webgateway:
    external: true

secrets:
  secret_key:
    external: true
  access_key:
    external: true
That's an example from me, where the services "minio1" and "minio2" are reachable through the same domain. Normally, as soon as I have different services, each automatically gets its own backend and I would have had to give each service its own domain; only with a single service whose replica count I scale up would the additional containers be reachable on the same domain. The shared traefik.backend=minio label is what groups the distinct services into one backend, as the sketch below shows.
Hope I was able to explain it a bit from my own experience. :)
Note that I actually have 4 minio services; I just cut the config down to keep it short.
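To make the division concrete, here is a minimal sketch of the [file] configuration those labels roughly correspond to, in Traefik 1.x TOML syntax. The server URLs are hypothetical placeholders; in swarm mode Traefik discovers the container addresses itself.

# Hypothetical [file] equivalent of the minio labels above (Traefik 1.x syntax)
[backends]
  [backends.minio]
    # Both containers end up as servers of the *same* backend,
    # so Traefik load-balances between them.
    [backends.minio.servers.server1]
      url = "http://10.0.0.11:9000"   # placeholder address
    [backends.minio.servers.server2]
      url = "http://10.0.0.12:9000"   # placeholder address

[frontends]
  [frontends.minio]
    backend = "minio"
    [frontends.minio.routes.route1]
      rule = "Host:minio.mycooldomain.com"

The frontend owns the routing rule and the backend owns the pool of servers; because the traefik.backend=minio label puts both services into the one backend, a single frontend rule balances across both.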

Related

Django docker could not access MYSQL in host server (using docker-compose)

I have a working Django application run via Docker to be used for production. Currently it also uses a MySQL database which is likewise hosted in Docker. This actually works fine.
However, as I learned, hosting a MySQL database in Docker may not be the preferable way to do it in production, which is why I wanted to use my host server, which already has MySQL running. My problem is that I can't seem to make my Django app connect to the host server's MySQL server.
This is my docker-compose file.
version: '3.9'

networks:
  default:
    external: true
    name: globalprint-shared-network

services:
  # db:
  #   image: mysql:5.7
  #   ports:
  #     - "3307:3306"
  #   hostname: globalprint-db
  #   restart: always
  #   volumes:
  #     - production_db_volume:/var/lib/mysql
  #   env_file:
  #     - .env.prod
  app:
    build:
      context: .
    ports:
      - "8001:8000"
    volumes:
      - production_static_data:/vol/web
    hostname: globalprint-backend
    restart: always
    env_file:
      - .env.prod
    # depends_on:
    #   - db
  proxy:
    build:
      context: ./proxy
    hostname: globalprint-backend-api
    volumes:
      - production_static_data:/vol/static
    restart: always
    ports:
      - "81:80"
    depends_on:
      - app

volumes:
  production_static_data:
  production_db_volume:
I actually tried adding this to the app service, but it still did not work:
extra_hosts:
  - "host.docker.internal:host-gateway"
My Django database settings are these. They actually reference environment variables, but the value for the host is host.docker.internal and the port is 3306:
DATABASES = {
    'default': {
        'ENGINE': env('DATABASE_ENGINE_MYSQL'),
        'NAME': env('MYSQL_DATABASE'),
        'USER': env('MYSQL_USER'),
        'PASSWORD': env('MYSQL_PASSWORD'),
        'HOST': env('DATABASE_HOST'),  # localhost or an IP address that your DB is hosted on
        'PORT': env('DATABASE_PORT'),
    }
}
Can anyone tell me what I did wrong here? I appreciate any help on this. Thank you.
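For reference, a minimal sketch of how the extra_hosts entry is usually placed, assuming Docker Engine 20.10 or later (which is what introduced the host-gateway value); the service and file names are taken from the question:

services:
  app:
    build:
      context: .
    env_file:
      - .env.prod
    extra_hosts:
      # Maps host.docker.internal to the host's gateway IP on Linux.
      # Requires Docker Engine 20.10+.
      - "host.docker.internal:host-gateway"

With that in place, DATABASE_HOST=host.docker.internal and DATABASE_PORT=3306 in .env.prod can only work if mysqld on the host is not bound solely to 127.0.0.1 (check its bind-address setting) and the MySQL user is permitted to connect from the Docker bridge address.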

Getting mysql connection issue when scaling the mysql container to more than 1 in docker swarm

I have a host machine running in swarm mode. I am running it on a single machine for now, no clusters (no multiple machines).
The services are running fine. I have created a volume for the mysql container. I believe that when the mysql container is scaled they will all read from the same volume.
Here is the docker-compose file, which works great with no mysql connection issues until I scale the mysql container to 2:
version: "3.4"
services:
node:
image: prod_engineering_node:v7
networks:
- backend
volumes:
- ./codebase:/usr/src/app
ports:
- "8082:8082"
depends_on:
- engineeringmysql
deploy:
mode: replicated
replicas: 1
placement:
constraints:
- node.role == manager
mysql:
image: prod_engineering_mysql:v1
command: mysqld --default-authentication-plugin=mysql_native_password
networks:
- backend
ports:
- "3309:3306"
environment:
MYSQL_ROOT_PASSWORD: main_pass
MYSQL_DATABASE: engineering
MYSQL_USER: user
MYSQL_PASSWORD: pass
volumes:
- ./sqldata:/var/lib/mysql:rw
deploy:
mode: replicated
replicas: 1
placement:
constraints:
- node.role == manager
nginx:
image: prod_engineering_nginx:v1
ports:
- "80:80"
- "443:443"
volumes:
- ./angular_build:/var/www/html/studydote_v2/frontend:rw
- ./laravel_admin:/var/www/html/dev/backend/public:rw
networks:
- backend
depends_on:
- engineeringphpfpm
deploy:
mode: replicated
replicas: 1
placement:
constraints:
- node.role == manager
phpfpm:
image: prod_engineering_phpfpm:v1
ports:
- "9001:9000"
depends_on:
- engineeringmysql
networks:
- backend
volumes:
- ./angular_build:/var/www/html/studydote_v2/frontend:rw
- ./laravel_admin:/var/www/html/dev/backend/public:rw
deploy:
mode: replicated
replicas: 1
placement:
constraints:
- node.role == manager
networks:
backend:
driver: overlay
This is how I scaled the mysql container:
docker service scale servicename=2
Now I get the db connection issues.
Can anyone help me with it? What might be the issue? If this is the wrong way to scale a mysql db, please suggest better ways.
When you start a service, Docker swarm assigns a virtual IP address to the service and load-balances all requests to that IP across the replica containers.
What probably happens (it's hard to tell without the full logs) is that the tcp connections get load-balanced across both DBs: the first connection goes to replica 1, the second to replica 2, and so on.
However, mysql connections are stateful, not stateless, so this way of scaling your db isn't going to work. Also note that Docker won't handle the MySQL replication work for you. What people typically do is one of the following (a sketch of the second option follows after the list):
- avoid having to run multiple DB instances if you don't need to
- run 2 mysql services, a mysql-master and a mysql-slave, each with their own config
- do some intelligent service discovery in a startup script in your mysql image
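As an illustration of the master/slave option, a hedged sketch of a stack file with two separate mysql services; the image tag, config files, and volume paths are hypothetical, and the actual replication wiring (server IDs, replication user, CHANGE MASTER ...) still has to happen inside those configs and startup scripts:

version: "3.4"
services:
  mysql-master:
    image: mysql:5.7                                # hypothetical image choice
    environment:
      MYSQL_ROOT_PASSWORD: main_pass
    volumes:
      - ./master-data:/var/lib/mysql                # its own data dir
      - ./master.cnf:/etc/mysql/conf.d/master.cnf   # e.g. server-id=1, log-bin
    networks:
      - backend
    deploy:
      replicas: 1                                   # exactly one writer
  mysql-slave:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: main_pass
    volumes:
      - ./slave-data:/var/lib/mysql                 # its own data dir, not shared
      - ./slave.cnf:/etc/mysql/conf.d/slave.cnf     # e.g. server-id=2, read-only
    networks:
      - backend
    deploy:
      replicas: 1
networks:
  backend:
    driver: overlay

The key design point is that the two instances are distinct services with their own volumes and configs, rather than interchangeable replicas behind one virtual IP.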

Setting up lwm2m-node-lib to FIWARE platform

Having been stuck at a stumbling block with my wakaama LWM2M implementation for a couple of weeks, as I reported in #154, I have no option but to try using the telefonica lwm2m-node-lib instead.
To make my point clear again: I already have IOTA, Orion, MongoDB, and Cygnus all working fine. It is my client implementation that isn't sending measures to the IOTA despite being able to connect. The scenario I want is LWM2M -> IOTA -> Orion -> Cygnus -> MongoDB.
My issue now: I want a precise explanation of the configuration I need to do to make the lwm2m-node-lib implementation work here, for instance where to input the server IP to connect to (where my FIWARE platform is running), which file to edit, etc. I have already picked a new device to use, keeping the other one aside.
My docker-compose file below:
version: "3.1"
services:
mongo:
image: mongo:3.4
hostname: mongo
container_name: fiware-mongo
ports:
- "27017:27017"
networks:
- default
command: --nojournal
orion:
image: fiware/orion
hostname: orion
container_name: fiware-orion
depends_on:
- mongo
networks:
- default
ports:
- "1026:1026"
expose:
- "1026"
command: -dbhost mongo -logLevel DEBUG
lightweightm2m-iotagent:
image: telefonicaiot/lightweightm2m-iotagent
hostname: idas
container_name: fiware-iotagent
depends_on:
- mongo
networks:
- default
expose:
- "4041"
- "5684"
ports:
- "4041:4041"
- "5684:5684/udp"
environment:
- "IOTA_CB_HOST=orion"
- "IOTA_CB_PORT=1026"
- "IOTA_NORTH_PORT=4041"
- "IOTA_REGISTRY_TYPE=mongodb"
- "IOTA_LOG_LEVEL=DEBUG"
- "IOTA_TIMESTAMP=true"
- "IOTA_MONGO_HOST=mongo"
- "IOTA_MONGO_PORT=27017"
- "IOTA_MONGO_DB=lwm2miotagent"
- "IOTA_HTTP_PORT=5684"
- "IOTA_PROVIDER_URL=http://lightweightm2m-iotagent:4041"
cygnus:
image: fiware/cygnus-ngsi:latest
hostname: cygnus
container_name: fiware-cygnus
depends_on:
- mongo
networks:
- default
expose:
- "5080"
ports:
- "5050:5050"
- "5080:5080"
environment:
- "CYGNUS_MONGO_HOSTS=mongo:27017"
- "CGYNUS_LOG_LEVEL_=DEBUG"
- "CYGNUS_SERVICE_PORT=5050"
- "CYGNUS_API_PORT=5080"
You can have a look at:
https://hub.docker.com/r/fiware/lightweightm2m-iotagent/
There you have a very good explanation of how to use the IOTA-LWM2M docker image, along with configuration examples for running it with Orion.

Docker swarm shared volume Mysql: Host is not allowed to connect to this MySQL server

I have a cluster of 3 docker swarm nodes.
On each node I've created the directory /opt/dockershared/ and I've configured glusterfs to share this directory among the 3 nodes.
I'm trying to deploy a stack of 4 services: NGINX proxy, GUI, API, MySQL.
If I deploy the stack using the following yml there is no problem:
version: '3.3'

services:
  proxy:
    image: my-nginx-proxy
    ports:
      - "34200:34200"
    networks:
      - mynet
  db:
    image: mysql/mysql-server:5.6
    volumes:
      - MYDB:/var/lib/mysql
    networks:
      - mynet
    ports:
      - "3306"
    deploy:
      restart_policy:
        condition: any
      mode: replicated
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
    environment:
      - MYSQL_ROOT_PASSWORD=XXX
      - MYSQL_USER=user
      - MYSQL_PASSWORD=xxxxx
      - MYSQL_DATABASE=db
  api:
    image: myapi:latest
    depends_on:
      - db
    volumes:
      - /opt/dockershared/myapi:/data/tmp
    ports:
      - "3000"
    networks:
      - mynet
    deploy:
      restart_policy:
        condition: any
      mode: replicated
      replicas: 2
      update_config:
        parallelism: 1
        delay: 10s
  gui:
    image: my-gui:latest
    ports:
      - "4200"
    networks:
      - mynet
    deploy:
      restart_policy:
        condition: any
      mode: replicated
      replicas: 2
      update_config:
        parallelism: 1
        delay: 10s

networks:
  mynet:

volumes:
  MYDB:
As soon as I replace the volume MYDB in the db service with the folder shared via glusterfs (e.g. /opt/dockershared/mydb:/var/lib/mysql), I can no longer connect to mysql from my api service:
Host 'XXX.XXX.XXX.XX' is not allowed to connect to this MySQL server
I haven't changed the config of mysql; I've only changed the volume data dir.
If I inspect the directory /opt/dockershared/mydb I can see the default databases created by MySQL on the 3 nodes (test/, performance_schema/, mysql/, ib_logfile1).
What could be the problem?
Thank you
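Not a definitive diagnosis, but this exact error message usually points at the host column of MySQL's user table. With the mysql/mysql-server image, the hosts the root account may connect from are controlled by the MYSQL_ROOT_HOST environment variable, and all of these variables are applied only when the data directory is initialized from empty, so a datadir that was already populated (for example by another node through the glusterfs share) keeps whatever user table it was created with. A sketch of the relevant service, assuming root should be reachable from any host:

db:
  image: mysql/mysql-server:5.6
  volumes:
    - /opt/dockershared/mydb:/var/lib/mysql
  environment:
    - MYSQL_ROOT_PASSWORD=XXX
    # Takes effect only on first initialization of an EMPTY datadir:
    - MYSQL_ROOT_HOST=%
    - MYSQL_USER=user
    - MYSQL_PASSWORD=xxxxx
    - MYSQL_DATABASE=db
  networks:
    - mynet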

Can I use MySQL and Oracle databases as well as SQL Server on the same system?

I have installed SQL Server 2008 R2 on my system, but now I need to install MySQL too. Is it possible to install MySQL and SQL Server side by side? Does installing both SQL Server and MySQL on the same system affect either of them?
Yes, this is possible. However, you have to make sure the ports they listen on are different. By default MySQL uses port 3306 and SQL Server uses port 1433. They are both applications like any others, running as separate processes, so they can run on the same machine without conflicts. During setup just make sure you configure the ports so that they do not clash; the installer should also detect when a port is already in use by another application.
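For example, if one of the default ports is already taken, MySQL's listening port can be changed in its configuration file (a minimal sketch; the file is typically my.cnf or my.ini depending on the platform):

[mysqld]
# Listen on 3307 instead of the default 3306 to avoid a clash
port=3307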
The simple and clean answer to your question is Docker. I have created a tutorial where MySQL, MSSQL, Oracle, PostgreSQL and MongoDB are set up and running simultaneously on a single CentOS system with 3 GB of RAM without affecting each other:
https://adhoctuts.com/run-mulitple-databases-in-single-machine-using-docker-vagrant/
https://www.youtube.com/watch?v=LeqkCoX28qg
Below is the content of the docker-compose.yml file from the tutorial, but you need the other files as well (all files are in the following git repository: https://github.com/mamedshahmaliyev/adhoctuts/tree/master/docker/multiple_databases). If you need MySQL and MSSQL only, just delete the other services from docker-compose.yml and run docker-compose up:
# link to tutorial: https://adhoctuts.com/run-mulitple-databases-in-single-machine-using-docker-vagrant/
version: "3.1"

networks:
  docker-network:

services:
  # https://hub.docker.com/_/mysql
  mysql_persistance: # service name
    image: mysql:8
    container_name: mysql_p # container name
    command: --default-authentication-plugin=mysql_native_password
    volumes:
      - /docker/mysql/data:/var/lib/mysql # for data persistence
      - /docker/mysql/conf:/etc/mysql/conf.d # put all custom configuration files from the host into the container
    environment:
      - MYSQL_ROOT_PASSWORD=AdHocTuts2019#
    ports:
      - "3309:3306" # map host port to container port
    networks:
      - docker-network
    #restart: on-failure
  mysql_no_persistance:
    image: mysql:5.7
    container_name: mysql_np
    environment:
      - MYSQL_ROOT_PASSWORD=AdHocTuts2019#
    ports:
      - "3308:3306"
    networks:
      - docker-network
  # https://hub.docker.com/_/microsoft-mssql-server
  mssql:
    image: mcr.microsoft.com/mssql/server:2017-CU8-ubuntu
    container_name: mssql
    volumes:
      - /docker/mssql/data:/var/opt/mssql
    environment:
      - SA_PASSWORD=AdHocTuts2019#
      - ACCEPT_EULA=Y
      - TZ=Asia/Baku
      - MSSQL_PID=Express
    ports:
      - "1433:1433"
    networks:
      - docker-network
  # https://hub.docker.com/_/oracle-database-enterprise-edition
  # Accept Terms of Service for Oracle Database Enterprise Edition (Proceed to Checkout).
  # Then in command line: docker login
  # sqlplus sys/Oradoc_db1#ORCLDB as sysdba
  oracle:
    image: store/oracle/database-enterprise:12.2.0.1-slim
    container_name: oracle
    volumes:
      - /docker/oracle/data:/ORCL # host path must have 777 permissions or be writable by the docker oracle user
    environment:
      - DB_SID=ORCLDB
      - DB_MEMORY=1GB
    ports:
      - "1521:1521"
    networks:
      - docker-network
  # https://hub.docker.com/_/postgres
  postgres:
    image: postgres:12
    container_name: postgres
    volumes:
      - /docker/postgre/data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=AdHocTuts2019#
      - POSTGRES_USER=postgres
      - POSTGRES_DB=docker_db
    ports:
      - "5432:5432"
    networks:
      - docker-network
  # https://hub.docker.com/_/mongo
  mongo:
    image: mongo:3.4.21-xenial
    container_name: mongo
    volumes:
      - /docker/mongo/data:/data/db
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=AdHocTuts2019#
    ports:
      - "27017:27017"
    networks:
      - docker-network