Hypercorn "Name does not resolve" in Docker with MySQL

I am trying to test my Quart application (pager) that connects to a MySQL instance in a docker container called master-db, but after a few retries I get a hypercorn error:
pager | Traceback (most recent call last):
pager | File "/usr/local/bin/hypercorn", line 8, in <module>
pager | sys.exit(main())
pager | File "/usr/local/lib/python3.9/site-packages/hypercorn/__main__.py", line 267, in main
pager | run(config)
pager | File "/usr/local/lib/python3.9/site-packages/hypercorn/run.py", line 34, in run
pager | worker_func(config)
pager | File "/usr/local/lib/python3.9/site-packages/hypercorn/asyncio/run.py", line 187, in asyncio_worker
pager | _run(
pager | File "/usr/local/lib/python3.9/site-packages/hypercorn/asyncio/run.py", line 229, in _run
pager | loop.run_until_complete(main(shutdown_trigger=shutdown_trigger))
pager | File "/usr/local/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
pager | return future.result()
pager | File "/usr/local/lib/python3.9/site-packages/hypercorn/asyncio/run.py", line 69, in worker_serve
pager | sockets = config.create_sockets()
pager | File "/usr/local/lib/python3.9/site-packages/hypercorn/config.py", line 177, in create_sockets
pager | insecure_sockets = self._create_sockets(self.bind)
pager | File "/usr/local/lib/python3.9/site-packages/hypercorn/config.py", line 240, in _create_sockets
pager | sock.bind(binding)
pager | socket.gaierror: [Errno -2] Name does not resolve
pager exited with code 1
The code works locally and has been fully tested, but I don't know where I have gone wrong :(
The docker-compose file is:
version: "3.8"
networks:
  localdev:
    driver: bridge
services:
  master-db:
    image: mysql:8.0
    container_name: master-db
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    ports:
      - "4000:3306"
    environment:
      MYSQL_ROOT_PASSWORD: password_for_stackoverflow
    volumes:
      - ./database/docker:/etc/mysql/conf.d
    networks:
      - localdev
  pager:
    build:
      context: .
      dockerfile: Dockerfile.pager
    container_name: pager
    ports:
      - "2020:2020"
    networks:
      - localdev
    depends_on:
      - master-db
Dockerfile.pager:
FROM python:3-alpine
RUN pip install --upgrade pip
RUN pip install hypercorn
RUN pip install mysql-connector
RUN pip install quart
COPY src/common /app/common
COPY src/pager /app/pager
WORKDIR /app
CMD ["hypercorn", "pager:app", "--bind", "'0.0.0.0:2020'"]

The problem is not that you can't connect to the MySQL instance; the problem is that Hypercorn is failing to bind its listening socket.
CMD ["hypercorn", "pager:app", "--bind", "'0.0.0.0:2020'"]
Have you tried removing the single quotes from the bind parameter?
CMD ["hypercorn", "pager:app", "--bind", "0.0.0.0:2020"]

Related

"Can't connect to MySQL server" While trying to connect MYSQL database to Django-REST in Docker

My configuration is as follows:
I am running a Django-REST backend with a MySQL database. I am trying to run the Django backend in its own Docker container, as well as running the MySQL database in its own Docker container. It seems that Django is not able to connect to the MySQL database when my containers are running.
Database settings in Django:
DATABASES = {
    "default": {
        "ENGINE": os.environ.get("SQL_ENGINE", "django.db.backends.sqlite3"),
        "NAME": os.environ.get("SQL_DATABASE", BASE_DIR / "db.sqlite3"),
        "USER": os.environ.get("SQL_USER", "user"),
        "PASSWORD": os.environ.get("SQL_PASSWORD", "password"),
        "HOST": os.environ.get("SQL_HOST", "localhost"),
        "PORT": os.environ.get("SQL_PORT", "5432"),
    }
}
Dockerfile:
FROM python:3.10.2-slim-buster
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
RUN apt update \
&& apt install -y --no-install-recommends python3-dev \
default-libmysqlclient-dev build-essential default-mysql-client \
&& apt autoclean
RUN pip install --no-cache-dir --upgrade pip
COPY ./requirements.txt /code/
RUN pip install --no-cache-dir -r requirements.txt
COPY ./neura-dbms-backend /code/
EXPOSE 7000
Requirements.txt:
Django
djangorestframework
django-cors-headers
requests
boto3
django-storages
pytest
mysqlclient==2.1.1
django-use-email-as-username
djangorestframework-simplejwt
gunicorn
docker-compose.yml:
version: "3.8"
services:
  neura-dbms-backend:
    build:
      context: ./DBMS/neura-dbms-backend
    command: [sh, -c, "python manage.py runserver 0.0.0.0:7000"]
    image: neura-dbms-backend
    container_name: neura-dbms-backend
    volumes:
      - ./DBMS/neura-dbms-backend/neura-dbms-backend:/code
    ports:
      - 7000:7000
    networks:
      - docker-network
    environment:
      - DEBUG=1
      - SECRET_KEY=${SECRET_KEY_DBMS}
      - DJANGO_ALLOWED_HOSTS=${DJANGO_ALLOWED_HOSTS}
      - DJANGO_ALLOWED_ORIGINS=${DJANGO_ALLOWED_ORIGINS}
      - JWT_KEY=${JWT_KEY}
      - SQL_ENGINE=django.db.backends.mysql
      - SQL_DATABASE=db_neura_dbms
      - SQL_USER=neura_dbms_user
      - SQL_PASSWORD=super_secure_password
      - SQL_HOST=db_neura_dbms
      - SQL_PORT=5432
    depends_on:
      - "db_neura_dbms"
  db_neura_dbms:
    image: mysql:latest
    volumes:
      - mysql_data_db_neura_dbms:/var/lib/mysql/
    environment:
      - MYSQL_DATABASE=db_neura_dbms
      - MYSQL_USER=neura_dbms_user
      - MYSQL_PASSWORD=super_secure_password
      - MYSQL_ROOT_PASSWORD=super_secure_password
    networks:
      - docker-network
networks:
  docker-network:
    driver: bridge
volumes:
  mysql_data_db_neura_dbms:
I am able to build images for Django and the Database, but when I try to run the containers, I get the following error from the Django container:
neura-dbms-backend | System check identified no issues (0 silenced).
neura-dbms-backend | Exception in thread django-main-thread:
neura-dbms-backend | Traceback (most recent call last):
neura-dbms-backend | File "/usr/local/lib/python3.10/site-packages/django/db/backends/base/base.py", line 282, in ensure_connection
neura-dbms-backend | self.connect()
neura-dbms-backend | File "/usr/local/lib/python3.10/site-packages/django/utils/asyncio.py", line 26, in inner
neura-dbms-backend | return func(*args, **kwargs)
neura-dbms-backend | File "/usr/local/lib/python3.10/site-packages/django/db/backends/base/base.py", line 263, in connect
neura-dbms-backend | self.connection = self.get_new_connection(conn_params)
neura-dbms-backend | File "/usr/local/lib/python3.10/site-packages/django/utils/asyncio.py", line 26, in inner
neura-dbms-backend | return func(*args, **kwargs)
neura-dbms-backend | File "/usr/local/lib/python3.10/site-packages/django/db/backends/mysql/base.py", line 247, in get_new_connection
neura-dbms-backend | connection = Database.connect(**conn_params)
neura-dbms-backend | File "/usr/local/lib/python3.10/site-packages/MySQLdb/__init__.py", line 123, in Connect
neura-dbms-backend | return Connection(*args, **kwargs)
neura-dbms-backend | File "/usr/local/lib/python3.10/site-packages/MySQLdb/connections.py", line 185, in __init__
neura-dbms-backend | super().__init__(*args, **kwargs2)
neura-dbms-backend | MySQLdb.OperationalError: (2002, "Can't connect to MySQL server on 'db_neura_dbms' (115)")
What am I missing? Thanks!
So I added a script so that Django waits for the mysql database to be ready before it connects:
#!/bin/bash
if [ "$SQL_HOST" = "db" ]
then
    echo "Waiting for mysql..."
    while ! </dev/tcp/$SQL_HOST/$SQL_PORT; do sleep 1; done;
    echo "MySQL started"
fi
# python manage.py migrate
exec "$@"
When I first run the Docker containers, MySQL seems to run through some sort of initial setup; Django then tries to connect and fails.
If I then kill the containers, and run them again, the MySQL setup is finished, and Django is able to connect to the database. I wonder if there is a way for Django to wait for this setup to be finished as well?
depends_on only waits until the database container is started; after the container is up, MySQL still needs some time before it is ready to accept connections.
What you can do is create a management command file (this one is for Postgres; for MySQL, catch the raised MySQL exception instead of Psycopg2Error):
import time
from psycopg2 import OperationalError as Psycopg2Error
from django.db.utils import OperationalError
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    """
    Django command to wait for database
    """

    def handle(self, *args, **options):
        """
        Command entrypoint
        """
        self.stdout.write("Checking database availability\n")
        db_up = False
        seconds_cnt = 0
        while not db_up:
            try:
                self.check(databases=['default'])
                db_up = True
                self.stdout.write(
                    self.style.WARNING(
                        "Available within {} seconds".format(seconds_cnt)))
                self.stdout.write(self.style.SUCCESS("Database available!"))
            except (Psycopg2Error, OperationalError):
                seconds_cnt += 1
                self.stdout.write(
                    self.style.WARNING(
                        "Database unavailable, waiting... {} seconds"
                        .format(seconds_cnt)))
                time.sleep(1)
Your command can then be updated to:
command: >
  sh -c "python manage.py wait_for_db &&
         python manage.py runserver 0.0.0.0:7000"
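For the MySQL setup in the question, a possible variant of the same command is sketched below (assumptions: the file is added as a management command, e.g. management/commands/wait_for_db.py inside one of your apps, and MySQLdb is provided by the mysqlclient package already in requirements.txt):
import time

from MySQLdb import OperationalError as MySQLdbOpError
from django.core.management.base import BaseCommand
from django.db.utils import OperationalError


class Command(BaseCommand):
    """Block until the default database accepts connections."""

    def handle(self, *args, **options):
        self.stdout.write("Waiting for database...")
        while True:
            try:
                # Run Django's system checks against the default database
                self.check(databases=["default"])
                break
            except (MySQLdbOpError, OperationalError):
                self.stdout.write("Database unavailable, retrying in 1 second...")
                time.sleep(1)
        self.stdout.write(self.style.SUCCESS("Database available!"))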

Failed to connect python and MySQL using Docker Compose

I am trying to connect to MySQL database using Docker Compose and I get the following error
Traceback (most recent call last):
| File "/usr/local/lib/python3.9/site-packages/mysql/connector/connection_cext.py", line 268, in _open_connection
| self._cmysql.connect(**cnx_kwargs)
| _mysql_connector.MySQLInterfaceError: Can't connect to MySQL server on '172.22.0.2:3306' (110)
|
| The above exception was the direct cause of the following exception:
| Traceback (most recent call last):
| File "//script.py", line 19, in <module>
| cnx = mysql.connector.connect(host='172.22.0.2', user='user', password='pass', database='db', port=3306)
| File "/usr/local/lib/python3.9/site-packages/mysql/connector/pooling.py", line 286, in connect
| return CMySQLConnection(*args, **kwargs)
| File "/usr/local/lib/python3.9/site-packages/mysql/connector/connection_cext.py", line 101, in __init__
| self.connect(**kwargs)
| File "/usr/local/lib/python3.9/site-packages/mysql/connector/abstracts.py", line 1108, in connect
| self._open_connection()
| File "/usr/local/lib/python3.9/site-packages/mysql/connector/connection_cext.py", line 273, in _open_connection..
| raise get_mysql_exception(mysql.connector.errors.DatabaseError: 2003 (HY000): Can't connect to MySQL server on '172.22.0.2:3306'
In my program I have two containers: one for the Python script and one for the MySQL database. Both containers build successfully and the db container starts just fine.
172.22.0.2 is the IP address in the network I've created (see docker-compose).
My python script simply connects to the database using mysql-connector.
The code is the following
Dockerfile for script
FROM python:3.9
COPY script.py script.py
RUN pip install mysql-connector-python
script.py
import mysql.connector
cnx = mysql.connector.connect(host='172.22.0.2', user='user', password='pass', database='db', port=3306)
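For comparison, a version of the script that retries instead of relying on the fixed sleep 10s in the compose file, and that uses the service name rather than a hard-coded IP, could look like this (a sketch; the host name db is the compose service name, and the retry count is arbitrary):
import time

import mysql.connector
from mysql.connector import Error as MySQLError

# Retry until the MySQL container has finished initialising.
# 'db' resolves through Docker's embedded DNS on the compose network,
# so the container IP does not need to be hard-coded.
for _ in range(30):
    try:
        cnx = mysql.connector.connect(
            host='db', user='user', password='pass', database='db', port=3306)
        break
    except MySQLError:
        time.sleep(2)
else:
    raise RuntimeError("MySQL did not become reachable in time")

print("connected:", cnx.is_connected())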
docker-compose.yaml
version: "3"
services:
  db:
    image: mysql
    container_name: db
    command: '--init-file /data/app/init.sql'
    ports:
      - "3306:3306"
    volumes:
      - "./init.sql:/data/app/init.sql"
    environment:
      MYSQL_DATABASE: "db"
      MYSQL_USER: "user"
      MYSQL_PASSWORD: "pass"
      MYSQL_ROOT_PASSWORD: "pass"
    networks:
      net:
        ipv4_address: 172.22.0.2
  script:
    build:
      context: ./
      dockerfile: Dockerfile
    depends_on:
      - db
    links:
      - db
    container_name: script
    command: sh -c "sleep 10s ; python script.py"
    networks:
      net:
        ipv4_address: 172.22.0.3
networks:
  net:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.22.0.0/24
          gateway: 172.22.0.1
init.sql
CREATE DATABASE IF NOT EXIST db;
USE db;

connect ECONNREFUSED 127.0.0.1:3306 with using jest in docker

I'm trying to run Jest tests before deploying to dev. So I made a docker-compose.yml and put in "npm test" (ENV=test jest --runInBand --forceExit test/**.test.ts -u), but it throws an error.
This is my local.yml file (for docker-compose.yml)
version: "3"
services:
  my-node:
    image: my-api-server:dev
    container_name: my_node
    # sleep 10 sec for db init
    command: bash -c "sleep 10; pwd; cd packages/server; yarn orm schema:sync -f ormconfig.dev.js; yarn db:migrate:run -f ormconfig.dev.js; npm test; cross-env ENV=dev node lib/server.js"
    ports:
      - "8082:8082"
    depends_on:
      - my-mysql
      - my-redis
  my-mysql:
    image: mysql:5.7
    container_name: my_mysql
    command: --character-set-server=utf8mb4 --sql_mode="NO_ENGINE_SUBSTITUTION"
    ports:
      - "33079:3306"
    volumes:
      - ./init/:/docker-entrypoint-initdb.d/
    environment:
      - MYSQL_ROOT_PASSWORD=test
      - MYSQL_DATABASE=test
      - MYSQL_USER=test
      - MYSQL_PASSWORD=test
  my-redis:
    image: redis:6.2-alpine
    container_name: my_redis
    ports:
      - 6379:6379
    command: redis-server --requirepass test
networks:
  default:
    external:
      name: my_nginx_default
and this is my ormconfig.dev.js file:
module.exports = {
  type: 'mysql',
  host: 'my_mysql',
  port: 3306,
  username: 'test',
  password: 'test',
  database: 'test',
  entities: ['./src/modules/**/entities/*'],
  migrations: ['migration/dev/*.ts'],
  cli: { "migrationsDir": "migration/dev" }
}
But after I run docker-compose -f res/docker/local.yml up, the whole stack builds, Jest then throws errors, and afterwards the server starts without any error.
The errors look like the ones below.
connect ECONNREFUSED 127.0.0.1:3306
and then
my_node | TypeError: Cannot read property 'subscriptionsPath' of undefined
my_node |
my_node | 116 | ): Promise<TestResponse<T>> => {
my_node | 117 | const req = request(server.app)
my_node | > 118 | .post(server.apolloServer!.subscriptionsPath!)
my_node | | ^
my_node | 119 | .set('Accept', 'application/json')
my_node | 120 |
my_node | 121 | if (token) {
I've tried changing the entities path:
entities: ['./src/modules/**/entities/*']
entities: ['src/modules/**/entities/*']
entities: [__dirname + '/src/modules/**/entities/*']
My entities are in the right path.
Here is my whole file structure
Can anyone help this problem?
In your module.exports, the host should be my-mysql.
Looking at the documentation, it could be an environment variable issue. As per the docs, the ORM config is picked up from these sources, in order:
From the environment variables. Typeorm will attempt to load the .env file using dotEnv if it exists. If the environment variables TYPEORM_CONNECTION or TYPEORM_URL are set, Typeorm will use this method.
From the ormconfig.env.
From the other ormconfig.[format] files, in this order: [js, ts, json, yml, yaml, xml].
Since you have not defined the first two, it must be defaulting to use the ormconfig.js file. There is no reason it should pick ormconfig.dev.js.
If you can, change the ormconfig.js file to this:
module.exports = {
  type: 'mysql',
  host: 'my_mysql',
  port: 3306,
  username: 'test',
  password: 'test',
  database: 'test',
  entities: ['./src/modules/**/entities/*'],
  migrations: ['migration/dev/*.ts'],
  cli: { "migrationsDir": "migration/dev" }
}

Python Flask SQLAlchemy container unable to connect to MySQL container

I am unable to connect to my MySQL database from my Flask application.
I am learning to build a web application with Python Flask from this tutorial, but tried modifying some elements of it to experiment with Docker. Even without using docker-compose, I was unable to connect to the database from the web application.
Let me first give the error traceback in the application log (flask_test container):
[2021-12-20 18:13:18 +0000] [1] [INFO] Starting gunicorn 20.1.0
[2021-12-20 18:13:18 +0000] [1] [INFO] Listening at: http://0.0.0.0:5000 (1)
[2021-12-20 18:13:18 +0000] [1] [INFO] Using worker: sync
[2021-12-20 18:13:18 +0000] [8] [INFO] Booting worker with pid: 8
[2021-12-20 18:13:19,105] INFO in __init__: Microblog startup
[2021-12-20 18:14:19,239] ERROR in app: Exception on /auth/register [POST]
Traceback (most recent call last):
File "/home/flasktest/venv/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn
conn = connection.create_connection(
File "/home/flasktest/venv/lib/python3.10/site-packages/urllib3/util/connection.py", line 96, in create_connection
raise err
File "/home/flasktest/venv/lib/python3.10/site-packages/urllib3/util/connection.py", line 86, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/flasktest/venv/lib/python3.10/site-packages/elasticsearch/connection/http_urllib3.py", line 255, in perform_request
response = self.pool.urlopen(
File "/home/flasktest/venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 755, in urlopen
retries = retries.increment(
File "/home/flasktest/venv/lib/python3.10/site-packages/urllib3/util/retry.py", line 507, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/home/flasktest/venv/lib/python3.10/site-packages/urllib3/packages/six.py", line 770, in reraise
raise value
File "/home/flasktest/venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 699, in urlopen
httplib_response = self._make_request(
File "/home/flasktest/venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 394, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/home/flasktest/venv/lib/python3.10/site-packages/urllib3/connection.py", line 239, in request
super(HTTPConnection, self).request(method, url, body=body, headers=headers)
File "/usr/local/lib/python3.10/http/client.py", line 1282, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/local/lib/python3.10/http/client.py", line 1328, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.10/http/client.py", line 1277, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.10/http/client.py", line 1037, in _send_output
self.send(msg)
File "/usr/local/lib/python3.10/http/client.py", line 975, in send
self.connect()
File "/home/flasktest/venv/lib/python3.10/site-packages/urllib3/connection.py", line 205, in connect
conn = self._new_conn()
File "/home/flasktest/venv/lib/python3.10/site-packages/urllib3/connection.py", line 186, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f73470be7d0>: Failed to establish a new connection: [Errno 111] Connection refused
And this is the MySQL container (mysql_test) log:
2021-12-20T18:13:14.094155Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2021-12-20T18:13:14.098891Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
2021-12-20T18:13:14.110089Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
2021-12-20T18:13:14.110149Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.27' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.
mbind: Operation not permitted
mbind: Operation not permitted
2021-12-20 18:13:10+00:00 [Note] [Entrypoint]: Creating database test_db
2021-12-20 18:13:10+00:00 [Note] [Entrypoint]: Creating user test_user
2021-12-20 18:13:10+00:00 [Note] [Entrypoint]: Giving user test_user access to schema test_db
2021-12-20 18:13:10+00:00 [Note] [Entrypoint]: Stopping temporary server
2021-12-20 18:13:13+00:00 [Note] [Entrypoint]: Temporary server stopped
2021-12-20 18:13:13+00:00 [Note] [Entrypoint]: MySQL init process done. Ready for start up.
Here is the starting point of Python application (microblog.py):
from app_pkg import create_app, cli
app = create_app()
cli.register(app)
Here is the model class:
class User(UserMixin, SearchableMixin, db.Model):
    __searchable__ = ['username']
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(64), index=True, unique=True)
    email = db.Column(db.String(120), index=True, unique=True)
    password_hash = db.Column(db.String(128))
    posts = db.relationship('Post', backref='author', lazy='dynamic')
    about_me = db.Column(db.String(140))
    last_seen = db.Column(db.DateTime, default=datetime.utcnow)
    followed = db.relationship(
        'User', secondary=followers,
        primaryjoin=(followers.c.follower_id == id),
        secondaryjoin=(followers.c.followed_id == id),
        backref=db.backref('followers', lazy='dynamic'), lazy='dynamic')
This is my compose.yaml:
version: '3'
services:
  python_app:
    container_name: flask_test
    build: .
    env_file: .env
    ports:
      - 8000:5000
    links:
      - mysqldb:dbserver
    depends_on:
      mysqldb:
        condition: service_healthy
  mysqldb:
    container_name: mysql_test
    image: mysql:latest
    env_file: database.conf
    volumes:
      - db-data:/var/lib/mysql
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "127.0.0.1", "--silent"]
      interval: 3s
      retries: 5
      start_period: 30s
volumes:
  db-data:
This is my Dockerfile:
FROM python:slim
RUN useradd flasktest
WORKDIR /home/flasktest
COPY requirements.txt requirements.txt
RUN python -m venv venv
RUN venv/bin/pip install -r requirements.txt
RUN venv/bin/pip install gunicorn pymysql cryptography
COPY app_pkg app_pkg
COPY migrations migrations
COPY microblog.py config.py boot.sh ./
RUN chmod +x boot.sh
ENV FLASK_APP microblog.py
RUN chown -R flasktest:flasktest ./
USER flasktest
EXPOSE 5000
ENTRYPOINT ["./boot.sh"]
And finally, this is the boot.sh:
#!/bin/bash
source venv/bin/activate
while true; do
    flask db upgrade
    if [[ "$?" == "0" ]]; then
        break
    fi
    echo Upgrade command failed, retrying in 5 secs...
    sleep 5
done
exec gunicorn -b :5000 --access-logfile - --error-logfile - microblog:app
And these are some necessary environment variables that are being used in the application:
DATABASE_URL=mysql+pymysql://test_user:abc123@dbserver/test_db
MYSQL_ROOT_PASSWORD=root123
MYSQL_DATABASE=test_db
MYSQL_USER=test_user
MYSQL_PASSWORD=abc123
Sorry if the question has become too lengthy; I wanted to give as much detail as possible. Let me know if any other details are required. I have been trying to debug this issue for the past week, but am unable to find a way to connect the app to the SQL server.
Also let me know if I should try any specific method to try to debug this issue.
Edit:
create_app function:
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate
from flask_login import LoginManager
from flask_mail import Mail
from flask_bootstrap import Bootstrap
from flask_moment import Moment

db = SQLAlchemy()
migrate = Migrate()
login = LoginManager()
login.login_view = 'auth.login'
login.login_message = _l('Please log in to access this page.')
mail = Mail()
bootstrap = Bootstrap()
moment = Moment()


def create_app(config_class=Config):
    app = Flask(__name__)
    app.config.from_object(config_class)

    db.init_app(app)
    migrate.init_app(app, db)
    login.init_app(app)
    mail.init_app(app)
    bootstrap.init_app(app)
    moment.init_app(app)

    from app_pkg.errors import bp as errors_bp
    app.register_blueprint(errors_bp)

    from app_pkg.auth import bp as auth_bp
    app.register_blueprint(auth_bp, url_prefix='/auth')

    from app_pkg.main import bp as main_bp
    app.register_blueprint(main_bp)
I tried changing DATABASE_URL from mysql+pymysql://test_user:abc123@dbserver/test_db to mysql+pymysql://test_user:abc123@mysqldb/test_db, but the issue still persists.
I also tried adding ports: - 3306:3306 to compose.yaml and changing the DATABASE_URL host to 0.0.0.0, but this gives the following error:
[+] Running 3/4
- Network flask_tutorial_default Created 0.7s
- Volume "flask_tutorial_db-data" Created 0.0s
- Container mysql_test Starting 2.4s
- Container flask_test Created 0.2s
Error response from daemon: Ports are not available: listen tcp 0.0.0.0:3306: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.
You are currently using the docker-compose DNS feature, in which you can use the container name as the domain name for services running in docker-compose. That's neat, except when you rename containers ;) Did you rename the mysqldb service from dbserver?
If you want to continue using this feature, modify the env vars like so (change dbserver to mysqldb):
DATABASE_URL=mysql+pymysql://test_user:abc123@mysqldb/test_db
...
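As a quick check of which names actually resolve from inside the flask_test container, something like this can be run there (a sketch; the two host names are just the service name and the link alias from the compose file):
import socket

for host in ("mysqldb", "dbserver"):
    try:
        print(host, "resolves to", socket.gethostbyname(host))
    except socket.gaierror as exc:
        print(host, "does not resolve:", exc)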
If you instead want to use a more explicit approach:
In your docker-compose, you need to bind port 3306 to your host network:
version: '3'
services:
  python_app:
    container_name: flask_test
    build: .
    env_file: .env
    ports:
      - 8000:5000
    links:
      - mysqldb:dbserver
    depends_on:
      mysqldb:
        condition: service_healthy
  mysqldb:
    container_name: mysql_test
    image: mysql:latest
    env_file: database.conf
    volumes:
      - db-data:/var/lib/mysql
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "127.0.0.1", "--silent"]
      interval: 3s
      retries: 5
      start_period: 30s
    ports:
      - 3306:3306
...
Then change the env var to:
DATABASE_URL=mysql+pymysql://test_user:abc123@0.0.0.0/test_db
Thanks for all the info, you did not post too much - needed it all :)

fiware quantumleap insert into cratedb not working (schema missing)

goal
Use quantumleap to move data into a crate_db to display later using Grafana.
what I did
follow tutorial to setup Docker images
setup opc-agent to provide data to the orion broker
setup quantumleap to move data from broker to crate_db on change
checked that a subscription is present in the contextBroker
Expected behavior
On subscription of a new item, quantumleap will create an entry in a table in the crate_db to store the provided values.
what actually happens
Instead of creating an entry in the crate_db, quantumleap throws a "schema not existing" fault.
The provided tutorials do not talk about setting those schemas up myself, so I assume that quantumleap normally sets them up.
Right now I do not know why this is failing; most likely it is a configuration mistake on my side.
additional information
subscription present in contextBroker:
curl -X GET \
'http://localhost:1026/v2/subscriptions/' \
-H 'fiware-service: openiot' \
-H 'fiware-servicepath: /'
[
  {
    "id": "60360eae34f0ca493f0fc148",
    "description": "plc_id",
    "status": "active",
    "subject": {
      "entities": [{"idPattern": "PLC1"}],
      "condition": {"attrs": ["main"]}
    },
    "notification": {
      "timesSent": 1748,
      "lastNotification": "2021-02-24T08:59:45.000Z",
      "attrs": ["main"],
      "onlyChangedAttrs": false,
      "attrsFormat": "normalized",
      "http": {"url": "http://quantumleap:8668/v2/notify"},
      "metadata": ["dateCreated", "dateModified"],
      "lastSuccess": "2021-02-24T08:59:45.000Z",
      "lastSuccessCode": 500
    },
    "throttling": 1
  }
]
Orion log:
orion_1 | INFO#09:07:55 logTracing.cpp[130]: Request received: POST /v1/updateContext, request payload (327 bytes): {"contextElements":[{"type":"plc","isPattern":"false","id":"PLC1","attributes":[{"name":"main","type":"Number","value":"12285","metadatas":[{"name":"SourceTimestamp","type":"ISO8601","value":"2021-02-24T09:07:55.033Z"},{"name":"ServerTimestamp","type":"ISO8601","value":"2021-02-24T09:07:55.033Z"}]}]}],"updateAction":"UPDATE"}, response code: 200
Quantum Leap log:
quantumleap_1 | time=2021-02-24 09:07:55.125 | level=ERROR | corr=c7df320c-767f-11eb-bbb3-0242ac1b0005; cbnotif=1 | from=172.27.0.5 | srv=openiot | subserv=/ | op=_insert_entity_rows | comp=translators.crate | msg=Failed to insert entities because of below error; translator will still try saving original JSON in "mtopeniot"."etplc".__original_ngsi_entity__ | payload=[{'id': 'PLC1', 'type': 'plc', 'main': {'type': 'Number', 'value': '12285', 'metadata': {'dateCreated': {'type': 'DateTime', 'value': '2021-02-24T08:28:59.917Z'}, 'dateModified': {'type': 'DateTime', 'value': '2021-02-24T09:07:55.115Z'}}}, 'time_index': '2021-02-24T09:07:55.115000+00:00'}] | thread=140262103055136 | process=67
Traceback from Quantumleap:
quantumleap_1 | Traceback (most recent call last):
quantumleap_1 | File "/src/ngsi-timeseries-api/src/translators/sql_translator.py", line 365, in _insert_entity_rows
quantumleap_1 | self.cursor.executemany(stmt, rows)
quantumleap_1 | File "/usr/local/lib/python3.8/site-packages/crate/client/cursor.py", line 67, in executemany
quantumleap_1 | self.execute(sql, bulk_parameters=seq_of_parameters)
quantumleap_1 | File "/usr/local/lib/python3.8/site-packages/crate/client/cursor.py", line 53, in execute
quantumleap_1 | self._result = self.connection.client.sql(sql, parameters,
quantumleap_1 | File "/usr/local/lib/python3.8/site-packages/crate/client/http.py", line 331, in sql
quantumleap_1 | content = self._json_request('POST', self.path, data=data)
quantumleap_1 | File "/usr/local/lib/python3.8/site-packages/crate/client/http.py", line 458, in _json_request
quantumleap_1 | _raise_for_status(response)
quantumleap_1 | File "/usr/local/lib/python3.8/site-packages/crate/client/http.py", line 187, in _raise_for_status
quantumleap_1 | raise ProgrammingError(error.get('message', ''),
quantumleap_1 | crate.client.exceptions.ProgrammingError: SQLActionException[SchemaUnknownException: Schema 'mtopeniot' unknown]
quantumleap_1 |
quantumleap_1 | During handling of the above exception, another exception occurred:
quantumleap_1 |
quantumleap_1 | Traceback (most recent call last):
quantumleap_1 | File "/src/ngsi-timeseries-api/src/reporter/reporter.py", line 195, in notify
quantumleap_1 | trans.insert(payload, fiware_s, fiware_sp)
quantumleap_1 | File "/src/ngsi-timeseries-api/src/translators/sql_translator.py", line 221, in insert
quantumleap_1 | res = self._insert_entities_of_type(et,
quantumleap_1 | File "/src/ngsi-timeseries-api/src/translators/sql_translator.py", line 354, in _insert_entities_of_type
quantumleap_1 | self._insert_entity_rows(table_name, col_names, entries, entities)
quantumleap_1 | File "/src/ngsi-timeseries-api/src/translators/sql_translator.py", line 381, in _insert_entity_rows
quantumleap_1 | self._insert_original_entities_in_failed_batch(
quantumleap_1 | File "/src/ngsi-timeseries-api/src/translators/sql_translator.py", line 437, in _insert_original_entities_in_failed_batch
quantumleap_1 | self.cursor.executemany(stmt, rows)
quantumleap_1 | File "/usr/local/lib/python3.8/site-packages/crate/client/cursor.py", line 67, in executemany
quantumleap_1 | self.execute(sql, bulk_parameters=seq_of_parameters)
quantumleap_1 | File "/usr/local/lib/python3.8/site-packages/crate/client/cursor.py", line 53, in execute
quantumleap_1 | self._result = self.connection.client.sql(sql, parameters,
quantumleap_1 | File "/usr/local/lib/python3.8/site-packages/crate/client/http.py", line 331, in sql
quantumleap_1 | content = self._json_request('POST', self.path, data=data)
quantumleap_1 | File "/usr/local/lib/python3.8/site-packages/crate/client/http.py", line 458, in _json_request
quantumleap_1 | _raise_for_status(response)
quantumleap_1 | File "/usr/local/lib/python3.8/site-packages/crate/client/http.py", line 187, in _raise_for_status
quantumleap_1 | raise ProgrammingError(error.get('message', ''),
quantumleap_1 | crate.client.exceptions.ProgrammingError: SQLActionException[SchemaUnknownException: Schema 'mtopeniot' unknown]
Tables in cratedb after running quantumleap for a while:
screenshot of cratedb tables
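For reference, whether the mtopeniot schema has been created can also be checked with a short script against the published CrateDB port (a sketch using the crate Python client; host and port come from the compose file below):
from crate import client

# Connect to CrateDB's HTTP endpoint, published as port 4200 in the compose file.
conn = client.connect("http://localhost:4200")
cursor = conn.cursor()
# QuantumLeap stores entities in per-tenant schemas such as 'mtopeniot';
# list whatever tables exist in those schemas so far.
cursor.execute(
    "SELECT table_schema, table_name FROM information_schema.tables "
    "WHERE table_schema LIKE 'mt%'")
print(cursor.fetchall())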
docker-compose file
version: "3"
services:
  iotage:
    hostname: iotage
    image: iotagent4fiware/iotagent-opcua
    networks:
      - hostnet
      - iotnet
    ports:
      - "4001:4001"
      - "4081:8080"
    extra_hosts:
      - "iotcarsrv:192.168.2.16"
      # - "PLC1:192.168.2.57"
    depends_on:
      - iotmongo
      - orion
    volumes:
      - ./certificates:/opt/iotagent-opcua/certificates
      - ./AGECONF:/opt/iotagent-opcua/conf
    command: /usr/bin/tail -f /var/log/lastlog
  iotmongo:
    hostname: iotmongo
    image: mongo:3.4
    volumes:
      - iotmongo_data:/data/db
      - iotmongo_conf:/data/configdb
  crate-db:
    image: crate
    hostname: crate-db
    ports:
      - "4200:4200"
      - "4300:4300"
    command:
      crate -Clicense.enterprise=false -Cauth.host_based.enabled=false -Ccluster.name=democluster
      -Chttp.cors.enabled=true -Chttp.cors.allow-origin="*"
    networks:
      - hostnet
  quantumleap:
    hostname: quantumleap
    image: smartsdk/quantumleap
    ports:
      - "8668:8668"
    depends_on:
      - crate-db
    environment:
      - CRATE_HOST=crate-db
    networks:
      - hostnet
  grafana:
    image: grafana/grafana
    depends_on:
      - crate-db
    ports:
      - "3003:3000"
    networks:
      - hostnet
  ################ OCB ################
  orion:
    hostname: orion
    image: fiware/orion:latest
    networks:
      - hostnet
      - ocbnet
    ports:
      - "1026:1026"
    depends_on:
      - orion_mongo
    #command: -dbhost mongo
    entrypoint: /usr/bin/contextBroker -fg -multiservice -ngsiv1Autocast -statCounters -dbhost mongo -logForHumans -logLevel DEBUG -t 255
  orion_mongo:
    hostname: orion_mongo
    image: mongo:3.4
    networks:
      ocbnet:
        aliases:
          - mongo
    volumes:
      - orion_mongo_data:/data/db
      - orion_mongo_conf:/data/configdb
    command: --nojournal
volumes:
  iotmongo_data:
  iotmongo_conf:
  orion_mongo_data:
  orion_mongo_conf:
networks:
  hostnet:
  iotnet:
  ocbnet:
Edits:
added docker-compose file
After changing the database to a more recent version (for example crate-db:3.1.2), the data arrives at the database nicely.