Cygnus doesn't persist data because: "namespace name generated is too long >127" - fiware

After several days breaking my head over Cygnus persisting the updates only randomly, I have found in the logs that the generated namespace name is too long.
I'm working on CentOS 7.
My entities use the standard type BikeHireDockingStation.
The error says that the generated namespace is too long (127-character limit), while the one generated is 167 characters:
sth_malaga.sth_/_urn:ngsi-ld:BikeHireDockingStation:10_BikeHireDockingStation.aggr.$_id.entityId_1__id.entityType_1__id.attrName_1__id.resolution_1__id.origin_1
But even if I change the type to bike, the size is still 124.
Here you can see the relevant part of the error log, obtained when I call:
$ docker container logs fiware-cygnus
time=2019-09-09T21:14:14.176Z | lvl=ERROR | corr=N/A | trans=N/A | srv=N/A |
subsrv=N/A | comp=cygnus-ngsi | op=processRollbackedBatches |
msg=com.telefonica.iot.cygnus.sinks.NGSISink[399] : CygnusPersistenceError. -,
Command failed with error 67: 'namespace name generated from index name
"sth_malaga.sth_/_urn:ngsi-ld:BikeHireDockingStation:10_BikeHireDockingStation.aggr.$_id.entityId_1__id.entityType_1__id.attrName_1__id.resolution_1__id.origin_1"
is too long (127 byte max)' on server mongo-db:27017.
The full response is { "ok" : 0.0, "errmsg" : "namespace name generated from index name
\"sth_malaga.sth_/_urn:ngsi-ld:BikeHireDockingStation:10_BikeHireDockingStation.aggr.$_id.entityId_1__id.entityType_1__id.attrName_1__id.resolution_1__id.origin_1\"
is too long (127 byte max)", "code" : 67, "codeName" : "CannotCreateIndex" }.
Stack trace:
[com.telefonica.iot.cygnus.sinks.NGSISTHSink$STHAggregator.persist(NGSISTHSink.java:374),
com.telefonica.iot.cygnus.sinks.NGSISTHSink.persistBatch(NGSISTHSink.java:108),
com.telefonica.iot.cygnus.sinks.NGSISink.processRollbackedBatches(NGSISink.java:391),
com.telefonica.iot.cygnus.sinks.NGSISink.process(NGSISink.java:373),
org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67),
org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145), java.lang.Thread.run(Thread.java:748)]
Is it possible that the maximum size for an entity type is 5 characters? (With a 5-character type, the namespace size is 126.)
Can you help me solve this problem?
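For what it's worth, the failing check is easy to reproduce outside MongoDB by measuring the byte length of the namespace reported in the log (the database, collection, and index-name parts below are copied verbatim from the error above; the 127-byte limit applies to MongoDB versions before 4.2):

```python
# Reproduce the pre-4.2 MongoDB namespace length check that Cygnus hits.
# All names are copied from the Cygnus error log above.
db_name = "sth_malaga"
collection = "sth_/_urn:ngsi-ld:BikeHireDockingStation:10_BikeHireDockingStation.aggr"
index_part = "$_id.entityId_1__id.entityType_1__id.attrName_1__id.resolution_1__id.origin_1"

# MongoDB builds the index namespace as <db>.<collection>.<index name parts>
namespace = ".".join([db_name, collection, index_part])
print(len(namespace.encode("utf-8")), "bytes vs. the 127-byte limit")
```

This makes it clear the entity ID and type appear twice in the collection name, which is why shortening the type alone barely helps.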
I have tried different scenarios:
fiware/orion:latest
fiware/cygnus-common:latest
mongo:3.6
This combination gives the following result:
time=2019-09-12T17:12:17.071Z | lvl=WARN | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-common | op=doPost | msg=org.apache.flume.source.http.HTTPSource$FlumeHTTPServlet[186] : Received bad request from client.
org.apache.flume.source.http.HTTPBadRequestException: Request has invalid JSON Syntax.
at org.apache.flume.source.http.JSONHandler.getEvents(JSONHandler.java:119)
at org.apache.flume.source.http.HTTPSource$FlumeHTTPServlet.doPost(HTTPSource.java:184)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:725)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:814)
at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401)
at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:945)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:756)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Caused by: com.google.gson.JsonSyntaxException: java.lang.IllegalStateException: Expected BEGIN_ARRAY but was BEGIN_OBJECT at line 1 column 2
at com.google.gson.Gson.fromJson(Gson.java:806)
at com.google.gson.Gson.fromJson(Gson.java:761)
at org.apache.flume.source.http.JSONHandler.getEvents(JSONHandler.java:117)
... 16 more
Caused by: java.lang.IllegalStateException: Expected BEGIN_ARRAY but was BEGIN_OBJECT at line 1 column 2
at com.google.gson.stream.JsonReader.expect(JsonReader.java:339)
at com.google.gson.stream.JsonReader.beginArray(JsonReader.java:306)
at com.google.gson.internal.bind.CollectionTypeAdapterFactory$Adapter.read(CollectionTypeAdapterFactory.java:79)
at com.google.gson.internal.bind.CollectionTypeAdapterFactory$Adapter.read(CollectionTypeAdapterFactory.java:60)
at com.google.gson.Gson.fromJson(Gson.java:795)
... 18 more
with the configuration:
fiware/orion:latest
fiware/cygnus-ngsi:1.13.0
mongo:3.6
the result is:
time=2019-09-12T17:22:15.466Z | lvl=ERROR | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=processRollbackedBatches | msg=com.telefonica.iot.cygnus.sinks.NGSISink[399] : CygnusPersistenceError. -, Command failed with error 67: 'namespace name generated from index name "sth_malaga.sth_/_EstacionBici:10_BikeHireDockingStation.aggr.$_id.entityId_1__id.entityType_1__id.attrName_1__id.resolution_1__id.origin_1" is too long (127 byte max)' on server mongo-db:27017. The full response is { "ok" : 0.0, "errmsg" : "namespace name generated from index name \"sth_malaga.sth_/_EstacionBici:10_BikeHireDockingStation.aggr.$_id.entityId_1__id.entityType_1__id.attrName_1__id.resolution_1__id.origin_1\" is too long (127 byte max)", "code" : 67, "codeName" : "CannotCreateIndex" }. Stack trace: [com.telefonica.iot.cygnus.sinks.NGSISTHSink$STHAggregator.persist(NGSISTHSink.java:374), com.telefonica.iot.cygnus.sinks.NGSISTHSink.persistBatch(NGSISTHSink.java:108), com.telefonica.iot.cygnus.sinks.NGSISink.processRollbackedBatches(NGSISink.java:391), com.telefonica.iot.cygnus.sinks.NGSISink.process(NGSISink.java:373), org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67), org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145), java.lang.Thread.run(Thread.java:748)]
and finally with the configuration:
fiware/orion:latest
fiware/cygnus-ngsi:latest
mongo:3.6
the result is:
time=2019-09-12T17:25:48.943Z | lvl=DEBUG | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=processNewBatches | msg=com.telefonica.iot.cygnus.sinks.NGSISink[492] : Batch accumulation time reached, the batch will be processed as it is
time=2019-09-12T17:25:49.007Z | lvl=DEBUG | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=run | msg=com.telefonica.iot.cygnus.interceptors.NGSINameMappingsInterceptor$PeriodicalNameMappingsReader[205] : [nmi] The configuration has not changed
but it doesn't create the sth_malaga database, as I can see when I inspect Mongo like this: $ docker exec -it db-mongo bash
> show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
orion 0.000GB
orion-malaga 0.000GB
>
As you can see, I'm going nearly crazy. Can you suggest the best Cygnus, Orion and MongoDB versions to use?
version: "3.5"
services:
  # Orion is the context broker
  orion:
    image: fiware/orion:latest
    hostname: orion
    container_name: fiware-orion
    depends_on:
      - mongo-db
    networks:
      - default
    expose:
      - "1026"
    ports:
      - "1026:1026"
    command: -dbhost mongo-db -logLevel DEBUG
    healthcheck:
      test: curl --fail -s http://orion:1026/version || exit 1
  # Configure Cygnus to store the updates that STH-Comet will query
  cygnus:
    image: fiware/cygnus-ngsi:latest
    hostname: cygnus
    container_name: fiware-cygnus
    depends_on:
      - mongo-db-cygnus
    networks:
      - default
    expose:
      - "5050"
      - "5080"
    ports:
      - "5050:5050"
      - "5080:5080"
    environment:
      - "CYGNUS_MONGO_HOSTS=mongo-db-cygnus:27017" # server where data will be persisted
      - "CYGNUS_LOG_LEVEL=DEBUG" # log level for Cygnus
      - "CYGNUS_SERVICE_PORT=5050" # port on which Cygnus listens for updates
      - "CYGNUS_API_PORT=5080" # Cygnus administration port
    healthcheck:
      test: curl --fail -s http://localhost:5080/v1/version || exit 1
  # STH-Comet will consume the data stored in MongoDB for the short-term history
  sth-comet:
    image: fiware/sth-comet:latest
    hostname: sth-comet
    container_name: fiware-sth-comet
    depends_on:
      - cygnus
      - mongo-db-cygnus
    networks:
      - default
    ports:
      - "8666:8666"
    environment:
      - STH_HOST=0.0.0.0
      - STH_PORT=8666
      - DB_PREFIX=sth_
      - DB_URI=mongo-db-cygnus:27017
      - LOGOPS_LEVEL=DEBUG
    healthcheck:
      test: curl --fail -s http://localhost:8666/version || exit 1
  # Database for Orion
  mongo-db:
    image: mongo:3.6
    hostname: mongo-db
    container_name: db-mongo
    expose:
      - "27017"
    ports:
      - "27017:27017"
    networks:
      - default
    command: --bind_ip_all --smallfiles
    volumes:
      - mongo-db:/data
  # Database for Cygnus
  mongo-db-cygnus:
    image: mongo:latest
    hostname: mongo-db-cygnus
    container_name: db-mongo-cygnus
    expose:
      - "27018"
    ports:
      - "27018:27017"
    networks:
      - default
    command: --bind_ip_all
    volumes:
      - mongo-db-cygnus:/data
networks:
  default:
    ipam:
      config:
        - subnet: 172.18.1.0/24
volumes:
  mongo-db: ~
  mongo-db-cygnus: ~
I have tried this, but in this case Cygnus doesn't write anything to the second database. It only works as before (updating just some entities) if I change the Cygnus version to image: fiware/cygnus-ngsi:1.14.0. This means that using two versions of the database doesn't give any improvement.

The root cause of this problem is outside Cygnus itself: it is the index name length limit that MongoDB enforces prior to version 4.2.
Depending on the Cygnus version, the problem is dealt with in a different way:
Versions prior to 1.14.0 throw an exception that interrupts the data persistence operation and prints an ugly Java stack trace in the logs. I understand this is your case.
Versions 1.14.0 and beyond deal correctly with the error situation, so the index is not created (a WARN trace is printed in the logs about it) but the data is persisted. In this case Cygnus does its work, although you may experience slower queries when accessing the data if you have a large amount of it.
The best solution is to upgrade MongoDB to 4.2, which should remove the problem completely. But in that case you should take two things into account:
MongoDB 4.2 is not yet officially supported by Cygnus, although user reports are positive.
I don't know if Orion Context Broker works with MongoDB 4.2. I don't know of any positive or negative reports, so my suggestion is that you test it :). In the worst case, you could use two separate MongoDB instances (4.2 for Cygnus and 3.6 for Orion).
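For instance, in your docker-compose file above this would amount to pinning the two database images explicitly (a sketch; the 4.2 tag is the unsupported-but-reported-working option):

```yaml
  # MongoDB 4.2 for Cygnus/STH-Comet: the index name length limit is gone in 4.2
  mongo-db-cygnus:
    image: mongo:4.2
  # MongoDB 3.6 for Orion: a version known to be supported
  mongo-db:
    image: mongo:3.6
```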

Related

Connecting to MySQL with Ecto in Docker

I am in the process of Dockerizing my project, and the following is my docker-compose.yml:
version: '3.8'
services:
server:
image: server
environment:
DB_HOST: 0.0.0.0
DB_DATABASE: dev
DB_PORT: 3306
DB_USER: dev
DB_PASSWORD: dev
ports:
- "4000:4000"
restart: on-failure
depends_on:
database:
condition: service_started
database:
image: mysql:latest
command: --default-authentication-plugin=mysql_native_password
ports:
- "3306:3306"
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_USER: dev
MYSQL_PASSWORD: dev
MYSQL_DATABASE: dev
volumes:
- mysql_data:/var/lib/mysql
volumes:
mysql_data:
and the Dockerfile for server:
FROM elixir:latest
EXPOSE 4000
RUN mkdir /server
COPY . /server
WORKDIR /server
RUN mix local.rebar --force
RUN mix local.hex --force
RUN mix deps.get
RUN mix do compile
CMD [ "sh", "/server/entry.sh" ]
where entry.sh is:
# Exit on failure
set -e
mix ecto.create
mix ecto.migrate
mix run priv/repo/seeds.exs
exec mix phx.server
When I run this using docker compose up, I get an error that Phoenix can't connect to the MySQL database. I have tried using the hosts 0.0.0.0 and localhost, but neither works.
This is the corresponding section of my config/dev.exs:
config :app, App.Repo,
  username: System.get_env("DB_USER"),
  password: System.get_env("DB_PASSWORD"),
  hostname: System.get_env("DB_HOST"),
  database: System.get_env("DB_DATABASE"),
  port: 3306,
  stacktrace: true,
  pool_size: 10
The following is the exact error message I get:
server-1 | 11:23:07.417 [error] GenServer #PID<0.272.0> terminating
server-1 | ** (DBConnection.ConnectionError) (0.0.0.0:3306) connection refused - :econnrefused
server-1 | (db_connection 2.4.3) lib/db_connection/connection.ex:100: DBConnection.Connection.connect/2
server-1 | (connection 1.1.0) lib/connection.ex:622: Connection.enter_connect/5
server-1 | (stdlib 4.2) proc_lib.erl:240: :proc_lib.init_p_do_apply/3
server-1 | Last message: nil
server-1 | State: MyXQL.Connection
server-1 |
server-1 | 11:23:07.430 [error] GenServer #PID<0.278.0> terminating
server-1 | ** (DBConnection.ConnectionError) (0.0.0.0:3306) connection refused - :econnrefused
server-1 | (db_connection 2.4.3) lib/db_connection/connection.ex:100: DBConnection.Connection.connect/2
server-1 | (connection 1.1.0) lib/connection.ex:622: Connection.enter_connect/5
server-1 | (stdlib 4.2) proc_lib.erl:240: :proc_lib.init_p_do_apply/3
server-1 | Last message: nil
server-1 | State: MyXQL.Connection
server-1 | ** (Mix) The database for App.Repo couldn't be created: %RuntimeError{message: "killed"}
server-1 exited with code 1
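A minimal sketch of the usual fix for this kind of setup: inside a compose network, containers reach each other through Docker's embedded DNS by service name, not via 0.0.0.0 or localhost. Since the MySQL service above is named database, the app's environment would point there:

```yaml
  server:
    environment:
      DB_HOST: database   # compose service name of the MySQL container
      DB_PORT: 3306       # container-internal port, not a host-published one
```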

Fiware Perseo MongoDB Authentication failed

I'm trying to connect perseo-fe to MongoDB (with authentication enabled) and it always raises an AuthenticationFailed error.
Other components like Orion, Cygnus, etc. work OK, but perseo-fe fails. Can you help me?
Error
(node:1) [MONGODB DRIVER] Warning: Current Server Discovery and Monitoring engine is deprecated, and will be removed in a future version. To use the new Server Discover and Monitoring engine, pass option { useUnifiedTopology: true } to the MongoClient constructor.
(node:1) [MONGODB DRIVER] Warning: Warning: no saslprep library specified. Passwords will not be sanitized
time=2022-08-03T13:05:29.151Z | lvl=ERROR | corr=n/a | trans=n/a | op=checkDB | comp=perseo-fe | msg=connect failed to connect to server [mongo-db:27017] on first connect [MongoError: Authentication failed.
at Connection.messageHandler (/opt/perseo-fe/node_modules/mongodb/lib/core/connection/connection.js:359:19)
at Connection.emit (events.js:314:20)
at Connection.EventEmitter.emit (domain.js:506:15)
at processMessage (/opt/perseo-fe/node_modules/mongodb/lib/core/connection/connection.js:451:10)
at Socket.<anonymous> (/opt/perseo-fe/node_modules/mongodb/lib/core/connection/connection.js:620:15)
at Socket.emit (events.js:314:20)
at Socket.EventEmitter.emit (domain.js:506:15)
at addChunk (_stream_readable.js:297:12)
at readableAddChunk (_stream_readable.js:272:9)
at Socket.Readable.push (_stream_readable.js:213:10) {
ok: 0,
code: 18,
codeName: 'AuthenticationFailed'
}]
This is my docker-compose file:
version: "3.5"
services:
  orion:
    image: fiware/orion:2.2.0
    depends_on:
      - mongo-db
    networks:
      - default
    ports:
      - "1026:1026"
    command: -dbhost mongo-db -dbuser admin -dbpwd 2m4rt2022
  mongo-db:
    image: mongo:latest
    ports:
      - "27017:27017"
    networks:
      - default
    command: --bind_ip_all
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=XXX
  cygnus-mongo:
    image: fiware/cygnus-ngsi:latest
    depends_on:
      - mongo-db
    networks:
      - default
    ports:
      - "5051:5051"
      - "5081:5081"
    environment:
      - CYGNUS_MONGO_HOSTS=mongo-db:27017
      - CYGNUS_MONGO_DATA_MODEL=dm-by-service-path
      - CYGNUS_MONGO_ATTR_PERSISTENCE=column
      - CYGNUS_MONGO_DB_PREFIX=sti_
      - CYGNUS_MONGO_USER=admin
      - CYGNUS_MONGO_PASS=XXX
      - CYGNUS_MONGO_AUTH_SOURCE=admin
      - CYGNUS_STH_DB_PREFIX=sth_
      - CYGNUS_API_PORT=5081
      - CYGNUS_SERVICE_PORT=5051
  perseo-core:
    image: fiware/perseo-core:latest
    depends_on:
      - mongo-db
      - orion
    networks:
      - default
    ports:
      - "8080:8080"
    environment:
      - "PERSEO_FE_URL=http://perseo-fe:9090"
      - "MAX_AGE=3600000"
  perseo-fe:
    image: fiware/perseo:latest
    networks:
      - default
    ports:
      - "9090:9090"
    depends_on:
      - perseo-core
      - mongo-db
    environment:
      - PERSEO_MONGO_ENDPOINT=mongo-db:27017
      - PERSEO_MONGO_USER=admin
      - PERSEO_MONGO_PASS=XXX
      - PERSEO_MONGO_AUTH_SOURCE=admin
      - PERSEO_CORE_URL=http://perseo-core:8080
      - PERSEO_LOG_LEVEL=debug
      - PERSEO_ORION_URL=http://orion:1026/

connect ECONNREFUSED 127.0.0.1:3306 with using jest in docker

I'm trying to run Jest tests before deploying to dev, so I made a docker-compose.yml and put in npm test (ENV=test jest --runInBand --forceExit test/**.test.ts -u), but it fails with an error.
This is my local.yml file (used as the docker-compose.yml):
version: "3"
services:
  my-node:
    image: my-api-server:dev
    container_name: my_node
    # sleep 10 sec for db init
    command: bash -c "sleep 10; pwd; cd packages/server; yarn orm schema:sync -f ormconfig.dev.js; yarn db:migrate:run -f ormconfig.dev.js; npm test; cross-env ENV=dev node lib/server.js"
    ports:
      - "8082:8082"
    depends_on:
      - my-mysql
      - my-redis
  my-mysql:
    image: mysql:5.7
    container_name: my_mysql
    command: --character-set-server=utf8mb4 --sql_mode="NO_ENGINE_SUBSTITUTION"
    ports:
      - "33079:3306"
    volumes:
      - ./init/:/docker-entrypoint-initdb.d/
    environment:
      - MYSQL_ROOT_PASSWORD=test
      - MYSQL_DATABASE=test
      - MYSQL_USER=test
      - MYSQL_PASSWORD=test
  my-redis:
    image: redis:6.2-alpine
    container_name: my_redis
    ports:
      - 6379:6379
    command: redis-server --requirepass test
networks:
  default:
    external:
      name: my_nginx_default
and this is my ormconfig.dev.js file:
module.exports = {
  type: 'mysql',
  host: 'my_mysql',
  port: 3306,
  username: 'test',
  password: 'test',
  database: 'test',
  entities: ['./src/modules/**/entities/*'],
  migrations: ['migration/dev/*.ts'],
  cli: { "migrationsDir": "migration/dev" }
}
But when I run docker-compose -f res/docker/local.yml up, the whole build succeeds, then Jest throws an error, and then it starts the server, which runs without errors.
The errors are like these below.
connect ECONNREFUSED 127.0.0.1:3306
and then
my_node | TypeError: Cannot read property 'subscriptionsPath' of undefined
my_node |
my_node | 116 | ): Promise<TestResponse<T>> => {
my_node | 117 | const req = request(server.app)
my_node | > 118 | .post(server.apolloServer!.subscriptionsPath!)
my_node | | ^
my_node | 119 | .set('Accept', 'application/json')
my_node | 120 |
my_node | 121 | if (token) {
I've tried changing the entities path:
entities: ['./src/modules/**/entities/*']
entities: ['src/modules/**/entities/*']
entities: [__dirname + '/src/modules/**/entities/*']
My entities are in the right path.
Here is my whole file structure
Can anyone help with this problem?
In your module.exports, set host: 'my-mysql'.
Looking at the documentation, it could be an environment variable issue. As per the docs, the ORM config file is loaded in this order:
From the environment variables. Typeorm will attempt to load the .env file using dotEnv if it exists. If the environment variables TYPEORM_CONNECTION or TYPEORM_URL are set, Typeorm will use this method.
From the ormconfig.env.
From the other ormconfig.[format] files, in this order: [js, ts, json, yml, yaml, xml].
Since you have not defined the first two, it must be defaulting to the ormconfig.js file. There is no reason it should pick ormconfig.dev.js.
If you can, change the ormconfig.js file to be this:
module.exports = {
  type: 'mysql',
  host: 'my_mysql',
  port: 3306,
  username: 'test',
  password: 'test',
  database: 'test',
  entities: ['./src/modules/**/entities/*'],
  migrations: ['migration/dev/*.ts'],
  cli: { "migrationsDir": "migration/dev" }
}
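Alternatively, assuming TypeORM's environment-variable configuration (the TYPEORM_* variables, which take precedence over any ormconfig file), the connection could be forced from the compose file itself; a sketch:

```yaml
  my-node:
    environment:
      - TYPEORM_CONNECTION=mysql
      - TYPEORM_HOST=my-mysql   # compose service name of the MySQL container
      - TYPEORM_PORT=3306
      - TYPEORM_USERNAME=test
      - TYPEORM_PASSWORD=test
      - TYPEORM_DATABASE=test
      # entities/migrations paths may also need TYPEORM_ENTITIES / TYPEORM_MIGRATIONS
```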

fiware quantumleap insert into cratedb not working (schema missing)

goal
Use QuantumLeap to move data into CrateDB for later display in Grafana.
what I did
followed the tutorial to set up the Docker images
set up the OPC agent to provide data to the Orion broker
set up QuantumLeap to move data from the broker to CrateDB on change
checked that a subscription is present in the context broker
Expected behavior
On notification for a new item, QuantumLeap will create an entry in a CrateDB table to store the provided values.
what actually happens
Instead of creating an entry in CrateDB, QuantumLeap throws a "schema not existing" error.
The provided tutorials do not mention setting those schemas up myself, so I assume that QuantumLeap normally sets them up.
Right now I do not know why this is failing; most likely it is a configuration mistake on my side.
additional information
subscription present in contextBroker:
curl -X GET \
'http://localhost:1026/v2/subscriptions/' \
-H 'fiware-service: openiot' \
-H 'fiware-servicepath: /'
[
{"id":"60360eae34f0ca493f0fc148",
"description":"plc_id",
"status":"active",
"subject":{"entities":[{"idPattern":"PLC1"}],
"condition":{"attrs":["main"]}},
"notification":{"timesSent":1748,
"lastNotification":"2021-02-24T08:59:45.000Z",
"attrs":["main"],
"onlyChangedAttrs":false,
"attrsFormat":"normalized",
"http":{"url":"http://quantumleap:8668/v2/notify"},
"metadata":["dateCreated","dateModified"],
"lastSuccess":"2021-02-24T08:59:45.000Z",
"lastSuccessCode":500},
"throttling":1}
]
Orion log:
orion_1 | INFO#09:07:55 logTracing.cpp[130]: Request received: POST /v1/updateContext, request payload (327 bytes): {"contextElements":[{"type":"plc","isPattern":"false","id":"PLC1","attributes":[{"name":"main","type":"Number","value":"12285","metadatas":[{"name":"SourceTimestamp","type":"ISO8601","value":"2021-02-24T09:07:55.033Z"},{"name":"ServerTimestamp","type":"ISO8601","value":"2021-02-24T09:07:55.033Z"}]}]}],"updateAction":"UPDATE"}, response code: 200
Quantum Leap log:
quantumleap_1 | time=2021-02-24 09:07:55.125 | level=ERROR | corr=c7df320c-767f-11eb-bbb3-0242ac1b0005; cbnotif=1 | from=172.27.0.5 | srv=openiot | subserv=/ | op=_insert_entity_rows | comp=translators.crate | msg=Failed to insert entities because of below error; translator will still try saving original JSON in "mtopeniot"."etplc".__original_ngsi_entity__ | payload=[{'id': 'PLC1', 'type': 'plc', 'main': {'type': 'Number', 'value': '12285', 'metadata': {'dateCreated': {'type': 'DateTime', 'value': '2021-02-24T08:28:59.917Z'}, 'dateModified': {'type': 'DateTime', 'value': '2021-02-24T09:07:55.115Z'}}}, 'time_index': '2021-02-24T09:07:55.115000+00:00'}] | thread=140262103055136 | process=67
Traceback from QuantumLeap:
quantumleap_1 | Traceback (most recent call last):
quantumleap_1 |   File "/src/ngsi-timeseries-api/src/translators/sql_translator.py", line 365, in _insert_entity_rows
quantumleap_1 |     self.cursor.executemany(stmt, rows)
quantumleap_1 |   File "/usr/local/lib/python3.8/site-packages/crate/client/cursor.py", line 67, in executemany
quantumleap_1 |     self.execute(sql, bulk_parameters=seq_of_parameters)
quantumleap_1 |   File "/usr/local/lib/python3.8/site-packages/crate/client/cursor.py", line 53, in execute
quantumleap_1 |     self._result = self.connection.client.sql(sql, parameters,
quantumleap_1 |   File "/usr/local/lib/python3.8/site-packages/crate/client/http.py", line 331, in sql
quantumleap_1 |     content = self._json_request('POST', self.path, data=data)
quantumleap_1 |   File "/usr/local/lib/python3.8/site-packages/crate/client/http.py", line 458, in _json_request
quantumleap_1 |     _raise_for_status(response)
quantumleap_1 |   File "/usr/local/lib/python3.8/site-packages/crate/client/http.py", line 187, in _raise_for_status
quantumleap_1 |     raise ProgrammingError(error.get('message', ''),
quantumleap_1 | crate.client.exceptions.ProgrammingError: SQLActionException[SchemaUnknownException: Schema 'mtopeniot' unknown]
quantumleap_1 |
quantumleap_1 | During handling of the above exception, another exception occurred:
quantumleap_1 |
quantumleap_1 | Traceback (most recent call last):
quantumleap_1 |   File "/src/ngsi-timeseries-api/src/reporter/reporter.py", line 195, in notify
quantumleap_1 |     trans.insert(payload, fiware_s, fiware_sp)
quantumleap_1 |   File "/src/ngsi-timeseries-api/src/translators/sql_translator.py", line 221, in insert
quantumleap_1 |     res = self._insert_entities_of_type(et,
quantumleap_1 |   File "/src/ngsi-timeseries-api/src/translators/sql_translator.py", line 354, in _insert_entities_of_type
quantumleap_1 |     self._insert_entity_rows(table_name, col_names, entries, entities)
quantumleap_1 |   File "/src/ngsi-timeseries-api/src/translators/sql_translator.py", line 381, in _insert_entity_rows
quantumleap_1 |     self._insert_original_entities_in_failed_batch(
quantumleap_1 |   File "/src/ngsi-timeseries-api/src/translators/sql_translator.py", line 437, in _insert_original_entities_in_failed_batch
quantumleap_1 |     self.cursor.executemany(stmt, rows)
quantumleap_1 |   File "/usr/local/lib/python3.8/site-packages/crate/client/cursor.py", line 67, in executemany
quantumleap_1 |     self.execute(sql, bulk_parameters=seq_of_parameters)
quantumleap_1 |   File "/usr/local/lib/python3.8/site-packages/crate/client/cursor.py", line 53, in execute
quantumleap_1 |     self._result = self.connection.client.sql(sql, parameters,
quantumleap_1 |   File "/usr/local/lib/python3.8/site-packages/crate/client/http.py", line 331, in sql
quantumleap_1 |     content = self._json_request('POST', self.path, data=data)
quantumleap_1 |   File "/usr/local/lib/python3.8/site-packages/crate/client/http.py", line 458, in _json_request
quantumleap_1 |     _raise_for_status(response)
quantumleap_1 |   File "/usr/local/lib/python3.8/site-packages/crate/client/http.py", line 187, in _raise_for_status
quantumleap_1 |     raise ProgrammingError(error.get('message', ''),
quantumleap_1 | crate.client.exceptions.ProgrammingError: SQLActionException[SchemaUnknownException: Schema 'mtopeniot' unknown]
Tables in CrateDB after running QuantumLeap for a while:
screenshot of cratedb tables
docker-compose file
version: "3"
services:
  iotage:
    hostname: iotage
    image: iotagent4fiware/iotagent-opcua
    networks:
      - hostnet
      - iotnet
    ports:
      - "4001:4001"
      - "4081:8080"
    extra_hosts:
      - "iotcarsrv:192.168.2.16"
      # - "PLC1:192.168.2.57"
    depends_on:
      - iotmongo
      - orion
    volumes:
      - ./certificates:/opt/iotagent-opcua/certificates
      - ./AGECONF:/opt/iotagent-opcua/conf
    command: /usr/bin/tail -f /var/log/lastlog
  iotmongo:
    hostname: iotmongo
    image: mongo:3.4
    volumes:
      - iotmongo_data:/data/db
      - iotmongo_conf:/data/configdb
  crate-db:
    image: crate
    hostname: crate-db
    ports:
      - "4200:4200"
      - "4300:4300"
    command:
      crate -Clicense.enterprise=false -Cauth.host_based.enabled=false -Ccluster.name=democluster
      -Chttp.cors.enabled=true -Chttp.cors.allow-origin="*"
    networks:
      - hostnet
  quantumleap:
    hostname: quantumleap
    image: smartsdk/quantumleap
    ports:
      - "8668:8668"
    depends_on:
      - crate-db
    environment:
      - CRATE_HOST=crate-db
    networks:
      - hostnet
  grafana:
    image: grafana/grafana
    depends_on:
      - crate-db
    ports:
      - "3003:3000"
    networks:
      - hostnet
  ################ OCB ################
  orion:
    hostname: orion
    image: fiware/orion:latest
    networks:
      - hostnet
      - ocbnet
    ports:
      - "1026:1026"
    depends_on:
      - orion_mongo
    #command: -dbhost mongo
    entrypoint: /usr/bin/contextBroker -fg -multiservice -ngsiv1Autocast -statCounters -dbhost mongo -logForHumans -logLevel DEBUG -t 255
  orion_mongo:
    hostname: orion_mongo
    image: mongo:3.4
    networks:
      ocbnet:
        aliases:
          - mongo
    volumes:
      - orion_mongo_data:/data/db
      - orion_mongo_conf:/data/configdb
    command: --nojournal
volumes:
  iotmongo_data:
  iotmongo_conf:
  orion_mongo_data:
  orion_mongo_conf:
networks:
  hostnet:
  iotnet:
  ocbnet:
edits
added docker-compose file
After changing the database to a more recent version (for example crate-db:3.1.2), the data arrives at the database nicely.
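Assuming the version mentioned in the edit corresponds to CrateDB's crate:3.1.2 image tag, the compose change would be a one-line pin:

```yaml
  crate-db:
    image: crate:3.1.2   # pin a known-good CrateDB version instead of the unpinned "crate"
```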

Scala Play JDBC can't connect to MySQL when running in a docker container

I am trying to make a Scala Play web application (RESTful). I have been following the Play tutorial for SQL connections, but I am having trouble connecting the container with the Play application to the MySQL container. After a lot of debugging I have realised that the Scala application does work when run locally, just not in the Docker container.
Code from application.conf:
db.default.driver=com.mysql.cj.jdbc.Driver
db.default.url="jdbc:mysql://localhost:49160/testdb"
db.default.username="root"
db.default.password="password"
db.default.host="localhost"
# db connections = ((physical_core_count * 2) + effective_spindle_count)
fixedConnectionPool = 17
database.dispatcher {
  executor = "thread-pool-executor"
  throughput = 1
  thread-pool-executor {
    fixed-pool-size = ${fixedConnectionPool}
  }
}
docker-compose.yml
version: "2"
services:
  spades:
    build: ./spades
    depends_on:
      - database
    volumes:
      - ./spades/cardsatra-spades:/home/app
    ports:
      - 49162:9000
  database:
    build: ./database
    ports:
      - 49160:3306
    volumes:
      - ./database/data:/var/lib/mysql:rw
sbt application dockerfile
ARG OPENJDK_TAG=8u232
FROM openjdk:8u232
ARG SBT_VERSION=1.3.7
# Install sbt
RUN \
curl -L -o sbt-1.3.7.deb https://dl.bintray.com/sbt/debian/sbt-1.3.7.deb && \
dpkg -i sbt-1.3.7.deb && \
rm sbt-1.3.7.deb && \
apt-get update && \
apt-get install sbt && \
sbt sbtVersion
EXPOSE 9000
RUN mkdir /home/app
WORKDIR /home/app
COPY cardsatra-spades/entrypoint.sh .
CMD ["/bin/sh", "/home/app/entrypoint.sh"]
entrypoint.sh just runs sbt clean and sbt run
database dockerfile
FROM mysql:8
# ENV MYSQL_DATABASE stormlight
ENV MYSQL_ROOT_PASSWORD password
ENV MYSQL_USER mysql
ENV MYSQL_PASSWORD password
ENV DATABASE_HOST db
Scala endpoint (class and imports omitted); this is on the GET /news/all route:
def doSomething: Future[Vector[Newspost]] = Future {
  db.withConnection { conn =>
    var res: Vector[Newspost] = Vector[Newspost]()
    val statement = conn.createStatement
    val resultSet = statement.executeQuery("SELECT * FROM news")
    while (resultSet.next) {
      val id = resultSet.getInt("id")
      val title = resultSet.getString("title")
      val body = resultSet.getString("body")
      val date = resultSet.getString("date")
      res = res :+ Newspost(id, title, body, date)
    }
    res
  }
}(dec)
When I run the database with docker-compose up database and the Play application locally using sbt run, the endpoint works correctly and returns the Newspost vector.
When I run both applications via docker-compose up, I get a huge stack trace:
spades_1 | Getting req!
spades_1 | [error] p.a.h.DefaultHttpErrorHandler -
spades_1 |
spades_1 | ! #7emkpm006 - Internal server error, for (GET) [/news/all] ->
spades_1 |
spades_1 | play.api.http.HttpErrorHandlerExceptions$$anon$1: Execution exception[[SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30000ms.]]
spades_1 | at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:332)
spades_1 | at play.api.http.DefaultHttpErrorHandler.onServerError(HttpErrorHandler.scala:251)
spades_1 | at play.core.server.AkkaHttpServer$$anonfun$2.applyOrElse(AkkaHttpServer.scala:421)
spades_1 | at play.core.server.AkkaHttpServer$$anonfun$2.applyOrElse(AkkaHttpServer.scala:417)
spades_1 | at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:453)
spades_1 | at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
spades_1 | at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:92)
spades_1 | at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
spades_1 | at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94)
spades_1 | at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:92)
spades_1 | Caused by: java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30000ms.
I have omitted most of the Scala code; as it works when not in Docker, I assume there is no issue with the Scala code itself.
You should point your application to the other container, not localhost, which implies the same "machine" from the system's perspective (i.e. the same container). Note also that container-to-container traffic uses the container's internal port (3306), not the host-published one (49160):
db.default.url="jdbc:mysql://database:3306/testdb"
You can make this setting environment-dependent of course, so that the app works in development mode and in docker-compose mode.
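For example, using HOCON's optional substitution syntax this can live in a single application.conf (a sketch; DB_URL is a hypothetical variable name):

```
# Default for running the app locally against the host-published MySQL port
db.default.url="jdbc:mysql://localhost:49160/testdb"
# Overridden when the DB_URL environment variable is set, e.g. in docker-compose:
#   DB_URL=jdbc:mysql://database:3306/testdb
db.default.url=${?DB_URL}
```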