OPC-UA IoTAgent & FIWARE Communication - fiware

With OPC-UA we want to transfer data from a PLC to FIWARE using the OPC-UA IoTAgent. We followed all the steps in this link, but we cannot connect to the IoTAgent. We built the server file with Python and can read its data from the UaExpert software. You can see our files below. First, we changed the "docker-compose-external.yml" file according to the information on the IoT Agent website. We also deleted the "config.json" file in AGECONF: the website says that if config.json is deleted, the MappingTool will create a new one, but it does not. We edited both "config.properties" files, in AGECONF and in CONF. The problem is only with the IoT Agent; the other containers, such as MongoDB, run without issues. We would appreciate your help with this. Thank you.
OPC-UA server's Python file:
The 127.0.0.1 IP address is localhost.
from opcua import Server
from random import randint
import time

# Create the OPC UA server and bind it to the local endpoint
server = Server()
url = "opc.tcp://127.0.0.1:4840"
server.set_endpoint(url)

# Register our own namespace and build Objects/Parameters/Speed
name = "age01_Car"
addspace = server.register_namespace(name)
node = server.get_objects_node()
Param = node.add_object(addspace, "Parameters")
Speed = Param.add_variable(addspace, "Speed", 0)
Speed.set_writable()

server.start()
print("Server has started at {}".format(url))

# Publish a new random speed value every two seconds
while True:
    Speed_i = randint(20, 100)
    print("Speed:", Speed_i)
    Speed.set_value(Speed_i)
    time.sleep(2)
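For reference, here is a minimal python-opcua client sketch of ours (an addition; it assumes the server above is running) to confirm the variable is readable and to print the namespace table, which the agent configuration below has to match:

from opcua import Client

client = Client("opc.tcp://127.0.0.1:4840")
client.connect()
try:
    # The namespace array shows which index "age01_Car" actually received
    print(client.get_namespace_array())
    idx = client.get_namespace_index("age01_Car")
    # Browse Objects -> Parameters -> Speed and read the current value
    speed = client.get_root_node().get_child(
        ["0:Objects", "{}:Parameters".format(idx), "{}:Speed".format(idx)])
    print("Speed =", speed.get_value())
finally:
    client.disconnect()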
docker-compose-external.yml file:
services:
  iotage:
    hostname: iotage
    image: iotagent4fiware/iotagent-opcua:latest
    networks:
      - hostnet
      - iotnet
    ports:
      - "0.0.0.0:4001:4001"
      - "0.0.0.0:4081:8080"
    extra_hosts:
      - "iotcarsrv:127.0.0.1"
      - "age01_Car:127.0.0.1"
    depends_on:
      - iotmongo
      - orion
    volumes:
      - ./AGECONF:/opt/iotagent-opcua/conf
      - ./certificates:/opt/iotagent-opcua/certificates
    environment:
      - IOTA_REGISTRY_TYPE=memory # whether to hold IoT device info in memory or in a database
      - IOTA_LOG_LEVEL=DEBUG # the log level of the IoT Agent
      - IOTA_MONGO_HOST=iot_mongo # the host name of MongoDB
      - IOTA_MONGO_DB=iotagent_opcua # the name of the database used in MongoDB
      #################################################################################################################
      # please comment out if you want to use NGSI-LD; the OPC UA Agent ships with NGSIv2 as the default configuration
      #- IOTA_CB_NGSI_VERSION=ld
      #- IOTA_JSON_LD_CONTEXT=https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context-v1.3.jsonld
      #################################################################################################################
      - IOTA_FALLBACK_TENANT=opcua_car
      - IOTA_RELAX_TEMPLATE_VALIDATION=true
  iotmongo:
    hostname: iotmongo
    image: mongo:4.4
    networks:
      - iotnet
    volumes:
      - iot_mongo_data:/data/db
      - iot_mongo_conf:/data/configdb
  ################ OCB ################
  orion:
    hostname: orion
    # replace fiware/orion:latest with fiware/orion-ld:0.7.0 if you want to use NGSI-LD
    image: fiware/orion:latest
    #image: fiware/orion-ld:0.7.0
    networks:
      - hostnet
      - ocbnet
    ports:
      - "0.0.0.0:1026:1026"
    depends_on:
      - orion_mongo
    # please replace "command" if you want to use NGSI-LD; the OPC UA Agent ships with NGSIv2 as the default configuration
    #command: -statCounters -dbhost orion_mongo -logLevel INFO -forwarding
    command: -statCounters -dbhost orion_mongo -logLevel INFO
  orion_mongo:
    hostname: orion_mongo
    image: mongo:4.4
    networks:
      - ocbnet
    ports:
      - "0.0.0.0:27017:27017"
    volumes:
      - orion_mongo_data:/data/db
      - orion_mongo_conf:/data/configdb
    command: --nojournal

volumes:
  iot_mongo_data:
  iot_mongo_conf:
  orion_mongo_data:
  orion_mongo_conf:

networks:
  hostnet:
  iotnet:
  ocbnet:
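As a quick aside, here is a small reachability probe of ours for the host ports published above (an assumption: it is run on the Docker host after the stack has been started):

import socket

# Quick check: can we reach the agent (4001), Orion (1026) and MongoDB (27017)?
for name, port in [("iotage", 4001), ("orion", 1026), ("orion_mongo", 27017)]:
    try:
        socket.create_connection(("127.0.0.1", port), timeout=2).close()
        print("{}: port {} reachable".format(name, port))
    except OSError as exc:
        print("{}: port {} NOT reachable ({})".format(name, port, exc))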
config.properties file:
# Southbound configuration
# The OPC UA Objects available within the specified namespaces will not be mapped by the OPC UA IotAgent.
namespace-ignore=2,7
# OPC UA Server address
endpoint=opc.tcp://127.0.0.1:4840
# Northbound configuration
# These are important for identifying the Device location and will be useful
# when contacting the Orion Context Broker requesting values or methods execution.
context-broker-host=orion
context-broker-port=1026
fiware-service=opcua_car
fiware-service-path=/demo
# Agent Server Configuration
device-registry-type=memory
agent-id=age01_Car
# The identifiers of the namespace the nodes belong to
namespaceIndex=3
namespaceNumericIdentifier=1000
# Session and monitoring parameters
# These parameters are the homonymous counterparts of OPC UA official ones.
# See OPC UA Documentation for further information
configuration=api
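As a sanity check of the northbound side, the context broker configured above can be probed from Python (our sketch; it assumes the requests package and that the stack is up, with Orion's port 1026 published on the host):

import requests

# Orion answers GET /version without authentication; a 200 response
# confirms the context-broker side of the configuration is reachable.
resp = requests.get("http://localhost:1026/version", timeout=5)
print(resp.status_code)
print(resp.json()["orion"]["version"])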
This is the server as shown in UaExpert:
[screenshot: UaExpert browsing the OPC UA server]
The logs when we run the "sudo docker-compose -f docker-compose-external-server.yml up" command:
serdar@ubuntu:~/Desktop/iot_agent_car/iotagent-opcua$ sudo docker-compose -f docker-compose-external-server.yml up
Starting iotagentopcua_iotmongo_1 ...
Starting iotagentopcua_iotmongo_1
Starting iotagentopcua_orion_mongo_1 ...
Starting iotagentopcua_orion_mongo_1 ... done
Starting iotagentopcua_orion_1 ...
Starting iotagentopcua_orion_1 ... done
Starting iotagentopcua_iotage_1 ...
Starting iotagentopcua_iotage_1 ... done
Attaching to iotagentopcua_iotmongo_1, iotagentopcua_orion_mongo_1, iotagentopcua_orion_1, iotagentopcua_iotage_1
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:10.873+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
iotmongo_1 | {"t":{"$date":"2022-11-18T19:55:09.284+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
iotmongo_1 | {"t":{"$date":"2022-11-18T19:55:09.289+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:10.943+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:10.954+00:00"},"s":"I", "c":"STORAGE", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"orion_mongo"}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:10.954+00:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.17","gitVersion":"85de0cc83f4dc64dbbac7fe028a4866228c1b5d1","openSSLVersion":"OpenSSL 1.1.1f 31 Mar 2020","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu2004","distarch":"x86_64","target_arch":"x86_64"}}}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:10.954+00:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"20.04"}}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:10.954+00:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"},"storage":{"journal":{"enabled":false}}}}}
iotmongo_1 | {"t":{"$date":"2022-11-18T19:55:09.290+00:00"},"s":"I", "c":"STORAGE", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"iotmongo"}}
iotmongo_1 | {"t":{"$date":"2022-11-18T19:55:09.290+00:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.17","gitVersion":"85de0cc83f4dc64dbbac7fe028a4866228c1b5d1","openSSLVersion":"OpenSSL 1.1.1f 31 Mar 2020","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu2004","distarch":"x86_64","target_arch":"x86_64"}}}}
iotmongo_1 | {"t":{"$date":"2022-11-18T19:55:09.290+00:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"20.04"}}}
iotmongo_1 | {"t":{"$date":"2022-11-18T19:55:09.290+00:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"}}}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:10.973+00:00"},"s":"I", "c":"STORAGE", "id":22270, "ctx":"initandlisten","msg":"Storage engine to use detected by data files","attr":{"dbpath":"/data/db","storageEngine":"wiredTiger"}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:10.973+00:00"},"s":"I", "c":"STORAGE", "id":22297, "ctx":"initandlisten","msg":"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem","tags":["startupWarnings"]}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:10.973+00:00"},"s":"I", "c":"STORAGE", "id":22315, "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=2443M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],,log=(enabled=false),"}}
iotmongo_1 | {"t":{"$date":"2022-11-18T19:55:09.291+00:00"},"s":"I", "c":"STORAGE", "id":22270, "ctx":"initandlisten","msg":"Storage engine to use detected by data files","attr":{"dbpath":"/data/db","storageEngine":"wiredTiger"}}
iotmongo_1 | {"t":{"$date":"2022-11-18T19:55:09.291+00:00"},"s":"I", "c":"STORAGE", "id":22297, "ctx":"initandlisten","msg":"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem","tags":["startupWarnings"]}
iotmongo_1 | {"t":{"$date":"2022-11-18T19:55:09.291+00:00"},"s":"I", "c":"STORAGE", "id":22315, "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=2443M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.361+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1668801313:361205][1:0x7f348af6ccc0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (0, 0)"}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.361+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1668801313:361292][1:0x7f348af6ccc0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global oldest timestamp: (0, 0)"}}
iotmongo_1 | {"t":{"$date":"2022-11-18T19:55:09.877+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1668801309:877944][1:0x7f8908b32cc0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 1 through 2"}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.365+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1668801313:365353][1:0x7f348af6ccc0], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 1, snapshot max: 1 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 70"}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.385+00:00"},"s":"I", "c":"STORAGE", "id":4795906, "ctx":"initandlisten","msg":"WiredTiger opened","attr":{"durationMillis":2412}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.385+00:00"},"s":"I", "c":"RECOVERY", "id":23987, "ctx":"initandlisten","msg":"WiredTiger recoveryTimestamp","attr":{"recoveryTimestamp":{"$timestamp":{"t":0,"i":0}}}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.393+00:00"},"s":"I", "c":"STORAGE", "id":22262, "ctx":"initandlisten","msg":"Timestamp monitor starting"}
iotmongo_1 | {"t":{"$date":"2022-11-18T19:55:10.246+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1668801310:246744][1:0x7f8908b32cc0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 2 through 2"}}
iotmongo_1 | {"t":{"$date":"2022-11-18T19:55:13.231+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1668801313:230643][1:0x7f8908b32cc0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Main recovery loop: starting at 1/31872 to 2/256"}}
iotmongo_1 | {"t":{"$date":"2022-11-18T19:55:13.399+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1668801313:399492][1:0x7f8908b32cc0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 1 through 2"}}
iotmongo_1 | {"t":{"$date":"2022-11-18T19:55:13.501+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1668801313:501625][1:0x7f8908b32cc0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 2 through 2"}}
iotmongo_1 | {"t":{"$date":"2022-11-18T19:55:13.549+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1668801313:549120][1:0x7f8908b32cc0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (0, 0)"}}
iotmongo_1 | {"t":{"$date":"2022-11-18T19:55:13.549+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1668801313:549302][1:0x7f8908b32cc0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global oldest timestamp: (0, 0)"}}
iotmongo_1 | {"t":{"$date":"2022-11-18T19:55:13.557+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1668801313:557681][1:0x7f8908b32cc0], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 1, snapshot max: 1 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 37"}}
iotmongo_1 | {"t":{"$date":"2022-11-18T19:55:13.564+00:00"},"s":"I", "c":"STORAGE", "id":4795906, "ctx":"initandlisten","msg":"WiredTiger opened","attr":{"durationMillis":4273}}
iotmongo_1 | {"t":{"$date":"2022-11-18T19:55:13.564+00:00"},"s":"I", "c":"RECOVERY", "id":23987, "ctx":"initandlisten","msg":"WiredTiger recoveryTimestamp","attr":{"recoveryTimestamp":{"$timestamp":{"t":0,"i":0}}}}
iotmongo_1 | {"t":{"$date":"2022-11-18T19:55:13.569+00:00"},"s":"I", "c":"STORAGE", "id":22262, "ctx":"initandlisten","msg":"Timestamp monitor starting"}
iotmongo_1 | {"t":{"$date":"2022-11-18T19:55:13.571+00:00"},"s":"W", "c":"CONTROL", "id":22120, "ctx":"initandlisten","msg":"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted","tags":["startupWarnings"]}
iotmongo_1 | {"t":{"$date":"2022-11-18T19:55:13.575+00:00"},"s":"I", "c":"STORAGE", "id":20536, "ctx":"initandlisten","msg":"Flow Control is enabled on this deployment"}
iotmongo_1 | {"t":{"$date":"2022-11-18T19:55:13.579+00:00"},"s":"I", "c":"FTDC", "id":20625, "ctx":"initandlisten","msg":"Initializing full-time diagnostic data capture","attr":{"dataDirectory":"/data/db/diagnostic.data"}}
iotmongo_1 | {"t":{"$date":"2022-11-18T19:55:13.580+00:00"},"s":"I", "c":"REPL", "id":6015317, "ctx":"initandlisten","msg":"Setting new configuration state","attr":{"newState":"ConfigReplicationDisabled","oldState":"ConfigPreStart"}}
iotmongo_1 | {"t":{"$date":"2022-11-18T19:55:13.582+00:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"/tmp/mongodb-27017.sock"}}
iotmongo_1 | {"t":{"$date":"2022-11-18T19:55:13.582+00:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"0.0.0.0"}}
iotmongo_1 | {"t":{"$date":"2022-11-18T19:55:13.582+00:00"},"s":"I", "c":"NETWORK", "id":23016, "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.397+00:00"},"s":"W", "c":"CONTROL", "id":22120, "ctx":"initandlisten","msg":"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted","tags":["startupWarnings"]}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.421+00:00"},"s":"I", "c":"STORAGE", "id":20536, "ctx":"initandlisten","msg":"Flow Control is enabled on this deployment"}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.452+00:00"},"s":"I", "c":"FTDC", "id":20625, "ctx":"initandlisten","msg":"Initializing full-time diagnostic data capture","attr":{"dataDirectory":"/data/db/diagnostic.data"}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.461+00:00"},"s":"I", "c":"REPL", "id":6015317, "ctx":"initandlisten","msg":"Setting new configuration state","attr":{"newState":"ConfigReplicationDisabled","oldState":"ConfigPreStart"}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.492+00:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"/tmp/mongodb-27017.sock"}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.492+00:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"0.0.0.0"}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.492+00:00"},"s":"I", "c":"NETWORK", "id":23016, "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.762+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.22.0.3:59462","connectionId":1,"connectionCount":1}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.763+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn1","msg":"client metadata","attr":{"remote":"172.22.0.3:59462","client":"conn1","doc":{"driver":{"name":"mongoc","version":"1.17.4"},"os":{"type":"Linux","name":"Debian GNU/Linux","version":"11","architecture":"x86_64"},"platform":"cfg=0x02a156a0e9 posix=200809 stdc=201710 CC=GCC 10.2.1 20210110 CFLAGS=\"\" LDFLAGS=\"\""}}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.765+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.22.0.3:59466","connectionId":2,"connectionCount":2}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.766+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn2","msg":"client metadata","attr":{"remote":"172.22.0.3:59466","client":"conn2","doc":{"driver":{"name":"mongoc","version":"1.17.4"},"os":{"type":"Linux","name":"Debian GNU/Linux","version":"11","architecture":"x86_64"},"platform":"cfg=0x02a156a0e9 posix=200809 stdc=201710 CC=GCC 10.2.1 20210110 CFLAGS=\"\" LDFLAGS=\"\""}}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.769+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.22.0.3:59474","connectionId":3,"connectionCount":3}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.770+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn3","msg":"client metadata","attr":{"remote":"172.22.0.3:59474","client":"conn3","doc":{"driver":{"name":"mongoc","version":"1.17.4"},"os":{"type":"Linux","name":"Debian GNU/Linux","version":"11","architecture":"x86_64"},"platform":"cfg=0x02a156a0e9 posix=200809 stdc=201710 CC=GCC 10.2.1 20210110 CFLAGS=\"\" LDFLAGS=\"\""}}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.772+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.22.0.3:59482","connectionId":4,"connectionCount":4}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.772+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn4","msg":"client metadata","attr":{"remote":"172.22.0.3:59482","client":"conn4","doc":{"driver":{"name":"mongoc","version":"1.17.4"},"os":{"type":"Linux","name":"Debian GNU/Linux","version":"11","architecture":"x86_64"},"platform":"cfg=0x02a156a0e9 posix=200809 stdc=201710 CC=GCC 10.2.1 20210110 CFLAGS=\"\" LDFLAGS=\"\""}}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.775+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.22.0.3:59496","connectionId":5,"connectionCount":5}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.776+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn5","msg":"client metadata","attr":{"remote":"172.22.0.3:59496","client":"conn5","doc":{"driver":{"name":"mongoc","version":"1.17.4"},"os":{"type":"Linux","name":"Debian GNU/Linux","version":"11","architecture":"x86_64"},"platform":"cfg=0x02a156a0e9 posix=200809 stdc=201710 CC=GCC 10.2.1 20210110 CFLAGS=\"\" LDFLAGS=\"\""}}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.778+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.22.0.3:59502","connectionId":6,"connectionCount":6}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.779+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn6","msg":"client metadata","attr":{"remote":"172.22.0.3:59502","client":"conn6","doc":{"driver":{"name":"mongoc","version":"1.17.4"},"os":{"type":"Linux","name":"Debian GNU/Linux","version":"11","architecture":"x86_64"},"platform":"cfg=0x02a156a0e9 posix=200809 stdc=201710 CC=GCC 10.2.1 20210110 CFLAGS=\"\" LDFLAGS=\"\""}}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.781+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.22.0.3:59518","connectionId":7,"connectionCount":7}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.782+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn7","msg":"client metadata","attr":{"remote":"172.22.0.3:59518","client":"conn7","doc":{"driver":{"name":"mongoc","version":"1.17.4"},"os":{"type":"Linux","name":"Debian GNU/Linux","version":"11","architecture":"x86_64"},"platform":"cfg=0x02a156a0e9 posix=200809 stdc=201710 CC=GCC 10.2.1 20210110 CFLAGS=\"\" LDFLAGS=\"\""}}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.783+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.22.0.3:59532","connectionId":8,"connectionCount":8}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.784+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn8","msg":"client metadata","attr":{"remote":"172.22.0.3:59532","client":"conn8","doc":{"driver":{"name":"mongoc","version":"1.17.4"},"os":{"type":"Linux","name":"Debian GNU/Linux","version":"11","architecture":"x86_64"},"platform":"cfg=0x02a156a0e9 posix=200809 stdc=201710 CC=GCC 10.2.1 20210110 CFLAGS=\"\" LDFLAGS=\"\""}}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.786+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.22.0.3:59542","connectionId":9,"connectionCount":9}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.786+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn9","msg":"client metadata","attr":{"remote":"172.22.0.3:59542","client":"conn9","doc":{"driver":{"name":"mongoc","version":"1.17.4"},"os":{"type":"Linux","name":"Debian GNU/Linux","version":"11","architecture":"x86_64"},"platform":"cfg=0x02a156a0e9 posix=200809 stdc=201710 CC=GCC 10.2.1 20210110 CFLAGS=\"\" LDFLAGS=\"\""}}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.789+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.22.0.3:59548","connectionId":10,"connectionCount":10}}
orion_mongo_1 | {"t":{"$date":"2022-11-18T19:55:13.789+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn10","msg":"client metadata","attr":{"remote":"172.22.0.3:59548","client":"conn10","doc":{"driver":{"name":"mongoc","version":"1.17.4"},"os":{"type":"Linux","name":"Debian GNU/Linux","version":"11","architecture":"x86_64"},"platform":"cfg=0x02a156a0e9 posix=200809 stdc=201710 CC=GCC 10.2.1 20210110 CFLAGS=\"\" LDFLAGS=\"\""}}}
orion_1 | time=2022-11-18T19:55:13.752Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=contextBroker.cpp[1092]:main | msg=start command line </usr/bin/contextBroker -fg -multiservice -ngsiv1Autocast -disableFileLog -statCounters -dbhost orion_mongo -logLevel INFO>
orion_1 | time=2022-11-18T19:55:13.752Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=contextBroker.cpp[1166]:main | msg=Orion Context Broker is running
orion_1 | time=2022-11-18T19:55:13.794Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=mongoConnectionPool.cpp[506]:mongoConnectionPoolInit | msg=Connected to mongodb://orion_mongo/?connectTimeoutMS=10000 (dbName: orion, poolsize: 10)
orion_1 | time=2022-11-18T19:55:13.799Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=contextBroker.cpp[1304]:main | msg=Startup completed
iotage_1 | *****************
iotage_1 | WARNING: It is recommended to enable authentication for secure connection
iotage_1 | *****************
iotage_1 | INFO: IoT Agent running standalone
iotage_1 | WARNING!!!
iotage_1 | CHECK YOUR config.properties FILE, THE FOLLOWING PARAMETERS ARE NULL:
iotage_1 | server_base_root
iotage_1 | server_port
iotage_1 | mongodb_host
iotage_1 | mongodb_port
iotage_1 | mongodb_db
iotage_1 | mongodb_retries
iotage_1 | mongodb_retry_time
iotage_1 | device_registration_duration
iotage_1 | log_level
iotage_1 | requestedPublishingInterval
iotage_1 | requestedLifetimeCount
iotage_1 | requestedMaxKeepAliveCount
iotage_1 | maxNotificationsPerPublish
iotage_1 | publishingEnabled
iotage_1 | priority
iotage_1 | api_port
iotage_1 | polling_commands_timer
iotage_1 | pollingDaemonFrequency
iotage_1 | pollingExpiration
iotage_1 | samplingInterval
iotage_1 | queueSize
iotage_1 | discardOldest
iotage_1 | polling
iotagentopcua_iotage_1 exited with code 1
We are looking for a solution for transferring data from the PLC to FIWARE via OPC UA.

Please update your agent to the latest version from here and then use the following configuration:
config.js
var config = {};

config.iota = {
    logLevel: 'DEBUG',
    timestamp: true,
    contextBroker: {
        host: 'localhost',
        port: '1026',
        ngsiVersion: 'v2',
        jsonLdContext: 'https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld',
        service: 'opcua_car',
        subservice: '/demo'
    },
    server: {
        port: 4041
    },
    deviceRegistry: {
        type: 'mongodb'
    },
    mongodb: {
        host: 'localhost',
        port: '27017',
        db: 'iotagent_opcua'
    },
    types: {
        Device: {
            active: [
                {
                    name: 'ParametersSpeed',
                    type: 'Number'
                }
            ]
        }
    },
    contexts: [
        {
            id: 'age01_Car',
            type: 'Device',
            mappings: [
                {
                    ocb_id: 'ParametersSpeed',
                    opcua_id: 'ns=2;i=2',
                    object_id: 'ns=2;i=2',
                    inputArguments: []
                }
            ]
        }
    ],
    contextSubscriptions: [],
    service: 'opcua_car',
    subservice: '/demo',
    providerUrl: 'http://localhost:4041',
    deviceRegistrationDuration: 'P20Y',
    defaultType: 'Device',
    defaultResource: '/iot/opcua',
    explicitAttrs: false
};

config.opcua = {
    endpoint: 'opc.tcp://localhost:4840',
    securityMode: 'None',
    securityPolicy: 'None',
    username: null,
    password: null,
    uniqueSubscription: false
};

config.mappingTool = {
    polling: false,
    agentId: 'age01_',
    namespaceIgnore: '0,7',
    entityId: 'age01_Car',
    entityType: 'Device'
};

config.jexlTransformations = {};

config.configRetrieval = false;
config.defaultKey = 'iot';
config.defaultTransport = 'OPCUA';
config.autoprovision = true;

module.exports = config;
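Once the agent is up with this configuration, the mapped value should be queryable from Orion. Here is a minimal NGSIv2 check (our sketch; it assumes the requests package and uses the service/subservice values from the configuration above):

import requests

# The entity id and the service headers come from config.js above
headers = {
    "fiware-service": "opcua_car",
    "fiware-servicepath": "/demo",
}
resp = requests.get("http://localhost:1026/v2/entities/age01_Car",
                    headers=headers, timeout=5)
print(resp.status_code)
print(resp.json())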

Related

docker compose mysql works differently on remote server than on windows docker desktop - access denied for user 'root'@'172.18.0.2'

I have a docker compose set of images with one of them based on mysql. The stack was adapted from https://github.com/docker/awesome-compose/tree/c2f8036fd353dae457eba7b9b436bf3a1c85d937/nginx-flask-mysql
This runs fine on my ("local") Windows 10 Docker Desktop (Docker v20.10.22), but when I try to run it on my ("remote") CentOS 7 server (Docker version 23.0.0, build e92dd87, hosted at DigitalOcean), I see Access denied for user 'root'@'172.18.0.3' (using password: YES) when trying to connect to the database.
I should note I've seen answers like https://stackoverflow.com/a/59839180/799921, and I have (repeatedly) tried removing the volume on the remote server.
This failure happens whether I use root and MYSQL_ROOT_PASSWORD or MYSQL_USER (user) and MYSQL_PASSWORD. I've verified I can connect to the database from the mysql container, but the app container does not have mysql installed so I haven't been able to manually test the connection from the app container.
Following https://hub.docker.com/_/mysql, "Connect to MySQL from the MySQL command line client", I'm able to connect, so according to https://dev.mysql.com/doc/refman/8.0/en/problems-connecting.html ("If you have access problems with a Perl, PHP, Python, or ODBC program, try to connect to the server with mysql -u user_name db_name or mysql -u user_name -ppassword db_name. If you are able to connect using the mysql client, the problem lies with your program") this is a problem with the app, but I'm not sure what the problem is.
$ docker run -it --network webmodules_backend-network --rm mysql mysql -hdb -uroot -p
Enter password:
Welcome to the MySQL monitor.
...
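To test from the app container without the mysql client, a small probe of ours could be run with the Python already in the image (an assumption: mysql-connector-python from requirements.txt is available; printing the repr of the password makes stray whitespace visible):

import mysql.connector

# Read the password exactly as app.py does and show it verbatim;
# repr() exposes any trailing newline or carriage return.
with open('/run/secrets/db-password') as pf:
    password = pf.read()
print(repr(password))

conn = mysql.connector.connect(
    user='root',
    password=password,
    host='db',  # mysql service name from docker-compose.yml
    database='webmodules',
    auth_plugin='mysql_native_password'
)
print('connected:', conn.is_connected())
conn.close()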
I've looked at the mysql.user table on both running versions and nothing jumps out at me as being incorrect.
mysql.user on local:
mysql> select * from mysql.user where user='root';
+-----------+------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+---------------+--------------+-----------+------------+-----------------+------------+------------+--------------+------------+-----------------------+------------------+--------------+-----------------+------------------+------------------+----------------+---------------------+--------------------+------------------+------------+--------------+------------------------+----------+------------------------+--------------------------+----------------------------+---------------+-------------+-----------------+----------------------+-----------------------+-------------------------------------------+------------------+-----------------------+-------------------+----------------+------------------+----------------+------------------------+---------------------+--------------------------+-----------------+
| Host | User | Select_priv | Insert_priv | Update_priv | Delete_priv | Create_priv | Drop_priv | Reload_priv | Shutdown_priv | Process_priv | File_priv | Grant_priv | References_priv | Index_priv | Alter_priv | Show_db_priv | Super_priv | Create_tmp_table_priv | Lock_tables_priv | Execute_priv | Repl_slave_priv | Repl_client_priv | Create_view_priv | Show_view_priv | Create_routine_priv | Alter_routine_priv | Create_user_priv | Event_priv | Trigger_priv | Create_tablespace_priv | ssl_type | ssl_cipher | x509_issuer | x509_subject | max_questions | max_updates | max_connections | max_user_connections | plugin | authentication_string | password_expired | password_last_changed | password_lifetime | account_locked | Create_role_priv | Drop_role_priv | Password_reuse_history | Password_reuse_time | Password_require_current | User_attributes |
+-----------+------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+---------------+--------------+-----------+------------+-----------------+------------+------------+--------------+------------+-----------------------+------------------+--------------+-----------------+------------------+------------------+----------------+---------------------+--------------------+------------------+------------+--------------+------------------------+----------+------------------------+--------------------------+----------------------------+---------------+-------------+-----------------+----------------------+-----------------------+-------------------------------------------+------------------+-----------------------+-------------------+----------------+------------------+----------------+------------------------+---------------------+--------------------------+-----------------+
| % | root | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | | 0x | 0x | 0x | 0 | 0 | 0 | 0 | mysql_native_password | *52594A4243313A7447185F38CB9D3859DDC5FF77 | N | 2023-02-14 21:20:23 | NULL | N | Y | Y | NULL | NULL | NULL | NULL |
| localhost | root | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | | 0x | 0x | 0x | 0 | 0 | 0 | 0 | mysql_native_password | *52594A4243313A7447185F38CB9D3859DDC5FF77 | N | 2023-02-14 21:20:23 | NULL | N | Y | Y | NULL | NULL | NULL | NULL |
+-----------+------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+---------------+--------------+-----------+------------+-----------------+------------+------------+--------------+------------+-----------------------+------------------+--------------+-----------------+------------------+------------------+----------------+---------------------+--------------------+------------------+------------+--------------+------------------------+----------+------------------------+--------------------------+----------------------------+---------------+-------------+-----------------+----------------------+-----------------------+-------------------------------------------+------------------+-----------------------+-------------------+----------------+------------------+----------------+------------------------+---------------------+--------------------------+-----------------+
2 rows in set (0.00 sec)
mysql.user on remote:
mysql> select * from mysql.user where user='root';
+-----------+------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+---------------+--------------+-----------+------------+-----------------+------------+------------+--------------+------------+-----------------------+------------------+--------------+-----------------+------------------+------------------+----------------+---------------------+--------------------+------------------+------------+--------------+------------------------+----------+------------------------+--------------------------+----------------------------+---------------+-------------+-----------------+----------------------+-----------------------+-------------------------------------------+------------------+-----------------------+-------------------+----------------+------------------+----------------+------------------------+---------------------+--------------------------+-----------------+
| Host | User | Select_priv | Insert_priv | Update_priv | Delete_priv | Create_priv | Drop_priv | Reload_priv | Shutdown_priv | Process_priv | File_priv | Grant_priv | References_priv | Index_priv | Alter_priv | Show_db_priv | Super_priv | Create_tmp_table_priv | Lock_tables_priv | Execute_priv | Repl_slave_priv | Repl_client_priv | Create_view_priv | Show_view_priv | Create_routine_priv | Alter_routine_priv | Create_user_priv | Event_priv | Trigger_priv | Create_tablespace_priv | ssl_type | ssl_cipher | x509_issuer | x509_subject | max_questions | max_updates | max_connections | max_user_connections | plugin | authentication_string | password_expired | password_last_changed | password_lifetime | account_locked | Create_role_priv | Drop_role_priv | Password_reuse_history | Password_reuse_time | Password_require_current | User_attributes |
+-----------+------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+---------------+--------------+-----------+------------+-----------------+------------+------------+--------------+------------+-----------------------+------------------+--------------+-----------------+------------------+------------------+----------------+---------------------+--------------------+------------------+------------+--------------+------------------------+----------+------------------------+--------------------------+----------------------------+---------------+-------------+-----------------+----------------------+-----------------------+-------------------------------------------+------------------+-----------------------+-------------------+----------------+------------------+----------------+------------------------+---------------------+--------------------------+-----------------+
| % | root | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | | 0x | 0x | 0x | 0 | 0 | 0 | 0 | mysql_native_password | *CFBC0A14FD2027A55F04E2A65FAF93B5D528800B | N | 2023-02-14 21:22:12 | NULL | N | Y | Y | NULL | NULL | NULL | NULL |
| localhost | root | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | | 0x | 0x | 0x | 0 | 0 | 0 | 0 | mysql_native_password | *CFBC0A14FD2027A55F04E2A65FAF93B5D528800B | N | 2023-02-14 21:22:12 | NULL | N | Y | Y | NULL | NULL | NULL | NULL |
+-----------+------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+---------------+--------------+-----------+------------+-----------------+------------+------------+--------------+------------+-----------------------+------------------+--------------+-----------------+------------------+------------------+----------------+---------------------+--------------------+------------------+------------+--------------+------------------------+----------+------------------------+--------------------------+----------------------------+---------------+-------------+-----------------+----------------------+-----------------------+-------------------------------------------+------------------+-----------------------+-------------------+----------------+------------------+----------------+------------------------+---------------------+--------------------------+-----------------+
2 rows in set (0.01 sec)
This is started with the following on local:
docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d
and on remote (from the local machine):
docker --context webmodules compose -f docker-compose.yml -f docker-compose-prod.yml up -d
where
docker-compose.yml:
version: "3.8"
services:
db:
# https://github.com/docker-library/mysql/issues/275#issuecomment-636831964
image: mysql:8.0.32 # 32 gives access denied on centos7 server for both root and user
# command: '--default-authentication-plugin=mysql_native_password'
command: '--default-authentication-plugin=mysql_native_password --log_error_verbosity=3' # mysql
# restart: always
secrets:
- db-password
- user-password
volumes:
- db-data:/var/lib/mysql
networks:
- backend-network
environment:
- MYSQL_DATABASE=webmodules
- MYSQL_ROOT_PASSWORD_FILE=/run/secrets/db-password
- MYSQL_USER=user
- MYSQL_PASSWORD_FILE=/run/secrets/user-password
app:
build: app
restart: always
secrets:
- db-password
- user-password
networks:
- backend-network
- frontend-network
web:
build: web
restart: always
ports:
- 8000:80
networks:
- frontend-network
volumes:
db-data:
secrets:
db-password:
file: db/password.txt
user-password:
file: db/userpassword.txt
networks:
backend-network:
frontend-network:
docker-compose.dev.yml:
version: '3.8'

services:
  app:
    ports:
      - 5678:5678
    volumes:
      - ./app/src:/app
    environment:
      - FLASK_DEBUG=True
docker-compose-prod.yml:
version: '3.8'

secrets:
  db-password:
    file: /home/appuser/.docker/webmodules-db-password.txt
  user-password:
    file: /home/appuser/.docker/webmodules-user-password.txt
app/Dockerfile:
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.9-slim
EXPOSE 5000
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# set the working directory in the container
WORKDIR /app
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
# copy the content of the local src directory to the working directory
# this isn't needed when developing as there's a bind under volumes: in the docker-compose.dev.yml file
COPY src .
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["gunicorn", "--reload", "--bind", "0.0.0.0:5000", "app:app"]
The full log file from the mysql container on remote:
$ docker logs webmodules-db-1
2023-02-15 12:35:06+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.32-1.el8 started.
2023-02-15 12:35:07+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2023-02-15 12:35:07+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.32-1.el8 started.
'/var/lib/mysql/mysql.sock' -> '/var/run/mysqld/mysqld.sock'
2023-02-15T12:35:08.185441Z 0 [Note] [MY-013667] [Server] Error-log destination "stderr" is not a file. Can not restore error log messages from previous run.
2023-02-15T12:35:08.174478Z 0 [Warning] [MY-011068] [Server] The syntax '--skip-host-cache' is deprecated and will be removed in a future release. Please use SET GLOBAL host_cache_size=0 instead.
2023-02-15T12:35:08.182644Z 0 [Warning] [MY-010918] [Server] 'default_authentication_plugin' is deprecated and will be removed in a future release. Please use authentication_policy instead.
2023-02-15T12:35:08.182660Z 0 [Note] [MY-013932] [Server] BuildID[sha1]=6b049f17400f850658b2eb3ff165ec9a085d9655
2023-02-15T12:35:08.182673Z 0 [Note] [MY-010949] [Server] Basedir set to /usr/.
2023-02-15T12:35:08.182694Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.32) starting as process 1
2023-02-15T12:35:08.195754Z 0 [Note] [MY-012366] [InnoDB] Using Linux native AIO
2023-02-15T12:35:08.195970Z 0 [Note] [MY-010747] [Server] Plugin 'FEDERATED' is disabled.
2023-02-15T12:35:08.196046Z 0 [Note] [MY-010747] [Server] Plugin 'ndbcluster' is disabled.
2023-02-15T12:35:08.196062Z 0 [Note] [MY-010747] [Server] Plugin 'ndbinfo' is disabled.
2023-02-15T12:35:08.196069Z 0 [Note] [MY-010747] [Server] Plugin 'ndb_transid_mysql_connection_map' is disabled.
2023-02-15T12:35:08.197842Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2023-02-15T12:35:08.197887Z 1 [Note] [MY-013546] [InnoDB] Atomic write enabled
2023-02-15T12:35:08.197923Z 1 [Note] [MY-012932] [InnoDB] PUNCH HOLE support available
2023-02-15T12:35:08.197942Z 1 [Note] [MY-012944] [InnoDB] Uses event mutexes
2023-02-15T12:35:08.197948Z 1 [Note] [MY-012945] [InnoDB] GCC builtin __atomic_thread_fence() is used for memory barrier
2023-02-15T12:35:08.197956Z 1 [Note] [MY-012948] [InnoDB] Compressed tables use zlib 1.2.13
2023-02-15T12:35:08.206838Z 1 [Note] [MY-012951] [InnoDB] Using hardware accelerated crc32 and polynomial multiplication.
2023-02-15T12:35:08.207494Z 1 [Note] [MY-012203] [InnoDB] Directories to scan './'
2023-02-15T12:35:08.207577Z 1 [Note] [MY-012204] [InnoDB] Scanning './'
2023-02-15T12:35:08.212659Z 1 [Note] [MY-012208] [InnoDB] Completed space ID check of 4 files.
2023-02-15T12:35:08.213682Z 1 [Note] [MY-012955] [InnoDB] Initializing buffer pool, total size = 128.000000M, instances = 1, chunk size =128.000000M
2023-02-15T12:35:08.228723Z 1 [Note] [MY-012957] [InnoDB] Completed initialization of buffer pool
2023-02-15T12:35:08.385456Z 0 [Note] [MY-011952] [InnoDB] If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
2023-02-15T12:35:08.386138Z 1 [Note] [MY-013532] [InnoDB] Using './#ib_16384_0.dblwr' for doublewrite
2023-02-15T12:35:08.386668Z 1 [Note] [MY-013532] [InnoDB] Using './#ib_16384_1.dblwr' for doublewrite
2023-02-15T12:35:08.432730Z 1 [Note] [MY-013566] [InnoDB] Double write buffer files: 2
2023-02-15T12:35:08.432776Z 1 [Note] [MY-013565] [InnoDB] Double write buffer pages per instance: 4
2023-02-15T12:35:08.432817Z 1 [Note] [MY-013532] [InnoDB] Using './#ib_16384_0.dblwr' for doublewrite
2023-02-15T12:35:08.432849Z 1 [Note] [MY-013532] [InnoDB] Using './#ib_16384_1.dblwr' for doublewrite
2023-02-15T12:35:08.531818Z 1 [Note] [MY-013883] [InnoDB] The latest found checkpoint is at lsn = 31919058 in redo log file ./#innodb_redo/#ib_redo9.
2023-02-15T12:35:08.532211Z 1 [Note] [MY-013086] [InnoDB] Starting to parse redo log at lsn = 31918620, whereas checkpoint_lsn = 31919058 and start_lsn = 31918592
2023-02-15T12:35:08.585369Z 1 [Note] [MY-013083] [InnoDB] Log background threads are being started...
2023-02-15T12:35:08.760113Z 1 [Note] [MY-012532] [InnoDB] Applying a batch of 0 redo log records ...
2023-02-15T12:35:08.760147Z 1 [Note] [MY-012535] [InnoDB] Apply batch completed!
2023-02-15T12:35:08.760387Z 1 [Note] [MY-013252] [InnoDB] Using undo tablespace './undo_001'.
2023-02-15T12:35:08.763735Z 1 [Note] [MY-013252] [InnoDB] Using undo tablespace './undo_002'.
2023-02-15T12:35:08.768232Z 1 [Note] [MY-012910] [InnoDB] Opened 2 existing undo tablespaces.
2023-02-15T12:35:08.768318Z 1 [Note] [MY-011980] [InnoDB] GTID recovery trx_no: 2832
2023-02-15T12:35:08.788140Z 1 [Note] [MY-013777] [InnoDB] Time taken to initialize rseg using 1 thread: 19811 ms.
2023-02-15T12:35:08.788273Z 1 [Note] [MY-012923] [InnoDB] Creating shared tablespace for temporary tables
2023-02-15T12:35:08.788341Z 1 [Note] [MY-012265] [InnoDB] Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2023-02-15T12:35:08.835413Z 1 [Note] [MY-012266] [InnoDB] File './ibtmp1' size is now 12 MB.
2023-02-15T12:35:08.835602Z 1 [Note] [MY-013627] [InnoDB] Scanning temp tablespace dir:'./#innodb_temp/'
2023-02-15T12:35:09.003141Z 1 [Note] [MY-013018] [InnoDB] Created 128 and tracked 128 new rollback segment(s) in the temporary tablespace. 128 are now active.
2023-02-15T12:35:09.037574Z 1 [Note] [MY-012976] [InnoDB] 8.0.32 started; log sequence number 31919068
2023-02-15T12:35:09.038169Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2023-02-15T12:35:09.046561Z 1 [Note] [MY-011089] [Server] Data dictionary restarting version '80023'.
2023-02-15T12:35:09.184823Z 1 [Note] [MY-012357] [InnoDB] Reading DD tablespace files
2023-02-15T12:35:09.185738Z 1 [Note] [MY-012356] [InnoDB] Scanned 6 tablespaces. Validated 6.
2023-02-15T12:35:09.246031Z 1 [Note] [MY-010006] [Server] Using data dictionary with version '80023'.
2023-02-15T12:35:09.252593Z 0 [Note] [MY-011332] [Server] Plugin mysqlx reported: 'IPv6 is available'
2023-02-15T12:35:09.254590Z 0 [Note] [MY-011323] [Server] Plugin mysqlx reported: 'X Plugin ready for connections. bind-address: '::' port: 33060'
2023-02-15T12:35:09.254626Z 0 [Note] [MY-011323] [Server] Plugin mysqlx reported: 'X Plugin ready for connections. socket: '/var/run/mysqld/mysqlx.sock''
2023-02-15T12:35:09.278358Z 0 [Note] [MY-010902] [Server] Thread priority attribute setting in Resource Group SQL shall be ignored due to unsupported platform or insufficient privilege.
2023-02-15T12:35:09.310954Z 0 [Note] [MY-013911] [Server] Crash recovery finished in binlog engine. No attempts to commit, rollback or prepare any transactions.
2023-02-15T12:35:09.311015Z 0 [Note] [MY-013911] [Server] Crash recovery finished in InnoDB engine. No attempts to commit, rollback or prepare any transactions.
2023-02-15T12:35:09.316319Z 0 [Note] [MY-012487] [InnoDB] DDL log recovery : begin
2023-02-15T12:35:09.316414Z 0 [Note] [MY-012488] [InnoDB] DDL log recovery : end
2023-02-15T12:35:09.322395Z 0 [Note] [MY-011946] [InnoDB] Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
2023-02-15T12:35:09.322785Z 0 [Note] [MY-011946] [InnoDB] Buffer pool(s) load completed at 230215 12:35:09
2023-02-15T12:35:09.433353Z 0 [Note] [MY-010913] [Server] You have not provided a mandatory server-id. Servers in a replication topology must have unique server-ids. Please refer to the proper server start-up parameters documentation.
2023-02-15T12:35:09.435029Z 0 [Note] [MY-010182] [Server] Found ca.pem, server-cert.pem and server-key.pem in data directory. Trying to enable SSL support using them.
2023-02-15T12:35:09.435065Z 0 [Note] [MY-010304] [Server] Skipping generation of SSL certificates as certificate files are present in data directory.
2023-02-15T12:35:09.438539Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2023-02-15T12:35:09.438584Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2023-02-15T12:35:09.438630Z 0 [Note] [MY-010308] [Server] Skipping generation of RSA key pair through --sha256_password_auto_generate_rsa_keys as key files are present in data directory.
2023-02-15T12:35:09.438643Z 0 [Note] [MY-010308] [Server] Skipping generation of RSA key pair through --caching_sha2_password_auto_generate_rsa_keys as key files are present in data directory.
2023-02-15T12:35:09.438802Z 0 [Note] [MY-010252] [Server] Server hostname (bind-address): '*'; port: 3306
2023-02-15T12:35:09.438849Z 0 [Note] [MY-010253] [Server] IPv6 is available.
2023-02-15T12:35:09.438857Z 0 [Note] [MY-010264] [Server] - '::' resolves to '::';
2023-02-15T12:35:09.438885Z 0 [Note] [MY-010251] [Server] Server socket created on IP: '::'.
2023-02-15T12:35:09.440185Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
2023-02-15T12:35:09.461816Z 0 [Note] [MY-011025] [Repl] Failed to start slave threads for channel ''.
2023-02-15T12:35:09.463840Z 5 [Note] [MY-010051] [Server] Event Scheduler: scheduler thread started with id 5
2023-02-15T12:35:09.464101Z 0 [Note] [MY-011240] [Server] Plugin mysqlx reported: 'Using SSL configuration from MySQL Server'
2023-02-15T12:35:09.464743Z 0 [Note] [MY-011243] [Server] Plugin mysqlx reported: 'Using OpenSSL for TLS connections'
2023-02-15T12:35:09.464917Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.32' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.
2023-02-15T12:35:09.464973Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
2023-02-15T12:36:06.574103Z 8 [Note] [MY-010926] [Server] Access denied for user 'root'@'172.18.0.2' (using password: YES)
And I'm not sure if this is relevant, but the Python code is in app.py:
import mysql.connector
from flask import Flask, jsonify
from ptvsd import enable_attach

# enable python visual studio debugger
enable_attach(address=('0.0.0.0', 5678))

app = Flask(__name__)
conn = None

# adapted from https://github.com/aiordache/demos/blob/c7aa37cc3e2f8800296f668138b4cf208b27380a/dockercon2020-demo/app/src/server.py
# similar to https://github.com/docker/awesome-compose/blob/e6b1d2755f2f72a363fc346e52dce10cace846c8/nginx-flask-mysql/backend/hello.py
class DBManager:
    def __init__(self, database='example', host="db", user="root", password_file=None):
        pf = open(password_file, 'r')
        self.connection = mysql.connector.connect(
            user=user,
            password=pf.read(),
            host=host,  # name of the mysql service as set in the docker compose file
            database=database,
            auth_plugin='mysql_native_password'
        )
        pf.close()
        self.cursor = self.connection.cursor()

    def populate_db(self):
        self.cursor.execute('DROP TABLE IF EXISTS blog')
        self.cursor.execute('CREATE TABLE blog (id INT AUTO_INCREMENT PRIMARY KEY, title VARCHAR(255))')
        self.cursor.executemany('INSERT INTO blog (id, title) VALUES (%s, %s);', [(i, 'Blog post #%d' % i) for i in range(1, 5)])
        self.connection.commit()

    def query_titles(self):
        self.cursor.execute('SELECT title FROM blog')
        rec = []
        for c in self.cursor:
            rec.append(c[0])
        return rec

@app.route('/')
def hello_world():
    return 'Hello, Docker!'

@app.route('/blogs')
def listBlog():
    global conn
    if not conn:
        conn = DBManager(host='db', database='webmodules', user='root', password_file='/run/secrets/db-password')
        conn.populate_db()
    rec = conn.query_titles()
    result = []
    for c in rec:
        result.append(c)
    return jsonify({"response": result})

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5000)
my.cnf (local and remote identical):
# For advice on how to change settings please see
# http://dev.mysql.com/doc/refman/8.0/en/server-configuration-defaults.html
[mysqld]
#
# Remove leading # and set to the amount of RAM for the most important data
# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
# innodb_buffer_pool_size = 128M
#
# Remove leading # to turn on a very important data integrity option: logging
# changes to the binary log between backups.
# log_bin
#
# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
# join_buffer_size = 128M
# sort_buffer_size = 2M
# read_rnd_buffer_size = 2M
# Remove leading # to revert to previous value for default_authentication_plugin,
# this will increase compatibility with older clients. For background, see:
# https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_default_authentication_plugin
# default-authentication-plugin=mysql_native_password
skip-host-cache
skip-name-resolve
datadir=/var/lib/mysql
socket=/var/run/mysqld/mysqld.sock
secure-file-priv=/var/lib/mysql-files
user=mysql
pid-file=/var/run/mysqld/mysqld.pid
[client]
socket=/var/run/mysqld/mysqld.sock
!includedir /etc/mysql/conf.d/
You should replace the following line in app.py
password=pf.read(),
with this line:
password=pf.read().strip(),
The reason is that Python will also read the newline at the end of the file as part of the password string, whereas MySQL cuts it off.
On Windows you might have used an editor which doesn't place a newline at the end of the file, or it might have added a carriage return, whereas on Linux the line ending is just a linefeed character.
Alternatively, you can make sure your password file doesn't contain a newline at the end, but most editors will not allow this.
Using the 'echo -n' command you can achieve this:
> echo -n "bla" >password_no_lf.txt
> od -c < password_no_lf.txt
0000000 b l a
0000003
> echo "bla" >password_with_lf.txt
> od -c < password_with_lf.txt
0000000 b l a \n
0000004
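The same difference is easy to see from Python (a small sketch of ours, using the file created above):

# read() keeps the trailing newline; strip() removes it
with open('password_with_lf.txt') as pf:
    raw = pf.read()
print(repr(raw))          # -> 'bla\n'
print(repr(raw.strip()))  # -> 'bla'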

Is Orion compatible with AWS DocumentDB

I am trying to connect Orion to AWS DocumentDB, but it does not connect. However, I tried two other FIWARE components, IoTAgent and STH-Comet, with DocumentDB and both work fine.
The same hostname and credentials work for IoTAgent and STH-Comet. I also checked connectivity, which is fine, as IoTAgent and STH-Comet are in the same network. I also checked from a different mongo host in the same network, and this also worked. Below is the error I am getting for Orion.
time=2021-02-18T07:03:46.293Z | lvl=ERROR | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=mongoConnectionPool.cpp[180]:mongoConnect | msg=Database Startup Error (cannot connect to mongo - doing 100 retries with a 1000 millisecond interval)
Is there any possibility that Orion is not compatible with AWS DocumentDB?
Update1:
bash-4.2$ ps ax | grep contextBroker
1 ? Ss 0:00 /usr/bin/contextBroker -fg -multiservice -ngsiv1Autocast -disableFileLog -dbhost xxxxxxxxxxxxxxxxxx.docdb.amazonaws.com -db admin -dbuser test -dbpwd xxxxxxxxxx
Update2:
Earlier, I was using Orion Docker images pulled directly from Docker Hub, and that was not working. So this time I built two Docker images from the source code of versions 2.4.2 and 2.5.2. Now I am able to connect to AWS DocumentDB with these images, but I get a different error, shown below.
time=2021-02-23T06:10:41.982Z | lvl=ERROR | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=safeMongo.cpp[360]:getField | msg=Runtime Error (field '_id' is missing in BSONObj <{ ok: 0.0, code: 303, errmsg: "Legacy opcodes are not supported" }> from caller mongoSubCacheItemInsert:83)
time=2021-02-23T06:10:41.982Z | lvl=ERROR | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=AlarmManager.cpp[211]:dbError | msg=Raising alarm DatabaseError: error retrieving _id field in doc: '{ ok: 0.0, code: 303, errmsg: "Legacy opcodes are not supported" }'
Below is the Orion version
contextBroker --version
2.5.0-next (git version: 3984f9fc30e90fa04682131ca4516b4d277eb27e)
curl -X GET 'http://localhost:1026/version'
{
"orion" : {
"version" : "2.5.0-next",
"uptime" : "0 d, 0 h, 4 m, 56 s",
"git_hash" : "3984f9fc30e90fa04682131ca4516b4d277eb27e",
"compile_time" : "Mon Feb 22 17:39:30 UTC 2021",
"compiled_by" : "root",
"compiled_in" : "4c7575c7c27f",
"release_date" : "Mon Feb 22 17:39:30 UTC 2021",
"doc" : "https://fiware-orion.rtfd.io/",
"libversions": {
"boost": "1_53",
"libcurl": "libcurl/7.29.0 NSS/3.53.1 zlib/1.2.7 libidn/1.28 libssh2/1.8.0",
"libmicrohttpd": "0.9.70",
"openssl": "1.0.2k",
"rapidjson": "1.1.0",
"mongodriver": "legacy-1.1.2"
}
}
}
I am also able to connect to DocumentDB from the Orion pod using the mongo shell:
mongo --host xxxxxxxxxxxxxxxxxx.docdb.amazonaws.com:27017 --username xxxx --password xxxx
rs0:PRIMARY> show dbs;
rs0:PRIMARY>
I am also able to create entities using the command below, and it creates a DB and collection in DocumentDB:
curl localhost:1026/v2/entities -s -S --header 'Content-Type: application/json' \
> -X POST -d @- <<EOF
> {
> "id": "Room2",
> "type": "Room",
> "temperature": {
> "value": 23,
> "type": "Number"
> },
> "pressure": {
> "value": 720,
> "type": "Number"
> }
> }
> EOF
rs0:PRIMARY> show dbs;
orion 0.000GB
But I am not able to get that data back using the Orion API; after executing this command, curl exits with an empty reply and the container restarts. I have checked the same with Orion versions 2.4.2 and 2.5.2 against DocumentDB 4.0 and 3.6.
[root@orion-docdb-7748fd9c85-gbjz7 /]# curl localhost:1026/v2/entities/Room2 -s -S --header 'Accept: application/json' | python -mjson.tool
curl: (52) Empty reply from server
command terminated with exit code 137
In the end, I am still getting the same error in the logs.
time=2021-02-23T06:16:04.564Z | lvl=ERROR | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=safeMongo.cpp[360]:getField | msg=Runtime Error (field '_id' is missing in BSONObj <{ ok: 0.0, code: 303, errmsg: "Legacy opcodes are not supported" }> from caller mongoSubCacheItemInsert:83)
time=2021-02-23T06:16:04.564Z | lvl=ERROR | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=AlarmManager.cpp[211]:dbError | msg=Raising alarm DatabaseError: error retrieving _id field in doc: '{ ok: 0.0, code: 303, errmsg: "Legacy opcodes are not supported" }'
Update3:
I have added -noCache and deployed again. Below are the command outputs and logs for your reference.
Process check:
#ps ax | grep contextBroker
1 ? Ssl 0:00 /usr/bin/contextBroker -fg -multiservice -ngsiv1Autocast -disableFileLog -dbhost xxxxxxxxxxxxxxxxxx.docdb.amazonaws.com -dbuser xxxxxxxx -dbpwd xxxxxxxx -logLevel DEBUG -noCache
Entries in DB:
rs0:PRIMARY> show dbs
orion 0.000GB
rs0:PRIMARY> use orion
switched to db orion
rs0:PRIMARY> show collections
entities
rs0:PRIMARY> db.entities.find()
{ "_id" : { "id" : "Room2", "type" : "Room", "servicePath" : "/" }, "attrNames" : [ "temperature", "pressure" ], "attrs" : { "temperature" : { "type" : "Number", "creDate" : 1614323032.671698, "modDate" : 1614323032.671698, "value" : 23, "mdNames" : [ ] }, "pressure" : { "type" : "Number", "creDate" : 1614323032.671698, "modDate" : 1614323032.671698, "value" : 720, "mdNames" : [ ] } }, "creDate" : 1614323032.671698, "modDate" : 1614323032.671698, "lastCorrelator" : "c8a73f40-7800-11eb-bd9b-bea9c419835d" }
Orion Pod Logs:
time=2021-02-26T06:46:33.966Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=contextBroker.cpp[1008]:main | msg=start command line </usr/bin/contextBroker -fg -multiservice -ngsiv1Autocast -disableFileLog -dbhost -dbhost xxxxxxxxxxxxxxxxxx.docdb.amazonaws.com -dbuser xxxxxxxx -dbpwd xxxxxxxx -logLevel DEBUG -noCache>
time=2021-02-26T06:46:33.966Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=contextBroker.cpp[1076]:main | msg=Orion Context Broker is running
time=2021-02-26T06:46:34.280Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=MongoGlobal.cpp[243]:mongoInit | msg=Connected to mongo at xxxxxxxxxxxxxxxxxx.docdb.amazonaws.com/orion, as user 'xxxxxxx' (poolsize: 10)
time=2021-02-26T06:46:34.282Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=contextBroker.cpp[1202]:main | msg=Startup completed
time=2021-02-26T07:03:24.546Z | lvl=INFO | corr=b7e44e5a-7800-11eb-9531-bea9c419835d | trans=1614321993-966-00000000001 | from=127.0.0.1 | srv=<none> | subsrv=<none> | comp=Orion | op=logTracing.cpp[79]:logInfoRequestWithoutPayload | msg=Request received: GET /version, response code: 200
time=2021-02-26T07:03:52.672Z | lvl=ERROR | corr=c8a73f40-7800-11eb-bd9b-bea9c419835d | trans=1614321993-966-00000000002 | from=127.0.0.1 | srv=<none> | subsrv=<none> | comp=Orion | op=safeMongo.cpp[360]:getField | msg=Runtime Error (field '_id' is missing in BSONObj <{ ok: 0.0, code: 303, errmsg: "Legacy opcodes are not supported", operationTime: Timestamp 1614323032|1 }> from caller processContextElement:3493)
time=2021-02-26T07:03:52.672Z | lvl=ERROR | corr=c8a73f40-7800-11eb-bd9b-bea9c419835d | trans=1614321993-966-00000000002 | from=127.0.0.1 | srv=<none> | subsrv=<none> | comp=Orion | op=AlarmManager.cpp[211]:dbError | msg=Raising alarm DatabaseError: error retrieving _id field in doc: '{ ok: 0.0, code: 303, errmsg: "Legacy opcodes are not supported", operationTime: Timestamp 1614323032|1 }'
time=2021-02-26T07:03:52.782Z | lvl=ERROR | corr=c8a73f40-7800-11eb-bd9b-bea9c419835d | trans=1614321993-966-00000000002 | from=127.0.0.1 | srv=<none> | subsrv=<none> | comp=Orion | op=AlarmManager.cpp[235]:dbErrorReset | msg=Releasing alarm DatabaseError
time=2021-02-26T07:03:52.790Z | lvl=ERROR | corr=c8a73f40-7800-11eb-bd9b-bea9c419835d | trans=1614321993-966-00000000002 | from=127.0.0.1 | srv=<none> | subsrv=<none> | comp=Orion | op=safeMongo.cpp[360]:getField | msg=Runtime Error (field '_id' is missing in BSONObj <{ ok: 0.0, code: 303, errmsg: "Legacy opcodes are not supported", operationTime: Timestamp 1614323032|1 }> from caller addTriggeredSubscriptions_noCache:1408)
time=2021-02-26T07:03:52.790Z | lvl=ERROR | corr=c8a73f40-7800-11eb-bd9b-bea9c419835d | trans=1614321993-966-00000000002 | from=127.0.0.1 | srv=<none> | subsrv=<none> | comp=Orion | op=AlarmManager.cpp[211]:dbError | msg=Raising alarm DatabaseError: error retrieving _id field in doc: '{ ok: 0.0, code: 303, errmsg: "Legacy opcodes are not supported", operationTime: Timestamp 1614323032|1 }'
time=2021-02-26T07:03:52.791Z | lvl=INFO | corr=c8a73f40-7800-11eb-bd9b-bea9c419835d | trans=1614321993-966-00000000002 | from=127.0.0.1 | srv=<none> | subsrv=<none> | comp=Orion | op=logTracing.cpp[130]:logInfoRequestWithPayload | msg=Request received: POST /v2/entities, request payload (148 bytes): { "id": "Room2", "type": "Room", "temperature": { "value": 23, "type": "Number" }, "pressure": { "value": 720, "type": "Number" }}, response code: 201
time=2021-02-26T07:03:58.479Z | lvl=ERROR | corr=cc1d5934-7800-11eb-a28d-bea9c419835d | trans=1614321993-966-00000000003 | from=127.0.0.1 | srv=<none> | subsrv=<none> | comp=Orion | op=AlarmManager.cpp[235]:dbErrorReset | msg=Releasing alarm DatabaseError
time=2021-02-26T07:03:58.479Z | lvl=ERROR | corr=cc1d5934-7800-11eb-a28d-bea9c419835d | trans=1614321993-966-00000000003 | from=127.0.0.1 | srv=<none> | subsrv=<none> | comp=Orion | op=safeMongo.cpp[360]:getField | msg=Runtime Error (field '_id' is missing in BSONObj <{ ok: 0.0, code: 303, errmsg: "Legacy opcodes are not supported", operationTime: Timestamp 1614323038|1 }> from caller ContextElementResponse:109)
terminate called after throwing an instance of 'mongo::AssertionException'
what(): assertion src/mongo/bson/bsonelement.cpp:392
Pod exited and restarted during API call:
curl localhost:1026/v2/entities/Room2 -s -S --header 'Accept: application/json' | python -mjson.tool
command terminated with exit code 137
The following message shown in the log traces is pretty significant:
"Legacy opcodes are not supported"
Although the MongoDB driver used by Orion 2.5.2 and earlier works with official MongoDB versions up to 4.4, that is probably not the case with MongoDB "clones" like AWS DocumentDB.
We are in the process of changing the legacy driver used by Orion to a new one. Once this change lands in the Orion master branch, I'd suggest testing it (using the :latest dockerhub tag). In the meantime, as a workaround, I'd suggest using an official MongoDB database.
EDIT: the migration to the new MongoDB driver has finished, and Orion has been using the new driver since version 3.0.0. I think it would be a good idea to test with this new version and see how it goes. I can help with the test if you provide me with the access information (see here).
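As a side note, if you want to check the wire-protocol behaviour independently of Orion, a sketch with a recent PyMongo (which only speaks the modern OP_MSG opcode) follows; the endpoint and credentials are placeholders, and DocumentDB typically also requires its CA bundle:
from pymongo import MongoClient

# Recent PyMongo versions use only the modern OP_MSG wire protocol,
# so a successful ping shows the server works without legacy opcodes.
client = MongoClient(
    "mongodb://test:xxxx@xxxxxxxxxxxxxxxxxx.docdb.amazonaws.com:27017",
    tls=True, tlsCAFile="rds-combined-ca-bundle.pem")
print(client.admin.command("ping"))  # {'ok': 1.0} on success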

Unable to run Cygnus with MySQL agent

I am trying to set up and understand Cygnus, but I am facing an issue during installation.
I followed the steps below.
Installed Cygnus using Docker (docker run -d -p 5050:5050 -p 8081:8081 fiware/cygnus-common)
Executed the version command (curl http://172.17.0.2:8081/v1/version), which gave the following response:
{"success":"true","version":"1.8.0_SNAPSHOT.39b2aa4789c61fa92fe6edc905410f1ddeb33490"}
Logged into the Cygnus container using docker exec -it <container_id> /bin/bash
Created a new file named "agent_mysql.conf" in the "/opt/apache-flume/conf/" folder. Configuration details are given below.
cygnus-ngsi.sources = http-source
cygnus-ngsi.sinks = mysql-sink
cygnus-ngsi.channels = mysql-channel
cygnus-ngsi.sources.http-source.channels = mysql-channel
cygnus-ngsi.sources.http-source.type = org.apache.flume.source.http.HTTPSource
cygnus-ngsi.sources.http-source.port = 5050
cygnus-ngsi.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.NGSIRestHandler
cygnus-ngsi.sources.http-source.handler.notification_target = /notify
cygnus-ngsi.sources.http-source.handler.default_service = def_serv
cygnus-ngsi.sources.http-source.handler.default_service_path = def_servpath
cygnus-ngsi.sources.http-source.handler.events_ttl = 2
cygnus-ngsi.sources.http-source.interceptors = ts gi
cygnus-ngsi.sources.http-source.interceptors.ts.type = timestamp
cygnus-ngsi.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.NGSIGroupingInterceptor$Builder
cygnus-ngsi.sources.http-source.interceptors.gi.grouping_rules_conf_file = /Applications/apache-flume-1.4.0-bin/conf/grouping_rules.conf
# =============================================
# mysql-channel configuration
# channel type (must not be changed)
cygnus-ngsi.channels.mysql-channel.type = memory
# capacity of the channel
cygnus-ngsi.channels.mysql-channel.capacity = 1000
# amount of bytes that can be sent per transaction
cygnus-ngsi.channels.mysql-channel.transactionCapacity = 100
# channel name from where to read notification events
cygnus-ngsi.sinks.mysql-sink.channel = mysql-channel
# sink class, must not be changed
cygnus-ngsi.sinks.mysql-sink.type = com.telefonica.iot.cygnus.sinks.NGSIMySQLSink
#com.telefonica.iot.cygnus.sinks.OrionMySQLSink
# the FQDN/IP address where the MySQL server runs
cygnus-ngsi.sinks.mysql-sink.mysql_host = localhost
# the port where the MySQL server listens for incoming connections
cygnus-ngsi.sinks.mysql-sink.mysql_port = 3306
# a valid user in the MySQL server
cygnus-ngsi.sinks.mysql-sink.mysql_username = root
# password for the user above
cygnus-ngsi.sinks.mysql-sink.mysql_password = <myPassword>
# how the attributes are stored, either per row or per column (row, column)
cygnus-ngsi.sinks.mysql-sink.attr_persistence = row
Changed "cygnus-entrypoint.sh" file in / (root) folder and added following command by removing existing one.
${FLUME_HOME}/bin/cygnus-flume-ng agent --conf ${CYGNUS_CONF_PATH} -f ${CYGNUS_CONF_PATH}/agent_mysql.conf -n cygnus-ngsi -p ${CYGNUS_API_PORT} -Dflume.root.logger=${CYGNUS_LOG_LEVEL},${CYGNUS_LOG_APPENDER} -Dfile.encoding=UTF-8
Exited the Docker container and came back to Ubuntu.
Stopped and restarted the Docker container.
Now I am getting the following errors in the logs.
Please check and let me know what I am doing wrong. I appreciate your help.
LOGS
n$AgentConfiguration[1016] : Processing:mysql-sink
time=2018-04-30T14:24:00.807Z | lvl=INFO | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=validateConfiguration | msg=org.apache.flume.conf.FlumeConfiguration[140] : Post-validation flume configuration contains configuration for agents: [cygnus-ngsi]
time=2018-04-30T14:24:00.808Z | lvl=INFO | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=loadChannels | msg=org.apache.flume.node.AbstractConfigurationProvider[150] : Creating channels
time=2018-04-30T14:24:00.816Z | lvl=INFO | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=create | msg=org.apache.flume.channel.DefaultChannelFactory[40] : Creating instance of channel mysql-channel type memory
time=2018-04-30T14:24:00.825Z | lvl=INFO | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=loadChannels | msg=org.apache.flume.node.AbstractConfigurationProvider[205] : Created channel mysql-channel
time=2018-04-30T14:24:00.832Z | lvl=INFO | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=create | msg=org.apache.flume.source.DefaultSourceFactory[39] : Creating instance of source http-source, type org.apache.flume.source.http.HTTPSource
time=2018-04-30T14:24:00.836Z | lvl=ERROR | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=configure | msg=org.apache.flume.source.http.HTTPSource[113] : Error while configuring HTTPSource. Exception follows.
java.lang.ClassNotFoundException: com.telefonica.iot.cygnus.handlers.NGSIRestHandler
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.flume.source.http.HTTPSource.configure(HTTPSource.java:102)
at org.apache.flume.conf.Configurables.configure(Configurables.java:41)
at org.apache.flume.node.AbstractConfigurationProvider.loadSources(AbstractConfigurationProvider.java:331)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:102)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
time=2018-04-30T14:24:00.840Z | lvl=ERROR | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=loadSources | msg=org.apache.flume.node.AbstractConfigurationProvider[366] : Source http-source has been removed due to an error during configuration
java.lang.RuntimeException: java.lang.ClassNotFoundException: com.telefonica.iot.cygnus.handlers.NGSIRestHandler
at com.google.common.base.Throwables.propagate(Throwables.java:156)
at org.apache.flume.source.http.HTTPSource.configure(HTTPSource.java:114)
at org.apache.flume.conf.Configurables.configure(Configurables.java:41)
at org.apache.flume.node.AbstractConfigurationProvider.loadSources(AbstractConfigurationProvider.java:331)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:102)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.telefonica.iot.cygnus.handlers.NGSIRestHandler
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.flume.source.http.HTTPSource.configure(HTTPSource.java:102)
... 11 more
time=2018-04-30T14:24:00.841Z | lvl=INFO | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=create | msg=org.apache.flume.sink.DefaultSinkFactory[40] : Creating instance of sink: mysql-sink, type: com.telefonica.iot.cygnus.sinks.NGSIMySQLSink
time=2018-04-30T14:24:00.842Z | lvl=ERROR | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=run | msg=org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable[142] : Failed to load configuration data. Exception follows.
org.apache.flume.FlumeException: Unable to load sink type: com.telefonica.iot.cygnus.sinks.NGSIMySQLSink, class: com.telefonica.iot.cygnus.sinks.NGSIMySQLSink
at org.apache.flume.sink.DefaultSinkFactory.getClass(DefaultSinkFactory.java:69)
at org.apache.flume.sink.DefaultSinkFactory.create(DefaultSinkFactory.java:41)
at org.apache.flume.node.AbstractConfigurationProvider.loadSinks(AbstractConfigurationProvider.java:415)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:103)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.telefonica.iot.cygnus.sinks.NGSIMySQLSink
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.flume.sink.DefaultSinkFactory.getClass(DefaultSinkFactory.java:67)
... 11 more
The simplest case is installing Cygnus and connecting it to MySQL this way, using the "root" user:
docker run -d --name cygnus_container_name --link mysql_showcases \
-p 8081:8081 -p 5050:5050 \
-e CYGNUS_MYSQL_HOST=mysql_host -e CYGNUS_MYSQL_PORT=3306 \
-e CYGNUS_MYSQL_USER=root -e CYGNUS_MYSQL_PASS=root_password \
fiware/cygnus-ngsi
If you decide not to use the root user to connect to MySQL, you'll need to change the user and password, create the database manually, and grant the permissions to your user, since Cygnus won't be able to create the database with a non-root user.
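For the non-root route, the manual preparation could look roughly like the sketch below (Python with mysql-connector-python; the database name, user and passwords are assumptions, and CREATE USER IF NOT EXISTS needs MySQL 5.7+):
import mysql.connector

# Create the database and a dedicated user up front, since Cygnus
# cannot create the database itself when not running as root.
conn = mysql.connector.connect(host="mysql_host", user="root",
                               password="root_password")
cur = conn.cursor()
cur.execute("CREATE DATABASE IF NOT EXISTS cygnus_db")
cur.execute("CREATE USER IF NOT EXISTS 'cygnus'@'%' "
            "IDENTIFIED BY 'cygnus_password'")
cur.execute("GRANT ALL PRIVILEGES ON cygnus_db.* TO 'cygnus'@'%'")
cur.execute("FLUSH PRIVILEGES")
conn.close()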
Finally, I am able to run Cygnus with the MySQL agent. I am using Ubuntu. (Linux ubuntucustomfiware 4.4.0-119-generic #143-Ubuntu SMP Mon Apr 2 16:08:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux)
I followed the steps below.
Used the MySQL server installed on the main Ubuntu host instead of a Docker container.
Modified /etc/mysql/mysql.conf.d/mysqld.cnf and changed
from
bind-address = 127.0.0.1
to
bind-address = *
Logged into the DB and granted all privileges to the root user, so that it can connect from any host:
mysql -u root -p
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'MyPassword';
FLUSH PRIVILEGES;
exit;
Restarted the MySQL server:
service mysql restart
Ran cygnus-ngsi:
docker run -d --name cygnus -p 8081:8081 -p 5050:5050 \
-e CYGNUS_MYSQL_HOST=PublicIPOfMySQLServer -e CYGNUS_MYSQL_PORT=3306 \
-e CYGNUS_MYSQL_USER=root -e CYGNUS_MYSQL_PASS=MyPassword \
-e CYGNUS_LOG_LEVEL='DEBUG' fiware/cygnus-ngsi
Modified the agent file to keep only the mysql-sink. After the changes below, stopped and restarted the Cygnus Docker container.
docker exec -it cygnus /bin/bash
vi /opt/apache-flume/conf/agent.conf
cygnus-ngsi.sinks = mysql-sink
cygnus-ngsi.channels = mysql-channel
exit;
docker stop cygnus
docker start cygnus
Now, when I publish MQTT data to modify my entity, 4 rows are inserted (one row for each attribute) into the MySQL DB:
mosquitto_pub -h PublicIPOfMySQLServer -u UserName -P Password \
-t /swm-reader-service1/reader-device-id1/attrs \
-m '{"tn": "9888", "pn": "878787", "ri": "888888", "tdt":"Monday, May 10, 2018 03:16 AM"}'
Thanks for all your support.
Regards,
Krishan

MySQL - SSL - with TLS1.2 cipher AES256-SHA256 / DHE-RSA-AES256-SHA256

I'm using MySQL over SSL with the TLS 1.2 ciphers AES256-SHA256 / DHE-RSA-AES256-SHA256.
I have compiled MySQL with a custom OpenSSL. I am able to connect to MySQL over SSL with TLS 1.0 ciphers, but when I try to connect with TLS 1.2 ciphers the connection fails with an error.
MySQL server version: 5.6.23-log Source distribution
Custom OpenSSL version: OpenSSL 1.0.1j 15 Oct 2014
Java version: 1.8.0_40
Error thrown when connecting with a TLS 1.2 cipher:
> mysql -umysql --ssl-cipher=DHE-RSA-AES256-SHA256 -T -v
ERROR 2026 (HY000): SSL connection error:
error:00000001:lib(0):func(0):reason(1)
User time 0.00, System time 0.00
Maximum resident set size 2664, Integral resident set size 0
Non-physical pagefaults 777, Physical pagefaults 0, Swaps 0
Blocks in 0 out 0, Messages in 0 out 0, Signals 0
Voluntary context switches 2, Involuntary context switches 5
Snippet of my.cnf
[client]
default-character-set=utf8
ssl=ON
ssl-ca=/home/mysql-cert/ca.pem
ssl-cert=/home/mysql-cert/client-cert.pem
ssl-key=/home/mysql-cert/client-key.pem
[mysql]
default-character-set=utf8
[mysqld]
general_log=1
ssl-cipher=DHE-RSA-AES256-SHA256
ssl-cipher=AES256-SHA256
ssl-cipher=AES256-SHA
ssl-ca=/home/mysql-cert/ca.pem
ssl-cert=/home/mysql-cert/server-cert.pem
ssl-key=/home/mysql-cert/server-key.pem
MySQL prompt snippet with a TLS 1.0 cipher connected:
mysql> \s
--------------
mysql Ver 14.14 Distrib 5.6.23, for Linux (x86_64) using EditLine wrapper
Connection id: 6
Current database:
Current user: root@localhost
SSL: Cipher in use is AES256-SHA
Current pager: stdout
Using outfile: ''
Using delimiter: ;
Server version: 5.6.23-log Source distribution
Protocol version: 10
Connection: Localhost via UNIX socket
Server characterset: latin1
Db characterset: latin1
Client characterset: utf8
Conn. characterset: utf8
UNIX socket: /tmp/mysql.sock
Uptime: 1 hour 32 min 40 sec
Threads: 1 Questions: 11 Slow queries: 0 Opens: 67 Flush tables: 1
Open tables: 60 Queries per second avg: 0.001
--------------
mysql> SHOW STATUS LIKE 'ssl%';
+--------------------------------+--------------------------+
| Variable_name | Value |
+--------------------------------+--------------------------+
| Ssl_accept_renegotiates | 0 |
| Ssl_accepts | 6 |
| Ssl_callback_cache_hits | 0 |
| Ssl_cipher | AES256-SHA |
| Ssl_cipher_list | AES256-SHA |
| Ssl_client_connects | 0 |
| Ssl_connect_renegotiates | 0 |
| Ssl_ctx_verify_depth | 18446744073709551615 |
| Ssl_ctx_verify_mode | 5 |
| Ssl_default_timeout | 7200 |
| Ssl_finished_accepts | 3 |
| Ssl_finished_connects | 0 |
| Ssl_server_not_after | Jan 23 10:29:20 2025 GMT |
| Ssl_server_not_before | Mar 17 10:29:20 2015 GMT |
| Ssl_session_cache_hits | 0 |
| Ssl_session_cache_misses | 0 |
| Ssl_session_cache_mode | SERVER |
| Ssl_session_cache_overflows | 0 |
| Ssl_session_cache_size | 128 |
| Ssl_session_cache_timeouts | 0 |
| Ssl_sessions_reused | 0 |
| Ssl_used_session_cache_entries | 0 |
| Ssl_verify_depth | 18446744073709551615 |
| Ssl_verify_mode | 5 |
| Ssl_version | TLSv1 |
+--------------------------------+--------------------------+
25 rows in set (0.00 sec)
mysql> SHOW VARIABLES LIKE '%ssl%';
+---------------+----------------------------------+
| Variable_name | Value |
+---------------+----------------------------------+
| have_openssl | YES |
| have_ssl | YES |
| ssl_ca | /home/mysql-cert/ca.pem |
| ssl_capath | |
| ssl_cert | /home/mysql-cert/server-cert.pem |
| ssl_cipher | AES256-SHA |
| ssl_crl | |
| ssl_crlpath | |
| ssl_key | /home/mysql-cert/server-key.pem |
+---------------+----------------------------------+
9 rows in set (0.00 sec)
MySQL was compiled as follows:
> cmake . -DCMAKE_PREFIX_PATH=/opt/scr-openssl/ssl/ \
-DWITH_SSL=/opt/scr-openssl/ssl/ \
-DWITH_OPENSSL=/opt/scr-openssl/ssl/bin/ \
-DWITH_OPENSSL_INCLUDES=/opt/scr-openssl/ssl/include/ \
-DWITH_OPENSSL_LIBS=/opt/scr-openssl/ssl/lib/ -DENABLE_DOWNLOADS=1
> make
> make install
Please help me configure MySQL to work with TLS 1.2 ciphers.
MySQL v5.6.23 can only support TLS 1.0. To get support for TLS 1.2, you need to upgrade to a later MySQL version and ensure that both client and server have been compiled to use OpenSSL.
You might be able to use MySQL 5.6.46, according to the MySQL documentation:
When compiled using OpenSSL 1.0.1 or higher, MySQL supports the TLSv1, TLSv1.1, and TLSv1.2 protocols as of MySQL 5.6.46, and only TLSv1 prior to 5.6.46.
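Once upgraded, you can verify from a client that TLS 1.2 is actually negotiated; below is a sketch with mysql-connector-python (the tls_versions option requires a fairly recent connector, and the credentials are placeholders):
import mysql.connector

# Force TLS 1.2 for this connection; connect() raises an error if the
# server cannot negotiate it.
conn = mysql.connector.connect(host="localhost", user="mysql",
                               password="xxxx",
                               tls_versions=["TLSv1.2"])
cur = conn.cursor()
cur.execute("SHOW STATUS LIKE 'Ssl_version'")
print(cur.fetchone())  # expected: ('Ssl_version', 'TLSv1.2')
conn.close()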

After upgrading to MariaDB 5.5 connections are being dropped after a period of inactivity

We have upgraded our database server from MySQL 5.1 to MariaDB 5.5 (5.5.40-MariaDB-1~wheezy-log).
After this upgrade, the MySQL connections of some long-running processes are being dropped.
A common scenario for those processes is:
Connect to MySQL
Run some queries
Do some heavy lifting without connecting to MySQL for at least one minute
Try to query against the original connection
An exception is raised with error 2006 - MySQL server has gone away
This happens in PHP CLI scripts (PHP 5.3) but also in a Ruby application (Redmine 2.5.1). It was not happening with MySQL 5.1 and there were no changes on the application side, so it should not be app-related.
The %timeout% variables in MariaDB are:
+----------------------------+----------+
| Variable_name | Value |
+----------------------------+----------+
| connect_timeout | 5 |
| deadlock_timeout_long | 50000000 |
| deadlock_timeout_short | 10000 |
| delayed_insert_timeout | 300 |
| innodb_lock_wait_timeout | 50 |
| innodb_rollback_on_timeout | OFF |
| interactive_timeout | 28800 |
| lock_wait_timeout | 31536000 |
| net_read_timeout | 30 |
| net_write_timeout | 60 |
| slave_net_timeout | 3600 |
| thread_pool_idle_timeout | 60 |
| wait_timeout | 28800 |
+----------------------------+----------+
We are not using thread pooling:
+---------------------------+---------------------------+
| Variable_name | Value |
+---------------------------+---------------------------+
| thread_cache_size | 128 |
| thread_concurrency | 10 |
| thread_handling | one-thread-per-connection |
| thread_pool_idle_timeout | 60 |
| thread_pool_max_threads | 500 |
| thread_pool_oversubscribe | 3 |
| thread_pool_size | 12 |
| thread_pool_stall_limit | 500 |
| thread_stack | 294912 |
+---------------------------+---------------------------+
When this happens, there is also an event logged in syslog, looking the same every time:
Dec 16 13:00:14 DB01 mysqld: 141216 13:00:14 [Warning] Aborted connection 9202885 to db: 'some_db_name' user: 'user' host: 'app' (Unknown error)
Besides that, there are also weird root account disconnection messages:
Dec 16 13:05:02 DB01 mysqld: 141216 13:05:02 [Warning] Aborted connection 9225621 to db: 'unconnected' user: 'root' host: 'localhost' (Unknown error)
Dec 16 13:10:00 DB01 mysqld: 141216 13:10:00 [Warning] Aborted connection 9218291 to db: 'unconnected' user: 'root' host: 'localhost' (Unknown error)
Dec 16 13:10:12 DB01 mysqld: 141216 13:10:12 [Warning] Aborted connection 9232561 to db: 'unconnected' user: 'root' host: 'localhost' (Unknown error)
Dec 16 13:17:01 DB01 /USR/SBIN/CRON[41343]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Dec 16 13:20:02 DB01 mysqld: 141216 13:20:02 [Warning] Aborted connection 9248777 to db: 'unconnected' user: 'root' host: 'localhost' (Unknown error)
Dec 16 13:20:02 DB01 mysqld: 141216 13:20:02 [Warning] Aborted connection 9248788 to db: 'unconnected' user: 'root' host: 'localhost' (Unknown error)
Dec 16 13:20:12 DB01 mysqld: 141216 13:20:12 [Warning] Aborted connection 9248798 to db: 'unconnected' user: 'root' host: 'localhost' (Unknown error)
Out of those settings, is there any that should be changed to fix the weird "server has gone away" errors?
In the end, we found out that the DB is not the cause of the dropped connections, as the drops also appear in other, unrelated systems.
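Regardless of the root cause, a common client-side mitigation for "server has gone away" after a long idle period is to validate the connection before reusing it; a sketch with mysql-connector-python (host and credentials are placeholders):
import time
import mysql.connector

conn = mysql.connector.connect(host="DB01", user="user",
                               password="xxxx", database="some_db_name")

# ... heavy lifting with no queries for a long time ...
time.sleep(120)

# Re-validate before reuse; reconnect=True transparently re-opens the
# connection if the server or a middlebox dropped it in the meantime.
conn.ping(reconnect=True, attempts=3, delay=1)
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchone())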