CrateDB is failing to start - FIWARE

After figuring everything out I am close to finishing this project, but it seems the new CrateDB version is failing to start. Here is my crate docker-compose.yml file:
crate:
  image: crate:4.1.4
  hostname: crate
  expose:
    - "4200"
    - "4300"
    - "5432"
  networks:
    - default
  ports:
    - "4200:4200"
    - "4300:4300"
    - "5432:5432"
  command:
    -Clicense.enterprise=false -Cauth.host_based.enabled=false -Ccl>
    -Chttp.cors.enabled=true -Chttp.cors.allow-origin="*"
  environment:
    - CRATE_HEAP_SIZE=2g
And here is the log output ending with the exit code:
srdjan-crate-1 | [2022-08-03T13:02:41,141][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [Schermerspitze] uncaught exception in thread [main]
srdjan-crate-1 | org.elasticsearch.bootstrap.StartupException: ElasticsearchException[java.io.IOException: failed to read /data/data/nodes/0/_state/node-2.st]; nested: IOException[failed to read /data/data/nodes/0/_state/node-2.st]; nested: XContentParseException[[-1:36] [node_meta_data] unknown field [node_version], parser not found];
srdjan-crate-1 | at org.elasticsearch.bootstrap.StartupExceptionProxy.<init>(StartupExceptionProxy.java:31) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at io.crate.bootstrap.CrateDB.init(CrateDB.java:162) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at io.crate.bootstrap.CrateDB.execute(CrateDB.java:138) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:82) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) ~[elasticsearch-cli-7.0.0.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-7.0.0.jar:4.1.4]
srdjan-crate-1 | at io.crate.bootstrap.CrateDB.main(CrateDB.java:91) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at io.crate.bootstrap.CrateDB.main(CrateDB.java:84) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | Caused by: org.elasticsearch.ElasticsearchException: java.io.IOException: failed to read /data/data/nodes/0/_state/node-2.st
srdjan-crate-1 | at org.elasticsearch.ExceptionsHelper.maybeThrowRuntimeAndSuppress(ExceptionsHelper.java:158) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.gateway.MetaDataStateFormat.loadGeneration(MetaDataStateFormat.java:416) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.gateway.MetaDataStateFormat.loadLatestStateWithGeneration(MetaDataStateFormat.java:435) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.gateway.MetaDataStateFormat.loadLatestState(MetaDataStateFormat.java:456) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.env.NodeEnvironment.loadOrCreateNodeMetaData(NodeEnvironment.java:406) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:302) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.node.Node.<init>(Node.java:239) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at io.crate.node.CrateNode.<init>(CrateNode.java:63) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.bootstrap.BootstrapProxy$1.<init>(BootstrapProxy.java:184) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.bootstrap.BootstrapProxy.setup(BootstrapProxy.java:184) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.bootstrap.BootstrapProxy.init(BootstrapProxy.java:252) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at io.crate.bootstrap.CrateDB.init(CrateDB.java:158) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | ... 6 more
srdjan-crate-1 | Caused by: java.io.IOException: failed to read /data/data/nodes/0/_state/node-2.st
srdjan-crate-1 | at org.elasticsearch.gateway.MetaDataStateFormat.loadGeneration(MetaDataStateFormat.java:410) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.gateway.MetaDataStateFormat.loadLatestStateWithGeneration(MetaDataStateFormat.java:435) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.gateway.MetaDataStateFormat.loadLatestState(MetaDataStateFormat.java:456) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.env.NodeEnvironment.loadOrCreateNodeMetaData(NodeEnvironment.java:406) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:302) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.node.Node.<init>(Node.java:239) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at io.crate.node.CrateNode.<init>(CrateNode.java:63) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.bootstrap.BootstrapProxy$1.<init>(BootstrapProxy.java:184) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.bootstrap.BootstrapProxy.setup(BootstrapProxy.java:184) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.bootstrap.BootstrapProxy.init(BootstrapProxy.java:252) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at io.crate.bootstrap.CrateDB.init(CrateDB.java:158) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | ... 6 more
srdjan-crate-1 | Caused by: org.elasticsearch.common.xcontent.XContentParseException: [-1:36] [node_meta_data] unknown field [node_version], parser not found
srdjan-crate-1 | at org.elasticsearch.common.xcontent.ObjectParser.getParser(ObjectParser.java:326) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.common.xcontent.ObjectParser.parse(ObjectParser.java:150) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.common.xcontent.ObjectParser.apply(ObjectParser.java:174) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.env.NodeMetaData$1.fromXContent(NodeMetaData.java:110) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.env.NodeMetaData$1.fromXContent(NodeMetaData.java:94) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.gateway.MetaDataStateFormat.read(MetaDataStateFormat.java:303) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.gateway.MetaDataStateFormat.loadGeneration(MetaDataStateFormat.java:406) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.gateway.MetaDataStateFormat.loadLatestStateWithGeneration(MetaDataStateFormat.java:435) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.gateway.MetaDataStateFormat.loadLatestState(MetaDataStateFormat.java:456) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.env.NodeEnvironment.loadOrCreateNodeMetaData(NodeEnvironment.java:406) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:302) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.node.Node.<init>(Node.java:239) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at io.crate.node.CrateNode.<init>(CrateNode.java:63) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.bootstrap.BootstrapProxy$1.<init>(BootstrapProxy.java:184) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.bootstrap.BootstrapProxy.setup(BootstrapProxy.java:184) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at org.elasticsearch.bootstrap.BootstrapProxy.init(BootstrapProxy.java:252) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | at io.crate.bootstrap.CrateDB.init(CrateDB.java:158) ~[crate-app.jar:4.1.4]
srdjan-crate-1 | ... 6 more
srdjan-crate-1 exited with code 1
All I need is a container to push data into; I am using the FIWARE tutorials and webinar to visualize my data, but the CrateDB container always fails to start: it comes up and then shortly afterwards simply exits. Any help would be appreciated, thanks in advance!
This is the output when I run the docker inspect command:
sudo docker inspect crate
[
{
"Id": "sha256:33244aff8840432e394adcda8d92e07cfd3a7b33c63f5c39e51707c93aea3b55",
"RepoTags": [
"crate:5.0.0",
"crate:latest"
],
"RepoDigests": [
"crate@sha256:3da3f761952d46279cb35dd28d33f93fa36cce6cb7fe0dfb5d90e52d73c1d400"
],
"Parent": "",
"Comment": "",
"Created": "2022-07-18T18:28:30.251880519Z",
"Container": "b10af8c7173a0753094c02fcc75cfbfd712588870a727b7fce113d83734379b1",
"ContainerConfig": {
"Hostname": "b10af8c7173a",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"4200/tcp": {},
"4300/tcp": {},
"5432/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/crate/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"CRATE_HEAP_SIZE=512M"
],
"Cmd": [
"/bin/sh",
"-c",
"#(nop) ",
"CMD [\"crate\"]"
],
"Image": "sha256:0b000e181ffab1924c18f0d9b14f801d9c921747270d9400dbf0ed4bbd60dada",
"Volumes": {
"/data": {}
},
"WorkingDir": "/data",
"Entrypoint": [
"/docker-entrypoint.sh"
],
"OnBuild": null,
"Labels": {
"maintainer": "Crate.io <office@crate.io>",
"org.label-schema.build-date": "20201113",
"org.label-schema.license": "GPLv2",
"org.label-schema.name": "CentOS Base Image",
"org.label-schema.schema-version": "1.0",
"org.label-schema.vendor": "CentOS",
"org.opencontainers.image.created": "2022-07-13T09:51:35.203197",
"org.opencontainers.image.description": "CrateDB is a distributed SQL database that handles massive amounts of machine data in real-time.",
"org.opencontainers.image.licenses": "GPL-2.0-only",
"org.opencontainers.image.source": "https://github.com/crate/docker-crate",
"org.opencontainers.image.title": "crate",
"org.opencontainers.image.url": "https://crate.io/products/cratedb/",
"org.opencontainers.image.vendor": "Crate.io",
"org.opencontainers.image.version": "5.0.0"
}
},
"DockerVersion": "20.10.12",
"Author": "",
"Config": {
"Hostname": "",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"4200/tcp": {},
"4300/tcp": {},
"5432/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/crate/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"CRATE_HEAP_SIZE=512M"
],
"Cmd": [
"crate"
],
"Image": "sha256:0b000e181ffab1924c18f0d9b14f801d9c921747270d9400dbf0ed4bbd60dada",
"Volumes": {
"/data": {}
},
"WorkingDir": "/data",
"Entrypoint": [
"/docker-entrypoint.sh"
],
"OnBuild": null,
"Labels": {
"maintainer": "Crate.io <office@crate.io>",
"org.label-schema.build-date": "20201113",
"org.label-schema.license": "GPLv2",
"org.label-schema.name": "CentOS Base Image",
"org.label-schema.schema-version": "1.0",
"org.label-schema.vendor": "CentOS",
"org.opencontainers.image.created": "2022-07-13T09:51:35.203197",
"org.opencontainers.image.description": "CrateDB is a distributed SQL database that handles massive amounts of machine data in real-time.",
"org.opencontainers.image.licenses": "GPL-2.0-only",
"org.opencontainers.image.source": "https://github.com/crate/docker-crate",
"org.opencontainers.image.title": "crate",
"org.opencontainers.image.url": "https://crate.io/products/cratedb/",
"org.opencontainers.image.vendor": "Crate.io",
"org.opencontainers.image.version": "5.0.0"
}
},
"Architecture": "amd64",
"Os": "linux",
"Size": 835792459,
"VirtualSize": 835792459,
"GraphDriver": {
"Data": {
"RootDir": "/mnt/docker-data/overlay/b7ac674f087e8ed9cc8c1739dfc5e89186bdd95d5e1be27b22cc0220311a39f6/root"
},
"Name": "overlay"
},
"RootFS": {
"Type": "layers",
"Layers": [
"sha256:174f5685490326fc0a1c0f5570b8663732189b327007e47ff13d2ca59673db02",
"sha256:0d4a826419c2dbb52dc24a4014281a339ec6fdb9ca2f0d5fd06c704c1d93f967",
"sha256:8969ec2851837211e98eccf1295c27dc1de127f34bc258fa483147783c12589a",
"sha256:4e669d13da98c64a63c1b4d342e1268b7134804f8060b4bbcd00cf44be59093b",
"sha256:1e6365a3e5763f9d18849913accb29b3b938b934bc30d193fc6c2b73cea46345",
"sha256:bd8cdcad9e0050447018c5f0c7dee8dcb2d27bc7b2ab4dcc0ca5d77c2e342dc8",
"sha256:0add4659d3892065ce42055bef93cf56a1c8c1b7ea335dfe0add0899ae0437c3",
"sha256:5321d2882bd76b170c6b93cce450679c204d520704c906c09ebf089ee5e93e3b"
]
},
"Metadata": {
"LastTagTime": "0001-01-01T00:00:00Z"
}
}
]
I still can't find a way to start it; I looked at all the other issues and changed the memory settings, but it still fails... Any help would be appreciated :) My OS is Linux Mint.
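For anyone hitting the same thing: the inspect output above shows that the local `crate` image is actually `crate:5.0.0` (tagged `latest`), so the `/data` volume has most likely been initialised by a 5.x node at some point. The `node-2.st` state file it wrote contains a `node_version` field that CrateDB 4.1.4 does not recognise, and CrateDB does not support downgrades, hence the `XContentParseException` on startup. The usual ways out are to run the same (or a newer) version than the one that wrote the data, or to start from an empty data directory (`docker compose down -v` if the data is disposable). A version-pinned sketch of the service, assuming a named `crate-data` volume (hypothetical) and dropping `license.enterprise`, which no longer exists in recent CrateDB versions:

```yaml
crate:
  image: crate:5.0.0          # pin to the version that wrote /data; never downgrade
  hostname: crate
  ports:
    - "4200:4200"
    - "4300:4300"
    - "5432:5432"
  volumes:
    - crate-data:/data        # "crate-data" is a hypothetical named volume
  command: >
    -Cauth.host_based.enabled=false
    -Chttp.cors.enabled=true
    -Chttp.cors.allow-origin="*"
  environment:
    - CRATE_HEAP_SIZE=2g
```

This is a sketch under those assumptions, not a drop-in file; the key points are matching the image version to the existing data and removing settings that 5.x no longer accepts.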

Related

Convert a JSON string in a PySpark dataframe to a table, obtaining only the necessary fields from the string

{
"schema": {
"type": "struct",
"fields": [
{
"type": "int32",
"optional": true,
"field": "c1"
},
{
"type": "string",
"optional": true,
"field": "c2"
},
{
"type": "int64",
"optional": false,
"name": "org.apache.kafka.connect.data.Timestamp",
"version": 1,
"field": "create_ts"
},
{
"type": "int64",
"optional": false,
"name": "org.apache.kafka.connect.data.Timestamp",
"version": 1,
"field": "update_ts"
}
],
"optional": false,
"name": "foobar"
},
"payload": {
"c1": 67,
"c2": "foo",
"create_ts": 1663920002000,
"update_ts": 1663920002000
}
}
I have my JSON string in this format, and I don't want all of the data in the table; I want the table in this format:
| c1 | c2  | create_ts           | update_ts           |
|----|-----|---------------------|---------------------|
| 1  | foo | 2022-09-21 10:47:54 | 2022-09-21 10:47:54 |
| 28 | foo | 2022-09-21 13:16:45 | 2022-09-21 13:16:45 |
| 29 | foo | 2022-09-21 14:19:10 | 2022-09-21 14:19:10 |
| 30 | foo | 2022-09-21 14:19:20 | 2022-09-21 14:19:20 |
| 31 | foo | 2022-09-21 14:29:19 | 2022-09-21 14:29:19 |
Skip other (nested) attributes by specifying the only one you want to see in the resulting output:
(
spark
.read
.option("multiline","true")
.json("/path/json-path")
.select("payload.*")
.show()
)
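If the timestamps should also come out as readable datetimes (as in the desired table), note that `create_ts`/`update_ts` are Kafka Connect `Timestamp` values in epoch milliseconds, so they need dividing by 1000 before conversion (in Spark something like `from_unixtime(col("create_ts") / 1000)` would do it). A minimal stdlib-Python sketch of that conversion, using the payload from the question:

```python
import json
from datetime import datetime, timezone

raw = '{"payload": {"c1": 67, "c2": "foo", "create_ts": 1663920002000, "update_ts": 1663920002000}}'
payload = json.loads(raw)["payload"]

def to_ts(ms):
    # Kafka Connect Timestamp values are epoch milliseconds (UTC here).
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")

# One table row: keep only the payload fields, timestamps converted.
row = (payload["c1"], payload["c2"], to_ts(payload["create_ts"]), to_ts(payload["update_ts"]))
print(row)
```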

Agent Orion JSON MQTT

I have been able to send JSON-format measures via an MQTT publisher to the mosquitto docker container (which is the broker), and I have also been able to pass that data to the IoT Agent. These are the curl commands I used to create a service path and to register a device on the IoT Agent:
curl -iX POST 'http://localhost:4041/iot/services' -H 'Content-Type: application/json' -H 'Fiware-Service: myClassRoom' -H 'Fiware-ServicePath: /' -d '{ "services": [ { "apikey": "12345", "cbroker": "http://localhost:1026", "entity_type": "Device", "resource": "/iot/d" } ] }'
This is how I registered the device in the IoT Agent:
curl -iX POST 'http://localhost:4041/iot/devices' -H 'Content-Type: application/json' -H 'Fiware-Service: myClassRoom' -H 'Fiware-ServicePath: /' -d '{ "devices": [ { "device_id": "SensTemp", "entity_name": "urn:ngsi-ld:temperature-sensor:001", "entity_type": "Device", "protocol": "PDI-IoTA-UltraLight", "transport": "MQTT", "timezone": "Europe/Madrid", "attributes":[ { "object_id": "temperature", "name": "temperature", "type": "Integer"} ] } ] }'
And this is the error it is giving me in the Agent; can someone help me solve this issue?
fiware-agent | time=2022-07-25T08:19:11.562Z | lvl=FATAL | corr=n/a | trans=n/a | op=IoTAgentNGSI.Global | from=n/a | srv=n/a | subsrv=n/a | msg=An unexpected exception has been raised. Ignoring: TypeError: parsedMessage.reduce is not a function
And here is the evidence of connecting, the service path, information about the device, etc.:
fiware-agent | time=2022-07-25T08:20:11.931Z | lvl=DEBUG | corr=785bb065-6c0a-41ba-96d2-182f0040ea93 | trans=785bb065-6c0a-41ba-96d2-182f0040ea93 | op=IOTAUL.IoTUtils | from=n/a | srv=n/a | subsrv=n/a | msg=deviceData after merge with conf: {"_id":"62d931f861815b54e26fa2b2","active":[{"object_id":"temperature","name":"temperature","type":"Integer"}],"commands":[],"staticAttributes":[],"subscriptions":[],"creationDate":"2022-07-21T11:01:12.968Z","id":"SensTemp","type":"Device","name":"urn:ngsi-ld:temperature-sensor:001","service":"myclassroom","subservice":"/","internalId":null,"protocol":"PDI-IoTA-UltraLight","transport":"MQTT","explicitAttrs":false,"lazy":null,"internalAttributes":null} | comp=IoTAgent
And here is what I don't understand: why is it giving me that error, the parsed-message error?
fiware-agent | time=2022-07-25T08:20:11.932Z | lvl=DEBUG | corr=785bb065-6c0a-41ba-96d2-182f0040ea93 | trans=785bb065-6c0a-41ba-96d2-182f0040ea93 | op=IOTAUL.Common.Binding | from=n/a | srv=n/a | subsrv=n/a | msg=Processing multiple measures for device [SensTemp] with apiKey [12345] | comp=IoTAgent
fiware-agent | time=2022-07-25T08:20:11.932Z | lvl=DEBUG | corr=785bb065-6c0a-41ba-96d2-182f0040ea93 | trans=785bb065-6c0a-41ba-96d2-182f0040ea93 | op=IOTAUL.Common.Binding | from=n/a | srv=n/a | subsrv=n/a | msg=Parse error parsing incoming message [%]. Forcing to hex string | comp=IoTAgent {"temperature": 47}
fiware-agent | time=2022-07-25T08:20:11.932Z | lvl=DEBUG | corr=785bb065-6c0a-41ba-96d2-182f0040ea93 | trans=785bb065-6c0a-41ba-96d2-182f0040ea93 | op=IOTAUL.Common.Binding | from=n/a | srv=n/a | subsrv=n/a | msg=stringMessage: [7b2274656d7065726174757265223a2034377d] parsedMessage: [7b2274656d7065726174757265223a2034377d] | comp=IoTAgent
And why can't I see the temperature change in the MongoDB database?
Thank you!!!
Problem solved! The error was in the Agent: I was using the Ultralight IoT Agent. The solution is simple: when I used the IoT JSON Agent instead, the problem was solved!
Now the Agent tells me that the measures are updated successfully!
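In hindsight the log explains itself: the Ultralight agent parses message bodies as `attr|value` pairs, so a JSON body such as `{"temperature": 47}` cannot be parsed and gets forced to a hex string (the `7b2274...` value in the log), whereas the JSON agent simply runs the body through a JSON parser. A rough stdlib-Python illustration of the difference (not the agent's actual code):

```python
import binascii
import json

message = '{"temperature": 47}'

# What the JSON IoT Agent effectively does: parse the body as JSON.
json_measures = json.loads(message)

# What the Ultralight agent expects instead: "attr|value" pairs, e.g. "temperature|47".
def parse_ultralight(body):
    parts = body.split("|")
    if len(parts) % 2 != 0:
        raise ValueError("not a valid Ultralight payload")
    return dict(zip(parts[::2], parts[1::2]))

try:
    measures = parse_ultralight(message)
except ValueError:
    # Mirrors the agent's "Forcing to hex string" fallback seen in the log.
    measures = binascii.hexlify(message.encode()).decode()

print(json_measures)  # parsed dict
print(measures)       # hex string matching the log line
```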

How to return parent value for each array iteration with jsonpath?

I'm attempting to import some json data into grafana via the JSON API.
Here's a snippet of the json structure I'm working with:
[
{
"origin": "TS",
"id": "M8C8E02434D442725422CCB337057792F",
"type": "1.5.1:1",
"self": "https://metricsourcehost01/uimapiM8C8E02434D442725422CCB337057792F",
"source": "destinationhost01.our.domain.net",
"target": "destinationhost01.our.domain.net-0",
"probe": "cdm",
"for_computer_system": {
"id": "14873",
"self": "https://metricsourcehost01/uimapi/devices/14873",
"name": "destinationhost01.our.domain.net",
"ip": "10.1.1.16"
},
"for_device": {
"id": "D4F3D290D787D3FA4E7CD2824BFA6B1C8",
"self": "https://metricsourcehost01/uimapi/devices/D4F3D290D787D3FA4E7CD2824BFA6B1C8"
},
"for_configuration_item": {
"id": "CCE5006B73554FE7D307C1A355429286A",
"self": "https://metricsourcehost01/uimapi/TBD/CCE5006B73554FE7D307C1A355429286A",
"name": "CPU-0",
"qosName": "QOS_CPU_MULTI_USAGE",
"description": "Individual CPU Usage",
"unit": "%"
},
"uimMetricDefinition": null,
"minSampleValue": 61.17,
"maxSampleValue": 72.78,
"meanSampleValue": 64.864,
"sample": [
{
"time": "2021-09-02T00:50:32.000Z",
"timeSinceEpoch": 1630543832,
"value": 61.17,
"rate": 60
},
{
"time": "2021-09-02T00:49:32.000Z",
"timeSinceEpoch": 1630543772,
"value": 63.52,
"rate": 60
},
{
"time": "2021-09-02T00:48:32.000Z",
"timeSinceEpoch": 1630543712,
"value": 62.79,
"rate": 60
},
{
"time": "2021-09-02T00:47:32.000Z",
"timeSinceEpoch": 1630543652,
"value": 64.06,
"rate": 60
},
{
"time": "2021-09-02T00:46:32.000Z",
"timeSinceEpoch": 1630543592,
"value": 72.78,
"rate": 60
}
]
},
{
"origin": "TS",
"id": "M9D90857B9F9BE73EB15912D3314DB2DA",
"type": "1.5.1:1",
"self": "https://metricsourcehost01/uimapiM9D90857B9F9BE73EB15912D3314DB2DA",
"source": "destinationhost01.our.domain.net",
"target": "destinationhost01.our.domain.net-1",
"probe": "cdm",
"for_computer_system": {
"id": "14873",
"self": "https://metricsourcehost01/uimapi/devices/14873",
"name": "destinationhost01.our.domain.net",
"ip": "10.1.1.16"
},
"for_device": {
"id": "D4F3D290D787D3FA4E7CD2824BFA6B1C8",
"self": "https://metricsourcehost01/uimapi/devices/D4F3D290D787D3FA4E7CD2824BFA6B1C8"
},
"for_configuration_item": {
"id": "CF1D7A708DD4C6C9D303025AE3D2334AE",
"self": "https://metricsourcehost01/uimapi/TBD/CF1D7A708DD4C6C9D303025AE3D2334AE",
"name": "CPU-1",
"qosName": "QOS_CPU_MULTI_USAGE",
"description": "Individual CPU Usage",
"unit": "%"
},
"uimMetricDefinition": null,
"minSampleValue": 59.85,
"maxSampleValue": 72.31,
"meanSampleValue": 64.296,
"sample": [
{
"time": "2021-09-02T00:50:32.000Z",
"timeSinceEpoch": 1630543832,
"value": 59.85,
"rate": 60
},
{
"time": "2021-09-02T00:49:32.000Z",
"timeSinceEpoch": 1630543772,
"value": 63.88,
"rate": 60
},
{
"time": "2021-09-02T00:48:32.000Z",
"timeSinceEpoch": 1630543712,
"value": 60.17,
"rate": 60
},
{
"time": "2021-09-02T00:47:32.000Z",
"timeSinceEpoch": 1630543652,
"value": 65.27,
"rate": 60
},
{
"time": "2021-09-02T00:46:32.000Z",
"timeSinceEpoch": 1630543592,
"value": 72.31,
"rate": 60
}
]
}
]
It's CPU utilisation for 2 CPU cores from the same host.
Using $.[*].sample[*].time and $.[*].sample[*].value successfully returns the required time and value data which can be easily graphed:
| time | value |
| ------------------------ | ----- |
| 2021-09-02T00:50:32.000Z | 61.17 |
| 2021-09-02T00:49:32.000Z | 63.52 |
| 2021-09-02T00:48:32.000Z | 62.79 |
| 2021-09-02T00:47:32.000Z | 64.06 |
| 2021-09-02T00:46:32.000Z | 72.78 |
| 2021-09-02T00:50:32.000Z | 59.85 |
| 2021-09-02T00:49:32.000Z | 63.88 |
| 2021-09-02T00:48:32.000Z | 60.17 |
| 2021-09-02T00:47:32.000Z | 65.27 |
| 2021-09-02T00:46:32.000Z | 72.31 |
However, it combines all the data with no way to differentiate between the two CPU cores' data samples.
I've been trying to figure out a way to get a third column utilising the target value for each iteration of the sample array.
Ideally, the output should look like this when tabled:
| target | time | value |
| -------------------------------------- | ------------------------ | ----- |
| destinationhost01.our.domain.net-**0** | 2021-09-02T00:50:32.000Z | 61.17 |
| destinationhost01.our.domain.net-**0** | 2021-09-02T00:49:32.000Z | 63.52 |
| destinationhost01.our.domain.net-**0** | 2021-09-02T00:48:32.000Z | 62.79 |
| destinationhost01.our.domain.net-**0** | 2021-09-02T00:47:32.000Z | 64.06 |
| destinationhost01.our.domain.net-**0** | 2021-09-02T00:46:32.000Z | 72.78 |
| destinationhost01.our.domain.net-**1** | 2021-09-02T00:50:32.000Z | 59.85 |
| destinationhost01.our.domain.net-**1** | 2021-09-02T00:49:32.000Z | 63.88 |
| destinationhost01.our.domain.net-**1** | 2021-09-02T00:48:32.000Z | 60.17 |
| destinationhost01.our.domain.net-**1** | 2021-09-02T00:47:32.000Z | 65.27 |
| destinationhost01.our.domain.net-**1** | 2021-09-02T00:46:32.000Z | 72.31 |
Any advice would be greatly appreciated. I'm not sure it's even doable with jsonpath, which is why I'm reaching out to the experts.
Thanks
As the JSON API for Grafana uses the JSONPath Plus package, it's quite easy to accomplish what I was after.
The ^ is able to grab the parent of any matching item. Playing around with this in the JSONPath Demo site got me there. You can paste in my example from the original post and test the following queries:
$.[*].sample[*].time obtains the time from each sample.
$.[*].sample[*].value obtains the value from each sample.
$.[*].sample[*].value^^^^.for_configuration_item.name is the special sauce that will grab the for_configuration_item.name for each sample
Providing these three queries to Grafana makes a table like this:
| core  | time                     | value |
| ----- | ------------------------ | ----- |
| CPU-0 | 2021-09-02T00:50:32.000Z | 61.17 |
| CPU-0 | 2021-09-02T00:49:32.000Z | 63.52 |
| CPU-0 | 2021-09-02T00:48:32.000Z | 62.79 |
| CPU-0 | 2021-09-02T00:47:32.000Z | 64.06 |
| CPU-0 | 2021-09-02T00:46:32.000Z | 72.78 |
| CPU-1 | 2021-09-02T00:50:32.000Z | 59.85 |
| CPU-1 | 2021-09-02T00:49:32.000Z | 63.88 |
| CPU-1 | 2021-09-02T00:48:32.000Z | 60.17 |
| CPU-1 | 2021-09-02T00:47:32.000Z | 65.27 |
| CPU-1 | 2021-09-02T00:46:32.000Z | 72.31 |
From there, using the Group by feature in the Experimental tab on the core column graphs the values exactly as required.
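For reference, the `^` parent operator is specific to JSONPath Plus; if the same flattening is ever needed outside Grafana, carrying the enclosing object while iterating does the job in a few lines. A minimal Python sketch over a trimmed-down version of the structure from the question:

```python
import json

metrics = json.loads("""
[
  {"target": "destinationhost01.our.domain.net-0",
   "for_configuration_item": {"name": "CPU-0"},
   "sample": [{"time": "2021-09-02T00:50:32.000Z", "value": 61.17},
              {"time": "2021-09-02T00:49:32.000Z", "value": 63.52}]},
  {"target": "destinationhost01.our.domain.net-1",
   "for_configuration_item": {"name": "CPU-1"},
   "sample": [{"time": "2021-09-02T00:50:32.000Z", "value": 59.85}]}
]
""")

# Equivalent of $.[*].sample[*].value^^^^.for_configuration_item.name:
# keep a handle on the enclosing metric while walking its sample array.
rows = [
    (metric["for_configuration_item"]["name"], s["time"], s["value"])
    for metric in metrics
    for s in metric["sample"]
]

for row in rows:
    print(row)
```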

How to change values of writable basic data types on an OPC UA server with the FIWARE OPC UA Agent

GOAL
Change writable values on the OPC UA Server by using the Fiware OPC UA Agent.
My test implementation
I added the NodeId "7:PLC1_7:G_Communication_7:fi_heartbeat_i" to the "command" and "contextSubscription" sections of the config.json file. The value data type of the NodeId is Int16, but because this value is supposed to be written, I assume that "command" must be used as the type. Unfortunately, more detailed information cannot be found in the FIWARE OPC UA Agent manual.
I started a new test environment with the OPC UA Agent, Orion Context Broker and MongoDB.
Expected behaviour
The value on the server is updated when a request is sent to the Context Broker.
Current behaviour
The value of the parameter is read out correctly, but with an incorrect data type (string instead of integer).
The value of the parameter is not updated when a request is sent to the Orion Context Broker.
Additional information
config.json
{
"logLevel" : "DEBUG",
"contextBroker" : {
"host" : "orion",
"port" : 1026
},
"server" : {
"port" : 4001,
"baseRoot" : "/"
},
"deviceRegistry" : {
"type" : "memory"
},
"mongodb" : {
"host" : "iotmongo",
"port" : "27017",
"db" : "iotagent",
"retries" : 5,
"retryTime" : 5
},
"providerUrl" : "http://iotopcua:4001",
"pollingExpiration" : "200000",
"pollingDaemonFrequency" : "20000",
"deviceRegistrationDuration" : "P1M",
"defaultType" : null,
"browseServerOptions" : null,
"service" : "test",
"subservice" : "/test",
"types" : {
"g_communication" : {
"service" : "test",
"subservice" : "/test",
"active" : [{
"name" : "7:PLC1_7:G_Communication_7:fo_smartControllerActive_b",
"type" : "Boolean"
} ],
"lazy" : [ ],
"commands" : [{
"name" : "7:PLC1_7:G_Communication_7:fi_heartbeat_i",
"type" : "Command"
}]
}
},
"contexts" : [ {
"id" : "plant",
"type" : "g_communication",
"service" : "test",
"subservice" : "/test",
"polling" : null,
"mappings" : [{
"ocb_id" : "7:PLC1_7:G_Communication_7:fo_smartControllerActive_b",
"opcua_id" : "ns=7;s=G_Communication.fo_smartControllerActive_b",
"object_id" : null,
"inputArguments" : []
} ]
}],
"contextSubscriptions" : [{
"id" : "plant",
"type" : "g_communication",
"mappings" : [{
"ocb_id" : "7:PLC1_7:G_Communication_7:fi_heartbeat_i",
"opcua_id" : "ns=7;s=G_Communication.fi_heartbeat_i",
"object_id" : "ns=7;s=G_Communication",
"inputArguments" : [{
"type": "Number"
}]
}]
}]
}
List Entities
curl 'http://localhost:1026/v2/entities/plant/' -H 'fiware-service: test' -H 'fiware-servicepath: /test' | python -m json.tool
{
"7:PLC1_7:G_Communication_7:fi_heartbeat_i": {
"metadata": {
"ServerTimestamp": {
"type": "ISO8601",
"value": "null"
},
"SourceTimestamp": {
"type": "ISO8601",
"value": "null"
}
},
"type": "string",
"value": "4"
},
"7:PLC1_7:G_Communication_7:fi_heartbeat_i_info": {
"metadata": {},
"type": "commandResult",
"value": " "
},
"7:PLC1_7:G_Communication_7:fi_heartbeat_i_status": {
"metadata": {},
"type": "commandStatus",
"value": "UNKNOWN"
},
"7:PLC1_7:G_Communication_7:fo_smartControllerActive_b": {
"metadata": {
"ServerTimestamp": {
"type": "ISO8601",
"value": "2021-05-04T07:38:01.150Z"
},
"SourceTimestamp": {
"type": "ISO8601",
"value": "2021-05-04T07:37:59.934Z"
}
},
"type": "Boolean",
"value": false
},
"id": "plant",
"type": "g_communication"
}
Registrations
curl 'http://localhost:1026/v2/registrations' -H 'fiware-service: test' -H 'fiware-servicepath: /test' | python -m json.tool
[
{
"dataProvided": {
"attrs": [
"7:PLC1_7:G_Communication_7:fi_heartbeat_i"
],
"entities": [
{
"id": "plant",
"type": "g_communication"
}
]
},
"expires": "2021-06-03T07:37:38.00Z",
"id": "6090f9c254b918756abf1a7d",
"provider": {
"http": {
"url": "http://iotopcua:4001"
},
"legacyForwarding": true,
"supportedForwardingMode": "all"
},
"status": "active"
}
]
Test communication with iotopcua
curl "http://iotopcua:4001/version"
{"libVersion":"2.12.0-next","port":4001,"baseRoot":"/"}
Request for update
curl -X PUT \
'http://localhost:1026/v2/entities/plant/attrs/7:PLC1_7:G_Communication_7:fi_heartbeat_i?type=g_communication' \
-H 'content-type: application/json' \
-H 'fiware-service: test' \
-H 'fiware-servicepath: /test' \
-d '{
"value": 2
}'
Log OCB
from=0.0.0.0 | srv=test | subsrv=/test | comp=Orion | op=logMsg.h[1844]:lmTransactionStart | msg=Starting transaction from 0.0.0.0:54232/v2/entities/plant/attrs/7:PLC1_7:G_Communication_7:fi_heartbeat_i
from=0.0.0.0 | srv=test | subsrv=/test | comp=Orion | op=rest.cpp[874]:servicePathSplit | msg=Service Path 0: '/test'
from=0.0.0.0 | srv=test | subsrv=/test | comp=Orion | op=connectionOperations.cpp[244]:collectionCount | msg=Database Operation Successful (count: { _id.id: "plant", _id.type: "g_communication", _id.servicePath: "/test" })
from=0.0.0.0 | srv=test | subsrv=/test | comp=Orion | op=connectionOperations.cpp[94]:collectionQuery | msg=Database Operation Successful (query: { _id.id: "plant", _id.type: "g_communication", _id.servicePath: "/test" })
from=0.0.0.0 | srv=test | subsrv=/test | comp=Orion | op=connectionOperations.cpp[182]:collectionRangedQuery | msg=Database Operation Successful (query: { query: { $or: [ { contextRegistration.entities.id: "plant", contextRegistration.entities.type: "g_communication" }, { contextRegistration.entities.id: ".*", contextRegistration.entities.isPattern: "true", contextRegistration.entities.type: { $in: [ "g_communication" ] } }, { contextRegistration.entities.id: ".*", contextRegistration.entities.isPattern: "true", contextRegistration.entities.type: { $exists: false } } ], expiration: { $gt: 1620114947 }, contextRegistration.attrs.name: { $in: [ "7:PLC1_7:G_Communication_7:fi_heartbeat_i" ] }, servicePath: "/test" }, orderby: { _id: 1 } })
from=0.0.0.0 | srv=test | subsrv=/test | comp=Orion | op=logMsg.h[1844]:lmTransactionStart | msg=Starting transaction to http://iotopcua:4001//updateContext
from=0.0.0.0 | srv=test | subsrv=/test | comp=Orion | op=httpRequestSend.cpp[550]:httpRequestSendWithCurl | msg=Sending message 4 to HTTP server: sending message of 458 bytes to HTTP server
from=10.1.17.1 | srv=test | subsrv=/test | comp=Orion | op=logMsg.h[1844]:lmTransactionStart | msg=Starting transaction from 10.1.17.1:58162/v1/updateContext
from=10.1.17.1 | srv=test | subsrv=/test | comp=Orion | op=rest.cpp[874]:servicePathSplit | msg=Service Path 0: '/test'
from=10.1.17.1 | srv=test | subsrv=/test | comp=Orion | op=connectionOperations.cpp[94]:collectionQuery | msg=Database Operation Successful (query: { _id.id: "plant", _id.type: "g_communication", _id.servicePath: "/test" })
from=10.1.17.1 | srv=test | subsrv=/test | comp=Orion | op=connectionOperations.cpp[454]:collectionUpdate | msg=Database Operation Successful (update: <{ _id.id: "plant", _id.type: "g_communication", _id.servicePath: "/test" }, { $set: { attrs.7:PLC1_7:G_Communication_7:fi_heartbeat_i_status: { value: "PENDING", type: "commandStatus", mdNames: [], creDate: 1620113858, modDate: 1620114947 }, modDate: 1620114947, lastCorrelator: "2324ca1e-acae-11eb-a4f7-226cad26e2cc" }, $unset: { location: 1, expDate: 1 } }>)
from=10.1.17.1 | srv=test | subsrv=/test | comp=Orion | op=logMsg.h[1874]:lmTransactionEnd | msg=Transaction ended
from=0.0.0.0 | srv=test | subsrv=/test | comp=Orion | op=httpRequestSend.cpp[570]:httpRequestSendWithCurl | msg=Notification Successfully Sent to http://iotopcua:4001//updateContext
from=0.0.0.0 | srv=test | subsrv=/test | comp=Orion | op=httpRequestSend.cpp[579]:httpRequestSendWithCurl | msg=Notification response OK, http code: 200
from=0.0.0.0 | srv=test | subsrv=/test | comp=Orion | op=logMsg.h[1874]:lmTransactionEnd | msg=Transaction ended
Log OPCUA Client
time=2021-05-04T07:55:47.191Z | lvl=DEBUG | corr=n/a | trans=n/a | op=IoTAgentNGSI.GenericMiddlewares | msg=Request for path [//updateContext] from [iotopcua:4001]
time=2021-05-04T07:55:47.191Z | lvl=DEBUG | corr=n/a | trans=n/a | op=IoTAgentNGSI.GenericMiddlewares | msg=Body:
{
"contextElements": [
{
"type": "g_communication",
"isPattern": "false",
"id": "plant",
"attributes": [
{
"name": "7:PLC1_7:G_Communication_7:fi_heartbeat_i",
"type": "Number",
"value": 2
}
]
}
],
"updateAction": "UPDATE"
}
time=2021-05-04T07:55:47.193Z | lvl=DEBUG | corr=n/a | trans=n/a | op=IoTAgentNGSI.ContextServer | msg=Handling update from [iotopcua:4001]
time=2021-05-04T07:55:47.193Z | lvl=DEBUG | corr=n/a | trans=n/a | op=IoTAgentNGSI.ContextServer | msg=[object Object]
time=2021-05-04T07:55:47.194Z | lvl=DEBUG | corr=n/a | trans=n/a | op=IoTAgentNGSI.InMemoryGroupRegister | msg=Looking for device params ["service","subservice","type"]
time=2021-05-04T07:55:47.194Z | lvl=DEBUG | corr=n/a | trans=n/a | op=IoTAgentNGSI.DeviceService | msg=deviceData after merge with conf: {"id":"plant","name":"plant","type":"g_communication","active":[{"name":"7:PLC1_7:G_Communication_7:fo_smartControllerActive_b","type":"Boolean","object_id":"7:PLC1_7:G_Communication_7:fo_smartControllerActive_b"}],"service":"test","subservice":"/test","polling":null,"endpoint":"opc.tcp://109.68.106.155:48050","registrationId":"6090f9c254b918756abf1a7d","creationDate":1620113858802}
time=2021-05-04T07:55:47.194Z | lvl=DEBUG | corr=n/a | trans=n/a | op=IoTAgentNGSI.DeviceService | msg=deviceData before merge with conf: {"id":"plant","name":"plant","type":"g_communication","active":[{"name":"7:PLC1_7:G_Communication_7:fo_smartControllerActive_b","type":"Boolean","object_id":"7:PLC1_7:G_Communication_7:fo_smartControllerActive_b"}],"lazy":[],"commands":[{"name":"7:PLC1_7:G_Communication_7:fi_heartbeat_i","type":"Command","object_id":"7:PLC1_7:G_Communication_7:fi_heartbeat_i"}],"service":"test","subservice":"/test","polling":null,"endpoint":"opc.tcp://109.68.106.155:48050","registrationId":"6090f9c254b918756abf1a7d","creationDate":1620113858802,"internalAttributes":null,"staticAttributes":[],"subscriptions":[]}
time=2021-05-04T07:55:47.195Z | lvl=INFO | corr=n/a | trans=n/a | op=Index.CommandContextHandler | comp=iotAgent-OPCUA | srv=test | subsrv=/test | msg=method to call =[{"objectId":"ns=7;s=G_Communication","methodId":"ns=7;s=G_Communication.fi_heartbeat_i","inputArguments":[{"type":"Number"}]}]
time=2021-05-04T07:55:47.879Z | lvl=DEBUG | corr=n/a | trans=n/a | op=IoTAgentNGSI.NGSIService | msg=executeWithDeviceInfo entityName plant type undefined apikey undefined attributes [{"name":"7:PLC1_7:G_Communication_7:fi_heartbeat_i_status","type":"commandStatus","value":"PENDING"}] deviceInformation {"id":"plant","name":"plant","type":"g_communication","active":[{"name":"7:PLC1_7:G_Communication_7:fo_smartControllerActive_b","type":"Boolean","object_id":"7:PLC1_7:G_Communication_7:fo_smartControllerActive_b"}],"lazy":[],"commands":[{"name":"7:PLC1_7:G_Communication_7:fi_heartbeat_i","type":"Command","object_id":"7:PLC1_7:G_Communication_7:fi_heartbeat_i"}],"service":"test","subservice":"/test","polling":null,"endpoint":"opc.tcp://109.68.106.155:48050","registrationId":"6090f9c254b918756abf1a7d","creationDate":1620113858802,"internalAttributes":null,"staticAttributes":[],"subscriptions":[]}
time=2021-05-04T07:55:47.879Z | lvl=DEBUG | corr=n/a | trans=n/a | op=IoTAgentNGSI.NGSIService | msg=error {"name":"DEVICE_GROUP_NOT_FOUND","message":"Couldn\t find device group","code":404} in get group device
time=2021-05-04T07:55:47.880Z | lvl=DEBUG | corr=n/a | trans=n/a | op=IoTAgentNGSI.NGSIService | msg=typeInformation {"id":"plant","name":"plant","type":"g_communication","active":[{"name":"7:PLC1_7:G_Communication_7:fo_smartControllerActive_b","type":"Boolean","object_id":"7:PLC1_7:G_Communication_7:fo_smartControllerActive_b"}],"lazy":[],"commands":[{"name":"7:PLC1_7:G_Communication_7:fi_heartbeat_i","type":"Command","object_id":"7:PLC1_7:G_Communication_7:fi_heartbeat_i"}],"service":"test","subservice":"/test","polling":null,"endpoint":"opc.tcp://109.68.106.155:48050","registrationId":"6090f9c254b918756abf1a7d","creationDate":1620113858802,"internalAttributes":null,"staticAttributes":[],"subscriptions":[]}
time=2021-05-04T07:55:47.880Z | lvl=DEBUG | corr=n/a | trans=n/a | op=IoTAgentNGSI.NGSIService | msg=Updating device value in the Context Broker at [http://orion:1026/v1/updateContext]
time=2021-05-04T07:55:47.880Z | lvl=DEBUG | corr=n/a | trans=n/a | op=IoTAgentNGSI.NGSIService | msg=Using the following request:
{
    "url": "http://orion:1026/v1/updateContext",
    "method": "POST",
    "headers": {
        "fiware-service": "test",
        "fiware-servicepath": "/test"
    },
    "json": {
        "contextElements": [
            {
                "type": "g_communication",
                "isPattern": "false",
                "id": "plant",
                "attributes": [
                    {
                        "name": "7:PLC1_7:G_Communication_7:fi_heartbeat_i_status",
                        "type": "commandStatus",
                        "value": "PENDING"
                    }
                ]
            }
        ],
        "updateAction": "UPDATE"
    }
}
time=2021-05-04T07:55:47.886Z | lvl=DEBUG | corr=n/a | trans=n/a | op=IoTAgentNGSI.NGSIService | msg=Received the following request from the CB:
{
    "contextResponses": [
        {
            "contextElement": {
                "type": "g_communication",
                "isPattern": "false",
                "id": "plant",
                "attributes": [
                    {
                        "name": "7:PLC1_7:G_Communication_7:fi_heartbeat_i_status",
                        "type": "commandStatus",
                        "value": ""
                    }
                ]
            },
            "statusCode": {
                "code": "200",
                "reasonPhrase": "OK"
            }
        }
    ]
}
time=2021-05-04T07:55:47.886Z | lvl=DEBUG | corr=n/a | trans=n/a | op=IoTAgentNGSI.NGSIService | msg=Value updated successfully
time=2021-05-04T07:55:47.886Z | lvl=DEBUG | corr=n/a | trans=n/a | op=IoTAgentNGSI.ContextServer | msg=Update action from [iotopcua:4001] handled successfully.
time=2021-05-04T07:55:47.886Z | lvl=DEBUG | corr=n/a | trans=n/a | op=IoTAgentNGSI.ContextServer | msg=Generated update response: {"contextResponses":[{"contextElement":{"attributes":[{"name":"7:PLC1_7:G_Communication_7:fi_heartbeat_i","type":"Number","value":""}],"id":"plant","isPattern":false,"type":"g_communication"},"statusCode":{"code":200,"reasonPhrase":"OK"}}]}
time=2021-05-04T07:55:47.887Z | lvl=DEBUG | corr=n/a | trans=n/a | op=IoTAgentNGSI.DomainControl | msg=response-time: 697
As far as I can tell, the error resides in the contextSubscriptions snippet inside your config.json, which should look like:
"contextSubscriptions": [{
    "id": "plant",
    "type": "g_communication",
    "service": "test",
    "subservice": "/test",
    "mappings": [{
        "ocb_id": "7:PLC1_7:G_Communication_7:fi_heartbeat_i",
        "opcua_id": "ns=7;s=G_Communication.fi_heartbeat_i",
        "object_id": "ns=7;s=G_Communication",
        "inputArguments": [{
            "dataType": 4,
            "type": "Intensity"
        }]
    }]
}]
Could you please give it a try?

Is Orion compatible with AWS DocumentDB?

I am trying to connect Orion with AWS DocumentDB, but it is not connecting. However, I tried two other FIWARE components, IoTAgent and Sth-Comet, with DocumentDB and both work fine.
The same hostname and credentials work for IoTAgent and Sth-Comet. I also checked connectivity, which is fine, since IoTAgent and Sth-Comet are in the same network. I also checked from a different mongo host in the same network and that worked too. Below is the error I am getting for Orion.
time=2021-02-18T07:03:46.293Z | lvl=ERROR | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=mongoConnectionPool.cpp[180]:mongoConnect | msg=Database Startup Error (cannot connect to mongo - doing 100 retries with a 1000 millisecond interval)
Is there any possibility that Orion is not compatible with AWS DocumentDB?
Update1:
bash-4.2$ ps ax | grep contextBroker
1 ? Ss 0:00 /usr/bin/contextBroker -fg -multiservice -ngsiv1Autocast -disableFileLog -dbhost xxxxxxxxxxxxxxxxxx.docdb.amazonaws.com -db admin -dbuser test -dbpwd xxxxxxxxxx
Update2:
Earlier, I was using Orion docker images pulled directly from dockerhub, and that was not working. So this time, I built two docker images from the source code of versions 2.4.2 and 2.5.2. Now I am able to connect to AWS DocumentDB with these images, but I am getting a different error, shown below.
time=2021-02-23T06:10:41.982Z | lvl=ERROR | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=safeMongo.cpp[360]:getField | msg=Runtime Error (field '_id' is missing in BSONObj <{ ok: 0.0, code: 303, errmsg: "Legacy opcodes are not supported" }> from caller mongoSubCacheItemInsert:83)
time=2021-02-23T06:10:41.982Z | lvl=ERROR | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=AlarmManager.cpp[211]:dbError | msg=Raising alarm DatabaseError: error retrieving _id field in doc: '{ ok: 0.0, code: 303, errmsg: "Legacy opcodes are not supported" }'
Below is the Orion version
contextBroker --version
2.5.0-next (git version: 3984f9fc30e90fa04682131ca4516b4d277eb27e)
curl -X GET 'http://localhost:1026/version'
{
    "orion": {
        "version": "2.5.0-next",
        "uptime": "0 d, 0 h, 4 m, 56 s",
        "git_hash": "3984f9fc30e90fa04682131ca4516b4d277eb27e",
        "compile_time": "Mon Feb 22 17:39:30 UTC 2021",
        "compiled_by": "root",
        "compiled_in": "4c7575c7c27f",
        "release_date": "Mon Feb 22 17:39:30 UTC 2021",
        "doc": "https://fiware-orion.rtfd.io/",
        "libversions": {
            "boost": "1_53",
            "libcurl": "libcurl/7.29.0 NSS/3.53.1 zlib/1.2.7 libidn/1.28 libssh2/1.8.0",
            "libmicrohttpd": "0.9.70",
            "openssl": "1.0.2k",
            "rapidjson": "1.1.0",
            "mongodriver": "legacy-1.1.2"
        }
    }
}
I am also able to connect to DocumentDB from Orion Pod using Mongo Shell.
mongo --host xxxxxxxxxxxxxxxxxx.docdb.amazonaws.com:27017 --username xxxx --password xxxx
rs0:PRIMARY> show dbs;
rs0:PRIMARY>
I am also able to create entries using below command and it creates a DB and collection in DocumentDB:
curl localhost:1026/v2/entities -s -S --header 'Content-Type: application/json' \
> -X POST -d @- <<EOF
> {
> "id": "Room2",
> "type": "Room",
> "temperature": {
> "value": 23,
> "type": "Number"
> },
> "pressure": {
> "value": 720,
> "type": "Number"
> }
> }
> EOF
rs0:PRIMARY> show dbs;
orion 0.000GB
But I am not able to get that data using the Orion API; after executing the command below, the container exits with an empty response. I have checked the same with Orion versions 2.4.2 and 2.5.2 against DocumentDB 4.0 and 3.6.
[root@orion-docdb-7748fd9c85-gbjz7 /]# curl localhost:1026/v2/entities/Room2 -s -S --header 'Accept: application/json' | python -mjson.tool
curl: (52) Empty reply from server
command terminated with exit code 137
In the end, I am still getting the same error in the logs.
time=2021-02-23T06:16:04.564Z | lvl=ERROR | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=safeMongo.cpp[360]:getField | msg=Runtime Error (field '_id' is missing in BSONObj <{ ok: 0.0, code: 303, errmsg: "Legacy opcodes are not supported" }> from caller mongoSubCacheItemInsert:83)
time=2021-02-23T06:16:04.564Z | lvl=ERROR | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=AlarmManager.cpp[211]:dbError | msg=Raising alarm DatabaseError: error retrieving _id field in doc: '{ ok: 0.0, code: 303, errmsg: "Legacy opcodes are not supported" }'
Update3:
I have added -noCache and deployed again. Below are the command outputs and logs for your reference.
Process check:
#ps ax | grep contextBroker
1 ? Ssl 0:00 /usr/bin/contextBroker -fg -multiservice -ngsiv1Autocast -disableFileLog -dbhost xxxxxxxxxxxxxxxxxx.docdb.amazonaws.com -dbuser xxxxxxxx -dbpwd xxxxxxxx -logLevel DEBUG -noCache
Entries in DB:
rs0:PRIMARY> show dbs
orion 0.000GB
rs0:PRIMARY> use orion
switched to db orion
rs0:PRIMARY> show collections
entities
rs0:PRIMARY> db.entities.find()
{ "_id" : { "id" : "Room2", "type" : "Room", "servicePath" : "/" }, "attrNames" : [ "temperature", "pressure" ], "attrs" : { "temperature" : { "type" : "Number", "creDate" : 1614323032.671698, "modDate" : 1614323032.671698, "value" : 23, "mdNames" : [ ] }, "pressure" : { "type" : "Number", "creDate" : 1614323032.671698, "modDate" : 1614323032.671698, "value" : 720, "mdNames" : [ ] } }, "creDate" : 1614323032.671698, "modDate" : 1614323032.671698, "lastCorrelator" : "c8a73f40-7800-11eb-bd9b-bea9c419835d" }
Orion Pod Logs:
time=2021-02-26T06:46:33.966Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=contextBroker.cpp[1008]:main | msg=start command line </usr/bin/contextBroker -fg -multiservice -ngsiv1Autocast -disableFileLog -dbhost -dbhost xxxxxxxxxxxxxxxxxx.docdb.amazonaws.com -dbuser xxxxxxxx -dbpwd xxxxxxxx -logLevel DEBUG -noCache>
time=2021-02-26T06:46:33.966Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=contextBroker.cpp[1076]:main | msg=Orion Context Broker is running
time=2021-02-26T06:46:34.280Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=MongoGlobal.cpp[243]:mongoInit | msg=Connected to mongo at xxxxxxxxxxxxxxxxxx.docdb.amazonaws.com/orion, as user 'xxxxxxx' (poolsize: 10)
time=2021-02-26T06:46:34.282Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=contextBroker.cpp[1202]:main | msg=Startup completed
time=2021-02-26T07:03:24.546Z | lvl=INFO | corr=b7e44e5a-7800-11eb-9531-bea9c419835d | trans=1614321993-966-00000000001 | from=127.0.0.1 | srv=<none> | subsrv=<none> | comp=Orion | op=logTracing.cpp[79]:logInfoRequestWithoutPayload | msg=Request received: GET /version, response code: 200
time=2021-02-26T07:03:52.672Z | lvl=ERROR | corr=c8a73f40-7800-11eb-bd9b-bea9c419835d | trans=1614321993-966-00000000002 | from=127.0.0.1 | srv=<none> | subsrv=<none> | comp=Orion | op=safeMongo.cpp[360]:getField | msg=Runtime Error (field '_id' is missing in BSONObj <{ ok: 0.0, code: 303, errmsg: "Legacy opcodes are not supported", operationTime: Timestamp 1614323032|1 }> from caller processContextElement:3493)
time=2021-02-26T07:03:52.672Z | lvl=ERROR | corr=c8a73f40-7800-11eb-bd9b-bea9c419835d | trans=1614321993-966-00000000002 | from=127.0.0.1 | srv=<none> | subsrv=<none> | comp=Orion | op=AlarmManager.cpp[211]:dbError | msg=Raising alarm DatabaseError: error retrieving _id field in doc: '{ ok: 0.0, code: 303, errmsg: "Legacy opcodes are not supported", operationTime: Timestamp 1614323032|1 }'
time=2021-02-26T07:03:52.782Z | lvl=ERROR | corr=c8a73f40-7800-11eb-bd9b-bea9c419835d | trans=1614321993-966-00000000002 | from=127.0.0.1 | srv=<none> | subsrv=<none> | comp=Orion | op=AlarmManager.cpp[235]:dbErrorReset | msg=Releasing alarm DatabaseError
time=2021-02-26T07:03:52.790Z | lvl=ERROR | corr=c8a73f40-7800-11eb-bd9b-bea9c419835d | trans=1614321993-966-00000000002 | from=127.0.0.1 | srv=<none> | subsrv=<none> | comp=Orion | op=safeMongo.cpp[360]:getField | msg=Runtime Error (field '_id' is missing in BSONObj <{ ok: 0.0, code: 303, errmsg: "Legacy opcodes are not supported", operationTime: Timestamp 1614323032|1 }> from caller addTriggeredSubscriptions_noCache:1408)
time=2021-02-26T07:03:52.790Z | lvl=ERROR | corr=c8a73f40-7800-11eb-bd9b-bea9c419835d | trans=1614321993-966-00000000002 | from=127.0.0.1 | srv=<none> | subsrv=<none> | comp=Orion | op=AlarmManager.cpp[211]:dbError | msg=Raising alarm DatabaseError: error retrieving _id field in doc: '{ ok: 0.0, code: 303, errmsg: "Legacy opcodes are not supported", operationTime: Timestamp 1614323032|1 }'
time=2021-02-26T07:03:52.791Z | lvl=INFO | corr=c8a73f40-7800-11eb-bd9b-bea9c419835d | trans=1614321993-966-00000000002 | from=127.0.0.1 | srv=<none> | subsrv=<none> | comp=Orion | op=logTracing.cpp[130]:logInfoRequestWithPayload | msg=Request received: POST /v2/entities, request payload (148 bytes): { "id": "Room2", "type": "Room", "temperature": { "value": 23, "type": "Number" }, "pressure": { "value": 720, "type": "Number" }}, response code: 201
time=2021-02-26T07:03:58.479Z | lvl=ERROR | corr=cc1d5934-7800-11eb-a28d-bea9c419835d | trans=1614321993-966-00000000003 | from=127.0.0.1 | srv=<none> | subsrv=<none> | comp=Orion | op=AlarmManager.cpp[235]:dbErrorReset | msg=Releasing alarm DatabaseError
time=2021-02-26T07:03:58.479Z | lvl=ERROR | corr=cc1d5934-7800-11eb-a28d-bea9c419835d | trans=1614321993-966-00000000003 | from=127.0.0.1 | srv=<none> | subsrv=<none> | comp=Orion | op=safeMongo.cpp[360]:getField | msg=Runtime Error (field '_id' is missing in BSONObj <{ ok: 0.0, code: 303, errmsg: "Legacy opcodes are not supported", operationTime: Timestamp 1614323038|1 }> from caller ContextElementResponse:109)
terminate called after throwing an instance of 'mongo::AssertionException'
what(): assertion src/mongo/bson/bsonelement.cpp:392
Pod exited and restarted during API call:
curl localhost:1026/v2/entities/Room2 -s -S --header 'Accept: application/json' | python -mjson.tool
command terminated with exit code 137
The following message, shown in the log traces, is significant:
"Legacy opcodes are not supported"
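Assuming the broker's output is captured to a file (the name orion.log below is purely hypothetical), a quick sketch for spotting this incompatibility in a deployment is to scan the logs for that message:

```shell
# Hypothetical helper: scan an Orion log file for the wire-protocol
# error that DocumentDB returns to the legacy MongoDB driver.
check_legacy_opcodes() {
  if grep -q 'Legacy opcodes are not supported' "$1"; then
    echo "incompatible: server rejects the legacy MongoDB wire protocol"
  else
    echo "no legacy-opcode errors found"
  fi
}

# Point this at wherever your broker's output is captured.
check_legacy_opcodes orion.log
```

If the message appears, the failure is on the server side of the connection, not in Orion's configuration.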
Although the MongoDB driver used by Orion 2.5.2 and earlier works with official MongoDB versions up to 4.4, that is probably not the case with MongoDB "clones" such as AWS DocumentDB.
We are in the process of replacing the legacy driver used by Orion with a new one. Once this change lands in the Orion master branch, I'd suggest testing it (using the :latest dockerhub tag). In the meantime, as a workaround, I'd suggest using an official MongoDB database.
EDIT: the MongoDB driver migration has finished, and Orion has been using the new driver since version 3.0.0. I think it would be a good idea to test with this new version and see how it goes. I can help with the test if you provide me with the access information (see here).
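For anyone wanting to try that, a minimal compose sketch for running Orion against an official MongoDB could look like the following. The image tags (fiware/orion:3.0.0, mongo:4.4) are assumptions here; adjust them to whatever is current.

```yaml
# Sketch only: Orion >= 3.0.0 (new MongoDB driver) backed by an
# official MongoDB image instead of DocumentDB.
version: "3"
services:
  mongo:
    image: mongo:4.4
    command: --nojournal
  orion:
    image: fiware/orion:3.0.0
    depends_on:
      - mongo
    ports:
      - "1026:1026"
    command: -dbhost mongo -logLevel DEBUG
```

Once the services are up, a GET on http://localhost:1026/version should report the running Orion version.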