I am using the Cepheus GE: I enabled multi-tenant mode and uploaded my config.json file. But when I send updates to the Cepheus broker so that it forwards them to the CEP, the broker receives the updates but does not forward them to the Cepheus CEP, because it does not recognize the service and service path set in config.json. When I send my updates directly to the Cepheus CEP instead, they are accepted and processed successfully. So I wonder why the Cepheus broker cannot recognize the Fiware-Service when multi-tenant mode is enabled.
The broker definition in my config file is as follows:
"brokers":[
{
"url":"http://XXX.XX.XX.XX:1026",
"serviceName": "f",
"servicePath": "/f",
"authToken": "XXX"
}
]
These are the logs of the Cepheus broker:
2017-09-02 08:55:32,546 [/O dispatcher 1] WARN c.o.c.b.c.NgsiController - NotifyContext failed for http://localhost:8080/ngsi10/notifyContext$
2017-09-02 09:05:33,358 [nio-8081-exec-1] WARN c.o.c.b.c.NgsiController - UpdateContext failed for http://localhost:8082: Connection refused
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_72-internal]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_72-internal]
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvent(DefaultConnectingIOReactor.java:173) ~[httpcore-nio-4.4.1.$
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:147) ~[httpcore-nio-4.4.1$
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:350) ~[httpcore-nio-4.4.1.j$
at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:191) ~[httpasync$
at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64) ~[httpasyncclient-4.1.jar!$
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_72-internal]
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_72-internal]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_72-internal]
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvent(DefaultConnectingIOReactor.java:173) ~[httpcore-nio-4.4.1.$
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:147) ~[httpcore-nio-4.4.1$
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:350) ~[httpcore-nio-4.4.1.j$
at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:191) ~[httpasync$
at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64) ~[httpasyncclient-4.1.jar!$
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_72-internal]
2017-09-02 09:33:12,316 [pool-2-thread-1] WARN c.o.c.b.c.NgsiController - UpdateContext failed for http://localhost:8082: Connection refused
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_72-internal]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_72-internal]
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvent(DefaultConnectingIOReactor.java:173) [httpcore-nio-4.4.1.j$
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:147) [httpcore-nio-4.4.1.$
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:350) ~[httpcore-nio-4.4.1.j$
at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:191) ~[httpasync$
at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64) ~[httpasyncclient-4.1.jar!$
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_72-internal]
The script that sends the updates is as follows:
(curl XXX.XXX.XXX.XXX:8081/v1/updateContext/ -s -S --header 'Content-Type: application/json' --header "Fiware-Service: f" --header "Fiware-ServicePath: /f" --header 'Accept: application/json' -d @- | python -mjson.tool ) <<EOF
{ "contextElements": [
{
"type": "Lab",
"isPattern": "false",
"id": "Lab111",
"attributes": [
{
"name": "priority",
"type": "double",
"value": "1"
},
{
"name": "controller",
"type": "string",
"value": "Controller111"
}
]
}
],
"updateAction": "UPDATE"
}
EOF
Could you please tell me where the problem is?
As of today, the Fiware-Cepheus broker does not support multi-tenant requests (using the Fiware-Service and Fiware-ServicePath headers); only the CEP can handle multi-tenancy. More generally, the broker has very few features compared to a full-fledged broker like Orion.
If you need a multi-tenant broker, use the Orion Context Broker: https://fiware-orion.readthedocs.io/en/master/user/multitenancy/index.html
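For illustration, a minimal sketch of the same update sent to an Orion instance instead, assuming Orion listens on its default port 1026 (the tenant and scope travel in the two Fiware headers):
(curl localhost:1026/v1/updateContext -s -S --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'Fiware-Service: f' --header 'Fiware-ServicePath: /f' -d @- | python -mjson.tool ) <<EOF
{
  "contextElements": [
    {
      "type": "Lab",
      "isPattern": "false",
      "id": "Lab111",
      "attributes": [
        { "name": "priority", "type": "double", "value": "1" }
      ]
    }
  ],
  "updateAction": "UPDATE"
}
EOF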
Related
My topic schema looks like this:
{
  "schema": {
    "type": "string",
    "optional": false
  },
  "payload": "{...}"
}
I am trying to sink this into an InfluxDB instance with this configuration:
curl -X PUT \
  -H "Content-Type: application/json" \
  --data '{
    "connector.class": "io.confluent.influxdb.InfluxDBSinkConnector",
    "tasks.max": "1",
    "topics": "mongo-sink-json.tractor.job",
    "influxdb.url": "http://10.100.87.169:8086",
    "influxdb.db": "mongo-sink",
    "influxdb.password": "password",
    "influxdb.username": "user",
    "measurement.name.format": "${topic}",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "true"
  }' \
But I get this error in the Kafka Connect logs:
ERROR WorkerSinkTask{id=influx_sink_json_job-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted. Error: java.util.ArrayList cannot be cast to java.util.Map (org.apache.kafka.connect.runtime.WorkerSinkTask)
java.lang.ClassCastException: java.util.ArrayList cannot be cast to java.util.Map
    at io.confluent.influxdb.sink.writer.InfluxDBWriter.extractPointTags(InfluxDBWriter.java:288)
    at io.confluent.influxdb.sink.writer.InfluxDBWriter.getPointTags(InfluxDBWriter.java:266)
    at io.confluent.influxdb.sink.writer.InfluxDBWriter.getKey(InfluxDBWriter.java:329)
    at io.confluent.influxdb.sink.writer.InfluxDBWriter.getBatch(InfluxDBWriter.java:149)
    at io.confluent.influxdb.sink.writer.InfluxDBWriter.write(InfluxDBWriter.java:126)
    at io.confluent.influxdb.sink.InfluxDBSinkTask.put(InfluxDBSinkTask.java:40)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:546)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:326)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:228)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:196)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
[2020-09-25 20:46:53,519] ERROR WorkerSinkTask{id=influx_sink_json_job-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:568)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:326)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:228)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:196)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassCastException: java.util.ArrayList cannot be cast to java.util.Map
    at io.confluent.influxdb.sink.writer.InfluxDBWriter.extractPointTags(InfluxDBWriter.java:288)
    at io.confluent.influxdb.sink.writer.InfluxDBWriter.getPointTags(InfluxDBWriter.java:266)
    at io.confluent.influxdb.sink.writer.InfluxDBWriter.getKey(InfluxDBWriter.java:329)
    at io.confluent.influxdb.sink.writer.InfluxDBWriter.getBatch(InfluxDBWriter.java:149)
    at io.confluent.influxdb.sink.writer.InfluxDBWriter.write(InfluxDBWriter.java:126)
    at io.confluent.influxdb.sink.InfluxDBSinkTask.put(InfluxDBSinkTask.java:40)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:546)
Does anyone have any ideas?
I used a similar config to sink the same topic into MongoDB, and it works perfectly.
Should I use a different InfluxDB sink connector, fall back to Avro serialization, or is this unrelated?
I'm trying to send commands to the Orion Context Broker using iotagent-ul with the HTTP protocol.
The Context Broker and the IoT Agent are on different servers (actually, the IoTA is running on my laptop).
I've configured the necessary parameters in the config.js file.
My request is as follows:
curl -X POST -H "Fiware-Service: myHome" -H "Fiware-ServicePath: /environment" -H "Content-Type: application/json" -H "Cache-Control: no-cache" -d '{
"devices": [
{
"device_id": "sensor01",
"entity_name": "LivingRoomSensor",
"entity_type": "multiSensor",
"attributes": [
{ "object_id": "t", "name": "Temperature", "type": "celsius" },
{ "object_id": "l", "name": "Luminosity", "type": "lumens" }
]
}
]
}
' 'http://localhost:4061/iot/devices'
It shows the following errors:
In the IoTA terminal:
time=2017-02-14T15:06:14.832Z | lvl=ERROR | corr=88ed3729-6682-44ce-9b0a-28098e54c94e | trans=88ed3729-6682-44ce-9b0a-28098e54c94e | op=IoTAgentNGSI.DomainControl | srv=myHome | subsrv=/environment | msg=TypeError: Cannot read property 'findOne' of undefined | comp=IoTAgent
In "cURL terminal":
curl: (52) Empty reply from server
Please, can you tell us which IoT Agent UL version you are using?
On the other hand, it seems you are missing the 'protocol' field in the payload; please check
http://fiwaretourguide.readthedocs.io/en/latest/connection-to-the-internet-of-things/how-to-read-measures-captured-from-iot-devices/
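For reference, a sketch of the provisioning request with the protocol field added; PDI-IoTA-UltraLight is the usual identifier for the UL agent, but the exact value depends on how your agent is configured:
curl -X POST -H "Fiware-Service: myHome" -H "Fiware-ServicePath: /environment" -H "Content-Type: application/json" -d '{
  "devices": [
    {
      "device_id": "sensor01",
      "entity_name": "LivingRoomSensor",
      "entity_type": "multiSensor",
      "protocol": "PDI-IoTA-UltraLight",
      "attributes": [
        { "object_id": "t", "name": "Temperature", "type": "celsius" },
        { "object_id": "l", "name": "Luminosity", "type": "lumens" }
      ]
    }
  ]
}' 'http://localhost:4061/iot/devices'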
best
I'm working with an Orion Context Broker version 1.2.0. I have subscribed two different Cygnus instances (0.11 and 0.13) to it using NGSIv2, as follows:
(curl 172.21.0.23:1026/v2/subscriptions -s -S --header 'Fiware-Service: prueba_015_adapter' --header 'Fiware-ServicePath: /Prueba/Planta_3' --header 'Content-Type: application/json' -d @- ) <<EOF
{
  "description": "Cygnus subscription",
  "subject": {
    "entities": [
      {
        "idPattern": ".*",
        "type": "density_algorithm"
      }
    ],
    "condition": {
      "attrs": []
    }
  },
  "notification": {
    "http": {
      "url": "http://172.21.0.33:5050/notify"
    },
    "attrs": []
  }
}
EOF
But when the context broker sends a notification to either of these Cygnus instances, the following error appears in the log:
15 jun 2016 12:46:48,641 INFO [1469152682#qtp-857344131-3153] (com.telefonica.iot.cygnus.handlers.OrionRestHandler.getEvents:150) - Starting transaction (1463998603-759-0001644173)
15 jun 2016 12:46:48,641 INFO [1469152682#qtp-857344131-3153] (com.telefonica.iot.cygnus.handlers.OrionRestHandler.getEvents:232) - Received data ({"subscriptionId":"57612ed9efa20b5b23e71bd5","data":[{"id":"C-A2","type":"density_algorithm","densityPlan":{"type":"string","value":"C-A2","metadata":{}},"devices":{"type":"string","value":"43","metadata":{}},"timestamp":{"type":"string","value":"2016-06-15T12:53:26.294+02:00","metadata":{}}}]})
15 jun 2016 12:46:48,641 INFO [1469152682#qtp-857344131-3153] (com.telefonica.iot.cygnus.handlers.OrionRestHandler.getEvents:255) - Event put in the channel (id=957931298, ttl=-1)
15 jun 2016 12:46:48,642 WARN [1469152682#qtp-857344131-3153] (com.telefonica.iot.cygnus.interceptors.GroupingInterceptor.intercept:289) - No context responses within the notified entity, nothing is done
15 jun 2016 12:46:48,642 WARN [1469152682#qtp-857344131-3153] (org.apache.flume.source.http.HTTPSource$FlumeHTTPServlet.doPost:203) - Error appending event to channel. Channel might be full. Consider increasing the channel capacity or make sure the sinks perform faster. org.apache.flume.ChannelException: Unable to put batch on required channel: org.apache.flume.channel.MemoryChannel{name: mongo-channel}
at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:200)
at org.apache.flume.source.http.HTTPSource$FlumeHTTPServlet.doPost(HTTPSource.java:201)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:725)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:814)
at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401)
at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:945)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:756)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Caused by: java.lang.IllegalArgumentException: put() called with null event!
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
at org.apache.flume.channel.BasicTransactionSemantics.put(BasicTransactionSemantics.java:89)
at org.apache.flume.channel.BasicChannelSemantics.put(BasicChannelSemantics.java:80)
at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:189)
... 16 more
If I use NGSIv1 instead to register both subscriptions, everything works fine: no error is logged and the data is persisted by both Cygnus instances.
(curl 172.21.0.23:1026/v1/subscribeContext -s -S --header 'Fiware-Service: prueba_015_adapter' --header 'Fiware-ServicePath: /Prueba/Planta_3' --header 'Content-Type: application/json' --header 'Accept: application/json' -d @- ) <<EOF
{
  "entities": [
    {
      "type": "density_algorithm",
      "isPattern": "true",
      "id": ".*"
    }
  ],
  "attributes": [],
  "reference": "http://172.21.0.33:5050/notify",
  "duration": "P1M",
  "notifyConditions": [
    {
      "type": "ONCHANGE",
      "condValues": []
    }
  ]
}
EOF
I'm sending the entities to the context broker using NGSIv1. Could the problem be due to an incompatibility between NGSIv1 and NGSIv2?
Thanks in advance
For the time being, NGSIv2 notifications are not supported by Cygnus. Support is expected to be implemented, but it has not been scheduled yet.
However, you can set attrsFormat (inside the notification field) to legacy in order to use the NGSIv1 notification format (have a look at the more detailed information here). The NGSIv1 notification format is fully supported by Cygnus, so that should work.
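For example, the NGSIv2 subscription from the question could request legacy notifications as in this sketch (the field is spelled attrsFormat in Orion's NGSIv2 API; support for the legacy value may depend on the Orion version):
(curl 172.21.0.23:1026/v2/subscriptions -s -S --header 'Fiware-Service: prueba_015_adapter' --header 'Fiware-ServicePath: /Prueba/Planta_3' --header 'Content-Type: application/json' -d @- ) <<EOF
{
  "description": "Cygnus subscription",
  "subject": {
    "entities": [
      { "idPattern": ".*", "type": "density_algorithm" }
    ],
    "condition": { "attrs": [] }
  },
  "notification": {
    "http": { "url": "http://172.21.0.33:5050/notify" },
    "attrs": [],
    "attrsFormat": "legacy"
  }
}
EOF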
I have installed Cygnus 0.8.2 on the FIWARE CentOS-7-x64 image, and I subscribed to the Orion Context Broker using:
(curl 193.48.247.246:1026/v1/subscribeContext -s -S --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'Fiware-Service: egmmqtt' --header 'Fiware-ServicePath: /egmmqttpath' -d @- | python -mjson.tool) <<EOF
{
  "entities": [
    {
      "type": "sensors",
      "isPattern": "false",
      "id": "sensors:switch2A"
    }
  ],
  "attributes": [
    "switch2A"
  ],
  "reference": "http://193.48.247.223:5050/notify",
  "duration": "P1M",
  "notifyConditions": [
    {
      "type": "ONCHANGE",
      "condValues": [
        "switch2A"
      ]
    }
  ],
  "throttling": "PT1S"
}
EOF
No notification has reached Cygnus, and I got this error in the Orion Context Broker logs:
time=2015-10-06T17:43:37.898CEST | lvl=WARNING | trans=1443447780-161-00000000423 | function=sendHttpSocket | comp=Orion | msg=clientSocketHttp.cpp[358]: Notification failure for 193.48.247.223:5050 (curl_easy_perform failed: Couldn't connect to server)
I don't know why the Cygnus instance cannot be reached at the associated public IP address. In fact, I can't ping the Cygnus machine instance from the Orion instance. Any idea of what I have missed? Thanks!
In the security rules of the Cygnus instance, the port on which Cygnus is listening (in my case 5050) has to be open so that Orion can reach the Cygnus instance.
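Once the rule is in place, one quick way to verify reachability from the Orion host is a plain TCP check, for example:
# from the Orion machine: confirm the Cygnus port accepts TCP connections
nc -zv 193.48.247.223 5050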
I am trying out Orion Context Broker communication on two CentOS 6.6 machines. On the target machine I ran:
./accumulator-server.py 1028 /accumulate mywebpage.lan on
And on my local machine I ran:
[DevF12@localhost ~]$ (curl mywebpage.lan:1028/v1/updateContext -s -S --header 'Content-Type: application/json' --header 'Accept: application/json' -d @- | python -mjson.tool ) <<EOF
> {
> "contextElements": [
> {
> "type": "Room",
> "isPattern": "false",
> "id": "Room2",
> "attributes": [
> {
> "name": "temperature",
> "type": "float",
> "value": "777"
> },
> {
> "name": "pressure",
> "type": "integer",
> "value": "711"
> }
> ]
> }
> ],
> "updateAction": "APPEND"
> }
> EOF
The result on the target machine is:
POST http://mywebpage.lan:1028/v1/updateContext
Content-Length: 456
User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.2.3 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
Host: mywebpage.lan:1028
Accept: application/json
Content-Type: application/json
{ "contextElements": [ { "type": "Room", "isPattern": "false", "id": "Room2", "attributes": [ { "name": "temperature", "type": "float", "value": "777" }, { "name": "pressure", "type": "integer", "value": "711" } ] } ], "updateAction": "APPEND"}=======================================
192.168.1.11 - - [14/Apr/2015 15:07:36] "POST /v1/updateContext HTTP/1.1" 200 -
And the message I get from the local machine is:
No JSON object could be decoded
So, what does this all mean?
1. Is the 200 code saying that it is successfully creating Room2?
2. Why am I getting the could not decode JSON then?
3. All of this brings up another question, does this mean that the weather station described in my previous post also has to run on CentOS in order to send context broker messages?
I think there is a "conceptual" misunderstanding in your test. You are sending the updateContext to port 1028, which is not the port of the CB (the one supposed to process updateContext messages) but the accumulator port (whose purpose is to process the notifyContext messages sent by the CB as a consequence of updateContext messages, not to process updateContext messages themselves).
Typically, the Orion Context Broker runs on port 1026 by default.
Taking this into account, the specific answers are:
1. Is the 200 code saying that it is successfully creating Room2? No. The 200 reported by the accumulator just means that it has received and acknowledged a message (any message; the accumulator is a "dummy" application just for testing and does no real processing of it).
2. Why am I getting the could not decode JSON then? Try removing | python -mjson.tool from the curl command line; the accumulator's reply has an empty body, which is not valid JSON, so the pretty-printer fails.
3. All of this brings up another question: does this mean that the weather station described in my previous post also has to run on CentOS in order to send context broker messages? Not sure about the "weather station" in your case, but if you mean the client sending updateContext to the CB, it doesn't need to run on CentOS. The only requirements for the client are to be compliant with the Orion API and to have network connectivity to the host (and port) where Orion is listening.
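For instance, a sketch of the corrected command: it targets the broker's default port 1026 (replace orion-host with the machine actually running Orion) and drops the pretty-printer, as suggested above:
(curl orion-host:1026/v1/updateContext -s -S --header 'Content-Type: application/json' --header 'Accept: application/json' -d @- ) <<EOF
{
  "contextElements": [
    {
      "type": "Room",
      "isPattern": "false",
      "id": "Room2",
      "attributes": [
        { "name": "temperature", "type": "float", "value": "777" },
        { "name": "pressure", "type": "integer", "value": "711" }
      ]
    }
  ],
  "updateAction": "APPEND"
}
EOF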