FIWARE Cygnus -> cartodb sinks.NGSISink: Persistence error, 400 Bad request - fiware

I'm trying to connect Cygnus (1.4.0_SNAPSHOT) to CartoDB. I run it locally, and I use a script to send a notification to Cygnus. The script runs OK, but Cygnus says:
ERROR sinks.NGSISink: Persistence error (The query 'INSERT INTO jcarneroatos.x002fpeoplelocation (recvtime,fiwareservicepath,entityid,entitytype,the_geom) VALUES ('2016-10-31T19:04:00.994Z','/peoplelocation','Person:1','Person',ST_SetSRID(ST_MakePoint({"coordinates":[-4.423032856,36.721290055]), 4326))' could not be executed. CartoDB response: 400 Bad Request)
Does anyone know what could be happening? I include my config files below for reference, thanks!
My username at CARTO is "jcarneroatos", and the domain is https://jcarneroatos.carto.com. This is the script I'm using to simulate the notification from Orion Context Broker:
#!/bin/bash
HOST=localhost
PORT=5050
SERVICE=jcarneroatos
SUBSERVICE=/peoplelocation
#send notification
NOTIFICATION=$(\
curl http://$HOST:$PORT/notify \
-v -s -S \
--header "Content-Type: application/json; charset=utf-8" \
--header 'Accept: application/json' \
--header "Fiware-Service: $SERVICE" \
--header "Fiware-ServicePath: $SUBSERVICE" \
-d '
{
"contextResponses": [
{
"contextElement": {
"attributes": [
{
"metadatas": [
{
"name": "location",
"type": "string",
"value": "WGS84"
}
],
"name": "location",
"type": "geo:json",
"value": {
"coordinates": [
-4.423032856,
36.721290055
],
"type": "Point"
}
}
],
"id": "Person:1",
"isPattern": "false",
"type": "Person"
},
"statusCode": {
"code": "200",
"reasonPhrase": "OK"
}
}
],
"originator": "localhost",
"subscriptionId": "58178396634ded66caac35b2"
}')
if [ -z "$NOTIFICATION" ]; then
echo "Ok"
else
echo $NOTIFICATION
fi
This is the structure of the dataset at cartodb:
x002fpeoplelocation
cartodb_id | the_geom | entityid | entitytype | fiwareservicepath | recvtime
number | geometry | string | string | string | date
This is the cygnus config file:
cygnusagent.sources = http-source
cygnusagent.sinks = cartodb-sink
cygnusagent.channels =cartodb-channel
cygnusagent.sources.http-source.channels = cartodb-channel
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
cygnusagent.sources.http-source.port = 5050
cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.NGSIRestHandler
cygnusagent.sources.http-source.handler.notification_target = /notify
cygnusagent.sources.http-source.handler.default_service = jcarneroatos
cygnusagent.sources.http-source.handler.default_service_path = /peoplelocation
cygnusagent.sources.http-source.interceptors = ts gi
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.NGSIGroupingInterceptor$Builder
cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /home/cygnus/APACHE_FLUME_HOME/conf/grouping_rules.conf
cygnusagent.sinks.cartodb-sink.type = com.telefonica.iot.cygnus.sinks.NGSICartoDBSink
cygnusagent.sinks.cartodb-sink.channel = cartodb-channel
cygnusagent.sinks.cartodb-sink.enable_grouping = false
cygnusagent.sinks.cartodb-sink.enable_name_mappings = false
cygnusagent.sinks.cartodb-sink.enable_lowercase = false
cygnusagent.sinks.cartodb-sink.data_model = dm-by-service-path
cygnusagent.sinks.cartodb-sink.keys_conf_file = /home/cygnus/APACHE_FLUME_HOME/conf/cartodb_keys.conf
cygnusagent.sinks.cartodb-sink.flip_coordinates = false
cygnusagent.sinks.cartodb-sink.enable_raw = true
cygnusagent.sinks.cartodb-sink.enable_distance = false
cygnusagent.sinks.cartodb-sink.batch_size = 100
cygnusagent.sinks.cartodb-sink.batch_timeout = 30
cygnusagent.sinks.cartodb-sink.batch_ttl = 10
cygnusagent.sinks.cartodb-sink.backend.max_conns = 500
cygnusagent.sinks.cartodb-sink.backend.max_conns_per_route = 100
cygnusagent.channels.cartodb-channel.type = memory
cygnusagent.channels.cartodb-channel.capacity = 1000
cygnusagent.channels.cartodb-channel.transactionCapacity = 100
And finally, the cartodb_keys.conf file (API key redacted):
{
"cartodb_keys": [
{
"username": "jcarneroatos",
"endpoint": "https://jcarneroatos.carto.com",
"key": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}
]
}
Update:
After running Cygnus in DEBUG mode and checking the logs, it seems that CARTO is returning:
{"error":["syntax error at or near \"{\""]}
This is the complete log: http://pastebin.com/p9VyUU8n
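For reference, the failure can be reproduced at the SQL level: PostgreSQL chokes on the raw GeoJSON object spliced into ST_SetSRID(ST_MakePoint(...)), while passing plain numbers (or handing the GeoJSON string to ST_GeomFromGeoJSON) parses fine. The following sketch reuses the table, schema and values from the error message above; it only prints a corrected statement and does not execute anything against CARTO:

```shell
#!/bin/sh
# Sketch: why the generated SQL fails and what a valid statement looks
# like. Table, schema and values are taken from the error message above;
# nothing is executed against CARTO here.

# What Cygnus generated: the whole GeoJSON object ends up inside
# ST_MakePoint, which PostgreSQL rejects ("syntax error at or near {").
BROKEN='ST_SetSRID(ST_MakePoint({"coordinates":[-4.423032856,36.721290055]), 4326)'

# Two valid alternatives: plain numbers (longitude, latitude) for
# ST_MakePoint, or the notified GeoJSON string for ST_GeomFromGeoJSON.
GEOM_POINT='ST_SetSRID(ST_MakePoint(-4.423032856,36.721290055), 4326)'
GEOM_GEOJSON="ST_GeomFromGeoJSON('{\"type\":\"Point\",\"coordinates\":[-4.423032856,36.721290055]}')"

SQL="INSERT INTO jcarneroatos.x002fpeoplelocation (recvtime,fiwareservicepath,entityid,entitytype,the_geom) VALUES ('2016-10-31T19:04:00.994Z','/peoplelocation','Person:1','Person',$GEOM_POINT)"
echo "$SQL"
```

Either variant could then be sent through CARTO's SQL API; the ST_GeomFromGeoJSON form has the advantage of preserving the notified GeoJSON verbatim.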

Finally, after an offline discussion with @Javi Carnero, we found there was a bug in the Cygnus code related to the way Carto differentiates between "enterprise" and "personal" accounts. The former allow PostgreSQL schemas per user under the enterprise organization, while the latter have a hardcoded schema named public. Since the notified FIWARE service was used as the schema name, Cygnus was not working properly for "personal" accounts.
The bug has been fixed:
Issue: https://github.com/telefonicaid/fiware-cygnus/issues/1382
PR: https://github.com/telefonicaid/fiware-cygnus/issues/1393
At the moment of writing this, the bug is fixed in the master branch. By the end of the sprint/month, Cygnus 1.7.0 will be released, including this fix.
Please note that my previous answer is perfectly valid if you have an "enterprise" account. Anyway, I'll edit it in order to explain this.

The problem is that the geo:json type is not currently supported by NGSICartoDBSink. This sink understands certain ways of notifying geolocated attributes, according to the Orion Context Broker specification; these are:
Using the geo:point type, and sending the coordinates in the value field with format "latitude, longitude".
Using the location metadata, of type string and value WGS84, and sending the coordinates in the value field with format "latitude, longitude".
Please observe:
The above options are exclusive, i.e. they cannot be used at the same time.
The location metadata is deprecated in Orion; nevertheless, it can still be used.
Until geo:json is supported (I'll start working on that; it could be ready during this sprint/month), I recommend you use the geo:point type.
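Applied to the notification in the question, the location attribute rewritten with geo:point would look as follows. Note that the geo:point value is "latitude, longitude", i.e. the reverse of GeoJSON's [longitude, latitude] order. This is a sketch of the attribute only; the rest of the notification stays the same:

```shell
#!/bin/sh
# Sketch: the question's geo:json "location" attribute rewritten as
# geo:point. geo:point values are "latitude, longitude", the reverse
# of GeoJSON's [longitude, latitude] coordinate order.
ATTRIBUTE=$(cat <<EOF
{
  "name": "location",
  "type": "geo:point",
  "value": "36.721290055, -4.423032856"
}
EOF
)
echo "$ATTRIBUTE"
```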
EDIT 1
I'm adding here an example of Cygnus execution, when receiving a notification involving a geolocated attribute (geo:point type).
Cygnus version:
1.6.0
Cygnus configuration:
cygnus-ngsi.sources = http-source
cygnus-ngsi.sinks = raw-sink
cygnus-ngsi.channels = raw-channel
cygnus-ngsi.sources.http-source.type = org.apache.flume.source.http.HTTPSource
cygnus-ngsi.sources.http-source.channels = raw-channel
cygnus-ngsi.sources.http-source.port = 5050
cygnus-ngsi.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.NGSIRestHandler
cygnus-ngsi.sources.http-source.handler.notification_target = /notify
cygnus-ngsi.sources.http-source.handler.default_service = default
cygnus-ngsi.sources.http-source.handler.default_service_path = /
cygnus-ngsi.sources.http-source.interceptors = ts
cygnus-ngsi.sources.http-source.interceptors.ts.type = timestamp
cygnus-ngsi.sinks.raw-sink.type = com.telefonica.iot.cygnus.sinks.NGSICartoDBSink
cygnus-ngsi.sinks.raw-sink.channel = raw-channel
cygnus-ngsi.sinks.raw-sink.enable_grouping = false
cygnus-ngsi.sinks.raw-sink.keys_conf_file = /usr/cygnus/conf/cartodb_keys.conf
cygnus-ngsi.sinks.raw-sink.swap_coordinates = false
cygnus-ngsi.sinks.raw-sink.enable_raw = true
cygnus-ngsi.sinks.raw-sink.enable_distance = false
cygnus-ngsi.sinks.raw-sink.enable_raw_snapshot = false
cygnus-ngsi.sinks.raw-sink.data_model = dm-by-entity
cygnus-ngsi.sinks.raw-sink.batch_size = 50
cygnus-ngsi.sinks.raw-sink.batch_timeout = 10
cygnus-ngsi.sinks.raw-sink.batch_ttl = 0
cygnus-ngsi.sinks.raw-sink.batch_retries = 5000
cygnus-ngsi.channels.raw-channel.type = com.telefonica.iot.cygnus.channels.CygnusMemoryChannel
cygnus-ngsi.channels.raw-channel.capacity = 1000
cygnus-ngsi.channels.raw-channel.transactionCapacity = 100
Create the table:
$ curl -X GET -G "https://<my_user>.cartodb.com/api/v2/sql?api_key=<my_key>" --data-urlencode "q=CREATE TABLE x002ftestxffffx0043ar1xffffx0043ar (recvTime text, fiwareServicePath text, entityId text, entityType text, speed float, speed_md text, the_geom geometry(POINT,4326))"
{"rows":[],"time":0.005,"fields":{},"total_rows":0}
Script simulating a notification:
$ cat notification.sh
#!/bin/sh
URL=$1
if [ "$2" != "" ]
then
SERVICE=$2
else
SERVICE=default
fi
if [ "$3" != "" ]
then
SERVICE_PATH=$3
else
SERVICE_PATH=/
fi
curl $URL -v -s -S --header 'Content-Type: application/json; charset=utf-8' --header 'Accept: application/json' --header 'User-Agent: orion/0.10.0' --header "Fiware-Service: $SERVICE" --header "Fiware-ServicePath: $SERVICE_PATH" -d @- <<EOF
{
"subscriptionId" : "51c0ac9ed714fb3b37d7d5a8",
"originator" : "localhost",
"contextResponses" : [
{
"contextElement" : {
"attributes" : [
{
"name" : "speed",
"type" : "float",
"value" : "$6"
},
{
"name" : "the_geom",
"type" : "geo:point",
"value" : "$4, $5"
}
],
"type" : "Car",
"isPattern" : "false",
"id" : "Car1"
},
"statusCode" : {
"code" : "200",
"reasonPhrase" : "OK"
}
}
]
}
EOF
Script execution:
$ ./notification.sh http://localhost:5050/notify <my_user> /test 40.40 -3.4 120
* Trying ::1...
* Connected to localhost (::1) port 5050 (#0)
> POST /notify HTTP/1.1
> Host: localhost:5050
> Content-Type: application/json; charset=utf-8
> Accept: application/json
> User-Agent: orion/0.10.0
> Fiware-Service: <my_user>
> Fiware-ServicePath: /test
> Content-Length: 569
>
* upload completely sent off: 569 out of 569 bytes
< HTTP/1.1 200 OK
< Transfer-Encoding: chunked
< Server: Jetty(6.1.26)
<
* Connection #0 to host localhost left intact
Cygnus logs upon notification reception:
time=2016-12-02T13:48:27.310UTC | lvl=INFO | corr=eb73b8d5-af9b-48ea-8ce7-ff21edc957f3 | trans=eb73b8d5-af9b-48ea-8ce7-ff21edc957f3 | srv=<my_user> | subsrv=/test | comp=cygnus-ngsi | op=getEvents | msg=com.telefonica.iot.cygnus.handlers.NGSIRestHandler[282] : [NGSIRestHandler] Starting internal transaction (eb73b8d5-af9b-48ea-8ce7-ff21edc957f3)
time=2016-12-02T13:48:27.312UTC | lvl=INFO | corr=eb73b8d5-af9b-48ea-8ce7-ff21edc957f3 | trans=eb73b8d5-af9b-48ea-8ce7-ff21edc957f3 | srv=<my_user> | subsrv=/test | comp=cygnus-ngsi | op=getEvents | msg=com.telefonica.iot.cygnus.handlers.NGSIRestHandler[299] : [NGSIRestHandler] Received data ({ "subscriptionId" : "51c0ac9ed714fb3b37d7d5a8", "originator" : "localhost", "contextResponses" : [ { "contextElement" : { "attributes" : [ { "name" : "speed", "type" : "float", "value" : "120" }, { "name" : "the_geom", "type" : "geo:point", "value" : "40.40, -3.4" } ], "type" : "Car", "isPattern" : "false", "id" : "Car1" }, "statusCode" : { "code" : "200", "reasonPhrase" : "OK" } } ]})
time=2016-12-02T13:48:36.404UTC | lvl=INFO | corr=eb73b8d5-af9b-48ea-8ce7-ff21edc957f3 | trans=eb73b8d5-af9b-48ea-8ce7-ff21edc957f3 | srv=<my_user> | subsrv=/test | comp=cygnus-ngsi | op=persistRawAggregation | msg=com.telefonica.iot.cygnus.sinks.NGSICartoDBSink[553] : [raw-sink] Persisting data at NGSICartoDBSink. Schema (<my_user>), Table (x002ftestxffffx0043ar1xffffx0043ar), Data (('2016-12-02T13:48:27.381Z','/test','Car1','Car',ST_SetSRID(ST_MakePoint(40.40,-3.4), 4326),'120','[]'))
time=2016-12-02T13:48:38.237UTC | lvl=INFO | corr=eb73b8d5-af9b-48ea-8ce7-ff21edc957f3 | trans=eb73b8d5-af9b-48ea-8ce7-ff21edc957f3 | srv=<my_user> | subsrv=/test | comp=cygnus-ngsi | op=processNewBatches | msg=com.telefonica.iot.cygnus.sinks.NGSISink[514] : Finishing internal transaction (eb73b8d5-af9b-48ea-8ce7-ff21edc957f3)
Getting the data:
$ curl -X GET -G "https://<my_user>.cartodb.com/api/v2/sql?api_key=<my_key>" --data-urlencode "q=select * from x002ftestxffffx0043ar1xffffx0043ar"
{"rows":[{"recvtime":"2016-12-02T13:48:27.381Z","fiwareservicepath":"/test","entityid":"Car1","entitytype":"Car","speed":120,"speed_md":"[]","the_geom":"0101000020E610000033333333333344403333333333330BC0"}],"time":0.001,"fields":{"recvtime":{"type":"string"},"fiwareservicepath":{"type":"string"},"entityid":{"type":"string"},"entitytype":{"type":"string"},"speed":{"type":"number"},"speed_md":{"type":"string"},"the_geom":{"type":"geometry"}},"total_rows":1}
EDIT 2
This answer is only valid if you own an "enterprise" Carto account. Please, see my other answer to this question.

Related

Perseo events do not seem to fire with NGSI-v2

Hi,
we have Orion CB and data (NGSI-v2) like this:
[
{
"id": "bloodm1",
"type": "BloodMeter",
"hippo": {
"type": "Number",
"value": 39,
"metadata": {}
}
}
]
and a subscription like this
{
"id": "5ecf6be4e9f143d750cb7d63",
"description": "Perseo Subscription",
"status": "active",
"subject": {
"entities": [
{
"idPattern": ".*"
}
],
"condition": {
"attrs": []
}
},
"notification": {
"timesSent": 26,
"lastNotification": "2020-05-28T11:41:54.00Z",
"attrs": [],
"onlyChangedAttrs": false,
"attrsFormat": "normalized",
"http": {
"url": "http://perseo-fe.fiware-dev.svc.cluster.local:9090/notices"
},
"metadata": [
"dateCreated",
"dateModified",
"timestamp"
],
"lastSuccess": "2020-05-28T11:41:54.00Z",
"lastSuccessCode": 200
}
}
and rule like this:
{
"_id": "5ecfb70f1d163a0007dd715e",
"name": "perseo_email12",
"text": "select \"perseo_email12\" as ruleName, * from pattern [every ev=iotEvent(cast(hippo?,float) > 1)]",
"action": {
"type": "email",
"parameters": {
"to": "adf.fasdf#asdfator.fi",
"from": "mail#asdfator.fi",
"subject": "It's The End Of The World As We Know It (And I Feel Fine)"
}
},
"subservice": "/",
"service": "unknownt"
}
It seems that the email is not sent. What are we doing wrong? We can see from the Perseo backend logs that the event arrives there. What should we see in the logs if the action fires?
Is there any way to force a rule to fire, or to test the email action (to rule out misconfiguration)?
This is what we see in the core logs:
time=2020-05-28T13:11:19.399Z | lvl=INFO | from=::ffff:192.168.29.199 | corr=b84fca16-a0e4-11ea-9391-167c661b292c; perseocep=121 | trans=51ac0299-4308-47c9-9c1b-ceb99b257c99 | srv=perseo | subsrv=/ | op=doPost | comp=perseo-core | msg=incoming event: {"noticeId":"b8557f60-a0e4-11ea-9861-53e82ada17b4","noticeTS":1590671479382,"id":"bloodm1","type":"BloodMeter","isPattern":false,"subservice":"/","service":"perseo","hippo__type":"Number","hippo":40,"hippo__metadata__dateCreated__type":"DateTime","hippo__metadata__dateCreated__ts":1590671100000,"hippo__metadata__dateCreated__day":28,"hippo__metadata__dateCreated__month":5,"hippo__metadata__dateCreated__year":2020,"hippo__metadata__dateCreated__hour":13,"hippo__metadata__dateCreated__minute":5,"hippo__metadata__dateCreated__second":0,"hippo__metadata__dateCreated__millisecond":0,"hippo__metadata__dateCreated__dayUTC":28,"hippo__metadata__dateCreated__monthUTC":5,"hippo__metadata__dateCreated__yearUTC":2020,"hippo__metadata__dateCreated__hourUTC":13,"hippo__metadata__dateCreated__minuteUTC":5,"hippo__metadata__dateCreated__secondUTC":0,"hippo__metadata__dateCreated__millisecondUTC":0,"hippo__metadata__dateModified__type":"DateTime","hippo__metadata__dateModified__ts":1590671479000,"hippo__metadata__dateModified__day":28,"hippo__metadata__dateModified__month":5,"hippo__metadata__dateModified__year":2020,"hippo__metadata__dateModified__hour":13,"hippo__metadata__dateModified__minute":11,"hippo__metadata__dateModified__second":19,"hippo__metadata__dateModified__millisecond":0,"hippo__metadata__dateModified__dayUTC":28,"hippo__metadata__dateModified__monthUTC":5,"hippo__metadata__dateModified__yearUTC":2020,"hippo__metadata__dateModified__hourUTC":13,"hippo__metadata__dateModified__minuteUTC":11,"hippo__metadata__dateModified__secondUTC":19,"hippo__metadata__dateModified__millisecondUTC":0,"stripped":{"id":"bloodm1","type":"BloodMeter","hippo":{"type":"Number","value":40,"metadata":{"dateCreated":{"type":"DateTime","value":"2020-05-28T13:05:00.00Z"},"dateModified":{"type":"DateTime","value":"2020-05-28T13:11:19.00Z"}}}}}
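On the "is there any way to force some rule to fire" question: one option is to post a hand-crafted notification straight to perseo-fe's /notices endpoint (the same URL the subscription above points at), bypassing Orion. The sketch below is an assumption based on the question: the NGSIv2 notification shape and the headers carrying the rule's service ("unknownt") and subservice ("/") may need adjusting, and the curl line is printed rather than executed:

```shell
#!/bin/sh
# Sketch: hand-crafted NGSIv2-style notification to exercise the
# "perseo_email12" rule (fires when hippo > 1). The URL is the one
# from the subscription in the question; adjust to your deployment.
PERSEO_URL="http://perseo-fe.fiware-dev.svc.cluster.local:9090/notices"
PAYLOAD='{
  "subscriptionId": "5ecf6be4e9f143d750cb7d63",
  "data": [
    {
      "id": "bloodm1",
      "type": "BloodMeter",
      "hippo": { "type": "Number", "value": 40, "metadata": {} }
    }
  ]
}'

# Printed rather than executed here; run it from a host that can reach
# perseo-fe, with the service/subservice the rule was posted under.
echo curl -s -X POST "$PERSEO_URL" \
  -H "Content-Type: application/json" \
  -H "Fiware-Service: unknownt" \
  -H "Fiware-ServicePath: /" \
  -d "$PAYLOAD"
```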
EDIT:
OK, we got further (we did not understand that the fiware-service header must be used when posting the rule, our bad). BUT the email sending is not working; we get this error:
time=2020-06-08T12:01:05.234Z | lvl=DEBUG | corr=ba89f43e-a97f-11ea-9b7c-167c661b292c; perseocep=2 | trans=3ec8910b-ef8b-461e-bf71-dbf10f9ecf85 | op=/actions/do | path=/actions/do | comp=perseo-fe | srv=perseo | subsrv=/ | msg=emailAction.SendMail {"from":"mail@profirator.fi","to":"ilari.mikkonen@profirator.fi","subject":"Perseo Test One","headers":{}} {"code":"EENVELOPE","response":"554 5.7.1 <unknown[212.15.209.181]>: Client host rejected: Access denied","responseCode":554} undefined
time=2020-06-08T12:01:05.237Z | lvl=ERROR | corr=ba89f43e-a97f-11ea-9b7c-167c661b292c; perseocep=2 | trans=3ec8910b-ef8b-461e-bf71-dbf10f9ecf85 | op=/actions/do | path=/actions/do | comp=perseo-fe | srv=perseo | subsrv=/ | msg=emailAction.SendMail {"to":"ilari.mikkonen@profirator.fi","from":"mail@profirator.fi","subject":"Perseo Test One"} Can't send mail - all recipients were rejected: 554 5.7.1 <unknown[212.15.209.181]>: Client host rejected: Access denied
The email credentials are tested and working with other components, tested with two different email services. We pass these values via Docker env variables:
PERSEO_SMTP_HOST: email.service.host
PERSEO_SMTP_PORT: 587
PERSEO_SMTP_SECURE: "false"
PERSEO_SMTP_AUTH_USER: user#email.com
PERSEO_SMTP_AUTH_PASS: password
We also tried setting PERSEO_SMTP_TLS_REJECTUNAUTHORIZED: "false"
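To see what the mail server actually offers on port 587 before blaming Perseo, one can open the STARTTLS session by hand with openssl. A sketch using the placeholder host from the env variables above; it prints the command instead of running it, since it needs network access:

```shell
#!/bin/sh
# Diagnostic sketch: inspect the SMTP server's STARTTLS/AUTH offer.
# "email.service.host" is the placeholder from the env variables above.
SMTP_HOST="email.service.host"
SMTP_PORT=587
CMD="openssl s_client -starttls smtp -crlf -connect $SMTP_HOST:$SMTP_PORT"
# After connecting, send "EHLO test" and look for "250-AUTH ..." in the
# reply; if AUTH is only advertised after STARTTLS, the client must
# authenticate inside the TLS session.
echo "$CMD"
```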
I think we got it: email sending is not working because we are using STARTTLS and the email server requires a username and password: https://github.com/telefonicaid/perseo-fe/issues/272

'fiware-servicepath' header value does not match the number of notified context responses

I am working on setting up Cygnus as a sink to CKAN, and I get this error. What part of the Cygnus setup is responsible for this (subscription, configuration, ...)?
cygnus_1 | time=2018-10-01T12:40:04.517Z | lvl=DEBUG | corr=1ea858dc-c577-11e8-b0fd-0242ac140003 | trans=5c553916-f5e6-4bbc-b98a-bcaba61a306c | srv=waste4think | subsrv=/room/test | comp=cygnus-ngsi | op=getEvents | msg=com.telefonica.iot.cygnus.handlers.NGSIRestHandler[320] : [NGSIRestHandler] Parsed NotifyContextRequest:{"subscriptionId":"5bb2153fd1bde90f8813b236","originator":"null","contextResponses":[]}
I assume the error is connected to contextResponses being empty, but I found no additional info about what is causing this or where I should look, and the error message is not helping.
This is more of a general question than an issue report, since I have no idea whether I am causing this or whether Cygnus indeed has a problem.
Thanks.
When setting up the subscription, note that Cygnus currently only accepts notifications in the older NGSI v1 format; the attrsFormat=legacy setting is therefore needed.
e.g.
curl -iX POST \
'http://localhost:1026/v2/subscriptions' \
-H 'Content-Type: application/json' \
-H 'fiware-service: openiot' \
-H 'fiware-servicepath: /' \
-d '{
"description": "Notify Cygnus of all context changes",
"subject": {
"entities": [
{
"idPattern": ".*"
}
]
},
"notification": {
"http": {
"url": "http://cygnus:5050/notify"
},
"attrsFormat": "legacy"
},
"throttling": 5
}'
Further information about setting up subscriptions in Cygnus can be found in the Cygnus Tutorial

load dynamic data from MySQL table in ElasticSearch using JDBC driver

I want to load data dynamically from MySQL tables into my Elasticsearch index. For that I followed the links below, but did not get the proper result.
I used the following code:
echo '{
"type":"jdbc",
"jdbc":{
"url":"jdbc:mysql://localhost:3306/CDFL",
"user":"root",
"password":"root",
"useSSL":"false",
"sql":"SELECT * FROM event",
"index":"event",
"type":"event",
"autocommit":"true",
"metrics": {
"enabled" : true
},
"elasticsearch" : {
"cluster" : "servercluster",
"host" : "localhost",
"port" : 9300
}
}
}' | java -cp "/etc/elasticsearch/elasticsearch-jdbc-2.3.4.0/lib/*" "-Dlog4j.configurationFile=file:///etc/elasticsearch/elasticsearch-jdbc-2.3.4.0/bin/log4j2.xml" "org.xbib.tools.Runner" "org.xbib.tools.JDBCImporter"
To find the solution, I used the following links:
ElasticSearch how to integrate with Mysql
https://github.com/jprante/elasticsearch-jdbc
Fetching changes from table with ElasticSearch JDBC river
https://github.com/logstash-plugins/logstash-input-jdbc
I found an answer to my question:
Create a file in the root directory called event.sh and put the following code in it:
event.sh
curl -XDELETE 'localhost:9200/event'
bin=/etc/elasticsearch/elasticsearch-jdbc-2.3.4.0/bin
lib=/etc/elasticsearch/elasticsearch-jdbc-2.3.4.0/lib
echo '{
"type":"jdbc",
"jdbc":{
"url":"jdbc:mysql://localhost:3306/CDFL",
"user":"root",
"password":"root",
"useSSL":"false",
"sql":"SELECT * FROM event",
"index":"event",
"type":"event",
"poll" : "6s",
"autocommit":"true",
"metrics": {
"enabled" : true
},
"elasticsearch" : {
"cluster" : "servercluster",
"host" : "localhost",
"port" : 9300
}
}
}' | java -cp "/etc/elasticsearch/elasticsearch-jdbc-2.3.4.0/lib/*" "-Dlog4j.configurationFile=file:///etc/elasticsearch/elasticsearch-jdbc-2.3.4.0/bin/log4j2.xml" "org.xbib.tools.Runner" "org.xbib.tools.JDBCImporter"
echo "sleeping while importer should run..."
sleep 10
curl -XGET 'localhost:9200/event/_refresh'
Then run the file from the command line with the following command:
sh elasticSearch/event.sh
That works fine.
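One easy failure mode with this approach is malformed JSON in the importer definition, since nothing validates what gets piped into the importer. A small sketch (not part of the original answer) that parses the definition before launching anything; it assumes a python3 interpreter is available:

```shell
#!/bin/sh
# Sketch: validate the JDBC importer definition before piping it to the
# importer. Connection details are the ones from the answer above.
DEF='{
  "type": "jdbc",
  "jdbc": {
    "url": "jdbc:mysql://localhost:3306/CDFL",
    "user": "root",
    "password": "root",
    "sql": "SELECT * FROM event",
    "index": "event",
    "type": "event"
  }
}'

# Fail fast on malformed JSON (trailing commas, unbalanced braces, ...).
if echo "$DEF" | python3 -m json.tool > /dev/null 2>&1; then
  echo "definition OK"
else
  echo "importer definition is not valid JSON" >&2
  exit 1
fi
```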

Send commands to Context Broker using iotagent-ul

I'm trying to send commands to the Orion Context Broker using iotagent-ul with the HTTP protocol.
The Context Broker and the IoT Agent are on different servers (actually, the IoTA is running on my laptop).
I've configured the necessary parameters in the config.js file.
My request is as follows:
curl -L POST -H "Fiware-Service: myHome" -H "Fiware-ServicePath: /environment" -H "Content-Type: application/json" -H "Cache-Control: no-cache" -d '{
"devices": [
{
"device_id": "sensor01",
"entity_name": "LivingRoomSensor",
"entity_type": "multiSensor",
"attributes": [
{ "object_id": "t", "name": "Temperature", "type": "celsius" },
{ "object_id": "l", "name": "Luminosity", "type": "lumens" }
]
}
]
}
' 'http://localhost:4061/iot/devices'
It shows the following errors:
In IoTA terminal:
time=2017-02-14T15:06:14.832Z | lvl=ERROR | corr=88ed3729-6682-44ce-9b0a-28098e54c94e | trans=88ed3729-6682-44ce-9b0a-28098e54c94e | op=IoTAgentNGSI.DomainControl | srv=myHome | subsrv=/environment | msg=TypeError: Cannot read property 'findOne' of undefined | comp=IoTAgent
In "cURL terminal":
curl: (52) Empty reply from server
Could you please tell us which IoT Agent UL version you are using?
On the other hand, it seems you are missing the 'protocol' field in the payload; please check:
http://fiwaretourguide.readthedocs.io/en/latest/connection-to-the-internet-of-things/how-to-read-measures-captured-from-iot-devices/
Best

cygnus instance not reached from orion context broker

I have installed Cygnus 0.8.2 on the FIWARE image CentOS-7-x64, and I subscribed to the Orion Context Broker using:
(curl 193.48.247.246:1026/v1/subscribeContext -s -S --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'Fiware-Service: egmmqtt' --header 'Fiware-ServicePath: /egmmqttpath' -d @- | python -mjson.tool) <<EOF
{
"entities": [
{
"type": "sensors",
"isPattern": "false",
"id": "sensors:switch2A"
}
],
"attributes": [
"switch2A"
],
"reference": "http://193.48.247.223:5050/notify",
"duration": "P1M",
"notifyConditions": [
{
"type": "ONCHANGE",
"condValues": [
"switch2A"
]
}
],
"throttling": "PT1S"
}
EOF
No notification has reached Cygnus, and I got this error in the Orion Context Broker logs:
time=2015-10-06T17:43:37.898CEST | lvl=WARNING | trans=1443447780-161-00000000423 | function=sendHttpSocket | comp=Orion | msg=clientSocketHttp.cpp[358]: Notification failure for 193.48.247.223:5050 (curl_easy_perform failed: Couldn't connect to server)
I don't know why the Cygnus instance is not reachable at the associated public IP address. In fact, I can't ping the Cygnus machine from the Orion instance. Any ideas what I have missed? Thanks!
In the security rules of the Cygnus instance, the port on which Cygnus is listening (in my case 5050) has to be open so that Orion can reach the Cygnus instance.
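To confirm that this is the problem before editing the security rules, you can check reachability of the Cygnus port from the Orion host. A minimal sketch (IP and port taken from the question; any HTTP response, even an error, means the port is open):

```shell
#!/bin/sh
# Reachability check, to be run from the Orion host. A blocked
# security-group rule shows up as a connect failure here, which is
# what Orion logs as "Couldn't connect to server".
check_port() {
    if curl -s -m 5 -o /dev/null "http://$1:$2/"; then
        echo "port $2 on $1 is reachable"
    else
        echo "port $2 on $1 is NOT reachable - check the security group rules"
    fi
}

# Defaults taken from the question; override via arguments.
check_port "${1:-193.48.247.223}" "${2:-5050}"
```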