I've installed (from source) Cygnus 0.7.1, and after configuring it (MySQL and HDFS sinks) I can start it without problems. When I subscribe Cygnus to an Orion context, it receives the information OK, but there is a problem with MySQL and HDFS. This is the log:
15/03/17 13:58:52 INFO handlers.OrionRestHandler: Starting transaction (1426597123-891-0000000000)
15/03/17 13:58:52 INFO handlers.OrionRestHandler: Received data ({ "subscriptionId" : "5508250c1860a36e55240c84", "originator" : "localhost", "contextResponses" : [ { "contextElement" : { "type" : "ubk-temp", "isPattern" : "false", "id" : "ubk:temp:1", "attributes" : [ { "name" : "temperature", "type" : "float", "value" : "11" } ] }, "statusCode" : { "code" : "200", "reasonPhrase" : "OK" } } ]})
15/03/17 13:58:52 INFO handlers.OrionRestHandler: Event put in the channel (id=1549700267, ttl=10)
15/03/17 13:58:52 INFO sinks.OrionSink: Event got from the channel (id=1549700267, headers={fiware-servicepath=ubktemp, destination=ubk_temp_1_ubk-temp, content-type=application/json, fiware-service=ubikwa, ttl=10, transactionId=1426597123-891-0000000000, timestamp=1426597132511}, bodyLength=462)
15/03/17 13:58:52 INFO sinks.OrionSink: Event got from the channel (id=1549700267, headers={fiware-servicepath=ubktemp, destination=ubk_temp_1_ubk-temp, content-type=application/json, fiware-service=ubikwa, ttl=10, transactionId=1426597123-891-0000000000, timestamp=1426597132511}, bodyLength=462)
15/03/17 13:58:52 INFO sinks.OrionMySQLSink: [mysql-sink] Persisting data at OrionMySQLSink. Database: ubikwa, Table: ubktemp_ubk_temp_1_ubk-temp, Timestamp: 2015-03-17T13:58:52.511, Data (attrs): {temperature=11}, (metadata): {temperature_md=[]}
15/03/17 13:58:53 INFO sinks.OrionHDFSSink: [hdfs-sink] Persisting data at OrionHDFSSink. HDFS file (ubikwa/ubktemp/ubk_temp_1_ubk-temp/ubk_temp_1_ubk-temp.txt), Data ({"recvTime":"2015-03-17T13:58:52.511","temperature":"11", "temperature_md":[]})
15/03/17 13:58:53 WARN sinks.OrionSink: Bad context data (Table 'ubikwa.ubktemp_ubk_temp_1_ubk-temp' doesn't exist)
15/03/17 13:58:53 INFO sinks.OrionSink: Finishing transaction (1426597123-891-0000000000)
The MySQL sink does not raise any errors, but no tables are created. And the HDFS sink seems to be unable to create the files. I previously installed Cygnus 0.6 and it worked with the same configuration.
Edit:
Here is my configuration:
cygnusagent.sources = http-source
cygnusagent.sinks = hdfs-sink mysql-sink
cygnusagent.channels = hdfs-channel mysql-channel
#=============================================
# source configuration
# channel name where to write the notification events
cygnusagent.sources.http-source.channels = hdfs-channel mysql-channel
# source class, must not be changed
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnusagent.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnusagent.sources.http-source.handler = es.tid.fiware.fiwareconnectors.cygnus.handlers.OrionRestHandler
# URL target
cygnusagent.sources.http-source.handler.notification_target = /notify
# Default service (service semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service = ubikwa
# Default service path (service path semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service_path = ubktemp
# Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
cygnusagent.sources.http-source.handler.events_ttl = 10
# Source interceptors, do not change
cygnusagent.sources.http-source.interceptors = ts de
# Timestamp interceptor, do not change
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
# Destination extractor interceptor, do not change
cygnusagent.sources.http-source.interceptors.de.type = es.tid.fiware.fiwareconnectors.cygnus.interceptors.DestinationExtractor$Builder
# Matching table for the destination extractor interceptor, put the right absolute path to the file if necessary
# See the doc/design/interceptors document for more details
cygnusagent.sources.http-source.interceptors.de.matching_table = /opt/cygnus/conf/matching_table.conf
# ============================================
# OrionHDFSSink configuration
# channel name from where to read notification events
cygnusagent.sinks.hdfs-sink.channel = hdfs-channel
# sink class, must not be changed
cygnusagent.sinks.hdfs-sink.type = es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionHDFSSink
# Comma-separated list of FQDN/IP address regarding the Cosmos Namenode endpoints
# If you are using Kerberos authentication, then the usage of FQDNs instead of IP addresses is mandatory
cygnusagent.sinks.hdfs-sink.cosmos_host = 130.206.80.46
# port of the Cosmos service listening for persistence operations; 14000 for httpfs, 50070 for webhdfs and free choice for infinity
cygnusagent.sinks.hdfs-sink.cosmos_port = 14000
# default username allowed to write in HDFS
cygnusagent.sinks.hdfs-sink.cosmos_default_username = ***
# default password for the default username
cygnusagent.sinks.hdfs-sink.cosmos_default_password = ***
# HDFS backend type (webhdfs, httpfs or infinity)
cygnusagent.sinks.hdfs-sink.hdfs_api = httpfs
# how the attributes are stored, either per row or per column (row, column)
cygnusagent.sinks.hdfs-sink.attr_persistence = column
# Hive FQDN/IP address of the Hive server
cygnusagent.sinks.hdfs-sink.hive_host = 130.206.80.46
# Hive port for Hive external table provisioning
cygnusagent.sinks.hdfs-sink.hive_port = 10000
# Kerberos-based authentication enabling
cygnusagent.sinks.hdfs-sink.krb5_auth = false
# Kerberos username
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_user = krb5_username
# Kerberos password
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_password = xxxxxxxxxxxxx
# Kerberos login file
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_login_conf_file = /usr/cygnus/conf/krb5_login.conf
# Kerberos configuration file
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_conf_file = /usr/cygnus/conf/krb5.conf
# ============================================
# OrionMySQLSink configuration
# channel name from where to read notification events
cygnusagent.sinks.mysql-sink.channel = mysql-channel
# sink class, must not be changed
cygnusagent.sinks.mysql-sink.type = es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionMySQLSink
# the FQDN/IP address where the MySQL server runs
cygnusagent.sinks.mysql-sink.mysql_host = 127.0.0.1
# the port where the MySQL server listens for incoming connections
cygnusagent.sinks.mysql-sink.mysql_port = 3306
# a valid user in the MySQL server
cygnusagent.sinks.mysql-sink.mysql_username = ***
# password for the user above
cygnusagent.sinks.mysql-sink.mysql_password = ***
# how the attributes are stored, either per row or per column (row, column)
cygnusagent.sinks.mysql-sink.attr_persistence = column
#=============================================
# hdfs-channel configuration
# channel type (must not be changed)
cygnusagent.channels.hdfs-channel.type = memory
# capacity of the channel
cygnusagent.channels.hdfs-channel.capacity = 1000
# maximum number of events that can be processed per transaction
cygnusagent.channels.hdfs-channel.transactionCapacity = 100
#=============================================
# mysql-channel configuration
# channel type (must not be changed)
cygnusagent.channels.mysql-channel.type = memory
# capacity of the channel
cygnusagent.channels.mysql-channel.capacity = 1000
# maximum number of events that can be processed per transaction
cygnusagent.channels.mysql-channel.transactionCapacity = 100
Any hints?
Thanks
I believe it's because you are using the parameter column in your configuration for OrionMySQLSink:
# how the attributes are stored, either per row or per column (row, column)
cygnusagent.sinks.mysql-sink.attr_persistence = column
The documentation states that when using column, the database and tables must be created before starting Cygnus. When you use row, the table with its fixed 8 fields will be created automatically before the first insert.
Within tables, we can find two options:

Fixed 8-field rows, as usual: recvTimeTs, recvTime, entityId, entityType, attrName, attrType, attrValue and attrMd. These tables (and the databases) are created at execution time if the table doesn't exist prior to the row insertion. Regarding attrValue, in its simplest form this value is just a string, but since Orion 0.11.0 it can be a JSON object or JSON array. Regarding attrMd, it contains a string serialization of the metadata array for the attribute in JSON (if the attribute has no metadata, an empty array [] is inserted).

Two columns per entity attribute (one for the value and another for the metadata), plus an additional column for the reception time of the data (recv_time). This kind of table (and the databases) must be provisioned prior to the execution of Cygnus, because each entity may have a different number of attributes, and the notifications must ensure a value for each attribute is notified.

The behaviour of the connector regarding the internal representation of the data is governed by the configuration parameter attr_persistence, whose values can be row or column.
As Petark suggests, "column mode" does not automatically create the tables; these must be provisioned in advance by you. Why? The reason is that, depending on the subscriptions you make to Orion CB in order to send notifications to Cygnus, a notification may sometimes include a few attribute updates, sometimes a very different set of attributes, etc.
For instance, let's consider an entity called "car" with attributes "speed", "location" and "oil-level". Then you may say "notify Cygnus each time the speed changes, but send only the speed value". At the same time you may say "notify Cygnus each time the oil level changes, and send all the attributes' values" as well. If the car starts moving, and only the speed and the location change, but not the oil level, then only the speed updates will be notified to Cygnus, which has no way of knowing about the other attributes at that time.
Thus, if you want rows of data having all 3 attributes, then you have to provision the table by yourself (see the provisioning sketch below). By the way, such subscriptions will lead to a lot of "speed-value,null,null" rows in your tables.
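For instance, a minimal provisioning sketch for the entity in the question's log (database ubikwa, table ubktemp_ubk_temp_1_ubk-temp). The column names are an assumption based on the quoted documentation and the attrs/metadata in the OrionMySQLSink log line, and the MySQL credentials are placeholders, so double-check both against your Cygnus version:

# Hypothetical column-mode provisioning: one value column and one metadata
# column per attribute, plus the reception-time column
mysql -u root -p <<'EOF'
CREATE DATABASE IF NOT EXISTS ubikwa;
CREATE TABLE IF NOT EXISTS ubikwa.`ubktemp_ubk_temp_1_ubk-temp` (
    recv_time TEXT,
    temperature TEXT,
    temperature_md TEXT
);
EOF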
The difference with "row mode" is that, independently of the number of notified attributes, a new row will be added for each notified attribute, all rows having the same format (entityId, entityType, attrName, attrType, attrValue, attrMd); this format can be automatically provisioned by Cygnus.
Good day!
I have built a small IoT device that monitors the conditions inside a specific enclosure using an ESP32 and a couple of sensors. I want to monitor that data by publishing it to the ThingSpeak cloud, then writing it to InfluxDB with Telegraf and finally using the InfluxDB data source in Grafana to visualize it.
So far I have made everything work flawlessly, with one small exception: one of the plugins in my Telegraf config fails with the error:
parsing metrics failed: Unable to convert field 'temperature' to type int: strconv.ParseInt: parsing "15.4": invalid syntax
The plugins are [[inputs.http]] and [[inputs.http.json_v2]], and what I am doing with them is authenticating against my ThingSpeak API and parsing the JSON output of my fields. Then, in my /etc/telegraf/telegraf.conf under [[inputs.http.json_v2.field]], I have added type = int, as otherwise Telegraf writes my metrics as strings into InfluxDB and the only way to visualize them is using either a table or a single stat, because the rest of the Flux queries fail with the error unsupported input type for mean aggregate: string. However, when I change to type = float in the config file I get a different error:
unprocessable entity: failure writing points to database: partial write: field type conflict: input field "temperature" on measurement "sensorData" is type float, already exists as type string dropped=1
I have a suspicion that I have misconfigured the parser plugin; however, after hours of debugging I couldn't come up with a solution.
Some information that might be of use:
Telegraf version: Telegraf 1.24.2
Influxdb version: InfluxDB v2.4.0
Please see below for my telegraf.conf as well as the error messages.
Any help would be highly appreciated! (:
[agent]
interval = "10s"
round_interval = true
metric_batch_size = 1000
metric_buffer_limit = 1000
collection_jitter = "0s"
flush_interval = "10s"
flush_jitter = "0s"
precision = ""
hostname = ""
omit_hostname = false
[[outputs.influxdb_v2]]
urls = ["http://localhost:8086"]
token = "XXXXXXXX"
organization = "XXXXXXXXX"
bucket = "sensor"
[[inputs.http]]
urls = [
"https://api.thingspeak.com/channels/XXXXX/feeds.json?api_key=XXXXXXXXXX&results=2"
]
name_override = "sensorData"
tagexclude = ["url", "host"]
data_format = "json_v2"
## HTTP method
method = "GET"
[[inputs.http.json_v2]]
[[inputs.http.json_v2.field]]
path = "feeds.1.field1"
rename = "temperature"
type = "int" #Error message 1
#type = "float" #Error message 2
Error when type = "float":
me@myserver:/etc/telegraf$ telegraf -config telegraf.conf --debug
2022-10-16T00:31:43Z I! Starting Telegraf 1.24.2
2022-10-16T00:31:43Z I! Available plugins: 222 inputs, 9 aggregators, 26 processors, 20
parsers, 57 outputs
2022-10-16T00:31:43Z I! Loaded inputs: http
2022-10-16T00:31:43Z I! Loaded aggregators:
2022-10-16T00:31:43Z I! Loaded processors:
2022-10-16T00:31:43Z I! Loaded outputs: influxdb_v2
2022-10-16T00:31:43Z I! Tags enabled: host=myserver
2022-10-16T00:31:43Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"myserver",
Flush Interval:10s
2022-10-16T00:31:43Z D! [agent] Initializing plugins
2022-10-16T00:31:43Z D! [agent] Connecting outputs
2022-10-16T00:31:43Z D! [agent] Attempting connection to [outputs.influxdb_v2]
2022-10-16T00:31:43Z D! [agent] Successfully connected to outputs.influxdb_v2
2022-10-16T00:31:43Z D! [agent] Starting service inputs
2022-10-16T00:31:53Z E! [outputs.influxdb_v2] Failed to write metric to sensor (will be
dropped: 422 Unprocessable Entity): unprocessable entity: failure writing points to
database: partial write: field type conflict: input field "temperature" on measurement
"sensorData" is type float, already exists as type string dropped=1
2022-10-16T00:31:53Z D! [outputs.influxdb_v2] Wrote batch of 1 metrics in 8.9558ms
2022-10-16T00:31:53Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
Error when type = "int"
me#myserver:/etc/telegraf$ telegraf -config telegraf.conf --debug
2022-10-16T00:37:05Z I! Starting Telegraf 1.24.2
2022-10-16T00:37:05Z I! Available plugins: 222 inputs, 9 aggregators, 26 processors, 20
parsers, 57 outputs
2022-10-16T00:37:05Z I! Loaded inputs: http
2022-10-16T00:37:05Z I! Loaded aggregators:
2022-10-16T00:37:05Z I! Loaded processors:
2022-10-16T00:37:05Z I! Loaded outputs: influxdb_v2
2022-10-16T00:37:05Z I! Tags enabled: host=myserver
2022-10-16T00:37:05Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"myserver",
Flush Interval:10s
2022-10-16T00:37:05Z D! [agent] Initializing plugins
2022-10-16T00:37:05Z D! [agent] Connecting outputs
2022-10-16T00:37:05Z D! [agent] Attempting connection to [outputs.influxdb_v2]
2022-10-16T00:37:05Z D! [agent] Successfully connected to outputs.influxdb_v2
2022-10-16T00:37:05Z D! [agent] Starting service inputs
2022-10-16T00:37:10Z E! [inputs.http] Error in plugin:
[url=https://api.thingspeak.com/channels/XXXXXX/feeds.json?
api_key=XXXXXXX&results=2]: parsing metrics failed: Unable to convert field
'temperature' to type int: strconv.ParseInt: parsing "15.3": invalid syntax
Fixed it by leaving type = "float" under [[inputs.http.json_v2.field]] in telegraf.conf and creating a NEW bucket with a new API key in InfluxDB.
The issue was that the bucket sensor that I had previously defined in my telegraf.conf already had the field temperature created in my InfluxDB database from previous tries, with its type set to string, and an existing field's type cannot be overwritten by later writes (hence the float vs. string conflict).
As soon as I deleted all pre-existing buckets, everything started working as expected; a sketch of that clean-up is shown below.
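For reference, a minimal sketch of that clean-up with the influx CLI (the bucket and organization names are the placeholders from my config; adapt them to yours):

# Drop the bucket holding the conflicting 'temperature' field,
# then recreate it empty so Telegraf can write floats from scratch
influx bucket delete --name sensor --org XXXXXXXXX
influx bucket create --name sensor --org XXXXXXXXX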
Recently I started working with NGSIv2, but Cygnus started crashing.
Could it be that I have something not configured right? With v1 it works flawlessly.
My startup trace with the error is here: http://pastebin.com/nP2am8GW
The error said: Error appending event to channel. Channel might be full. Consider increasing the channel capacity or make sure the sinks perform faster.
time=2016-05-09T05:58:28.866CDT | lvl=WARN | trans=1462791485-602-0000000000 | srv=papel-club | subsrv=events | function=intercept | comp=Cygnus | msg=com.telefonica.iot.cygnus.interceptors.GroupingInterceptor[135] : No context responses within the notified entity, nothing is done
time=2016-05-09T05:58:28.867CDT | lvl=WARN | trans=1462791485-602-0000000000 | srv=papel-club | subsrv=events | function=doPost | comp=Cygnus | msg=org.apache.flume.source.http.HTTPSource$FlumeHTTPServlet[203] : Error appending event to channel. Channel might be full. Consider increasing the channel capacity or make sure the sinks perform faster.
org.apache.flume.ChannelException: Unable to put batch on required channel: org.apache.flume.channel.MemoryChannel{name: ckan-channel}
at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:200)
at org.apache.flume.source.http.HTTPSource$FlumeHTTPServlet.doPost(HTTPSource.java:201)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:725)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:814)
at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401)
at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:945)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:756)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Caused by: java.lang.IllegalArgumentException: put() called with null event!
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
at org.apache.flume.channel.BasicTransactionSemantics.put(BasicTransactionSemantics.java:89)
at org.apache.flume.channel.BasicChannelSemantics.put(BasicChannelSemantics.java:80)
at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:189)
... 16 more
As the error said, I increased the channel capacity. Here is my agent config:
cygnusagent.sources = http-source
cygnusagent.sinks = ckan-sink
cygnusagent.channels = ckan-channel
cygnusagent.sources.http-source.channels = ckan-channel
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
cygnusagent.sources.http-source.port = 5050
cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.OrionRestHandler
cygnusagent.sources.http-source.handler.notification_target = /notify
cygnusagent.sources.http-source.handler.default_service = papel-club
cygnusagent.sources.http-source.handler.default_service_path = /events
cygnusagent.sources.http-source.handler.events_ttl = 5
cygnusagent.sources.http-source.interceptors = ts gi
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules_day.conf
cygnusagent.channels.ckan-channel.type = memory
cygnusagent.channels.ckan-channel.capacity = 100000
cygnusagent.channels.ckan-channel.transactionCapacity = 100000
# ============================================
# OrionCKANSink configuration
# channel name from where to read notification events
cygnusagent.sinks.ckan-sink.channel = ckan-channel
# sink class, must not be changed
cygnusagent.sinks.ckan-sink.type = com.telefonica.iot.cygnus.sinks.OrionCKANSink
# true if the grouping feature is enabled for this sink, false otherwise
cygnusagent.sinks.ckan-sink.enable_grouping = true
# true if lower case is wanted to be forced in all the element names, false otherwise
cygnusagent.sinks.ckan-sink.enable_lowercase = false
# the CKAN API key to use
cygnusagent.sinks.ckan-sink.api_key = xxxx
# the FQDN/IP address for the CKAN API endpoint
cygnusagent.sinks.ckan-sink.ckan_host = ckan-demo.ckan.io
# the port for the CKAN API endpoint
cygnusagent.sinks.ckan-sink.ckan_port = 80
# Orion URL used to compose the resource URL with the convenience operation URL to query it
cygnusagent.sinks.ckan-sink.orion_url = http://localhost:1026
# how the attributes are stored, either per row or per column (row, column)
cygnusagent.sinks.ckan-sink.attr_persistence = column
# enable SSL for secure Http transportation; 'true' or 'false'
cygnusagent.sinks.ckan-sink.ssl = false
# number of notifications to be included within a processing batch
cygnusagent.sinks.ckan-sink.batch_size = 100
# timeout for batch accumulation
cygnusagent.sinks.ckan-sink.batch_timeout = 60
# number of retries upon persistence error
cygnusagent.sinks.ckan-sink.batch_ttl = 10
Thanks.
For the time being, NGSIv2 is not supported in Cygnus. It is expected to be implemented, but it has not been scheduled yet.
EDIT: note also that you can use NGSIv2 to create/update entities at Orion and still have notifications in NGSIv1 if you either:
Create the subscription using NGSIv1 operations
Create the subscription using NGSIv2 operations with attrsFormat equal to legacy (see the sketch after this list). Have a look at the more detailed information here.
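For the second option, a minimal sketch of such a subscription (the Orion endpoint, the Cygnus notification URL and the entity pattern are assumptions; adapt them to your deployment):

# NGSIv2 subscription whose notifications are sent in the NGSIv1 (legacy) format
curl -X POST http://localhost:1026/v2/subscriptions \
  -H 'Content-Type: application/json' \
  -d '{
    "subject": { "entities": [ { "idPattern": ".*" } ] },
    "notification": {
      "http": { "url": "http://localhost:5050/notify" },
      "attrsFormat": "legacy"
    }
  }'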
I installed Cygnus version 0.8.2 on a FIWARE instance based on the CentOS-7-x64 image using:
sudo yum install cygnus
I configured my agent as the following:
cygnusagent.sources = http-source
cygnusagent.sinks = mongo-sink
cygnusagent.channels = mongo-channel
#=============================================
# source configuration
# channel name where to write the notification events
cygnusagent.sources.http-source.channels = mongo-channel
# source class, must not be changed
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnusagent.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.OrionRestHandler
# URL target
cygnusagent.sources.http-source.handler.notification_target = /notify
# Default service (service semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service = def_serv
# Default service path (service path semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service_path = def_servpath
# Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
cygnusagent.sources.http-source.handler.events_ttl = 10
# Source interceptors, do not change
cygnusagent.sources.http-source.interceptors = ts gi
# TimestampInterceptor, do not change
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
# GroupinInterceptor, do not change
cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
# Grouping rules for the GroupingInterceptor, put the right absolute path to the file if necessary
# See the doc/design/interceptors document for more details
cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf
# ============================================
# OrionMongoSink configuration
# sink class, must not be changed
cygnusagent.sinks.mongo-sink.type = com.telefonica.iot.cygnus.sinks.OrionMongoSink
# channel name from where to read notification events
cygnusagent.sinks.mongo-sink.channel = mongo-channel
# FQDN/IP:port where the MongoDB server runs (standalone case) or comma-separated list of FQDN/IP:port pairs where the MongoDB replica set members run
cygnusagent.sinks.mongo-sink.mongo_hosts = 127.0.0.1:27017
# a valid user in the MongoDB server (or empty if authentication is not enabled in MongoDB)
cygnusagent.sinks.mongo-sink.mongo_username =
# password for the user above (or empty if authentication is not enabled in MongoDB)
cygnusagent.sinks.mongo-sink.mongo_password =
# prefix for the MongoDB databases
cygnusagent.sinks.mongo-sink.db_prefix = kura_
# prefix for the MongoDB collections
cygnusagent.sinks.mongo-sink.collection_prefix = kura_
# true if collection names are based on a hash, false for human readable collections
cygnusagent.sinks.mongo-sink.should_hash = false
#=============================================
# mongo-channel configuration
# channel type (must not be changed)
cygnusagent.channels.mongo-channel.type = memory
# capacity of the channel
cygnusagent.channels.mongo-channel.capacity = 1000
# maximum number of events that can be processed per transaction
cygnusagent.channels.mongo-channel.transactionCapacity = 100
I tried to test it locally using the following curl command:
URL=$1
curl $URL -v -s -S --header 'Content-Type: application/json' --header 'Accept: application/json' --header "Fiware-Service: qsg" --header "Fiware-ServicePath: testsink" -d @- <<EOF
{
"subscriptionId" : "51c0ac9ed714fb3b37d7d5a8",
"originator" : "localhost",
"contextResponses" : [
{
"contextElement" : {
"attributes" : [
{
"name" : "temperature",
"type" : "float",
"value" : "26.5"
}
],
"type" : "Room",
"isPattern" : "false",
"id" : "Room1"
},
"statusCode" : {
"code" : "200",
"reasonPhrase" : "OK"
}
}
]
}
EOF
but I got this exception:
2015-10-06 14:38:50,138 (1251445230@qtp-1186065012-0) [INFO - com.telefonica.iot.cygnus.handlers.OrionRestHandler.getEvents(OrionRestHandler.java:150)] Starting transaction (1444142307-244-0000000000)
2015-10-06 14:38:50,140 (1251445230@qtp-1186065012-0) [WARN - com.telefonica.iot.cygnus.handlers.OrionRestHandler.getEvents(OrionRestHandler.java:180)] Bad HTTP notification (curl/7.29.0 user agent not supported)
2015-10-06 14:38:50,140 (1251445230@qtp-1186065012-0) [WARN - org.apache.flume.source.http.HTTPSource$FlumeHTTPServlet.doPost(HTTPSource.java:186)] Received bad request from client.
org.apache.flume.source.http.HTTPBadRequestException: curl/7.29.0 user agent not supported
at com.telefonica.iot.cygnus.handlers.OrionRestHandler.getEvents(OrionRestHandler.java:181)
at org.apache.flume.source.http.HTTPSource$FlumeHTTPServlet.doPost(HTTPSource.java:184)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:725)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:814)
at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401)
at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:945)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:756)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Any idea of what can be the cause of this exception?
Cygnus version <= 0.8.2 checks the HTTP headers, only accepting user agents starting with orion. This has been fixed in 0.9.0 (this is the particular issue). Thus, you have two options:
Avoid sending such a user-agent header. According to the curl documentation, you can use the -A, --user-agent <agent string> option in order to modify the user agent and send something starting with orion (e.g. orion/0.24.0); see the example after this list.
Move to Cygnus 0.9.0 (so that you don't have to install it from the sources, I'll upload an RPM to the FIWARE repo later today).
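For the first option, a minimal sketch based on the test command in the question (the URL and the payload file are assumptions; notification.json would hold the JSON body shown above):

# Impersonate an Orion user agent so Cygnus <= 0.8.2 accepts the notification
curl http://localhost:5050/notify -v -s -S \
  -A 'orion/0.24.0' \
  --header 'Content-Type: application/json' \
  --header 'Fiware-Service: qsg' \
  --header 'Fiware-ServicePath: testsink' \
  -d @notification.json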
I configured my cygnusagent to use the MySQL sink as below, and I used the notification script from the tutorial [1] to test it, but nothing has changed in my database; no new database has been created.
Any ideas of what I may have missed? Thanks!
cygnusagent.sources = http-source
cygnusagent.sinks = mysql-sink
cygnusagent.channels = mysql-channel
#=============================================
# source configuration
# channel name where to write the notification events
cygnusagent.sources.http-source.channels = mysql-channel
# source class, must not be changed
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnusagent.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.OrionRestHandler
# URL target
cygnusagent.sources.http-source.handler.notification_target = /notify
# Default service (service semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service = def_serv
# Default service path (service path semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service_path = def_servpath
# Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
cygnusagent.sources.http-source.handler.events_ttl = 10
# Source interceptors, do not change
cygnusagent.sources.http-source.interceptors = ts
# TimestampInterceptor, do not change
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
# GroupinInterceptor, do not change
#cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
# Grouping rules for the GroupingInterceptor, put the right absolute path to the file if necessary
# See the doc/design/interceptors document for more details
#cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf
# ============================================
# OrionMySQLSink configuration
# channel name from where to read notification events
cygnusagent.sinks.mysql-sink.channel = mysql-channel
# sink class, must not be changed
cygnusagent.sinks.mysql-sink.type = com.telefonica.iot.cygnus.sinks.OrionMySQLSink
# the FQDN/IP address where the MySQL server runs
cygnusagent.sinks.mysql-sink.mysql_host = 192.168.1.107
# the port where the MySQL server listens for incoming connections
cygnusagent.sinks.mysql-sink.mysql_port = 3306
# a valid user in the MySQL server
cygnusagent.sinks.mysql-sink.mysql_username = root
# password for the user above
cygnusagent.sinks.mysql-sink.mysql_password = poiu
# how the attributes are stored, either per row or per column (row, column)
cygnusagent.sinks.mysql-sink.attr_persistence = row
#=============================================
# mysql-channel configuration
# channel type (must not be changed)
cygnusagent.channels.mysql-channel.type = memory
# capacity of the channel
cygnusagent.channels.mysql-channel.capacity = 1000
# maximum number of events that can be processed per transaction
cygnusagent.channels.mysql-channel.transactionCapacity = 100
Here are my Cygnus logs after the reception of data:
15/07/16 16:42:34 INFO handlers.OrionRestHandler: Starting transaction (1437057740-95-0000000000)
15/07/16 16:42:34 INFO handlers.OrionRestHandler: Received data ({ "subscriptionId" : "51c0ac9ed714fb3b37d7d5a8", "originator" : "localhost", "contextResponses" : [ { "contextElement" : { "attributes" : [ { "name" : "temperature", "type" : "centigrade", "value" : "26.5" } ], "type" : "Room", "isPattern" : "false", "id" : "Room1" }, "statusCode" : { "code" : "200", "reasonPhrase" : "OK" } } ]})
15/07/16 16:42:34 INFO handlers.OrionRestHandler: Event put in the channel (id=119782869, ttl=10)
15/07/16 16:42:34 INFO sinks.OrionSink: Event got from the channel (id=119782869, headers={content-type=application/json, fiware-service=room, fiware-servicepath=room, ttl=10, transactionId=1437057740-95-0000000000, timestamp=1437057754877}, bodyLength=612)
15/07/16 16:42:34 WARN sinks.OrionSink:
15/07/16 16:42:34 INFO sinks.OrionSink: Finishing transaction (1437057740-95-0000000000)
[1] https://github.com/telefonicaid/fiware-cygnus/blob/release/0.8.2/doc/quick_start_guide.md
You have commented out this part of the configuration file:
# GroupinInterceptor, do not change
#cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
# Grouping rules for the GroupingInterceptor, put the right absolute path to the file if necessary
# See the doc/design/interceptors document for more details
#cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf
You must uncomment those parameters, even if you are not going to use the Grouping Rules feature (simply leave /usr/cygnus/conf/grouping_rules.conf blank); a corrected snippet is sketched below.
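For reference, this is how that block should look once uncommented (note that gi must also be listed in the source's interceptors property; the file path is the one from your config):

# Source interceptors, do not change
cygnusagent.sources.http-source.interceptors = ts gi
# TimestampInterceptor, do not change
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
# GroupingInterceptor, do not change
cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
# Grouping rules for the GroupingInterceptor, put the right absolute path to the file if necessary
cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf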
I have an issue related to the ngsi2cosmos data flow. Everything works fine when persisting the information received in Orion into the public instance of Cosmos, but the destination folder and file name are both "null".
Simple test as follows:
I create a brand new NGSIEntity with these headers added: Fiware-Service: myservice and Fiware-ServicePath: /my
I add a new subscription with Cygnus as the reference endpoint.
I send an update to the previously created NGSIEntity.
When I check my user space in Cosmos, I see that the following path has been created: /user/myuser/myservice/null/null.txt
The file content is OK; every piece of updated info in Orion has been correctly persisted into it. The problem is with the folder and file names.
I can't make it work properly. Isn't it supposed to use entityId and entityType for the folder and file naming?
Component versions:
Orion version: contextBroker-0.19.0-1.x86_64
Cygnus version: cygnus-0.5-91.g3eb100e.x86_64
Cosmos: global instance
Cygnus conf file:
cygnusagent.sources = http-source
cygnusagent.sinks = hdfs-sink
cygnusagent.channels = hdfs-channel
#=============================================
# source configuration
# channel name where to write the notification events
cygnusagent.sources.http-source.channels = hdfs-channel
# source class, must not be changed
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnusagent.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnusagent.sources.http-source.handler = es.tid.fiware.fiwareconnectors.cygnus.handlers.OrionRestHandler
# URL target
cygnusagent.sources.http-source.handler.notification_target = /notify
# Default organization (organization semantic depend on the persistence sink)
cygnusagent.sources.http-source.handler.default_organization = org42
# Number of channel re-injection retries before a Flume event is definitely discarded
cygnusagent.sources.http-source.handler.events_ttl = 10
# Management interface port (FIXME: temporal location for this parameter)
cygnusagent.sources.http-source.handler.management_port = 8081
# Source interceptors, do not change
cygnusagent.sources.http-source.interceptors = ts
# Timestamp interceptor, do not change
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
# Destination extractor interceptor, do not change
cygnusagent.sources.http-source.interceptors.de.type = es.tid.fiware.fiwreconnectors.cygnus.interceptors.DestinationExtractor$Builder
# Matching table for the destination extractor interceptor, do not change
cygnusagent.sources.http-source.interceptors.de.matching_table = matching_table.conf
# ============================================
# OrionHDFSSink configuration
# channel name from where to read notification events
cygnusagent.sinks.hdfs-sink.channel = hdfs-channel
# sink class, must not be changed
cygnusagent.sinks.hdfs-sink.type = es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionHDFSSink
# Comma-separated list of FQDN/IP address regarding the Cosmos Namenode endpoints
cygnusagent.sinks.hdfs-sink.cosmos_host = 130.206.80.46
# port of the Cosmos service listening for persistence operations; 14000 for httpfs, 50070 for webhdfs and free choice for infinity
cygnusagent.sinks.hdfs-sink.cosmos_port = 14000
# default username allowed to write in HDFS
cygnusagent.sinks.hdfs-sink.cosmos_default_username = myuser
# default password for the default username
cygnusagent.sinks.hdfs-sink.cosmos_default_password = mypassword
# HDFS backend type (webhdfs, httpfs or infinity)
cygnusagent.sinks.hdfs-sink.hdfs_api = httpfs
# how the attributes are stored, either per row or per column (row, column)
cygnusagent.sinks.hdfs-sink.attr_persistence = column
# prefix for the database and table names, empty if no prefix is desired
cygnusagent.sinks.hdfs-sink.naming_prefix =
# Hive FQDN/IP address of the Hive server
cygnusagent.sinks.hdfs-sink.hive_host = 130.206.80.46
# Hive port for Hive external table provisioning
cygnusagent.sinks.hdfs-sink.hive_port = 10000
#=============================================
# hdfs-channel configuration
# channel type (must not be changed)
cygnusagent.channels.hdfs-channel.type = memory
# capacity of the channel
cygnusagent.channels.hdfs-channel.capacity = 1000
# maximum number of events that can be processed per transaction
cygnusagent.channels.hdfs-channel.transactionCapacity = 100
I think you should configure the matching table of Cygnus to define the path and file name.
That file lives in the same path as the Cygnus agent conf file.
You can follow this example:
# integer id|comma-separated fields|regex to be applied to the fields concatenation|destination|dataset
#
# The available "dictionary" of fields is:
# - entityId
# - entityType
# - servicePath
1|entityId,entityType|Room\.(\d*)Room|numeric_rooms|rooms
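With a rule like this, a notification for an entity with id Room.1 and type Room concatenates to Room.1Room, which matches the regex, so (as I read the matching-table semantics) the data would be persisted under the rooms dataset folder in a numeric_rooms file instead of null/null.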