ContextBroker subscriptions Error - fiware

I've updated Cygnus from version 0.13 to 1.7.0 by installing cygnus-ngsi following this tutorial:
https://github.com/telefonicaid/fiware-cygnus/tree/master/cygnus-ngsi
The subscription shows an error:
[
    {
        "id": "59d38a92dbaa1e477aef9c00",
        "description": "A subscription to get info about pruebas",
        "status": "failed",
        "subject": {
            "entities": [
                {
                    "id": "pruebas",
                    "type": "pruebas"
                }
            ],
            "condition": {
                "attrs": [
                    "pressure"
                ]
            }
        },
        "notification": {
            "timesSent": 2,
            "lastNotification": "2017-10-03T13:03:43.00Z",
            "attrs": [
                "temperature",
                "pressure"
            ],
            "attrsFormat": "legacy",
            "http": {
                "url": "http://localhost:5050/notify"
            },
            "lastFailure": "2017-10-03T13:03:43.00Z"
        }
    }
]
Viewing the contextBroker log shows the following:
$pp[328]:notificationError | msg=Raising alarm NotificationError http://localhost:5050/notify: (curl_easy_perform failed: Couldn't connect to server)
I have contextBroker on the same machine as Cygnus, so I have already tried changing the notification IP both to the server address and to localhost, and neither of them works.
With version 0.13 it does work with localhost.
What could be the problem?
It never even gets as far as the Cygnus configuration, because the contextBroker cannot connect to it.
Greetings and thank you.
EDIT1:
I tested on the FIWARE Lab machines, removing the pre-installed Cygnus 0.13 with yum remove cygnus. Then I installed 1.7 with yum install cygnus-ngsi, which installed two packages, ngsi and common.
Restarting the service with service cygnus restart shows the following:
Dependencies Resolved
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
cygnus-ngsi x86_64 1.7.1-0.g9df0d4d fiware 74 M
Installing for dependencies:
cygnus-common x86_64 1.7.1-0.g9df0d4d fiware 128 M
Transaction Summary
================================================================================
Install 2 Package(s)
Total size: 202 M
Installed size: 223 M
Is this ok [y/N]: y
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
[INFO] Creating cygnus user
Installing : cygnus-common-1.7.1-0.g9df0d4d.x86_64 1/2
[INFO] Creating log directory
Done
Installing : cygnus-ngsi-1.7.1-0.g9df0d4d.x86_64 2/2
Verifying : cygnus-common-1.7.1-0.g9df0d4d.x86_64 1/2
Verifying : cygnus-ngsi-1.7.1-0.g9df0d4d.x86_64 2/2
Installed:
cygnus-ngsi.x86_64 0:1.7.1-0.g9df0d4d
Dependency Installed:
cygnus-common.x86_64 0:1.7.1-0.g9df0d4d
Complete!
[centos@centos6 cygnus]$ sudo service cygnus restart
There aren't any instance of Cygnus running [ OK ]
Starting Cygnus 1... [ OK ]
When I try the same steps on my own server, service cygnus restart starts two Cygnus instances (1 and 2), unlike on the FIWARE Lab machine above, which has only one, and it therefore reports that port 8081 is already in use.
Dependencies Resolved
============================================================================================================================================================================
Package Arch Version Repository Size
============================================================================================================================================================================
Installing:
cygnus-ngsi x86_64 1.7.1-0.g9df0d4d fiware 74 M
Installing for dependencies:
cygnus-common x86_64 1.7.1-0.g9df0d4d fiware 128 M
Transaction Summary
============================================================================================================================================================================
Install 2 Package(s)
Total download size: 202 M
Installed size: 223 M
Is this ok [y/N]: y
Downloading Packages:
(1/2): cygnus-common_hadoopcore_1.2.1-1.7.1-0.g9df0d4d.x86_64.rpm | 128 MB 00:14
(2/2): cygnus-ngsi-1.7.1-0.g9df0d4d.x86_64.rpm | 74 MB 00:07
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 8.9 MB/s | 202 MB 00:22
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
[INFO] Creating cygnus user
Installing : cygnus-common-1.7.1-0.g9df0d4d.x86_64 1/2
[INFO] Creating log directory
Done
Installing : cygnus-ngsi-1.7.1-0.g9df0d4d.x86_64 2/2
Verifying : cygnus-common-1.7.1-0.g9df0d4d.x86_64 1/2
Verifying : cygnus-ngsi-1.7.1-0.g9df0d4d.x86_64 2/2
Installed:
cygnus-ngsi.x86_64 0:1.7.1-0.g9df0d4d
Dependency Installed:
cygnus-common.x86_64 0:1.7.1-0.g9df0d4d
Complete!
[root@UAL-IoF2020 conf]# ls
agent_1.conf agent_ngsi.conf.template cygnus_instance_2.conf grouping_rules_2.conf krb5_login.conf README-cygnus-common.md
agent_3.conf cartodb_keys.conf.template cygnus_instance.conf.template grouping_rules.conf.template log4j.properties README-cygnus-ngsi.md
agent.conf.template cygnus_instance_1.conf flume-env.sh.template krb5.conf.template name_mappings.conf.template
[root@UAL-IoF2020 conf]# service cygnus restart
There aren't any instance of Cygnus running [ OK ]
Starting Cygnus 1... [ OK ]
Starting Cygnus 2... [ OK ]
[root@UAL-IoF2020 conf]#
Is it possible that this is the problem, i.e. that my NGSI agent is not being recognized and it is cygnus-common that is occupying port 8081? Or is this normal?
Cygnus log:
time=2017-10-03T21:51:09.326Z | lvl=INFO | corr= | trans= | srv= | subsrv= | comp=cygnusagent | op=main | msg=com.telefonica.iot.cygnus.nodes.CygnusApplication[301] : Starting a Jetty server listening on 0.0.0.0:8081 (Management Interface)
time=2017-10-03T21:51:09.381Z | lvl=WARN | corr= | trans= | srv= | subsrv= | comp=cygnusagent | op=warn | msg=org.mortbay.log.Slf4jLog[76] : failed SelectChannelConnector@0.0.0.0:8081: java.net.BindException: Address already in use
time=2017-10-03T21:51:09.381Z | lvl=WARN | corr= | trans= | srv= | subsrv= | comp=cygnusagent | op=warn | msg=org.mortbay.log.Slf4jLog[76] : failed Server@52992ace: java.net.BindException: Address already in use
time=2017-10-03T21:51:09.381Z | lvl=FATAL | corr= | trans= | srv= | subsrv= | comp=cygnusagent | op=run | msg=com.telefonica.iot.cygnus.http.JettyServer[90] : Fatal error running the Management Interface. Details=Address already in use
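To see which process is actually holding port 8081, and whether it is cygnus-common or my NGSI agent, I guess a check like this would tell (just a sketch; each instance is expected to use its own ADMIN_PORT in its cygnus_instance_*.conf):
netstat -lnp | grep 8081     # PID/program bound to the admin port
ps -ef | grep -i cygnus      # how many Cygnus/Flume processes are running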
EDIT2
I have already solved the problem of the two Cygnus instances: two agent files, agent_1 and agent_2, had been created. I deleted one of them, and now service cygnus restart starts only one Cygnus. We are making progress.
But I still have the same problem with the subscriptions:
The contextBroker log indicates:
 
msg=Raising alarm NotificationError http://localhost:5050/notify: (curl_easy_perform failed: could not connect to server)
When I try:
[root@UAL-IoF2020 conf]# netstat -np | grep 5050
nothing comes back.
When I launch this:
[root@UAL-IoF2020 conf]# netstat -np | grep 1026
tcp 0 0 150.XXX.XXX.XXX:1026 XXX.XXX.XXX.XXX:50348 ESTABLISHED 5169 / contextBroker
I am trying to run one of the tests from your page:
./notification-json-simple.sh http://localhost:5050/notify myservice myservicepath
and it gives me the following error:
[root@UAL-IoF2020 ngsi-examples]# ./notification-json-simple.sh http://localhost:5050/notify myservice myservicepath
* About to connect() to localhost port 5050 (#0)
* Trying ::1... Connection refused
* Trying 127.0.0.1... Connection refused
* could not connect to host
* Closing connection #0
curl: (7) could not connect to host
It looks like nothing is listening on port 5050.
Any clue as to what might be going on?
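In case it helps, this is how I would check whether the Flume HTTP source ever started (just a sketch; the log path assumes the default /var/log/cygnus created by the RPM):
grep -iE 'http-source|5050|SocketConnector' /var/log/cygnus/cygnus.log | tail -n 20
This is my agent configuration: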
cygnus-ngsi.sources = http-source
cygnus-ngsi.sinks = mysql-sink
cygnus-ngsi.channels = mysql-channel
#=============================================
# source configuration
# channel name where to write the notification events
cygnus-ngsi.sources.http-source.channels = mysql-channel
# source class, must not be changed
cygnus-ngsi.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnus-ngsi.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnus-ngsi.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.NGSIRestHandler
# URL target
cygnus-ngsi.sources.http-source.handler.notification_target = /notify
# default service (service semantic depends on the persistence sink)
cygnus-ngsi.sources.http-source.handler.default_service = default
# default service path (service path semantic depends on the persistence sink)
cygnus-ngsi.sources.http-source.handler.default_service_path = /
# source interceptors, do not change
cygnus-ngsi.sources.http-source.interceptors = ts gi
# TimestampInterceptor, do not change
cygnus-ngsi.sources.http-source.interceptors.ts.type = timestamp
# GroupingInterceptor, do not change
cygnus-ngsi.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.NGSIGroupingInterceptor$Builder
# Grouping rules for the GroupingInterceptor, put the right absolute path to the file if necessary
# see the doc/design/interceptors document for more details
cygnus-ngsi.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf
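Once something is listening on 5050, I suppose the handler could also be exercised by simulating a notification by hand, something like this (a sketch only; the payload follows the NGSIv1 notifyContextRequest shape, and the subscriptionId, attribute value, service and service path are placeholders):
curl -v http://localhost:5050/notify \
  -H 'Content-Type: application/json' \
  -H 'Fiware-Service: myservice' \
  -H 'Fiware-ServicePath: /myservicepath' \
  -d '{
    "subscriptionId": "51c0ac9ed714fb3b37d7d5a8",
    "originator": "localhost",
    "contextResponses": [
      {
        "contextElement": {
          "type": "pruebas",
          "isPattern": "false",
          "id": "pruebas",
          "attributes": [
            { "name": "pressure", "type": "float", "value": "720" }
          ]
        },
        "statusCode": { "code": "200", "reasonPhrase": "OK" }
      }
    ]
  }'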

Do I have to install cygnus-common manually?
Reading the documentation (https://github.com/telefonicaid/fiware-cygnus/tree/master/cygnus-ngsi), it says:
Cygnus NGSI is based on Apache Flume, which is used through cygnus-common and which Cygnus NGSI depends on.
I think you need to install cygnus-common.

I have already solved the problem.
It was in the cygnus_instance_1.conf configuration file: the agent had to be renamed from cygnusagent to cygnus-ngsi.
For the installation, I simply followed these steps.
Install contextBroker.
Install MongoDB (needed for contextBroker to work).
Install cygnus-ngsi, which in turn automatically installs cygnus-common.
Copy agent_ngsi.conf.template and rename it to agent_1.conf.
Copy cygnus_instance.conf.template to cygnus_instance_1.conf.
In cygnus_instance_1.conf, rename the agent to cygnus-ngsi and point it to the configuration file created above (agent_1.conf).
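After those changes, the relevant lines look roughly like this (a sketch; the paths are the defaults from the RPM install):
grep -E '^(AGENT_NAME|CONFIG_FOLDER|CONFIG_FILE)' /usr/cygnus/conf/cygnus_instance_1.conf
AGENT_NAME=cygnus-ngsi
CONFIG_FOLDER=/usr/cygnus/conf
CONFIG_FILE=/usr/cygnus/conf/agent_1.conf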
All of this was done with yum install (RPM packages); I did not have to install Apache Flume or anything else by hand, it is all handled automatically.
I hope this helps, and thanks.

The last error log you have posted is the key: there is another running process listening on TCP port 5050, most probably a previous run of Cygnus that was not stopped/killed properly.
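A quick way to confirm and clean this up could be (a sketch; the PID is whatever netstat reports):
netstat -lnp | grep 5050     # which process, if any, owns TCP/5050
ps -ef | grep -i cygnus      # look for an old Cygnus/Flume java process
kill <PID>                   # stop it, then restart the Cygnus service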

Related

Error on PM2 - type.slice is not a function

I'm using PM2 version 5.1.2 with the app in cluster mode. I'm getting this error, and PM2 keeps restarting with the same error.
It seems to have something to do with cluster mode, but I couldn't find a solution.
PM2 | ===============================================================================
PM2 | --- PM2 global error caught ---------------------------------------------------
PM2 | Time : Tue Dec 21 2021 13:45:52 GMT+0000 (Coordinated Universal Time)
PM2 | type.slice is not a function
PM2 | TypeError: type.slice is not a function
PM2 | at EventEmitter.emit (/usr/lib/node_modules/pm2/node_modules/eventemitter2/lib/eventemitter2.js:343:77)
PM2 | at Worker.cluMessage (/usr/lib/node_modules/pm2/lib/God/ClusterMode.js:65:24)
PM2 | at Worker.emit (events.js:412:35)
PM2 | at Worker.emit (domain.js:537:15)
PM2 | at ChildProcess.<anonymous> (internal/cluster/worker.js:33:12)
PM2 | at ChildProcess.emit (events.js:400:28)
PM2 | at ChildProcess.emit (domain.js:537:15)
PM2 | at emit (internal/child_process.js:912:12)
PM2 | at processTicksAndRejections (internal/process/task_queues.js:83:21)
PM2 | ===============================================================================
PM2 | [PM2] Resurrecting PM2
PM2 config:
--- Daemon -------------------------------------------------
pm2d version : 5.1.2
node version : 14.18.2
node path : /usr/bin/pm2
argv : /usr/bin/node,/usr/lib/node_modules/pm2/lib/Daemon.js
argv0 : node
user : marcos
uid : 1001
gid : 1002
uptime : 0min
===============================================================================
--- CLI ----------------------------------------------------
local pm2 : 5.1.2
node version : 14.18.2
node path : /usr/bin/pm2
argv : /usr/bin/node,/usr/bin/pm2,report
argv0 : node
user : marcos
uid : 1001
gid : 1002
===============================================================================
--- System info --------------------------------------------
arch : x64
platform : linux
type : Linux
cpus : Intel(R) Xeon(R) CPU @ 2.20GHz
cpus nb : 2
freemem : 6384754688
totalmem : 8340885504
home : /home/marcos
===============================================================================

How to connect MySQL with Logstash?

This file was working well when I was on Windows, but now I have to restart my project on Linux and I don't know how to do it. See my logstash.conf below.
Command: ./logstash -f longstash.conf
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/test"
    jdbc_user => "root"
    jdbc_password => "password"
    jdbc_driver_library => "/usr/share/java/mysql-connector-java-5.1.45.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    schedule => "* * * * *"
    statement => "SELECT * FROM Pro WHERE last_modificate > :sql_last_value"
    use_column_value => true
    tracking_column => last_modificate
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    action => update
    document_id => "%{id}"
    doc_as_upsert => true
    index => "blog"
    document_type => "pro"
  }
}
And below is the error:
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2019-02-06 16:24:38.441 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2019-02-06 16:24:38.495 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.6.0"}
[WARN ] 2019-02-06 16:24:58.845 [Converge PipelineAction::Create<main>] elasticsearch - You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch hosts=>[//localhost:9200], doc_as_upsert=>true, action=>"update", index=>"blog", id=>"0d9b8021264f8db7c25bca76842096f28d088e42d8e84a573b39874bc2c38c19", document_id=>"%{id}", document_type=>"pro", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_0d66fa34-7e13-432a-9405-8084af971c1a", enable_metric=>true, charset=>"UTF-8">, workers=>1, manage_template=>true, template_name=>"logstash", template_overwrite=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>false, ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[INFO ] 2019-02-06 16:24:58.963 [Converge PipelineAction::Create<main>] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[INFO ] 2019-02-06 16:25:00.378 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[WARN ] 2019-02-06 16:25:01.241 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://localhost:9200/"}
[INFO ] 2019-02-06 16:25:02.692 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>6}
[WARN ] 2019-02-06 16:25:02.705 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[INFO ] 2019-02-06 16:25:02.805 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[INFO ] 2019-02-06 16:25:02.881 [Ruby-0-Thread-5: :1] elasticsearch - Using mapping template from {:path=>nil}
[INFO ] 2019-02-06 16:25:03.044 [Ruby-0-Thread-5: :1] elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"#timestamp"=>{"type"=>"date"}, "#version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[INFO ] 2019-02-06 16:25:03.726 [Converge PipelineAction::Create<main>] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x5e9c3d30 run>"}
[INFO ] 2019-02-06 16:25:03.868 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2019-02-06 16:25:05.345 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
Wed Feb 06 16:26:05 CET 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
Wed Feb 06 16:26:06 CET 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
[ERROR] 2019-02-06 16:26:06.396 [Ruby-0-Thread-15: :1] jdbc - Unable to connect to database. Tried 1 times {:error_message=>"Java::JavaSql::SQLException: Access denied for user 'root'@'localhost'"}
{ 2014 rufus-scheduler intercepted an error:
2014 job:
2014 Rufus::Scheduler::CronJob "* * * * *" {}
2014 error:
2014 2014
2014 Sequel::DatabaseConnectionError
2014 Java::JavaSql::SQLException: Access denied for user 'root'@'localhost'
2014 com.mysql.jdbc.SQLError.createSQLException(com/mysql/jdbc/SQLError.java:965)
2014 com.mysql.jdbc.MysqlIO.checkErrorPacket(com/mysql/jdbc/MysqlIO.java:3973)
2014 com.mysql.jdbc.MysqlIO.checkErrorPacket(com/mysql/jdbc/MysqlIO.java:3909)
2014 com.mysql.jdbc.MysqlIO.checkErrorPacket(com/mysql/jdbc/MysqlIO.java:873)
2014 com.mysql.jdbc.MysqlIO.proceedHandshakeWithPluggableAuthentication(com/mysql/jdbc/MysqlIO.java:1710)
2014 com.mysql.jdbc.MysqlIO.doHandshake(com/mysql/jdbc/MysqlIO.java:1226)
2014 com.mysql.jdbc.ConnectionImpl.coreConnect(com/mysql/jdbc/ConnectionImpl.java:2188)
2014 com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(com/mysql/jdbc/ConnectionImpl.java:2219)
2014 com.mysql.jdbc.ConnectionImpl.createNewIO(com/mysql/jdbc/ConnectionImpl.java:2014)
2014 com.mysql.jdbc.ConnectionImpl.<init>(com/mysql/jdbc/ConnectionImpl.java:776)
2014 com.mysql.jdbc.JDBC4Connection.<init>(com/mysql/jdbc/JDBC4Connection.java:47)
2014 java.lang.reflect.Constructor.newInstance(java/lang/reflect/Constructor.java:423)
The error is self-explanatory:
Java::JavaSql::SQLException: Access denied for user 'root'@'localhost'
Kindly check the configured credentials. On Linux, you have to go to the /usr/share/logstash directory and then run the following command.
sudo bin/logstash -f /etc/logstash/conf.d/yourfilename.conf
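Before re-running Logstash, the credentials can also be checked directly from the shell (a quick sanity check; user and host taken from the posted jdbc settings):
mysql -h localhost -u root -p -e 'SELECT 1;'
If this also fails with "Access denied", the problem is in MySQL's grants/password rather than in Logstash.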

Cygnus didn't update database

I want to subscribe Orion so that it sends notifications to Cygnus; Cygnus will then save all the data in a MySQL database. I use this script to subscribe to the speed attribute of car1:
(curl 130.206.118.44:1026/v1/subscribeContext -s -S --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'Fiware-Service: vehicles' --header 'Fiware-ServicePath: /4wheels' -d @- | python -mjson.tool) <<EOF
{
    "entities": [
        {
            "type": "car",
            "isPattern": "false",
            "id": "car1"
        }
    ],
    "attributes": [
        "speed",
        "oil_level"
    ],
    "reference": "http://192.168.1.49:5050/notify",
    "duration": "P1M",
    "notifyConditions": [
        {
            "type": "ONCHANGE",
            "condValues": [
                "speed"
            ]
        }
    ],
    "throttling": "PT1S"
}
EOF
But when I update the speed attribute of car 1, cygnus doesn't update the database.
Databases available:
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)
Some information about my cygnus service and my cygnus configuration (systemctl status cygnus):
cygnus.service - SYSV: cygnus
Loaded: loaded (/etc/rc.d/init.d/cygnus)
Active: active (exited) since Wed 2015-10-21 17:54:07 UTC; 8min ago
Process: 31566 ExecStop=/etc/rc.d/init.d/cygnus stop (code=exited, status=0/SUCCESS)
Process: 31588 ExecStart=/etc/rc.d/init.d/cygnus start (code=exited, status=0/SUCCESS)
Oct 21 17:54:05 cygnus systemd[1]: Starting SYSV: cygnus...
Oct 21 17:54:05 cygnus su[31593]: (to cygnus) root on none
Oct 21 17:54:07 cygnus cygnus[31588]: Starting Cygnus mysql... [ OK ]
Oct 21 17:54:07 cygnus systemd[1]: Started SYSV: cygnus.
agent_mysql.conf:
# main configuration
cygnusagent.sources = http-source
cygnusagent.sinks = mysql-sink
cygnusagent.channels = mysql-channel
# source configuration
cygnusagent.sources.http-source.channels = mysql-channel
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
cygnusagent.sources.http-source.port = 5050
cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.OrionRestHandler
# url target
cygnusagent.sources.http-source.handler.notification_target = /notify
cygnusagent.sources.http-source.handler.default_service = def_serv
cygnusagent.sources.http-source.handler.default_service_path = def_servpath
cygnusagent.sources.http-source.handler.events_ttl = 10
cygnusagent.sources.http-source.interceptors = ts gi
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf
#Orion MysqlSink Configuration
cygnusagent.sinks.mysql-sink.channel = mysql-channel
cygnusagent.sinks.mysql-sink.type = com.telefonica.iot.cygnus.sinks.OrionMySQLSink
cygnusagent.sinks.mysql-sink.enable_grouping = false
# mysqldb ip
cygnusagent.sinks.mysql-sink.mysql_host = 127.0.0.1
# mysqldb port
cygnusagent.sinks.mysql-sink.mysql_port = 3306
cygnusagent.sinks.mysql-sink.mysql_username = root
cygnusagent.sinks.mysql-sink.mysql_password = 12345
cygnusagent.sinks.mysql-sink.attr_persistence = column
cygnusagent.sinks.mysql-sink.table_type = table-by-destination
# configuracao do canal mysql
cygnusagent.channels.mysql-channel.type = memory
cygnusagent.channels.mysql-channel.capacity = 1000
cygnusagent.channels.mysql-channel.transactionCapacity = 100
After reading this question, I changed the line cygnusagent.sinks.mysql-sink.attr_persistence = column in my agent_mysql.conf to
cygnusagent.sinks.mysql-sink.attr_persistence = row and restarted the service. Then I updated the Orion entity, queried the database, and nothing had happened.
Cygnus log file: http://pastebin.com/B2FNKcVf
Note: My JAVA_HOME is set.
As you can see in the logs you posted, there is a problem with the Cygnus log file:
java.io.FileNotFoundException: ./logs/cygnus.log (No such file or directory)
After that, Cygnus stops. You must check your configuration regarding log4j; everything is at /usr/cygnus/conf/log4j.properties (it should exist, as it is created by the RPM... if it does not exist -because you installed from sources instead of the RPM- it must be created from the available template). In addition, can you post your instance configuration file? Anyway, which version are you running?
EDIT 1
Recently, we have found another user dealing with the same error, and the problem was the content of the /usr/cygnus/conf/log4j.properties file was:
flume.root.logger=INFO,LOGFILE
flume.log.dir=./log
flume.log.file=flume.log
Instead of what the template contains:
flume.root.logger=INFO,LOGFILE
flume.log.dir=/var/log/cygnus/
flume.log.file=flume.log
Once changed it worked because the RPM creates /var/log/cygnus but not ./log.
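A quick way to see which directory a given installation is pointing at could be (just a sketch; adjust the path if Cygnus lives somewhere else):
grep '^flume.log' /usr/cygnus/conf/log4j.properties
If it still says ./log, either set flume.log.dir to /var/log/cygnus/ as in the template, or create that relative directory yourself before starting Cygnus.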

Encrypting Nagios report mails with GnuPG fails with empty mails, why?

I am trying to encrypt the mails sent by Nagios3 using gpg2. For that, I have created this custom command in /etc/nagios3/commands.cfg:
/usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" | /usr/bin/gpg2 --armor --encrypt --recipient toto#titi.com | /usr/bin/mail -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$
}
Some points:
The e-mail is sent but it is "empty":
Sep 19 14:35:25 tutu nagios3: Finished daemonizing... (New PID=4313)
Sep 19 14:36:15 tutu nagios3: SERVICE ALERT:
tete_vm;HTTP;OK;HARD;4;HTTP OK: HTTP/1.1 200 OK - 347 bytes in 0.441
second response time Sep 19 14:36:15 tutu nagios3: SERVICE
NOTIFICATION: tata;tete_vm;HTTP;OK;notify-service-by-email;HTTP OK:
HTTP/1.1 200 OK - 347 bytes in 0.441 second response time
The command:
/usr/bin/gpg2 --armor --encrypt --recipient toto@titi.com | /usr/bin/mail -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$
works very well on command line
I have tested this command:
/usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" | /usr/bin/gpg2 --armor --encrypt --recipient toto#titi.com >> /tmp/toto.txt
The file /tmp/toto.txt is created but "empty".
So, it seems to be a problem using /usr/bin/gpg2 on this file, but I cannot find why!
The most common mistake when encrypting from within services using GnuPG is that the recipient's key was imported by another (system) user than the one the service is running under, for example imported by root, but the service runs as nagios.
GnuPG maintains per-user "GnuPG home directories" (usually ~/.gnupg) with per-user keyrings in them. If you imported as root, other service accounts don't know anything about the keys in there.
The first step for debugging the issue would be to redirect gpg's stderr to a file, so you can read the error message by adding 2>>/tmp/gpg-error.log to the GnuPG call:
/usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" | /usr/bin/gpg2 --armor --encrypt --recipient toto#titi.com 2>>/tmp/gpg-error.log | /usr/bin/mail -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$
If the issue is something like "key not found" or similar, you've got two possibilities to resolve the issue:
Import to the service's user account. Switch to the service's user, and import the key again.
Hard-code the GnuPG home directory to somewhere else using the --homedir [directory] option, for example in a place you also store your Nagios plugins.
Be aware of using appropriate, restrictive permissions. GnuPG is very picky if other users than the owner are allowed to read the files!
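For example, to make the key visible to the account Nagios runs as (a sketch; it assumes the service user is nagios and that the exported public key sits in a file of your choosing, here called recipient.asc as a placeholder):
sudo -u nagios gpg2 --import /path/to/recipient.asc
sudo -u nagios gpg2 --list-keys toto@titi.com
If --list-keys shows the key for that user, the encryption step has a chance of working; if it still fails, the gpg-error.log redirect above will say why.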

Cygnus not persisting data on MySQL database

So I have read all the documentation and followed the tutorial on MySQL persistence, but I still can't persist any kind of data to the MySQL database.
Even though I'm setting the persistence mode = row, it doesn't create any database or table.
What am I doing wrong?
My Subscription:
python2.7 SetSubscription.py bustest4 http://localhost:5050/notify
Output:
* Asking to http://localhost:1026/v1/subscribeContext
* Headers: {'Fiware-Service': 'fiwaretestapi', 'content-type': 'application/json ', 'accept': 'application/json', 'X-Auth-Token': 'NULL'}
* Sending PAYLOAD:
{
    "reference": "http://localhost:5050/notify",
    "throttling": "PT5S",
    "entities": [
        {
            "type": "",
            "id": "bustest4",
            "isPattern": "false"
        }
    ],
    "attributes": [
        "temperature"
    ],
    "duration": "P1M",
    "notifyConditions": [
        {
            "condValues": [
                "temperature"
            ],
            "type": "ONCHANGE"
        }
    ]
}
...
* Status Code: 200
{
    "subscribeResponse" : {
        "subscriptionId" : "5567332298add18cc3e183ac",
        "duration" : "P1M",
        "throttling" : "PT5S"
    }
}
My agent_a1.conf:
#=============================================
# source configuration
# channel name where to write the notification events
cygnusagent.sources.http-source.channels = hdfs-channel mysql-channel ckan-channel
# source class, must not be changed
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnusagent.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnusagent.sources.http-source.handler = es.tid.fiware.fiwareconnectors.cygnus.handlers.OrionRestHandler
# URL target
cygnusagent.sources.http-source.handler.notification_target = /notify
# Default service (service semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service = fiwaretestapi
# Default service path (service path semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service_path = /
# Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
cygnusagent.sources.http-source.handler.events_ttl = 10
# Source interceptors, do not change
cygnusagent.sources.http-source.interceptors = ts de
# Timestamp interceptor, do not change
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
# Destination extractor interceptor, do not change
cygnusagent.sources.http-source.interceptors.de.type = es.tid.fiware.fiwareconnectors.cygnus.interceptors.DestinationExtractor$Builder
# Matching table for the destination extractor interceptor, put the right absolute path to the file if necessary
# See the doc/design/interceptors document for more details
cygnusagent.sources.http-source.interceptors.de.matching_table = /usr/cygnus/conf/matching_table.conf
# ============================================
# OrionMySQLSink configuration
# channel name from where to read notification events
cygnusagent.sinks.mysql-sink.channel = mysql-channel
# sink class, must not be changed
cygnusagent.sinks.mysql-sink.type = es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionMySQLSink
# the FQDN/IP address where the MySQL server runs
cygnusagent.sinks.mysql-sink.mysql_host = localhost
# the port where the MySQL server listes for incomming connections
cygnusagent.sinks.mysql-sink.mysql_port = 3306
# a valid user in the MySQL server
cygnusagent.sinks.mysql-sink.mysql_username = root
# password for the user above
cygnusagent.sinks.mysql-sink.mysql_password = ***********
# how the attributes are stored, either per row either per column (row, column)
cygnusagent.sinks.mysql-sink.attr_persistence = row
#=============================================
# mysql-channel configuration
# channel type (must not be changed)
cygnusagent.channels.mysql-channel.type = memory
# capacity of the channel
cygnusagent.channels.mysql-channel.capacity = 1000
# amount of bytes that can be sent per transaction
cygnusagent.channels.mysql-channel.transactionCapacity = 100
My cygnus_instance_c1.conf:
# Who to run cygnus as. Note that you may need to use root if you want
# to run cygnus in a privileged port (<1024)
CYGNUS_USER=root
# Where is the config folder
CONFIG_FOLDER=/usr/cygnus/conf
# Which is the config file
CONFIG_FILE=/usr/cygnus/conf/agent_a1.conf
# Name of the agent. The name of the agent is not trivial, since it is the base for the Flume parameters
# naming conventions, e.g. it appears in .sources.http-source.channels=...
AGENT_NAME=cygnususer
# Name of the logfile located at /var/log/cygnus. It is important to put the extension '.log' in order to the log rotation works properly
LOGFILE_NAME=cygnus.log
# Administration port. Must be unique per instance
ADMIN_PORT=8081
My cygnus.log:
Info: Sourcing environment configuration script /usr/cygnus/conf/flume-env.sh
Warning: JAVA_HOME is not set!
+ exec /usr/bin/java -Xmx20m -Dflume.log.file=cygnus.log -cp '/usr/cygnus/conf:/usr/cygnus/lib/*:/usr/cygnus/plugins.d/cygnus/lib/*:/usr/cygnus/plugins.d/cygnus/libext/*' -Djava.library.path= es.tid.fiware.fiwareconnectors.cygnus.nodes.CygnusApplication -p 8081 -f /usr/cygnus/conf/agent_a1.conf -n root
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/cygnus/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/cygnus/plugins.d/cygnus/lib/cygnus-0.7.1-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
EDIT:
So after some changes I got the log file to work, and I found out that port 8081 was already in use. Can you explain what the ADMIN_PORT is used for, and which port is recommended?
LOG FILE:
02 Jun 2015 05:16:40,680 INFO [conf-file-poller-0] (org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run:133) - Reloading configuration file:/usr/cygnus/conf/agent_a1.conf
02 Jun 2015 05:16:40,685 INFO [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016) - Processing:mysql-sink
02 Jun 2015 05:16:40,686 INFO [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016) - Processing:mysql-sink
02 Jun 2015 05:16:40,686 INFO [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:930) - Added sinks: mysql-sink Agent: cygnusagent
02 Jun 2015 05:16:40,686 INFO [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016) - Processing:mysql-sink
02 Jun 2015 05:16:40,687 INFO [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016) - Processing:mysql-sink
02 Jun 2015 05:16:40,687 INFO [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016) - Processing:mysql-sink
02 Jun 2015 05:16:40,687 INFO [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016) - Processing:mysql-sink
02 Jun 2015 05:16:40,687 INFO [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016) - Processing:mysql-sink
02 Jun 2015 05:16:40,692 INFO [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration.validateConfiguration:140) - Post-validation flume configuration contains configuration for agents: [cygnusagent]
02 Jun 2015 05:16:40,692 WARN [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.getConfiguration:138) - No configuration found for this host:root
02 Jun 2015 05:16:40,693 INFO [conf-file-poller-0] (org.apache.flume.node.Application.stopAllComponents:101) - Shutting down configuration: { sourceRunners:{} sinkRunners:{} channels:{} }
02 Jun 2015 05:16:40,693 INFO [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:138) - Starting new configuration:{ sourceRunners:{} sinkRunners:{} channels:{} }
02 Jun 2015 05:16:40,694 INFO [conf-file-poller-0] (es.tid.fiware.fiwareconnectors.cygnus.nodes.CygnusApplication.startManagementInterface:85) - Starting a Jetty server listening on port 8081 (Management Interface)
02 Jun 2015 05:16:40,695 INFO [conf-file-poller-0] (org.apache.flume.node.Application.stopAllComponents:101) - Shutting down configuration: { sourceRunners:{} sinkRunners:{} channels:{} }
02 Jun 2015 05:16:40,695 INFO [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:138) - Starting new configuration:{ sourceRunners:{} sinkRunners:{} channels:{} }
02 Jun 2015 05:16:40,695 INFO [conf-file-poller-0] (es.tid.fiware.fiwareconnectors.cygnus.nodes.CygnusApplication.startManagementInterface:85) - Starting a Jetty server listening on port 8081 (Management Interface)
02 Jun 2015 05:16:40,696 INFO [Thread-26] (org.mortbay.log.Slf4jLog.info:67) - jetty-6.1.26
02 Jun 2015 05:16:40,704 WARN [Thread-26] (org.mortbay.log.Slf4jLog.warn:76) - failed SocketConnector@0.0.0.0:8081: java.net.BindException: Address already in use
02 Jun 2015 05:16:40,704 WARN [Thread-26] (org.mortbay.log.Slf4jLog.warn:76) - failed Server@4f1b95f: java.net.BindException: Address already in use
02 Jun 2015 05:16:40,704 FATAL [Thread-26] (es.tid.fiware.fiwareconnectors.cygnus.http.JettyServer.run:63) - Fatal error running the Management Interface. Details=Address already in use
02 Jun 2015 05:16:40,705 INFO [Thread-27] (org.mortbay.log.Slf4jLog.info:67) - jetty-6.1.26
02 Jun 2015 05:16:40,709 WARN [Thread-27] (org.mortbay.log.Slf4jLog.warn:76) - failed SocketConnector@0.0.0.0:8081: java.net.BindException: Address already in use
02 Jun 2015 05:16:40,709 WARN [Thread-27] (org.mortbay.log.Slf4jLog.warn:76) - failed Server@ed4c222: java.net.BindException: Address already in use
02 Jun 2015 05:16:40,709 FATAL [Thread-27] (es.tid.fiware.fiwareconnectors.cygnus.http.JettyServer.run:63) - Fatal error running the Management Interface. Details=Address already in use
EDIT 2:
My script that updates the entity on the Context Broker:
import json
import urllib2

BASE_URL = 'http://localhost:1026'
UPDATE_URL = BASE_URL + '/ngsi10/updateContext'
HEADERS = {
    'Content-Type': 'application/json',
    'Accept': 'application/json',
    'Fiware-Service': 'fiwaretestapi',
    'Fiware-ServicePath': '/'
}
UPDATE_EXAMPLE = {
    "contextElements": [
        {
            "type": "",
            "isPattern": "false",
            "id": "bustest4",
            "attributes": [
                {
                    "name": "temperature",
                    "type": "int",
                    "value": "99"
                }
            ]
        }
    ],
    "updateAction": "APPEND"
}

def post(url, data):
    """POST the given JSON string to url and return the parsed response."""
    req = urllib2.Request(url, data, HEADERS)
    f = urllib2.urlopen(req)
    result = json.loads(f.read())
    f.close()
    return result

if __name__ == "__main__":
    print post(UPDATE_URL, json.dumps(UPDATE_EXAMPLE))
EDIT3:
Even though I set the admin port to 8085 in the Cygnus agent configuration, it still tries to bind to 8081. Is that normal?
Here are the Cygnus logs:
time=2015-06-12T05:56:39.820EDT | lvl=INFO | trans= | function=start | comp=Cygnus | msg=org.apache.flume.instrumentation.MonitoredCounterGroup[94] : Component type: CHANNEL, name: mysql-channel started
time=2015-06-12T05:56:39.821EDT | lvl=INFO | trans= | function=startAllComponents | comp=Cygnus | msg=org.apache.flume.node.Application[173] : Starting Sink mysql-sink
time=2015-06-12T05:56:39.821EDT | lvl=INFO | trans= | function=startAllComponents | comp=Cygnus | msg=org.apache.flume.node.Application[184] : Starting Source http-source
time=2015-06-12T05:56:39.821EDT | lvl=INFO | trans= | function=startManagementInterface | comp=Cygnus | msg=es.tid.fiware.fiwareconnectors.cygnus.nodes.CygnusApplication[85] : Starting a Jetty server listening on port 8081 (Management Interface)
time=2015-06-12T05:56:39.822EDT | lvl=INFO | trans= | function=start | comp=Cygnus | msg=es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionMySQLSink[151] : [mysql-sink] Startup completed
time=2015-06-12T05:56:39.823EDT | lvl=INFO | trans= | function=info | comp=Cygnus | msg=org.mortbay.log.Slf4jLog[67] : jetty-6.1.26
time=2015-06-12T05:56:39.824EDT | lvl=INFO | trans= | function=info | comp=Cygnus | msg=org.mortbay.log.Slf4jLog[67] : jetty-6.1.26
time=2015-06-12T05:56:39.825EDT | lvl=INFO | trans= | function=info | comp=Cygnus | msg=org.mortbay.log.Slf4jLog[67] : Started SocketConnector@0.0.0.0:5050
time=2015-06-12T05:56:39.825EDT | lvl=INFO | trans= | function=start | comp=Cygnus | msg=org.apache.flume.instrumentation.MonitoredCounterGroup[94] : Component type: SOURCE, name: http-source started
time=2015-06-12T05:56:39.827EDT | lvl=WARN | trans= | function=warn | comp=Cygnus | msg=org.mortbay.log.Slf4jLog[76] : failed SocketConnector@0.0.0.0:8081: java.net.BindException: Address already in use
time=2015-06-12T05:56:39.827EDT | lvl=WARN | trans= | function=warn | comp=Cygnus | msg=org.mortbay.log.Slf4jLog[76] : failed Server@1c9c5521: java.net.BindException: Address already in use
time=2015-06-12T05:56:39.827EDT | lvl=FATAL | trans= | function=run | comp=Cygnus | msg=es.tid.fiware.fiwareconnectors.cygnus.http.JettyServer[63] : Fatal error running the Management Interface. Details=Address already in use
Log when I make a subscription:
time=2015-06-12T06:03:56.529EDT | lvl=INFO | trans=1434103313-190-0000000000 | function=getEvents | comp=Cygnus | msg=es.tid.fiware.fiwareconnectors.cygnus.handlers.OrionRestHandler[153] : Starting transaction (1434103313-190-0000000000)
time=2015-06-12T06:03:56.535EDT | lvl=INFO | trans=1434103313-190-0000000000 | function=getEvents | comp=Cygnus | msg=es.tid.fiware.fiwareconnectors.cygnus.handlers.OrionRestHandler[239] : Received data ({ "subscriptionId" : "557aae8c98add18cc3e183b6", "originator" : "localhost", "contextResponses" : [ { "contextElement" : { "type" : "thing", "isPattern" : "false", "id" : "autocarro1", "attributes" : [ { "name" : "temperature", "type" : "int", "value" : "95", "metadatas" : [ { "name" : "TimeInstant", "type" : "ISO8601", "value" : "2015-06-03T09:17:44.046583Z" } ] } ] }, "statusCode" : { "code" : "200", "reasonPhrase" : "OK" } } ]})
time=2015-06-12T06:03:56.540EDT | lvl=INFO | trans=1434103313-190-0000000000 | function=getEvents | comp=Cygnus | msg=es.tid.fiware.fiwareconnectors.cygnus.handlers.OrionRestHandler[261] : Event put in the channel (id=1983722072, ttl=10)
time=2015-06-12T06:03:56.724EDT | lvl=INFO | trans=1434103313-190-0000000000 | function=process | comp=Cygnus | msg=es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionSink[126] : Event got from the channel (id=1983722072, headers={timestamp=1434103436542, content-type=application/json, transactionId=1434103313-190-0000000000, fiware-service=fiwaretestapi, fiware-servicepath=, ttl=10, destination=autocarro1_thing}, bodyLength=657)
time=2015-06-12T06:03:57.260EDT | lvl=INFO | trans=1434103313-190-0000000000 | function=persist | comp=Cygnus | msg=es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionMySQLSink[227] : [mysql-sink] Persisting data at OrionMySQLSink. Database: fiwaretestapi, Table: autocarro1_thing, Data: 1434103436,2015-06-12T06:03:56.542,autocarro1,thing,temperature,thing,95,[{"name":"TimeInstant","type":"ISO8601","value":"2015-06-03T09:17:44.046583Z"}]
time=2015-06-12T06:03:57.270EDT | lvl=INFO | trans=1434103313-190-0000000000 | function=process | comp=Cygnus | msg=es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionSink[187] : Finishing transaction (1434103313-190-0000000000)
As I can see in the log, most probably Cygnus is running but not starting any Flume component (any source, channel or sink). This is due to some configuration errors.
Regarding agent_a1.conf file:
It is missing the list of sources, channels and sinks:
cygnusagent.sources = http-source
cygnusagent.sinks = mysql-sink
cygnusagent.channels = mysql-channel
The cygnusagent.sources.http-source.channels value should be mysql-channel.
Regarding cygnus_instance_c1.conf:
The AGENT_NAME value must be cygnusagent.
Which version have you installed? Are you running Cygnus as a service or as a standalone process?
In addition, could you try starting Cygnus in DEBUG mode? Simply edit the /usr/cygnus/conf/log4j.properties file.
Do the proposed changes and see how the log evolves! :)
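For instance, switching to DEBUG and restarting could be done like this (a sketch; it assumes the default flume.root.logger line shown elsewhere on this page):
sudo sed -i 's/^flume.root.logger=INFO,LOGFILE/flume.root.logger=DEBUG,LOGFILE/' /usr/cygnus/conf/log4j.properties
sudo service cygnus restart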
EDIT 1
Such a "fatal error" is not so fatal. It was a bug appearing in Cygnus 0.7.1, currently fixed. Anyway, even in 0.7.1 it did not affect the normal behaviour of Cygnus since the management port is only used for retrieving the version, nothing important.
Did you try to send some update context to Orion so that a notification is received by Cygnus? Even by simulating the notification? Please see the Cygnus Quick Start Guide for an example of how to make such a simulation.
EDIT 2
The Cygnus package names es.tid.fiware.fiwareconnectors.cygnus... were replaced by com.telefonica.iot.cygnus... from release 0.8.0 (or maybe 0.9.0).