How to interpret Fiware CYGNUS stats service output? - fiware

Starting from my own installation of the following FIWARE components (Orion Context Broker, Cygnus NGSI, FIWARE STH and MongoDB), after a while I got the following result when consuming a stats service that I found in the Cygnus management API.
Service: GET http://<cygnus_host>:<management_port>/v1/stats
Result:
{
  "success": "true",
  "stats": {
    "sources": [
      {
        "name": "http-source",
        "status": "START",
        "setup_time": "2018-05-10T13:35:06.194Z",
        "num_received_events": 78,
        "num_processed_events": 78
      }
    ],
    "channels": [
      {
        "name": "sth-channel",
        "status": "START",
        "setup_time": "2018-05-10T13:35:06.662Z",
        "num_events": 1,
        "num_puts_ok": 78,
        "num_puts_failed": 0,
        "num_takes_ok": 77,
        "num_takes_failed": 112
      },
      {
        "name": "mongo-channel",
        "status": "START",
        "setup_time": "2018-05-10T13:35:06.662Z",
        "num_events": 0,
        "num_puts_ok": 78,
        "num_puts_failed": 0,
        "num_takes_ok": 78,
        "num_takes_failed": 139
      },
      {
        "name": "hdfs-channel",
        "status": "START",
        "setup_time": "2018-05-10T13:35:06.662Z",
        "num_events": 1,
        "num_puts_ok": 78,
        "num_puts_failed": 0,
        "num_takes_ok": 77,
        "num_takes_failed": 35
      }
    ],
    "sinks": [
      {
        "name": "hdfs-sink",
        "status": "START",
        "setup_time": "2018-05-10T13:35:06.341Z",
        "num_processed_events": 77,
        "num_persisted_events": 0
      },
      {
        "name": "mongo-sink",
        "status": "START",
        "setup_time": "2018-05-10T13:35:06.374Z",
        "num_processed_events": 78,
        "num_persisted_events": 78
      },
      {
        "name": "sth-sink",
        "status": "START",
        "setup_time": "2018-05-10T13:35:06.380Z",
        "num_processed_events": 78,
        "num_persisted_events": 77
      }
    ]
  }
}
What caught my attention was the number of num_takes_failed on each channel, and here is my first question:
What exactly does this variable mean?
Looking into the Cygnus documentation, I suppose that a "take" is something like a retry of a certain action in the Flume mongo-channel, but which action is that?
I looked at the MongoDB log files and did not find anything related to connection saturation or a similar problem, which brings me to my second question.
Should I worry about this statistic? If yes, how do I solve this problem?
Thank you very much in advance for any help.

You don't have to be worried about num_takes_failed as long as the number of processed events is the same as the number of persisted events. num_takes_failed is the result of subtracting the Flume counter EventTakeSuccessCount from EventTakeAttemptCount, where EventTakeAttemptCount is the total number of times the sink(s) attempted to take events from the channel. This does not mean that events were returned each time, since sinks may poll a channel that has no data. EventTakeSuccessCount, on the other hand, is the total number of events that were successfully taken by the sink(s).
Moreover, if you want to know more about how the events are processed by the channels and sinks, you can run Cygnus in debug mode and inspect the log output to ensure that every event is processed and persisted correctly.
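As a quick sanity check, that rule of thumb can be applied to the stats payload programmatically. Below is a minimal Python sketch (the payload is trimmed from the example above; the helper name is our own) that flags any sink whose processed and persisted counters diverge:

```python
import json

def unhealthy_sinks(stats_payload):
    """Return the sinks whose processed/persisted counters diverge.

    A persistent gap between num_processed_events and num_persisted_events
    is the signal to worry about, not num_takes_failed (which only counts
    empty polls of the channel).
    """
    return [
        sink["name"]
        for sink in stats_payload["stats"]["sinks"]
        if sink["num_processed_events"] != sink["num_persisted_events"]
    ]

# Sample trimmed from the /v1/stats response above
payload = json.loads("""
{"success": "true",
 "stats": {"sinks": [
   {"name": "hdfs-sink",  "num_processed_events": 77, "num_persisted_events": 0},
   {"name": "mongo-sink", "num_processed_events": 78, "num_persisted_events": 78},
   {"name": "sth-sink",   "num_processed_events": 78, "num_persisted_events": 77}
 ]}}
""")
print(unhealthy_sinks(payload))  # → ['hdfs-sink', 'sth-sink']
```

In the sample data, hdfs-sink (77 processed versus 0 persisted) is the figure actually worth investigating, not num_takes_failed.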

Related

Subscribing to a topic returns nothing

I'm new to RobotFramework and MQTT. The main requirement for the broker is that only a valid JSON message should be published. So far I've been able to successfully publish my messages.
I subscribe to the topic I published to in PowerShell and I can see that the message has been posted.
However, when I try to subscribe and validate in RIDE, I don't get any message returned.
E.g: I was able to publish this as a retained message to the topic:
Test/TestTopic
{"schema": { "name": "XkvPYD2i", "version": 1 },"title": "XkvPYD2i","tags": "XkvPYD2i"}
This code works:
Publish Single ${topic} ${message} ${qos} ${Retained} ${broker.uri}
(Where the global file defines these values, as above, with ${qos}=0)
This code doesn't work
@{messages}= Subscribe ${topic} qos=${qos} timeout=5 limit=0
log ${messages}
I expect the message (posted above) to be returned and stored in ${messages}, but I get the following (from the logs):
KEYWORD BuiltIn. Log ${messages}
Documentation:
Logs the given message with the given level.
Start / End / Elapsed: 20190219 14:57:53.909 / 20190219 14:57:53.910 / 00:00:00.001
14:57:53.910 INFO []
20190219 14:57:53.907 : INFO : @{messages} = [ ]
20190219 14:57:53.910 : INFO : []
Can anyone advise how can I make that work? Thanks!

Duplicate Registration of the same Yubikey U2F device

I have a question. I have built a complete solution around the Yubico U2F keys, but now I cannot prevent duplicate registration of the same device for a user under the same app ID. Checking the key handles in my database, they show different values for each duplicate registration. Please help me out.
If you are using the WebAuthn API, you can send all of the user's already-registered credentials to the client when adding a new key, using the 'excludeCredentials' key. These credentials are formatted the same way as when trying to log in.
excludeCredentials — Contains a list of credentials that were already
registered to the user. This list is then given to the authenticator,
and if the authenticator recognises any of them, it cancels operation
with error CREDENTIAL_EXISTS, thus preventing double registration of
the same authenticator.
Source: https://medium.com/@herrjemand/introduction-to-webauthn-api-5fd1fb46c285
An example of the JSON the client receives when adding a new key could be:
{
  "publicKey": {
    "rp": {
      "name": "YourApp",
      "id": "YourAddress"
    },
    "authenticatorSelection": {
      "userVerification": "preferred"
    },
    "user": {
      "id": "UserId",
      "name": "Username",
      "displayName": "displayName"
    },
    "pubKeyCredParams": [
      {
        "type": "public-key",
        "alg": -7
      }
    ],
    "attestation": "direct",
    "extensions": {
      "exts": true
    },
    "timeout": 20000,
    "challenge": "...",
    "excludeCredentials": [
      {
        "id": "...",
        "type": "public-key",
        "transports": ["usb", "ble", "nfc", "internal"]
      },
      {
        "id": "...",
        "type": "public-key",
        "transports": ["usb", "ble", "nfc", "internal"]
      }
    ]
  }
}
When the browser detects that the user tries to register a key that was already registered, it will tell the user to try another key and the request will not be sent to the server at all.
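On the server side, assembling those options can be sketched as follows. This is a minimal illustration in Python, not any particular library's API; the rp values, user data and stored credential IDs are placeholders you would load from your own database:

```python
import base64

def registration_options(user_id, username, challenge, registered_ids):
    """Build PublicKeyCredentialCreationOptions with excludeCredentials
    so the authenticator refuses to re-register a known credential.

    registered_ids: raw credential IDs already stored for this user,
    retrieved from your own database (shown here as plain bytes).
    """
    # WebAuthn binary fields are commonly shipped as base64url without padding
    b64url = lambda b: base64.urlsafe_b64encode(b).rstrip(b"=").decode()
    return {
        "publicKey": {
            "rp": {"name": "YourApp", "id": "yourapp.example"},
            "user": {"id": b64url(user_id), "name": username,
                     "displayName": username},
            "pubKeyCredParams": [{"type": "public-key", "alg": -7}],
            "challenge": b64url(challenge),
            "timeout": 20000,
            "excludeCredentials": [
                {"id": b64url(cid), "type": "public-key",
                 "transports": ["usb", "ble", "nfc", "internal"]}
                for cid in registered_ids
            ],
        }
    }

opts = registration_options(b"user-1", "alice", b"random-challenge",
                            [b"cred-a", b"cred-b"])
print(len(opts["publicKey"]["excludeCredentials"]))  # → 2
```

With this in place, the browser rejects a second registration of the same authenticator before any attestation reaches your server.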

getDegree()/isOutgoing() functions don't work in GraphAware/neo4j-to-elasticsearch mapping.json

Neo4j Version: 3.2.2
Operating System: Ubuntu 16.04
I use the getDegree() function in the mapping.json file, but the return value is always null. I'm using the Movie/Actor dataset from the Neo4j tutorial.
Output from elasticsearch request
mapping.json
{
  "defaults": {
    "key_property": "uuid",
    "nodes_index": "default-index-node",
    "relationships_index": "default-index-relationship",
    "include_remaining_properties": true
  },
  "node_mappings": [
    {
      "condition": "hasLabel('Person')",
      "type": "getLabels()",
      "properties": {
        "getDegree": "getDegree()",
        "getDegree(type)": "getDegree('ACTED_IN')",
        "getDegree(direction)": "getDegree('OUTGOING')",
        "getDegree('type', 'direction')": "getDegree('ACTED_IN', 'OUTGOING')",
        "getDegree-degree": "degree"
      }
    }
  ],
  "relationship_mappings": [
    {
      "condition": "allRelationships()",
      "type": "type"
    }
  ]
}
Also, if I use the isOutgoing(), isIncoming(), or otherNode functions in the relationship_mappings properties part, Elasticsearch never loads the relationship data from Neo4j. I think I probably have some misunderstanding of the sentence "only when one of the participating nodes "looking" at the relationship is provided" on this page: https://github.com/graphaware/neo4j-framework/tree/master/common#inclusion-policies
mapping.json
{
  "defaults": {
    "key_property": "uuid",
    "nodes_index": "default-index-node",
    "relationships_index": "default-index-relationship",
    "include_remaining_properties": true
  },
  "node_mappings": [
    {
      "condition": "allNodes()",
      "type": "getLabels()"
    }
  ],
  "relationship_mappings": [
    {
      "condition": "allRelationships()",
      "type": "type",
      "properties": {
        "isOutgoing": "isOutgoing()",
        "isIncoming": "isIncoming()",
        "otherNode": "otherNode"
      }
    }
  ]
}
BTW, is there any page that lists all of the functions we can use in mapping.json? I know two of them:
github.com/graphaware/neo4j-framework/tree/master/common#inclusion-policies
github.com/graphaware/neo4j-to-elasticsearch/blob/master/docs/json-mapper.md
but it seems there are more, since I can use getType(), which isn't listed on either of the above pages.
Please let me know if I can provide anything further to help solve the problem.
Thanks!
The getDegree() function is not available to use, in contrast to getType(). I will explain why:
When the mapper (the part responsible for creating a node or relationship representation as an ES document) is doing its job, it receives a DetachedGraphObject, which is a detached node or relationship.
Detached means that this happens outside of a transaction, so query operations against the database are no longer available. getType() is available because it is part of the relationship metadata and is cheap; however, doing the same for getDegree() could be considerably more costly during creation of the detached object (which happens in a transaction), depending on the number of different relationship types, etc.
This is, however, something we are working on: externalising the mapper into a standalone Java application coupled with a broker such as Kafka or RabbitMQ between Neo4j and that application. We would not, however, offer the possibility to re-query the graph in the current version of the module, as it can have serious performance impacts if the user is not very careful.
Lastly, the only suggestion I can give you is to keep a property on your node with the degree updates you need to replicate to ES.
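That last suggestion could be implemented, for example, with a periodic Cypher update run through the official Neo4j Python driver. The property name acted_in_degree and the label/relationship type below are illustrative assumptions based on the Movie/Actor dataset, not part of the module's API:

```python
# Keep a degree property in sync on the node, so it is replicated to
# Elasticsearch together with the remaining node properties.
DEGREE_QUERY = """
MATCH (p:Person)
SET p.acted_in_degree = size((p)-[:ACTED_IN]->())
"""

def refresh_degrees(session):
    """Run the update with a neo4j driver session; session.run executes
    the statement in an auto-commit transaction."""
    session.run(DEGREE_QUERY)
```

Running this after bulk changes (or from a scheduled job) keeps the indexed degree reasonably fresh without re-querying the graph from the mapper.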
UPDATE
Regarding this part of the documentation :
For Relationships only when one of the participating nodes "looking" at the relationship is provided:
This applies only when you are not using the JSON definition, so you can use one or the other; the JSON definition was added later, and the two cannot be used together.
To answer this part: it means that the nodes on the incoming or outgoing side, depending on the definition, should be included in the inclusion policy for nodes, e.g. hasLabel('Employee') || hasProperty('form') || getProperty('age', 0) > 20. If you have an allNodes policy then it is fine.

Is it possible to create graphs taking data from json in Zabbix?

Would it be possible, in any way, to produce JSON that Zabbix can understand and render as a graph?
Eg:
I have this json:
{
  "response": {
    "success": true,
    "server": {
      "name": "Test Server",
      "alive": true,
      "users": 25
    }
  }
}
And I would like to have a simple graph where I can see the value of users.
I might be asking nonsense here, but I was reading about the URL element and it looks like this is possible; however, I couldn't find any template or any info on how to send the data.
Create a Zabbix trapper item and send such values with zabbix_sender. The values will be processed like any normal item values by Zabbix, and graphs will be available as well.
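For instance, the users value from the JSON above could be extracted and handed to zabbix_sender. In this Python sketch, the Zabbix server address, the monitored host name and the item key server.users are assumptions that must match your trapper item configuration:

```python
import json

# The JSON response from the question
response = json.loads("""
{"response": {"success": true,
              "server": {"name": "Test Server", "alive": true, "users": 25}}}
""")

server = response["response"]["server"]
users = server["users"]

# zabbix_sender -z <zabbix-server> -s <monitored-host> -k <item-key> -o <value>
cmd = ["zabbix_sender", "-z", "zabbix.example.com",
       "-s", server["name"], "-k", "server.users", "-o", str(users)]
print(users)  # → 25
```

Run the assembled command (e.g. via subprocess.run) on a host where zabbix_sender is installed; Zabbix then stores the value against the trapper item and can graph it like any other numeric item.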

Orion Context Broker - Query By Location

Following with queries in Orion v0.24.
As pointed out in previous related questions, the documentation is ahead of the real implementation. Is filtering by location with 'geometry' and 'coords' already implemented?
Can anyone provide a query example? I do not understand what/how to pass the coordinates. From the docs:
List of coordinates (separated by ;) are interpreted depending on the geometry
I tried the following unsuccessfully:
//Call 1
http://<some-ip>:<some-port>/v2/entities/?type=Test&geometry=polygon&coords=35.46064,-9.93164;35.46066,3.07617;44.33956,3.07617;44.33955,-9.93164
//Result
{
  "error": "BadRequest",
  "description": "invalid character in URI parameter"
}
I tried similar combinations, escaping special characters with encodeURIComponent, but nothing worked.
Entities in Orion have the following attribute, 'coordenadas':
{
  "id": "Test.1",
  "type": "Test",
  "coordenadas": {
    "type": "geo:point",
    "value": "43.7723705, -7.6784461"
  },
  "fecha": 1440108000000,
  "regiones": [
    "ES"
  ]
}
EDIT 03/11/2015
We have updated Orion to version 0.25, where geometry queries are expected to be implemented using NGSIv2.
A call to
http://<some-ip>:<some-port>/version
reports that the update has been done correctly:
<orion>
  <version>0.25.0</version>
  <uptime>0 d, 2 h, 23 m, 17 s</uptime>
  <git_hash>a8cf800d4e9fdd7b4293a886490c40309a5bb58c</git_hash>
  <compile_time>Mon Nov 2 09:13:05 CET 2015</compile_time>
  <compiled_by>fermin</compiled_by>
  <compiled_in>centollo</compiled_in>
</orion>
Nonetheless, the queries do not seem to work properly. Following the examples used above, a geometry query like this should return an entity:
http://<some-ip>:<some-port>/v2/entities?type=Test&geometry=circle;radius:6000&coords=43.7723705,-7.6784461
Unfortunately, the response is an empty array.
We have also tried a geometrical query using a polygon:
http://<some-ip>:<some-port>/v2/entities?type=Test&geometry=polygon&coords=40.199854,-4.045715;40.643135,-4.045715;40.643135,-3.350830;40.199854,-3.350830
Again, the response is the empty array.
It seems like the location property of the entity, "coordenadas", is not being detected. So I tried creating a new entity, to check whether the problem was that all entities had been created before the update to v0.25, but it did not work.
EDIT 04/11/2015
The request we are building for entity creation is the following:
POST /v2/entities/ HTTP/1.1
Accept: application/json, application/*+json
Content-Type: application/json;charset=UTF-8
User-Agent: Java/1.7.0_71
Host: 127.0.0.1:1026
Connection: keep-alive
Content-Length: 379
{
  "id": "Test.1",
  "type": "Test",
  "nombreEspecie": "especietest",
  "coordenadas": {
    "type": "geo:point",
    "value": "3.21456, 41.2136"
  },
  "fecha": 1446624226632,
  "gradoSeguridad": 1,
  "palabrasClave": "test, test, test",
  "comentarios": "comentarios, comentarios",
  "nombreImagen": "ImagenTest",
  "alertas": [],
  "regiones": [],
  "validacionesPositivas": 0,
  "validacionesNegativas": 0,
  "validacionesDenunciadas": 0
}
As you suggested, we tested entity creation on a new, clean instance of Orion. The creation worked correctly, but the location query is still not working...
The examples are correct, but that functionality is not yet available in Orion 0.24.0 or any previous version. It has already been implemented in the develop branch (see the corresponding issue in the github.com repository, now closed). It will be available in the version after 0.24.0, either 0.24.1 or 0.25.0 (the number had not yet been decided at the moment of writing this), by the end of September 2015.
EDIT: Orion 0.25.0 implements the geometry and coords URL parameters, but the location definition is still based on the NGSIv1 mechanism. Thus, instead of using geo:point, use a metadata element named location to mark that the associated attribute is the location:
"coordenadas": {
  "location": {
    "type": "string",
    "value": "WGS84"
  },
  "type": "geo:point",
  "value": "3.21456, 41.2136"
}
This "asymmetry" (i.e. NGSIv1 to define the location but NGSIv2 geo-query support) will disappear as we progress with the NGSIv2 implementation (take into account that in Orion 0.25.0, NGSIv2 is still in beta status).
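Putting the edit together, the Orion 0.25.0 entity payload and a circle geo-query could be assembled roughly like this in Python; host, port and attribute values are placeholders mirroring the examples above:

```python
import urllib.parse

# Entity creation payload: the NGSIv1-style "location" metadata marks
# "coordenadas" as the attribute holding the entity's location.
entity = {
    "id": "Test.1",
    "type": "Test",
    "coordenadas": {
        "location": {"type": "string", "value": "WGS84"},
        "type": "geo:point",
        "value": "3.21456, 41.2136",
    },
}

# Geo-query: entities of type Test within 6 km of the given point.
params = {
    "type": "Test",
    "geometry": "circle;radius:6000",
    "coords": "3.21456,41.2136",
}
query_string = urllib.parse.urlencode(params)
print(query_string)
```

POST the entity body to /v2/entities, then append the encoded query string to GET /v2/entities; note that urlencode percent-encodes the ; and : characters, which is the usual way to avoid "invalid character in URI parameter" errors with raw separators.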