I am using Cygnus with the Mongo and STH sinks to retrieve historical data.
In the current implementation of the Cygnus Mongo sink, attribute metadata is not stored in the database, so I updated Cygnus to be able to store it.
But when I use STH-comet to retrieve the history, the API apparently does not support retrieving the attribute metadata.
Am I missing some kind of configuration, or does the API not support attribute metadata? The response that I am getting from STH-comet is:
{
"contextResponses": [
{
"contextElement": {
"attributes": [
{
"name": "humidity",
"values": [
{
"recvTime": "2017-03-08T08:06:11.463Z",
"attrType": "Number",
"attrValue": "999"
},
{
"recvTime": "2017-03-08T08:10:54.199Z",
"attrType": "Number",
"attrValue": "3.06"
}
]
}
],
"id": "Room1",
"isPattern": false,
"type": "Room"
},
"statusCode": {
"code": "200",
"reasonPhrase": "OK"
}
}
]
}
In the MongoDB database I have this content:
{ "_id" : ObjectId("58bfbb7c973c5c22d258cffc"), "recvTime" : ISODate("2017-03-08T08:06:11.463Z"), "attrName" : "humidity", "attrType" : "Number", "attrValue" : "999", "attrMetadata" : [ ] }
{ "_id" : ObjectId("58bfbc93973c5c22d258cffd"), "recvTime" : ISODate("2017-03-08T08:10:54.199Z"), "attrName" : "humidity", "attrType" : "Number", "attrValue" : "3.06", "attrMetadata" : [ { "name" : "unit", "type" : "Text", "value" : "voltage" } ] }
If the API does not support retrieving the attribute metadata, can this feature be added?
Thanks & Best regards.
STH and Cygnus are aligned with regard to the information stored in MongoDB, both raw and aggregated. In this sense, because Cygnus originally did not support attribute metadata in NGSIMongoSink (the one in charge of storing the information in raw format), STH does not support attribute metadata in its raw API either.
As long as you have extended the Cygnus functionality for this purpose, you'll have to extend the STH API as well.
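In the meantime, you can read the metadata straight from the raw collection that your extended Cygnus writes. A minimal sketch with pymongo, assuming the default STH naming scheme (an sth_<service> database and an sth_/<servicePath>_<entityId>_<entityType> collection); adjust the names to your deployment:
from pymongo import MongoClient

# Database and collection names below are assumptions based on STH's
# default naming scheme; adjust them to match your deployment.
client = MongoClient("mongodb://localhost:27017")
collection = client["sth_myservice"]["sth_/_Room1_Room"]

# The attrMetadata field is exactly what your extended NGSIMongoSink stored.
for doc in collection.find({"attrName": "humidity"}).sort("recvTime", 1):
    print(doc["recvTime"], doc["attrValue"], doc.get("attrMetadata", []))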
I'm new to Couchbase. I want to update channels for some of the documents through the sync function. But right now it is not updating them: it adds an extra channel to the document's metadata but does not remove the existing channel. Can anyone suggest how I can remove the existing channel from the document?
Sync Function:
function(doc, oldDoc) {
//....
if (doc.docType === "differentType") {
channel("differentChannel");
expiry(2332800);
return;
}
//.......
}
Document:
{
"channels": [
"abcd"
],
"docType": "differentType",
"_id" : "asjnc"
}
Metadata:
{
"meta": {
"id": "asjnc",
"rev": "64-1b500000000",
"expiration": 1650383285,
"flags": 0,
"type": "json"
},
"xattrs": {
"_sync": {
"rev": "1-db30e607872",
"sequence": 777,
"recent_sequences": [
777
],
"history": {
"revs": [
"1-db30e607872"
],
"parents": [
-1
],
"channels": [
[
"differentChannel"
]
]
},
"channels": {
"differentChannel": null
}
}
}
}
What I expect the document to look like, with the same metadata:
{
"channels": [ ], // <--- no channels
"docType": "differentType",
"_id" : "asjnc"
}
With this sync function, for documents of type differentType, the channel differentChannel is set in the xattrs section of the metadata. But the channel that was added earlier from Couchbase Lite is not getting removed. Can anyone help?
I answered this in the Couchbase Forums: https://forums.couchbase.com/t/remove-channels-from-a-document/33212
The "channels" property in a document is counter-intuitively not describing what channels the document is currently in - it's just a user-definable field that happens to be the default routing for channels if you don't specify a sync function. It's up to the writer of the document what it should contain.
If you have another means of channel assignment (like "docType" in your case), then you don't need to specify "channels" in the document at all (see the sketch below). The sync metadata shows that the document is in "differentChannel" at revision 1-db30e607872, but the contents of the document can be arbitrary.
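A minimal sync function sketch under that model, assuming routing should depend only on docType; the body's "channels" field is simply ignored for those documents, and the fallback branch mirroring the default sync function is optional:
function (doc, oldDoc) {
    if (doc.docType === "differentType") {
        // Route purely by docType; any "channels" property in the
        // document body plays no role for these documents.
        channel("differentChannel");
        expiry(2332800);
        return;
    }
    // Optional: mirror the default sync function for all other documents.
    channel(doc.channels);
}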
We are using LoopBack successfully so far, but we want to add query params to our API documentation.
In our swagger.json file, we have something that looks like this:
{
"swagger": "2.0",
"info": {
"version": "1.0.0",
"title": "poc-discovery"
},
"basePath": "/api",
"paths": {
"/Users/{id}/accessTokens/{fk}": {
"get": {
"tags": [
"User"
],
"summary": "Find a related item by id for accessTokens.",
"operationId": "User.prototype.__findById__accessTokens",
"parameters": [
{
"name": "fk",
"in": "path",
"description": "Foreign key for accessTokens",
"required": true,
"type": "string",
"format": "JSON"
},
{
"name": "id",
"in": "path",
"description": "User id",
"required": true,
"type": "string",
"format": "JSON"
},
{
"name":"searchText",
"in":"query",
"description":"The Product that needs to be fetched",
"required":true,
"type":"string"
},
{
"name":"ctrCode",
"in":"query",
"description":"The Product locale needs to be fetched. Example=en-GB, fr-FR, etc.",
"required":true,
"type":"string"
}
],
I am 99% certain the swagger.json information gets generated dynamically from the .json files in the /server/models directory.
I am hoping that I can add the query params that we accept for each model in those .json files. What I want to avoid is having to modify swagger.json directly.
What is the best approach to adding our query params so that they show up in our docs? I am very confused as to how best to approach this.
After a few hours of tinkering, I'm afraid there is no straightforward way to achieve this, as the Swagger spec generated here is a representation of the remoting metadata for the model methods, along with model data from the model.json files.
Thus, updating the remoting metadata for built-in model methods would be challenging, and it might not be fully supported by the method implementations.
The right approach, IMO, is to:
- create a remoteMethod wrapper around the built-in method for which you want additional params to be injected, with the required HTTP mapping data (see the sketch below);
- and disable the REST endpoint for the built-in method using
MyModel.disableRemoteMethod(<methodName>, <isStatic>).
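A minimal sketch of such a wrapper, assuming LoopBack 2.x/3.x remoting and an illustrative Product model; the method and parameter names are made up for the example:
// common/models/product.js
module.exports = function (Product) {
  // Wrapper that declares the extra query params in the remoting metadata,
  // so they show up in the generated swagger.json.
  Product.findWithLocale = function (searchText, ctrCode, cb) {
    // Delegate to the built-in find(), using the extra params as filters.
    Product.find({ where: { name: searchText, locale: ctrCode } }, cb);
  };

  Product.remoteMethod('findWithLocale', {
    accepts: [
      { arg: 'searchText', type: 'string', required: true, http: { source: 'query' } },
      { arg: 'ctrCode', type: 'string', required: true, http: { source: 'query' } }
    ],
    returns: { arg: 'products', type: 'array', root: true },
    http: { verb: 'get', path: '/find-with-locale' }
  });

  // Hide the built-in endpoint that the wrapper replaces.
  Product.disableRemoteMethod('find', true);
};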
Hello Stack Overflow community,
I have a problem with Microsoft Azure provisioning: I am trying to access the shared access policy keys for resources like IoT Hubs or Event Hubs. I am doing this with the listKeys function, outputting the result inside the template JSON file:
"outputs": {
"hubKeys": {
"value": "[listKeys(resourceId('Microsoft.Devices/IotHubs', parameters('hubName')), '2016-02-03')]",
"type": "object"
}
}
When I output the returned object in Windows PowerShell, it looks like this:
Type : Array
IsReadOnly : False
HasValues : True
First : {keyName, primaryKey, secondaryKey, rights}
Last : {keyName, primaryKey, secondaryKey, rights}
Count : 5
Parent : {{
"keyName": "iothubowner",
"primaryKey": "dZVFGkIysIgVRKjxlZsCWdk6KGa4rpBFlY6BOLmaiD8=",
"secondaryKey": "HtRYETAdgja/TBSS3sVTshKaGzZWMLbZC6GR60emSV4=",
"rights": "RegistryWrite, ServiceConnect, DeviceConnect"
} {
"keyName": "service",
"primaryKey": "DGOujP2tBTiTTdKxukTx7umeYFFlDEhoih7fb0tP3i8=",
"secondaryKey": "B+6j1nfEc59GAeJQNakNKolTBoR9kc5W+TUNzRXmDpc=",
"rights": "ServiceConnect"
} {
"keyName": "device",
"primaryKey": "qxmRJVH0yVhSkLEz8JaHhtDJaDofpw4SEKkZNlBwp7c=",
"secondaryKey": "RhUuME9EnnUsE2sixswaiTofKsVVfCQNIllwkHgY/8A=",
"rights": "DeviceConnect"
} {
"keyName": "registryRead",
"primaryKey": "pEpHrL4amd9+7pvl6uCiYHL3rZhxV76tZ1P9bERO6Xc=",
"secondaryKey": "6h4UBKd4WPkdpUfl0Hi3G5YKgB3LmtDMbgXDYx3eKrk=",
"rights": "RegistryRead"
} {
"keyName": "registryReadWrite",
"primaryKey": "HpCxKVa1686A8vOfNVBUzYSe2YJmKIwwAzxUh5DokuY=",
"secondaryKey": "PGeYYID9y6cClqGD1rl4koLNySc7kOGK6VuNlBiwqmo=",
"rights": "RegistryWrite"
}}
Root : {value}
Next :
Previous :
Path : value
LineNumber : 0
LinePosition : 0
AllowNew : True
AllowEdit : True
AllowRemove : True
SupportsChangeNotification : True
SupportsSearching : False
SupportsSorting : False
IsSorted : False
SortProperty :
SortDirection : Ascending
IsFixedSize : False
SyncRoot : System.Object
IsSynchronized : False
My question: can anyone tell me how to access the "primaryKey" in the different "keyName" objects? In particular, I need the primary key for "service".
I can print the object with
$Key = New-AzureRmResourceGroupDeployment (deleted parameters for this post)
Write-Output $Key.Outputs.hubKeys
I already tried things like $Key.Outputs.hubKeys.value.Parents.values.... and countless other ways. Does anyone know how to get the value?
Thanks,
Arno
The sample here illustrates one way to achieve this. The ARM template creates an IoT Hub and an Azure Stream Analytics job that connects to the hub using the generated key values.
These snippets summarize the key pieces:
/* Create IoT Hub */
{
"apiVersion": "2016-02-03",
"type": "Microsoft.Devices/IotHubs",
"name": "[variables('iotHubName')]",
"location": "[resourceGroup().location]",
"sku": "[parameters('iotHubSku')]"
},
/* Part of the ASA definition */
"datasource": {
"type": "Microsoft.Devices/IotHubs",
"properties": {
"iotHubNamespace": "[variables('iotHubName')]",
"sharedAccessPolicyName": "[variables('iotHubKeyName')]",
"sharedAccessPolicyKey": "[listKeys(resourceId('Microsoft.Devices/IotHubs/Iothubkeys', variables('iotHubName'), variables('iotHubKeyName')), '2016-02-03').primaryKey]",
"consumerGroupName": "[variables('archiveJobConsumerGroupName')]"
}
}
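If you also want to pull the key out of the deployment result on the PowerShell side, a minimal sketch, assuming the output object shown above; the output value is a Newtonsoft JSON array, so round-tripping it through ConvertFrom-Json makes it easy to filter on keyName:
# Assumes $Key holds the New-AzureRmResourceGroupDeployment result from above.
$keys = $Key.Outputs.hubKeys.Value.ToString() | ConvertFrom-Json
$serviceKey = ($keys | Where-Object { $_.keyName -eq 'service' }).primaryKey
Write-Output $serviceKey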
This question is very similar to Missing attributes on Orion CB Entity when registering device through IDAS, but I found no definitive answer there.
I have been trying to get FIWARE UL2.0 data via IDAS into the Orion CB in the FIWARE Lab environment:
using the latest scripts from GitHub
https://github.com/telefonicaid/fiware-figway/tree/master/python-IDAS4
following the tutorials, in particular
http://www.slideshare.net/FI-WARE/fiware-iotidasintroul20v2
I have a FIWARE Lab account with a generated token. I adapted the config.ini file:
[user]
# Please, configure here your username at FIWARE Cloud and a valid Oauth2.0 TOKEN for your user (you can use get_token.py to obtain a valid TOKEN).
username=MY_USERNAME
token=MY_TOKEN
[contextbroker]
host=130.206.80.40
port=1026
OAuth=no
# Here you need to specify the ContextBroker database you are querying.
# Leave it blank if you want the general database or the IDAS service if you are looking for IoT devices connected by you.
# fiware_service=
fiware_service=bus_auto
fiware-service-path=/
[idas]
host=130.206.80.40
adminport=5371
ul20port=5371
OAuth=no
# Here you need to configure the IDAS service your devices will be sending data to.
# By default the OpenIoT service is provided.
# fiware-service=fiwareiot
fiware-service=bus_auto
fiware-service-path=/
#apikey=4jggokgpepnvsb2uv4s40d59ov
apikey=4jggokgpepnvsb2uv4s40d59ov
[local]
#Choose here your System type. Examples: RaspberryPI, MACOSX, Linux, ...
host_type=MACOSX
# Here please add a unique identifier for you. Suggestion: the 3 lower hexa bytes of your Ethernet MAC. E.g. 79:ed:af
# Also you may use your e-mail address.
host_id=a0:11:00
I used the SENSOR_TEMP template, adding the 'protocol' field (PDI-IoTA-UltraLight; this was the first problem I stumbled upon):
{
"devices": [
{ "device_id": "DEV_ID",
"entity_name": "ENTITY_ID",
"entity_type": "thing",
"protocol": "PDI-IoTA-UltraLight",
"timezone": "Europe/Amsterdam",
"attributes": [
{ "object_id": "otemp",
"name": "temperature",
"type": "int"
} ],
"static_attributes": [
{ "name": "att_name",
"type": "string",
"value": "value"
}
]
}
]
}
Now I can register the device OK, like:
python RegisterDevice.py SENSOR_TEMP NexusPro Temp-Otterlo
and see it in the device list:
python ListDevices.py
I can send observations like:
python SendObservation.py Temp-Otterlo 'otemp|17'
But in the Context Broker I see the entity but never the measurements, e.g.:
python GetEntity.py Temp-Otterlo
This gives:
* Asking to http://130.206.80.40:1026/ngsi10/queryContext
* Headers: {'Fiware-Service': 'bus_auto', 'content-type': 'application/json', 'accept': 'application/json', 'X-Auth-Token': 'NULL'}
* Sending PAYLOAD:
{
"entities": [
{
"type": "",
"id": "Temp-Otterlo",
"isPattern": "false"
}
],
"attributes": []
}
...
* Status Code: 200
* Response:
{
"contextResponses" : [
{
"contextElement" : {
"type" : "thing",
"isPattern" : "false",
"id" : "Temp-Otterlo",
"attributes" : [
{
"name" : "TimeInstant",
"type" : "ISO8601",
"value" : "2015-10-03T14:04:44.663133Z"
},
{
"name" : "att_name",
"type" : "string",
"value" : "value",
"metadatas" : [
{
"name" : "TimeInstant",
"type" : "ISO8601",
"value" : "2015-10-03T14:04:44.663500Z"
}
]
}
]
},
"statusCode" : {
"code" : "200",
"reasonPhrase" : "OK"
}
}
]
}
Strangely, I get a TimeInstant attribute. I tried playing with .ini settings like fiware-service=fiwareiot, but to no avail. I am out of ideas. The documentation in the catalogue for IDAS4 talks about sending observations to port 8002 and setting the "OpenIoT" service, but that failed as well.
Any help appreciated.
You should run "python SendObservation.py NexusPro 'otemp|17'" instead of "python SendObservation.py Temp-Otterlo 'otemp|17'".
The reason is that you are providing an observation at the southbound interface, where the DEV_ID (NexusPro) should be used, not the entity name (Temp-Otterlo).
The entity does not include an attribute until an observation is received, so it is normal that you are not able to see it. Once you try the command above, it should all work.
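For reference, a minimal sketch of the southbound request that SendObservation.py issues, assuming the UltraLight /iot/d endpoint convention and the apikey/ul20port values from the config.ini above; the exact path can differ between IDAS versions, so treat it as illustrative:
import requests

# The "i" parameter carries the DEV_ID ("NexusPro"), not the entity name.
params = {"k": "4jggokgpepnvsb2uv4s40d59ov", "i": "NexusPro"}
resp = requests.post("http://130.206.80.40:5371/iot/d", params=params,
                     data="otemp|17",
                     headers={"Content-Type": "text/plain"})
print(resp.status_code, resp.text)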
Cheers,
How do I update multiple documents in Solr 4.5.1 with JSON? I tried this, but it does not work:
POST /solr/mycore/update/json:
{
"commit": {},
"add": {
"overwrite": true,
"doc": [{
"thumbnail": "/images/404.png",
"url": "/404.html?1",
"id": "demo:/404.html?1",
"channel": "demo",
"display_name": "One entry",
"description": "One entry is not enough."
}, {
"thumbnail": "/images/404.png",
"url": "/404.html?2",
"id": "demo:/404.html?2",
"channel": "demo",
"display_name": "Another entry",
"description": "Another entry is required."
}
]
}
}
Solr expects one "add" key in the JSON structure for each document (which might seem weird if you think about the original meaning of the key in the object), since it maps directly to the XML format used when indexing - and this way you can have metadata for each document by itself.
{
"commit": {},
"add": {
"doc": {
"id": "321321",
"name": "barfoo"
}
},
"add": {
"doc": {
"id": "123123",
"name": "Foobar"
}
}
}
.. works. I think allowing an array as the element referenced by "add" would make more sense, but I haven't dug further into the source to find the reasoning behind this.
I understand that from version 4.0 of Solr onwards, this has been fixed. Look at http://wiki.apache.org/solr/UpdateJSON.
In ./exampledocs/books.json there is an example of a JSON file with multiple documents.
[
{
"id" : "978-0641723445",
"cat" : ["book","hardcover"],
"name" : "The Lightning Thief",
"author" : "Rick Riordan",
"series_t" : "Percy Jackson and the Olympians",
"sequence_i" : 1,
"genre_s" : "fantasy",
"inStock" : true,
"price" : 12.50,
"pages_i" : 384
}
,
{
"id" : "978-1423103349",
"cat" : ["book","paperback"],
"name" : "The Sea of Monsters",
"author" : "Rick Riordan",
"series_t" : "Percy Jackson and the Olympians",
"sequence_i" : 2,
"genre_s" : "fantasy",
"inStock" : true,
"price" : 6.49,
"pages_i" : 304
},
...
]
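A minimal sketch of posting such an array from Python with the requests library; the host and the "mycore" core name (taken from the question) are assumptions, and commit=true stands in for the {"commit": {}} key used above:
import json
import requests

# Plain array of documents, as in books.json above.
docs = [
    {"id": "978-0641723445", "name": "The Lightning Thief"},
    {"id": "978-1423103349", "name": "The Sea of Monsters"},
]
resp = requests.post(
    "http://localhost:8983/solr/mycore/update/json?commit=true",
    data=json.dumps(docs),
    headers={"Content-Type": "application/json"},
)
print(resp.status_code, resp.text)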
While #fiskfisk's answer is still valid JSON, it is not easy to serialize from a data structure. This one is.
elachell is correct that the array format will work if you are just adding documents with the default settings. Unfortunately, that won't work if, for instance, you need to add a custom boost to some of the documents or change the overwrite setting. You then have to use the full object structure with an "add" key for each of them, which, as they pointed out, makes this frustratingly annoying to serialize from most languages, which don't allow the same key more than once in an object:
{
"commit": {},
"add": {
"doc": {
"id": "321321",
"name": "barfoo"
},
"boost": 2.0
},
"add": {
"doc": {
"id": "123123",
"name": "Foobar"
},
"boost": 1.5,
"overwrite": false
}
}
Update for Solr 8.8 (and maybe lower).
The following JSON works for /update/json:
{
"add": [
{"id": "123", "field1": "foo"},
{"id": "124", "field1": "foo"}
],
"delete": ["111", "106"]
}
Another option, if you are on Solr 4.10 or later, is to use a custom JSON structure and tell Solr how to index it (I'm not sure how to add boosts with this method either, but it's a nice option if you already have a data structure in JSON and don't want to convert it over to Solr's format). Here's the Solr documentation on this option, with a sketch after the link:
https://cwiki.apache.org/confluence/display/solr/Uploading+Data+with+Index+Handlers#UploadingDatawithIndexHandlers-TransformingandIndexingCustomJSON
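For illustration, a Python sketch of that custom-JSON indexing via the /update/json/docs handler, loosely following the example in the linked documentation; the host, core name, and field mappings are assumptions for the example:
import json
import requests

# Each object under /exams becomes its own Solr document; the f= mappings
# copy fields from the parent object and the child objects into each document.
custom = {"first": "John", "last": "Doe",
          "exams": [{"subject": "Maths", "marks": 90},
                    {"subject": "Biology", "marks": 86}]}
resp = requests.post(
    "http://localhost:8983/solr/mycore/update/json/docs"
    "?split=/exams&f=first:/first&f=last:/last"
    "&f=subject:/exams/subject&f=marks:/exams/marks&commit=true",
    data=json.dumps(custom),
    headers={"Content-Type": "application/json"},
)
print(resp.status_code, resp.text)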