Microsoft Azure Provisioning JSON Template Output in PowerShell

Hello Stack Overflow community,
I have a problem with Microsoft Azure provisioning: I am trying to access the shared access policy keys for resources like IoT Hubs or Event Hubs. I am doing this with the listKeys function and outputting the result from the template JSON file:
"outputs": {
"hubKeys": {
"value": "[listKeys(resourceId('Microsoft.Devices/IotHubs', parameters('hubName')), '2016-02-03')]",
"type": "object"
}
}
When I output the returned object in Windows PowerShell, it looks like this:
Type : Array
IsReadOnly : False
HasValues : True
First : {keyName, primaryKey, secondaryKey, rights}
Last : {keyName, primaryKey, secondaryKey, rights}
Count : 5
Parent : {{
"keyName": "iothubowner",
"primaryKey": "dZVFGkIysIgVRKjxlZsCWdk6KGa4rpBFlY6BOLmaiD8=",
"secondaryKey": "HtRYETAdgja/TBSS3sVTshKaGzZWMLbZC6GR60emSV4=",
"rights": "RegistryWrite, ServiceConnect, DeviceConnect"
} {
"keyName": "service",
"primaryKey": "DGOujP2tBTiTTdKxukTx7umeYFFlDEhoih7fb0tP3i8=",
"secondaryKey": "B+6j1nfEc59GAeJQNakNKolTBoR9kc5W+TUNzRXmDpc=",
"rights": "ServiceConnect"
} {
"keyName": "device",
"primaryKey": "qxmRJVH0yVhSkLEz8JaHhtDJaDofpw4SEKkZNlBwp7c=",
"secondaryKey": "RhUuME9EnnUsE2sixswaiTofKsVVfCQNIllwkHgY/8A=",
"rights": "DeviceConnect"
} {
"keyName": "registryRead",
"primaryKey": "pEpHrL4amd9+7pvl6uCiYHL3rZhxV76tZ1P9bERO6Xc=",
"secondaryKey": "6h4UBKd4WPkdpUfl0Hi3G5YKgB3LmtDMbgXDYx3eKrk=",
"rights": "RegistryRead"
} {
"keyName": "registryReadWrite",
"primaryKey": "HpCxKVa1686A8vOfNVBUzYSe2YJmKIwwAzxUh5DokuY=",
"secondaryKey": "PGeYYID9y6cClqGD1rl4koLNySc7kOGK6VuNlBiwqmo=",
"rights": "RegistryWrite"
}}
Root : {value}
Next :
Previous :
Path : value
LineNumber : 0
LinePosition : 0
AllowNew : True
AllowEdit : True
AllowRemove : True
SupportsChangeNotification : True
SupportsSearching : False
SupportsSorting : False
IsSorted : False
SortProperty :
SortDirection : Ascending
IsFixedSize : False
SyncRoot : System.Object
IsSynchronized : False
My question: can anyone tell me how to access the "primaryKey" in the different "keyName" objects? In particular, I need the primary key for "service".
I can print the object with
$Key = New-AzureRmResourceGroupDeployment (deleted parameters for this post)
Write-Output $Key.Outputs.hubKeys
I already tried things like $Key.Outputs.hubKeys.value.Parents.values.... and countless other variations. Does anyone know how to get at the value?
Thanks,
Arno

The sample here illustrates one way to achieve this. The ARM template creates an IoT Hub and an Azure Stream Analytics job that connects to the hub using the generated key values.
These snippets summarize the key pieces:
/* Create IoT Hub */
{
  "apiVersion": "2016-02-03",
  "type": "Microsoft.Devices/IotHubs",
  "name": "[variables('iotHubName')]",
  "location": "[resourceGroup().location]",
  "sku": "[parameters('iotHubSku')]"
},
/* Part of the ASA definition */
"datasource": {
  "type": "Microsoft.Devices/IotHubs",
  "properties": {
    "iotHubNamespace": "[variables('iotHubName')]",
    "sharedAccessPolicyName": "[variables('iotHubKeyName')]",
    "sharedAccessPolicyKey": "[listKeys(resourceId('Microsoft.Devices/IotHubs/Iothubkeys', variables('iotHubName'), variables('iotHubKeyName')), '2016-02-03').primaryKey]",
    "consumerGroupName": "[variables('archiveJobConsumerGroupName')]"
  }
}
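If all you need is one key, you can also return just that key from the template itself. A sketch based on the snippet above, hard-coding the 'service' policy (using the hubName parameter from the question's template):
"outputs": {
  "serviceKey": {
    "type": "string",
    "value": "[listKeys(resourceId('Microsoft.Devices/IotHubs/Iothubkeys', parameters('hubName'), 'service'), '2016-02-03').primaryKey]"
  }
}
If you keep the object output instead, one way to dig the key out on the PowerShell side is to round-trip the value through JSON. This is a minimal sketch assuming, as the dump above suggests, that hubKeys.Value is a JSON.NET object whose value property holds the five policies:
# Serialize the JSON.NET token and parse it back so PowerShell
# can treat the entries as ordinary objects
$json = $Key.Outputs.hubKeys.Value.ToString() | ConvertFrom-Json
# Pick the policy you need by keyName
$serviceKey = ($json.value | Where-Object { $_.keyName -eq 'service' }).primaryKey
Write-Output $serviceKey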

Related

CFT Template error: unresolved condition dependency UseDBSnapshot in Fn::If

Trying to create a CFT for RDS which can handle both scenarios:
creating a new RDS Aurora MySQL cluster, and
creating an RDS cluster from an existing DB cluster snapshot.
Here is what I tried. I have provided the conditions section of the template below:
"UseDbSnapshot" : {
"Fn::Not" : [
{
"Fn::Equals":[
{"Ref": "DBSnapshotName"},
""
]
}
]
}
and referenced it in the Resources section as below:
"RDSCluster1": {
"Type": "AWS::RDS::DBCluster",
"Condition": "isResourceCreate",
"Properties": {
"Engine": "aurora",
"DBSubnetGroupName": {
"Ref": "DBSubnetGroup"
},
"DBClusterParameterGroupName": {
"Ref": "RDSDBClusterParameterGroup"
},
"DBSnapshotIdentifier" : {
"Fn::If" : [
"UseDBSnapshot",
{"Ref" : "DBSnapshotName"},
{"Ref" : "AWS::NoValue"}
]
},
"MasterUsername": {
"Ref": "DbUser"
},
"MasterUserPassword": {
"Ref": "MasterUserPassword"
},
"StorageEncrypted" : true,
"KmsKeyId" : {
"Ref": "KmsKeyId"
},
"VpcSecurityGroupIds": [
{
"Fn::GetAtt": [
"DBAccessSecurityGroup",
"GroupId"
]
}
],
"Port": "3306",
"BackupRetentionPeriod": "1"
},
"DeletionPolicy": "Snapshot"
}
The condition "isResourceCreate" is satisfied but I am getting below error
Template error: unresolved condition dependency UseDBSnapshot in Fn::If
Could you please help me here.
Have looked up online link https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-sample-templates.html
and created this CFT.
Let me know if you require any more details.
First, note what the error itself points at: the condition is declared as "UseDbSnapshot" but the Fn::If references "UseDBSnapshot"; condition names are case-sensitive and must match exactly. Beyond that, if you are restoring a DB from a snapshot, you can't provide MasterUsername and MasterUserPassword. These values are inherited from the snapshot, so you have to make them optional. From the AWS documentation:
If you specify the SourceDBInstanceIdentifier or DBSnapshotIdentifier property, don't specify this property. The value is inherited from the source DB instance or snapshot.
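A minimal sketch of making both credentials optional, keyed off the same condition (this assumes the condition is consistently named UseDbSnapshot, matching the declaration):
"MasterUsername": {
  "Fn::If": [ "UseDbSnapshot", { "Ref": "AWS::NoValue" }, { "Ref": "DbUser" } ]
},
"MasterUserPassword": {
  "Fn::If": [ "UseDbSnapshot", { "Ref": "AWS::NoValue" }, { "Ref": "MasterUserPassword" } ]
}
With AWS::NoValue the properties are omitted entirely when restoring from a snapshot, and are taken from the parameters when creating a fresh cluster.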

Can Filebeat parse JSON fields instead of the whole JSON object into kibana?

I am able to get a single JSON object into Kibana by having this in the filebeat.yml file:
output.elasticsearch:
  hosts: ["localhost:9200"]
How can I get at the individual elements in the JSON string? Say I wanted to compare the "pseudorange" fields of all my JSON objects. How would I:
Select the "pseudorange" field from all my JSON messages in order to compare them.
Compare them visually in Kibana. At the moment I can't even find the message, let alone the individual fields, in the visualisation tab...
I have heard of people using Logstash to parse the string, but is there no way of doing this simply with Filebeat? If there isn't, what do I do with Logstash to filter out the individual fields in the JSON, instead of having my message as one big JSON string that I cannot interact with?
I get the following output from output.console (note I am putting some information in <> to hide it):
"#timestamp": "2021-03-23T09:37:21.941Z",
"#metadata": {
"beat": "filebeat",
"type": "doc",
"version": "6.8.14",
"truncated": false
},
"message": "{\n\t\"Signal_data\" : \n\t{\n\t\t\"antenna type:\" : \"GPS\",\n\t\t\"frequency type:\" : \"GPS\",\n\t\t\"position x:\" : 0.0,\n\t\t\"position y:\" : 0.0,\n\t\t\"position z:\" : 0.0,\n\t\t\"pseudorange:\" : 20280317.359730639,\n\t\t\"pseudorange_error:\" : 0.0,\n\t\t\"pseudorange_rate:\" : -152.02620448094211,\n\t\t\"svid\" : 18\n\t}\n}\u0000",
"source": <ip address>,
"log": {
"source": {
"address": <ip address>
}
},
"input": {
"type": "udp"
},
"prospector": {
"type": "udp"
},
"beat": {
"name": <ip address>,
"hostname": "ip-<ip address>",
"version": "6.8.14"
},
"host": {
"name": "ip-<ip address>",
"os": {
<ubuntu info>
},
"id": <id>,
"containerized": false,
"architecture": "x86_64"
},
"meta": {
"cloud": {
<cloud info>
}
}
}
In Filebeat, you can leverage the decode_json_fields processor in order to decode a JSON string and add the decoded fields into the root object:
processors:
  - decode_json_fields:
      fields: ["message"]
      process_array: false
      max_depth: 2
      target: ""
      overwrite_keys: true
      add_error_key: false
Credit to Val for this. His answer worked; however, as he suggested, my JSON string had a \u0000 at the end, which stops it being valid JSON and prevented the decode_json_fields processor from working as it should...
Upgrading to version 7.12 of Filebeat (also ensure version 7.12 of Elasticsearch and Kibana, because mismatched versions between them can cause issues) allows us to use the script processor: https://www.elastic.co/guide/en/beats/filebeat/current/processor-script.html.
Credit to Val here again, this script removed the null terminator:
- script:
    lang: javascript
    id: trim
    source: >
      function process(event) {
        event.Put("message", event.Get("message").trim());
      }
After the null terminator was removed, the decode_json_fields processor did its job as Val suggested, and I was able to extract the individual elements of the JSON field, which let the Kibana visualisations see the elements I wanted!
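For reference, here are the two processors combined into one configuration; a sketch of the ordering that worked here, with the script running before decode_json_fields so the null terminator is stripped before decoding (processors run in the order listed):
processors:
  - script:
      lang: javascript
      id: trim
      source: >
        function process(event) {
          event.Put("message", event.Get("message").trim());
        }
  - decode_json_fields:
      fields: ["message"]
      process_array: false
      max_depth: 2
      target: ""
      overwrite_keys: true
      add_error_key: false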

Fiware STH: row data API not exposing metadata

I am using Cygnus with MongoDB and the STH sink to retrieve historical data.
In the current implementation of the Cygnus Mongo sink, attribute metadata is not stored in the database, so I updated Cygnus to be able to store it.
But when I use STH-Comet to retrieve the history, the API apparently does not support retrieving the attribute metadata.
Am I missing some kind of configuration, or does the API simply not support attribute metadata? The response that I am getting from STH-Comet is:
{
  "contextResponses": [
    {
      "contextElement": {
        "attributes": [
          {
            "name": "humidity",
            "values": [
              {
                "recvTime": "2017-03-08T08:06:11.463Z",
                "attrType": "Number",
                "attrValue": "999"
              },
              {
                "recvTime": "2017-03-08T08:10:54.199Z",
                "attrType": "Number",
                "attrValue": "3.06"
              }
            ]
          }
        ],
        "id": "Room1",
        "isPattern": false,
        "type": "Room"
      },
      "statusCode": {
        "code": "200",
        "reasonPhrase": "OK"
      }
    }
  ]
}
In the MongoDB database I have this content:
{ "_id" : ObjectId("58bfbb7c973c5c22d258cffc"), "recvTime" : ISODate("2017-03-08T08:06:11.463Z"), "attrName" : "humidity", "attrType" : "Number", "attrValue" : "999", "attrMetadata" : [ ] }
{ "_id" : ObjectId("58bfbc93973c5c22d258cffd"), "recvTime" : ISODate("2017-03-08T08:10:54.199Z"), "attrName" : "humidity", "attrType" : "Number", "attrValue" : "3.06", "attrMetadata" : [ { "name" : "unit", "type" : "Text", "value" : "voltage" } ] }
In case the API does not support retrieving the attribute metadata, can this feature be added?
Thanks & best regards.
STH and Cygnus are aligned with regard to the information stored in MongoDB, both the raw and the aggregated data. In this sense, because Cygnus originally did not support attribute metadata in NGSIMongoSink (the sink in charge of storing the information in raw format), STH does not support attribute metadata in its raw API either.
As long as you have extended the Cygnus functionality for this purpose, you'll have to extend the STH API as well.
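Until both are extended, the metadata you already store remains reachable directly in MongoDB. A hedged mongo-shell sketch (the collection name here is an assumption following STH's sth_<service-path>_<entity-id>_<entity-type> convention; check your database for the exact name):
// Read the raw samples, including the attrMetadata field Cygnus stored
db.getCollection("sth_/_Room1_Room").find(
  { attrName: "humidity" },
  { recvTime: 1, attrValue: 1, attrMetadata: 1 }
)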

Fiware: No observation attributes in Orion CB when registered/sent via IDAS UltraLight

This question is very similar to Missing attributes on Orion CB Entity when registering device through IDAS, but I found no definitive answer there.
I have been trying to get FIWARE UL2.0 via IDAS to the Orion CB working in the FIWARE Lab environment:
using the latest GitHub
https://github.com/telefonicaid/fiware-figway/tree/master/python-IDAS4
scripts
following the tutorials, in particular
http://www.slideshare.net/FI-WARE/fiware-iotidasintroul20v2
I have a FIWARE Lab account with a token generated. I adapted the config.ini file:
[user]
# Please, configure here your username at FIWARE Cloud and a valid Oauth2.0 TOKEN for your user (you can use get_token.py to obtain a valid TOKEN).
username=MY_USERNAME
token=MY_TOKEN
[contextbroker]
host=130.206.80.40
port=1026
OAuth=no
# Here you need to specify the ContextBroker database you are querying.
# Leave it blank if you want the general database or the IDAS service if you are looking for IoT devices connected by you.
# fiware_service=
fiware_service=bus_auto
fiware-service-path=/
[idas]
host=130.206.80.40
adminport=5371
ul20port=5371
OAuth=no
# Here you need to configure the IDAS service your devices will be sending data to.
# By default the OpenIoT service is provided.
# fiware-service=fiwareiot
fiware-service=bus_auto
fiware-service-path=/
#apikey=4jggokgpepnvsb2uv4s40d59ov
apikey=4jggokgpepnvsb2uv4s40d59ov
[local]
#Choose here your System type. Examples: RaspberryPI, MACOSX, Linux, ...
host_type=MACOSX
# Here please add a unique identifier for you. Suggestion: the 3 lower hexa bytes of your Ethernet MAC. E.g. 79:ed:af
# Also you may use your e-mail address.
host_id=a0:11:00
I used the SENSOR_TEMP template, adding the 'protocol' field (PDI-IoTA-UltraLight, which was the first problem I stumbled upon):
{
  "devices": [
    {
      "device_id": "DEV_ID",
      "entity_name": "ENTITY_ID",
      "entity_type": "thing",
      "protocol": "PDI-IoTA-UltraLight",
      "timezone": "Europe/Amsterdam",
      "attributes": [
        {
          "object_id": "otemp",
          "name": "temperature",
          "type": "int"
        }
      ],
      "static_attributes": [
        {
          "name": "att_name",
          "type": "string",
          "value": "value"
        }
      ]
    }
  ]
}
Now I can register the device OK, like
python RegisterDevice.py SENSOR_TEMP NexusPro Temp-Otterlo
and see it in the device list:
python ListDevices.py
I can send observations like
python SendObservation.py Temp-Otterlo 'otemp|17'
But in the Context Broker I see the entity but never the measurements, e.g.
python GetEntity.py Temp-Otterlo
Gives
* Asking to http://130.206.80.40:1026/ngsi10/queryContext
* Headers: {'Fiware-Service': 'bus_auto', 'content-type': 'application/json', 'accept': 'application/json', 'X-Auth-Token': 'NULL'}
* Sending PAYLOAD:
{
  "entities": [
    {
      "type": "",
      "id": "Temp-Otterlo",
      "isPattern": "false"
    }
  ],
  "attributes": []
}
...
* Status Code: 200
* Response:
{
  "contextResponses" : [
    {
      "contextElement" : {
        "type" : "thing",
        "isPattern" : "false",
        "id" : "Temp-Otterlo",
        "attributes" : [
          {
            "name" : "TimeInstant",
            "type" : "ISO8601",
            "value" : "2015-10-03T14:04:44.663133Z"
          },
          {
            "name" : "att_name",
            "type" : "string",
            "value" : "value",
            "metadatas" : [
              {
                "name" : "TimeInstant",
                "type" : "ISO8601",
                "value" : "2015-10-03T14:04:44.663500Z"
              }
            ]
          }
        ]
      },
      "statusCode" : {
        "code" : "200",
        "reasonPhrase" : "OK"
      }
    }
  ]
}
Strangely, I get a TimeInstant attribute. I tried playing with .ini settings like fiware-service=fiwareiot, but to no avail. I am out of ideas. The documentation in the catalogue for IDAS4 talks about observations being sent to port 8002 and setting the "OpenIoT" service, but that failed as well.
Any help appreciated.
You should run "python SendObservation.py NexusPro 'otemp|17'" instead of "python SendObservation.py Temp-Otterlo 'otemp|17'".
The reason is that you are providing an observation at the southbound, so the DEV_ID should be used there.
The entity does not include an attribute until an observation is received, so it is normal that you are not able to see it. Once you try the command above, it should all work; the full corrected flow is sketched below.
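For clarity, the whole corrected sequence, reusing only the commands from the question (DEV_ID NexusPro at the southbound, ENTITY_ID Temp-Otterlo when querying Orion):
python RegisterDevice.py SENSOR_TEMP NexusPro Temp-Otterlo
python SendObservation.py NexusPro 'otemp|17'
python GetEntity.py Temp-Otterlo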
Cheers,

How to update multiple documents in Solr with JSON?

How can I update multiple documents in Solr 4.5.1 with JSON? I tried this, but it does not work:
POST /solr/mycore/update/json:
{
  "commit": {},
  "add": {
    "overwrite": true,
    "doc": [
      {
        "thumbnail": "/images/404.png",
        "url": "/404.html?1",
        "id": "demo:/404.html?1",
        "channel": "demo",
        "display_name": "One entry",
        "description": "One entry is not enough."
      },
      {
        "thumbnail": "/images/404.png",
        "url": "/404.html?2",
        "id": "demo:/404.html?2",
        "channel": "demo",
        "display_name": "Another entry",
        "description": "Another entry is required."
      }
    ]
  }
}
Solr expects one "add" key in the JSON structure for each document (which might seem weird if you think about the original meaning of a key in an object), since it maps directly to the XML format used when indexing; this way each document can carry its own metadata.
{
  "commit": {},
  "add": {
    "doc": {
      "id": "321321",
      "name": "barfoo"
    }
  },
  "add": {
    "doc": {
      "id": "123123",
      "name": "Foobar"
    }
  }
}
.. works. I think allowing an array as the element referenced by "add" would make more sense, but I haven't dug into the source or found the reasoning behind this.
I understand that from version 4.0 of Solr onwards (at least), this has been fixed. Look at http://wiki.apache.org/solr/UpdateJSON.
In ./exampledocs/books.json there is an example of a JSON file with multiple documents.
[
  {
    "id" : "978-0641723445",
    "cat" : ["book","hardcover"],
    "name" : "The Lightning Thief",
    "author" : "Rick Riordan",
    "series_t" : "Percy Jackson and the Olympians",
    "sequence_i" : 1,
    "genre_s" : "fantasy",
    "inStock" : true,
    "price" : 12.50,
    "pages_i" : 384
  },
  {
    "id" : "978-1423103349",
    "cat" : ["book","paperback"],
    "name" : "The Sea of Monsters",
    "author" : "Rick Riordan",
    "series_t" : "Percy Jackson and the Olympians",
    "sequence_i" : 2,
    "genre_s" : "fantasy",
    "inStock" : true,
    "price" : 6.49,
    "pages_i" : 304
  },
  ...
]
While fiskfisk's answer is JSON that Solr will accept, its repeated "add" keys make it hard to serialize from a data structure in most languages. The array format above is easy.
elachell is correct that the array format will work if you are just adding documents with the default settings. Unfortunately, that won't work if, for instance, you need to add a custom boost to some of the documents or change the overwrite setting. You then have to use the full object structure with an "add" key for each of them, which, as they pointed out, is frustratingly annoying to serialize from most languages, which don't allow the same key more than once in an object:
{
  "commit": {},
  "add": {
    "doc": {
      "id": "321321",
      "name": "barfoo"
    },
    "boost": 2.0
  },
  "add": {
    "doc": {
      "id": "123123",
      "name": "Foobar"
    },
    "boost": 1.5,
    "overwrite": false
  }
}
Update for Solr 8.8 (and maybe lower): the following JSON works for /update/json:
{
  "add": [
    { "id": "123", "field1": "foo" },
    { "id": "124", "field1": "foo" }
  ],
  "delete": ["111", "106"]
}
Another option, if you are on Solr 4.10 or later, is to send a custom JSON structure and tell Solr how to index it (not sure how to add boosts with this method either, but it's a nice option if you already have your data in JSON and don't want to convert it to Solr's format). Here's the Solr documentation on this option:
https://cwiki.apache.org/confluence/display/solr/Uploading+Data+with+Index+Handlers#UploadingDatawithIndexHandlers-TransformingandIndexingCustomJSON
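A rough sketch of that approach in the same style as the question (the /update/json/docs endpoint comes from the linked docs; the core name mycore and the fields just reuse the question's examples, and nested structures would additionally need the split/f mapping parameters described there):
POST /solr/mycore/update/json/docs?commit=true:
[
  { "id": "demo:/404.html?1", "channel": "demo", "display_name": "One entry" },
  { "id": "demo:/404.html?2", "channel": "demo", "display_name": "Another entry" }
]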