What is a useful Azure IoT Hub JSON message structure for consumption in Time Series Insights?

The title sounds quite comprehensive, but my baseline question is quite simple, I guess.
Context
In Azure, I have an IoT hub, which I am sending messages to. I use a modified version of one of the samples from the Azure IoT SDK for Python.
Sending works fine. However, instead of a string, I send a JSON structure.
When I watch the events flowing into the IoT hub, using the Cloud shell, it looks like this:
PS /home/marcel> az iot hub monitor-events --hub-name weathertestiothub
This extension 'azure-cli-iot-ext' is deprecated and scheduled for removal. Please remove and add 'azure-iot' instead.
Starting event monitor, use ctrl-c to stop...
{
  "event": {
    "origin": "raspberrypi-zero-wh",
    "payload": "{ \"timestamp\": \"1608643863720\", \"locationDescription\": \"Attic\", \"temperature\": \"21.941\", \"relhumidity\": \"71.602\" }"
  }
}
Issue
The data seems fine, except the payload looks strange here. BUT, the payload is literally what I send from the device, using the SDK sample.
Is this the correct way to do it? In the end, I have a very hard time actually getting the data into the Time Series Insights model. So I guess my structure is to blame.
Question
What is a recommended JSON data structure to send to the IoT hub for later use?

You should add the following two lines to your message in your Python SDK sample:
msg.content_encoding = "utf-8"
msg.content_type = "application/json"
This should resolve your formatting concern.
We've also updated our samples to reflect this: https://github.com/Azure/azure-iot-sdk-python/blob/master/azure-iot-device/samples/sync-samples/send_message.py
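
For context, here is a minimal sketch of what the complete send path might look like with those two lines in place. The connection string is a placeholder, and the payload fields are taken from the question; this is an illustration, not the exact sample code:

import json
from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder: use your device's real connection string
client = IoTHubDeviceClient.create_from_connection_string("<device connection string>")

payload = {
    "timestamp": "1608643863720",
    "locationDescription": "Attic",
    "temperature": 21.941,
    "relhumidity": 71.602,
}
msg = Message(json.dumps(payload))
msg.content_encoding = "utf-8"          # tells IoT Hub how the body is encoded
msg.content_type = "application/json"   # lets downstream consumers parse the body as JSON
client.send_message(msg)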

I ended up using the tip by @elhorton, but it was not the key change. Nonetheless, the formatting in the Azure Cloud Shell monitor now looks much better:
"event": {
"origin": "raspberrypi-zero-wh",
"payload": {
"temperature": 21.543947753906245,
"humidity": 69.22964477539062,
"locationDescription": "Attic"
}
}
The key was:
Include the message source time in ISO format:
from datetime import datetime, timezone
timestampIso = datetime.now(timezone.utc).isoformat()  # UTC, to match the property name below
message.custom_properties["iothub-creation-time-utc"] = timestampIso
Using the locationDescription as the Time Series ID property; see https://learn.microsoft.com/en-us/azure/time-series-insights/how-to-select-tsid. (Maybe I could also have used iothub-connection-device-id, but I did not specifically test that alone.) A combined sketch follows below.
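
Putting both changes together, a rough sketch of the message construction (field values are examples, and client is the device client from the SDK sample):

import json
from datetime import datetime, timezone
from azure.iot.device import Message

payload = {
    "locationDescription": "Attic",  # doubles as the Time Series ID property
    "temperature": 21.5,
    "humidity": 69.2,
}
message = Message(json.dumps(payload))
message.content_encoding = "utf-8"
message.content_type = "application/json"
# Event source time in ISO 8601, picked up by Time Series Insights
message.custom_properties["iothub-creation-time-utc"] = datetime.now(timezone.utc).isoformat()
client.send_message(message)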

I guess using "iothub-connection-device-id" would make "raspberrypi-zero-wh" the name of the time series instance. I agree with your choice of "locationDescription" as the TSID; Attic then becomes the time series instance name, and temperature and humidity will be your variables.

Related

key-value of JSON object not stored in Azure App Config as expected when reading from App Config

I'm developing an Azure Function which has to consume JSON as input and then trigger a hybrid CI/CD pipeline split between on-prem and Azure DevOps. To split configuration from code I intend to use an Azure App Configuration store to retrieve configuration settings that the Function will use to trigger the correct pipeline depending on JSON input. I'm completely new to App Config but have tried to investigate how to properly use it. However, I have stumbled into a perplexing issue and can't find an explanation for it. I apologize if I have missed something obvious out there.
For the purpose of this question I have abstracted away any business-related terminology.
Imagine I have a JSON object stored in a file TestStructure.json that looks like this:
{
  "TestStructure": {
    "Repository1": {
      "RepositoryName": "Repository1",
      "RepositoryUrl": "https://url.repository1.com/"
    },
    "Repository2": {
      "RepositoryName": "Repository2",
      "RepositoryUrl": "https://url.repository2.com/"
    },
    "Repository3": {
      "RepositoryName": "Repository3",
      "RepositoryUrl": "https://url.repository3.com/"
    }
  }
}
I store this in my App Config using the Azure CLI with the following command:
az appconfig kv import -n <myAppConfigName> -s file --format json --path "C:\workspace\TestStructure.json" --content-type "application/json" --separator . --depth 2
The command yields the following key-value pairings:
---------------- Key Values Preview ----------------
Adding:
{"key": "TestStructure.Repository1", "value": "{\"RepositoryName\": \"Repository1\", \"RepositoryUrl\": \"https://url.repository1.com/\"}"}
{"key": "TestStructure.Repository2", "value": "{\"RepositoryName\": \"Repository2\", \"RepositoryUrl\": \"https://url.repository2.com/\"}"}
{"key": "TestStructure.Repository3", "value": "{\"RepositoryName\": \"Repository3\", \"RepositoryUrl\": \"https://url.repository3.com/\"}"}
These keys are what I expect to find in my App Config store.
Going to the App Config in the Azure Portal, I find that the JSON object has been stored correctly, i.e. the keys are TestStructure.Repository1, TestStructure.Repository2, and so forth, all with their corresponding values, just as the Azure CLI command reported.
Now, to the actual problem. When I try to fetch a key from my App Config I get some weird behavior.
I have put together a simple Console App in .NET 6 to test how to read from the App Config:
1 using Microsoft.Extensions.Configuration;
2
3 var config = new ConfigurationBuilder()
4 .AddAzureAppConfiguration("MyConnectionString")
5 .Build();
6
7 var repository = config["TestStructure.Repository1"]; // Returns null
It doesn't make sense to me why line 7 returns null, so I attached a debugger to inspect the ConfigurationRoot object a bit further.
What is going on here? Inspecting the config object reveals that the actual keys to index with are stored as TestStructure.Repository1:RepositoryName (and so on), with the corresponding leaf values, not as TestStructure.Repository1.
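
One way to see the difference between what the store holds and what the .NET configuration provider exposes is to read the raw key-value directly, for example with the azure-appconfiguration Python package. A minimal sketch, with the connection string as a placeholder:

from azure.appconfiguration import AzureAppConfigurationClient

client = AzureAppConfigurationClient.from_connection_string("<connection string>")
# The raw store really does hold the key with the dot separator...
setting = client.get_configuration_setting(key="TestStructure.Repository1")
print(setting.value)         # the JSON string from the CLI import
print(setting.content_type)  # application/json

As the debugger output above suggests, the .NET provider flattens values imported with content type application/json into child keys joined with ':', which is why config["TestStructure.Repository1:RepositoryName"] resolves while config["TestStructure.Repository1"] returns null.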
Thank you for taking your time to read my question. I hope I have expressed clearly what I am trying to achieve and what my problem is.

Subscribe to blocks in Solana (JSON RPC API)

I have been reading old blocks from the Solana JSON RPC API (using Python), but now I am trying to subscribe to block production on the Solana network (to get live updates).
I tried to pull updates through the RPC API using
{"jsonrpc": "2.0", "id": "1", "method": "blockSubscribe", "params": ["all"]}
This doesn't work, with response: 'code': -32601, 'message': 'Method not found'
The documentation at docs.solana.com states that:
This subscription is unstable and only available if the validator was
started with the --rpc-pubsub-enable-block-subscription flag. The
format of this subscription may change in the future
I assume this means I need to run solana-test-validator --rpc-pubsub-enable-block-subscription, but this just returns:
error: Found argument '--rpc-pubsub-enable-block-subscription' which wasn't expected, or isn't valid in this context
Did you mean --rpc-port?
USAGE:
solana-test-validator --ledger <DIR> --rpc-port <PORT>
I can't seem to find any more information on how to subscribe to blocks using the RPC.
Any ideas or help with what I'm doing wrong?
Thanks in advance
You are correct that the validator has to be run with --rpc-pubsub-enable-block-subscription. For mainnet-beta usage, it is recommended to either find a private RPC node with this flag enabled or run your own. Please note, though, that the method is currently marked as unstable.
It looks like --rpc-pubsub-enable-block-subscription is not available on the test validator. You can find the full command list here.
This subscription is unstable and only available if the validator was started with the --rpc-pubsub-enable-block-subscription flag. The format of this subscription may change in the future
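
For reference, a minimal Python sketch of the subscription over the pub/sub WebSocket endpoint, assuming you have access to a node started with the flag enabled (the URL is a placeholder):

import asyncio
import json
import websockets  # pip install websockets

async def main():
    # Placeholder endpoint: the pub/sub WebSocket port is separate from the HTTP RPC port
    async with websockets.connect("ws://your-rpc-node:8900") as ws:
        await ws.send(json.dumps({
            "jsonrpc": "2.0",
            "id": "1",
            "method": "blockSubscribe",
            "params": ["all"],
        }))
        print(await ws.recv())      # subscription confirmation with a subscription id
        while True:
            print(await ws.recv())  # one notification per produced block

asyncio.run(main())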

Postman interceptor request running forever

I am trying to intercept a website - https://www.kroger.com/pl/chicken/05002. In the Chrome network tab, I can see the request, with the details of the products nicely listed as JSON.
I copied the request as a cURL (bash) command and imported it as raw text into Postman. It ran forever without any response. Then I used the intercept feature, and it still runs forever.
When both requests are exactly the same, why does it work in Chrome and not in Postman? What am I missing? Any help is appreciated, thanks in advance.
This is probably happening because they don't want you to do what you are trying to do. Note the "filter.verified" param in the URL.
You may want to try reaching out to them for an external API token - especially if you are creating an app or extension to compare competitive prices with the intention of distributing said app or extension - regardless of if it is for financial compensation or not.
Ethically questionable workaround (which would definitely need to be improved upon; this is simply an example of how you could solve your problem):
GET https://www.kroger.com/search?query=chicken&searchType=default_search&fulfillment=all
// Postman test script: parse the returned HTML with cheerio
const cheerio = require('cheerio');      // available in the Postman sandbox
const $ = cheerio.load(responseBody);    // responseBody is Postman's legacy response-body global
const results = [];
$('div[class="AutoGrid-cell min-w-0"] > div').each(function (i, e) {
    results.push({
        // These child offsets are tied to the page markup at the time of writing
        "Item": e.children[e.children.length - 3].children[0].children[0].children[0]["data"],
        "Price": e.children[e.children.length - 4].children[0].attribs["value"]
    });
});
console.log(results);
If you are unable to obtain an API token from them, this would probably be a legal way to accomplish what you want.

How to use --attach-data-disks when creating a new VM using Azure CLI 2?

I'm trying to create a new VM using existing Managed disks and I keep running into problems because the parameters are not very well documented.
One problem that I haven't figured out is the format of --attach-data-disks
From the name and description of the parameter, this seems to be the way you can attach data disks to the VM you are creating, and I am assuming, since it is --attach-data-disks and not --attach-data-disk, that you can attach multiple disks using this parameter.
What I don't know is what format to use when passing multiple disks. I have tried separating them with commas, but the error I got seemed to indicate that it viewed the comma-delimited list of drives as one long name for a single drive.
Here is an example of what I am trying to do:
az vm create -g test-group -n testvm2 --os-type windows --attach-os-disk testvm1-osdisk-20181213-033052 --attach-data-disks "testvm1-datadisk-000-20181213-033052,testvm1-datadisk-001-20181213-033052,testvm1-datadisk-002-20181213-033052"
Error I'm getting:
Deployment failed. Correlation ID: 9999. {
  "error": {
    "code": "InvalidParameter",
    "message": "Id /subscriptions/99999999/resourceGroups/lbacompensafe/providers/Microsoft.Compute/disks/testvm1-datadisk-000-20181213-033052,testvm1-datadisk-001-20181213-033052,testvm1-datadisk-002-20181213-033052 is not a valid resource reference.",
    "target": "dataDisk.managedDisk.id"
  }
}
I'm running the commands from PowerShell, not Bash, if that makes a difference.
Figured it out. It is in fact a space-delimited list. I didn't try this sooner because I incorrectly assumed it would need some sort of grouping or would look like separate parameters, but just listing them out like
--attach-data-disks disk1 disk2 disk3
will add them in that order. Wish the docs had just said so. Would have saved me a bunch of time.
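
For reference, the command from the question would then become (same resource names as above, commas replaced by spaces):

az vm create -g test-group -n testvm2 --os-type windows --attach-os-disk testvm1-osdisk-20181213-033052 --attach-data-disks testvm1-datadisk-000-20181213-033052 testvm1-datadisk-001-20181213-033052 testvm1-datadisk-002-20181213-033052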

Fuzzy search and fail trying to create an entity in Orion

Does the new version (0.24) of Orion allow fuzzy search (approximate string search) over entity properties?
In addition, I tried to create an entity with an empty string, but although the server returns a 201 code, the entity is not created.
//url to create entity (POST)
http://some.ip:port/v2/entities
//payload:
{
  "type": "Test",
  "id": "Test.1",
  "nombre": ""
}
//response
code 201
//url to list entities (GET)
http://some.ip:port/v2/entities?type=Test
//response
[]
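
The same reproduction as a short Python sketch (host and port are placeholders, as in the question):

import requests

base = "http://some.ip:port"
entity = {"type": "Test", "id": "Test.1", "nombre": ""}
r = requests.post(base + "/v2/entities", json=entity)
print(r.status_code)  # 201, even though the entity is not actually stored
r = requests.get(base + "/v2/entities", params={"type": "Test"})
print(r.json())       # [] on the affected version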
This case doesn't work in Orion 0.24.0 due to a bug that has recently been solved in the develop branch. The fix will be available in the next version after 0.24.0, either 0.24.1 or 0.25.0 (number not yet decided at the moment of writing this), by the end of September 2015.
Regarding fuzzy search, we haven't yet considered that functionality in NGSIv2. If you find it useful/needed, I'd recommend you create a new issue in the Orion repository, explaining the feature request in as much detail as you can, please.