Looking for a way to filter data within an Azure API call - JSON

I am looking for a way to extract data out of an Azure environment. The problem I'm currently having is that when I use my API call I receive about 60 lines of JSON while I only need 4 of those lines. To reduce load, increase efficiency, and remove the need for parsing within the other environment where I need the data, I want to find a way to filter the data in the API call. Currently, my call looks like this:
https://management.azure.com/subscriptions/{subscription}/resourceGroups/{resourcegroup}/providers/Microsoft.Web/sites/{application or resource}/providers/microsoft.insights/metrics?api-version=2021-05-01&metricnames=IoWriteBytesPerSecond,IoReadBytesPerSecond&timeSpan=PT1M
The output looks something like this:
{
  "cost": 0,
  "timespan": "2022-10-11T10:18:00Z/2022-10-11T10:19:00Z",
  "interval": "PT1M",
  "value": [
    {
      "id": "/subscriptions//resourceGroups//providers/Microsoft.Web/sites//providers/Microsoft.Insights/metrics/IoWriteBytesPerSecond",
      "type": "Microsoft.Insights/metrics",
      "name": {
        "value": "IoWriteBytesPerSecond",
        "localizedValue": "IO Write Bytes Per Second"
      },
      "displayDescription": "The rate at which the app process is writing bytes to I/O operations. For WebApps and FunctionApps.",
      "unit": "BytesPerSecond",
      "timeseries": [
        {
          "metadatavalues": [],
          "data": [
            {
              "timeStamp": "2022-10-11T10:18:00Z",
              "total": 288.0
            }
          ]
        }
      ],
      "errorCode": "Success"
    },
    {
      "id": "/subscriptions//resourceGroups//providers/Microsoft.Web/sites//providers/Microsoft.Insights/metrics/IoReadBytesPerSecond",
      "type": "Microsoft.Insights/metrics",
      "name": {
        "value": "IoReadBytesPerSecond",
        "localizedValue": "IO Read Bytes Per Second"
      },
      "displayDescription": "The rate at which the app process is reading bytes from I/O operations. For WebApps and FunctionApps.",
      "unit": "BytesPerSecond",
      "timeseries": [
        {
          "metadatavalues": [],
          "data": [
            {
              "timeStamp": "2022-10-11T10:18:00Z",
              "total": 284.0
            }
          ]
        }
      ],
      "errorCode": "Success"
    }
  ],
  "namespace": "Microsoft.Web/sites",
  "resourceregion": "westeurope"
}
Out of all these lines I only need about 4 objects. Is it possible to use the $filter function within the URL of the API call? If so, can someone point me to a forum, doc, or example where this is used?
Thanks, regards
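
For what it's worth, the $filter parameter on the metrics endpoint appears to target metric dimensions rather than trimming the response shape, so until a server-side option turns up, the reduction is small enough to do client-side. A minimal Node.js (18+) sketch, assuming token holds a valid Azure AD bearer token for management.azure.com and using the same query string as above:

// A minimal sketch, not part of the original question: call the same
// endpoint and keep only the fields of interest client-side.
async function fetchSlimMetrics(token) {
  const url =
    "https://management.azure.com/subscriptions/{subscription}" +
    "/resourceGroups/{resourcegroup}/providers/Microsoft.Web/sites/{application}" +
    "/providers/microsoft.insights/metrics" +
    "?api-version=2021-05-01" +
    "&metricnames=IoWriteBytesPerSecond,IoReadBytesPerSecond&timeSpan=PT1M";

  const res = await fetch(url, { headers: { Authorization: "Bearer " + token } });
  const body = await res.json();

  // Reduce the ~60-line response to one small object per metric:
  // the metric name plus its single { timeStamp, total } data point.
  return body.value.map((metric) => ({
    name: metric.name.value,
    ...metric.timeseries[0]?.data[0],
  }));
}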

Related

Amadeus flights API error: carrier code is a 2 or 3 alphanum except YY and YYY

I am using the following SDK to search for and purchase flights via Amadeus:
https://github.com/autotune/amadeus/pull/1/files
This was a previously abandoned project I have decided to take on and make work. As part of that project I am trying to purchase a ticket in the sandbox environment, and I am getting the following error:
{
  "errors": [
    {
      "code": 477,
      "title": "INVALID FORMAT",
      "detail": "carrier code is a 2 or 3 alphanum except YY and YYY",
      "source": {
        "pointer": "/data/flightOffers[0]/itineraries[1]/segments[0]/operating/carrierCode",
        "example": "AF"
      },
      "status": 400
    }
  ]
}
Here is the JSON data being sent:
{
  "type": "flight-order",
  "travelers": [
    {
      "id": "1",
      "dateOfBirth": "1990-02-15",
      "name": {
        "firstName": "Foo",
        "lastName": "Bar"
      },
      "gender": "MALE",
      "contact": {
        "emailAddress": "foo@bar.com",
        "phones": [
          {
            "deviceType": "MOBILE",
            "countryCallingCode": "33",
            "number": "5555555555"
          }
        ]
      }
    }
  ],
  "ticketingAgreement": {
    "option": "DELAY_TO_CANCEL",
    "delay": "6D"
  },
  "remarks": {},
  "operating": {
    "carrierCode": "UA"
  }
}
Any help appreciated!
The error suggests that the payload being sent is invalid. I'd advise using a tool like curl or Postman to verify you're following the right API documentation before debugging the actual code.
After further reading of your PR and checking the API reference at:
https://developers.amadeus.com/self-service/category/air/api-doc/flight-create-orders/api-reference
I think you need to confirm that the carrier code being passed is available in the segments under:
flightOffers > itineraries > segments
Although the API reference doesn't have operating > carrierCode the way you used it in the data sent, my guess after seeing the API error response you shared is that they are performing a check against the flight offers passed.
I suggest you check the results returned when you call flightOffers and also add them to the payload sent to the sandbox.
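
As a quick sanity check, you can walk the offer the same way the error pointer does before sending the order. A short sketch, assuming flightOffer holds the offer object returned by the flight offers search:

// Mirrors the error pointer
// /data/flightOffers[0]/itineraries[1]/segments[0]/operating/carrierCode:
// list every operating carrier code in the offer so an invalid or missing
// one stands out before booking.
function listOperatingCarriers(flightOffer) {
  const found = [];
  flightOffer.itineraries.forEach((itinerary, i) => {
    itinerary.segments.forEach((segment, j) => {
      found.push({
        path: `itineraries[${i}]/segments[${j}]`,
        operatingCarrierCode: segment.operating?.carrierCode,
      });
    });
  });
  return found;
}

Any entry whose operatingCarrierCode is missing, or is not a 2-3 character alphanumeric code, would be a likely culprit for the 477 INVALID FORMAT error.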

Find pricing information for a GCE instance given machine type, region and commitment type

I use the GCP metadata API (http://metadata.google.internal/computeMetadata/v1/) to get information about the instance that a process is running on, including machine type (e.g. "projects/818238156224/machineTypes/n1-standard-4" -- presumably the important part is the "n1-standard-4"), region, zone, and whether the instance is preemptible.
I would like to be able to retrieve information programmatically about how much GCP is charging (e.g. per hour) for usage of the instance.
I can query the GCP billing API (https://cloudbilling.googleapis.com/v1/services/6F81-5844-456A/skus), but that returns JSON like
{
  "name": "services/6F81-5844-456A/skus/0048-21CE-74C3",
  "skuId": "0048-21CE-74C3",
  "description": "Preemptible N2 Custom Instance Core running in Sao Paulo",
  "category": {
    "serviceDisplayName": "Compute Engine",
    "resourceFamily": "Compute",
    "resourceGroup": "CPU",
    "usageType": "Preemptible"
  },
  "serviceRegions": [
    "southamerica-east1"
  ],
  "pricingInfo": [
    {
      "summary": "",
      "pricingExpression": {
        "usageUnit": "h",
        "usageUnitDescription": "hour",
        "baseUnit": "s",
        "baseUnitDescription": "second",
        "baseUnitConversionFactor": 3600,
        "displayQuantity": 1,
        "tieredRates": [
          {
            "startUsageAmount": 0,
            "unitPrice": {
              "currencyCode": "USD",
              "units": "0",
              "nanos": 11538000
            }
          }
        ]
      },
      "currencyConversionRate": 1,
      "effectiveTime": "2021-05-26T08:47:05.220Z"
    }
  ],
  "serviceProviderName": "Google",
  "geoTaxonomy": {
    "type": "REGIONAL",
    "regions": [
      "southamerica-east1"
    ]
  }
}
And it's very unclear how to retrieve an object in one API given an object from the other.
Do I need to parse the description somehow? Does that even work? Is there a better way?
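
For what it's worth, the SKU objects don't seem to expose the machine type as a structured field, so matching on serviceRegions plus a substring of description is a common workaround. A sketch, assuming an API key and that the description wording ("Preemptible", "N2 Custom Instance Core", and so on) stays stable:

// Page through the Compute Engine SKUs (service id 6F81-5844-456A, as in
// the question) and keep those whose region and description match.
// apiKey is an assumption; the endpoint also accepts OAuth credentials.
async function findSkus(region, descriptionSubstring, apiKey) {
  const base = "https://cloudbilling.googleapis.com/v1/services/6F81-5844-456A/skus";
  const matches = [];
  let pageToken = "";
  do {
    const res = await fetch(`${base}?key=${apiKey}&pageToken=${pageToken}`);
    const body = await res.json();
    for (const sku of body.skus ?? []) {
      if (
        sku.serviceRegions.includes(region) &&
        sku.description.includes(descriptionSubstring)
      ) {
        matches.push(sku);
      }
    }
    pageToken = body.nextPageToken;
  } while (pageToken);
  return matches;
}

// e.g. findSkus("southamerica-east1", "Preemptible N2 Custom Instance Core", key)

Note that predefined machine types such as n1-standard-4 are billed as separate core and RAM SKUs, so you would typically look up a CPU match and a RAM match and combine them, rather than expecting a single SKU per machine type.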

Orion CB output regarding provisioned device

I have noticed that when querying Orion CB while it is working with provisioned devices and the IoT Agent is receiving HTTP and MQTT messages, it always outputs all the values wrapped in quotation marks:
{
  "id": "sensor_data",
  "type": "Sensor",
  "ActiveTime": {
    "type": "Seconds",
    "value": "17703",
    "metadata": {
      "TimeInstant": {
        "type": "ISO8601",
        "value": "2018-07-04T13:32:27.357Z"
      }
    }
  },
  "Distance": {
    "type": "Number",
    "value": "312",
    "metadata": {
      "TimeInstant": {
        "type": "ISO8601",
        "value": "2018-07-04T13:32:27.413Z"
      }
    }
  }
}
However, when working only with entities in Orion CB, it is possible to receive the actual values (as in the example in the manual):
{
  "id": "Room1",
  "pressure": {
    "metadata": {},
    "type": "Integer",
    "value": 720
  },
  "temperature": {
    "metadata": {},
    "type": "Float",
    "value": 23
  },
  "type": "Room"
}
Sometimes I need to receive the actual value from my sensor in order to format it and use it in further applications, but the values come wrapped in quotation marks, which makes this a little difficult.
Is it possible to change this somehow (maybe in device provisioning), or does it really have to be that way for devices?
Thanks in advance!
EDIT 1
This is the way I provisioned the device:
{
  "devices": [
    {
      "device_id": "sensor_data",
      "entity_name": "sensor_data",
      "entity_type": "Sensor",
      "transport": "MQTT",
      "timezone": "Europe/Helsinki",
      "attributes": [
        { "object_id": "act", "name": "ActiveTime", "type": "Seconds" },
        { "object_id": "dst", "name": "Distance", "type": "Number" }
      ]
    }
  ]
}
And this is how the MQTT messages are sent from my sensor (I have set up the topics so the IoT Agent understands them):
/123456789/sensor_data/attrs/act 12
/123456789/sensor_data/attrs/dst 322
123456789 is the API Key I have set here.
This situation typically happens when IoT Agents use NGSIv1 to push data to the Context Broker, given that NGSIv1 always "stringifies" any attribute value. Recently, the ability to use NGSIv2 (which doesn't have this limitation) was introduced in the IoT Agents.
In order to solve your problem you have to:
Use a recent IOTA-UL version (the current one from the master branch will work).
Enable NGSIv2 in the configuration as explained in the documentation. This is done in the config.js file:
config.iota = {
  ...
  contextBroker: {
    ...
    ngsiVersion: 'v2'
  }
  ...
}
or by using the environment variable IOTA_CB_NGSI_VERSION=v2 for the IOTA-UL process.
Enable autocast as explained in the documentation. This is done in the config.js file:
config.iota = {
  ...
  autocast: true,
  ...
}
or by using the environment variable IOTA_AUTOCAST=true for the IOTA-UL process.
Set the right type for each attribute at provisioning time. The documentation provides the right types:
Type "Number" for integer or float numbers
Type "Boolean" for boolean
Type "None" for null
Thus, in your case the provisioning for Distance is OK, but for ActiveTime you should also use Number as the type.
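
Concretely, the provisioning payload from the question would then become (only the ActiveTime type changes):

{
  "devices": [
    {
      "device_id": "sensor_data",
      "entity_name": "sensor_data",
      "entity_type": "Sensor",
      "transport": "MQTT",
      "timezone": "Europe/Helsinki",
      "attributes": [
        { "object_id": "act", "name": "ActiveTime", "type": "Number" },
        { "object_id": "dst", "name": "Distance", "type": "Number" }
      ]
    }
  ]
}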

How to upload multiple documents with multiple JSON files to Cloudant DB via node js?

Currently, I have a requirement to reprocess failure records that sit in a Cloudant DB (say, a "fail" DB). I need to take records from there for a particular day, say 20 records, and place them in a "reprocess" DB. Can you please help me with how to bulk insert 20 failure records that are stored as 20 different JSON files using Node.js?
Sample request:
{
  "docs": [
    {
      "_id": "XXX",
      "_rev": "1-XXX",
      "timestamp": "2018-01-06T14:36:09.834Z",
      "DocType": "CustFail",
      "RequestPayload": {},
      "CustID": "4",
      "Response": "Fail"
    },
    {
      "_id": "XXX",
      "_rev": "1-XXX",
      "timestamp": "2018-01-06T14:36:09.834Z",
      "DocType": "CustFail",
      "RequestPayload": {},
      "CustID": "42",
      "Response": "Fail"
    }
  ]
}
Thanks!!
If you are using the nodejs-cloudant library, you should be able to call bulk(), passing in the array of JSON docs to be inserted.
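
A minimal sketch of that, assuming the 20 records live as individual .json files in a local folder and that the target database already exists (the folder and database names are placeholders):

const fs = require("fs");
const path = require("path");
const Cloudant = require("@cloudant/cloudant");

// CLOUDANT_URL and the folder/database names below are placeholders.
const cloudant = Cloudant({ url: process.env.CLOUDANT_URL });
const reprocessDb = cloudant.db.use("reprocess_db");

// Read every *.json file in the folder; each file holds one failure record.
const folder = "./failed-records";
const docs = fs
  .readdirSync(folder)
  .filter((name) => name.endsWith(".json"))
  .map((name) => JSON.parse(fs.readFileSync(path.join(folder, name), "utf8")))
  // Drop the _rev copied from the fail DB: a revision that doesn't exist
  // in the target database would cause update conflicts on insert.
  .map(({ _rev, ...doc }) => doc);

// bulk() takes { docs: [...] }, the same shape as the sample request above.
reprocessDb.bulk({ docs }, (err, result) => {
  if (err) {
    console.error("Bulk insert failed:", err);
  } else {
    console.log("Inserted", result.length, "documents");
  }
});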

Data Factory: AzureSQL in- and output for pipeline activity type AzureMLBatchExecution

In Azure Data Factory, I'm trying to call an Azure Machine Learning model from a Data Factory pipeline. I want to use an Azure SQL table as input and another Azure SQL table for the output.
First I deployed a Machine Learning (classic) web service. Then I created an Azure Data Factory pipeline using a linked service (type = 'AzureML', using the Request URI and API key of the ML web service) and input and output datasets (type 'AzureSqlTable').
Deploying and provisioning succeed. The pipeline starts as scheduled, but keeps 'Running' without any result. The pipeline activity is not shown in the Monitor & Manage Activity Windows.
On different sites and tutorials, I only find JSON scripts using the activity type 'AzureMLBatchExecution' with blob inputs and outputs. I want to use Azure SQL input and output, but I can't get this working.
Can someone provide a sample JSON-script or tell me what’s possibly wrong with the code below?
Thanks!
{
  "name": "Predictive_ML_Pipeline",
  "properties": {
    "description": "use MyAzureML model",
    "activities": [
      {
        "type": "AzureMLBatchExecution",
        "typeProperties": {},
        "inputs": [
          {
            "name": "AzureSQLDataset_ML_Input"
          }
        ],
        "outputs": [
          {
            "name": "AzureSQLDataset_ML_Output"
          }
        ],
        "policy": {
          "timeout": "02:00:00",
          "concurrency": 3,
          "executionPriorityOrder": "NewestFirst",
          "retry": 1
        },
        "scheduler": {
          "frequency": "Week",
          "interval": 1
        },
        "name": "My_ML_Activity",
        "description": "prediction analysis on ML batch input",
        "linkedServiceName": "AzureMLLinkedService"
      }
    ],
    "start": "2017-04-04T09:00:00Z",
    "end": "2017-04-04T18:00:00Z",
    "isPaused": false,
    "hubName": "myml_hub",
    "pipelineMode": "Scheduled"
  }
}
With a little help from a Microsoft technician, I've got this working. The JSON script mentioned above only changed in the schedule section:
"start": "2017-04-01T08:45:00Z",
"end": "2017-04-09T18:00:00Z",
A pipeline is active only between its start time and end time. Because the scheduler is set to weekly, the pipeline is triggered at the start of the week, so that date should fall within the start and end dates. For more details about scheduling, see: https://learn.microsoft.com/en-us/azure/data-factory/data-factory-scheduling-and-execution
The Azure SQL Input dataset should look like this:
{
  "name": "AzureSQLDataset_ML_Input",
  "properties": {
    "published": false,
    "type": "AzureSqlTable",
    "linkedServiceName": "SRC_SQL_Azure",
    "typeProperties": {
      "tableName": "dbo.Azure_ML_Input"
    },
    "availability": {
      "frequency": "Week",
      "interval": 1
    },
    "external": true,
    "policy": {
      "externalData": {
        "retryInterval": "00:01:00",
        "retryTimeout": "00:10:00",
        "maximumRetry": 3
      }
    }
  }
}
I added the external and policy properties to this dataset (see the script above), and after that it worked.
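
For completeness, the matching output dataset has the same shape minus the external and policy properties, since the pipeline writes it rather than reading it as external input. A sketch based on the input dataset above, assuming the output table sits behind the same linked service (the table name is made up):

{
  "name": "AzureSQLDataset_ML_Output",
  "properties": {
    "published": false,
    "type": "AzureSqlTable",
    "linkedServiceName": "SRC_SQL_Azure",
    "typeProperties": {
      "tableName": "dbo.Azure_ML_Output"
    },
    "availability": {
      "frequency": "Week",
      "interval": 1
    }
  }
}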