Azure Cost Management API does not allow me to select columns

I tried to use the Azure Cost Management - Query Usage API to get details (certain columns) on all costs for a given subscription. The body I use for the request is
{
"type": "Usage",
"timeframe": " BillingMonthToDate ",
"dataset": {
"granularity": "Daily",
"configuration": {
"columns": [
"MeterCategory",
"CostInBillingCurrency",
"ResourceGroup"
]
}
}
}
But the response I get back is this:
{
"id": "xxxx",
"name": "xxxx",
"type": "Microsoft.CostManagement/query",
"location": null,
"sku": null,
"eTag": null,
"properties": {
"nextLink": null,
"columns": [
{
"name": "UsageDate",
"type": "Number"
},
{
"name": "Currency",
"type": "String"
} ],
"rows": [
[
20201101,
"EUR"
],
[
20201102,
"EUR"
],
[
20201103,
"EUR"
],
...
]
}
}
The JSON continues listing all the dates with the currency.
When I use the dataset.aggregation or dataset.grouping clauses in the JSON, I do get costs returned in my JSON, but then I don't get the detailed column information that I want. And of course it is not possible to combine those two clauses with the dataset.columns clause. Does anyone have any idea what I'm doing wrong?

I found a solution without using the dataset.columns clause (which might just be a faulty clause?). By grouping the data according to the columns I want, I can also get the data for those column values:
{
"type": "Usage",
"timeframe": "BillingMonthToDate",
"dataset": {
"granularity": "Daily",
"aggregation": {
"totalCost": {
"name": "PreTaxCost",
"function": "Sum"
}
},
"grouping": [
{
"type": "Dimension",
"name": "SubscriptionName"
},
{
"type": "Dimension",
"name": "ResourceGroupName"
},
{
"type": "Dimension",
"name": "meterSubCategory"
},
{
"type": "Dimension",
"name": "MeterCategory"
}
]
}
}
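For completeness, here is a minimal sketch of how the working grouping query can be posted from Python. The api-version, the DefaultAzureCredential auth, and the placeholder subscription id are illustrative assumptions, not details from the original question:
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"  # placeholder
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    "/providers/Microsoft.CostManagement/query?api-version=2019-11-01"
)

# Acquire an ARM token via whatever credential is available locally.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

body = {
    "type": "Usage",
    "timeframe": "BillingMonthToDate",
    "dataset": {
        "granularity": "Daily",
        "aggregation": {"totalCost": {"name": "PreTaxCost", "function": "Sum"}},
        "grouping": [
            {"type": "Dimension", "name": "SubscriptionName"},
            {"type": "Dimension", "name": "ResourceGroupName"},
            {"type": "Dimension", "name": "MeterSubCategory"},
            {"type": "Dimension", "name": "MeterCategory"},
        ],
    },
}

resp = requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()

# Each row is ordered according to properties.columns in the response.
for row in resp.json()["properties"]["rows"]:
    print(row)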

Related

How to Parse Json Object Element that contains Dynamic Attribute

We have a JSON object with the following structure.
{
"results": {
"timesheets": {
"135288482": {
"id": 135288482,
"user_id": 1242515,
"jobcode_id": 17288283,
"customfields": {
"19142": "Item 1",
"19144": "Item 2"
},
"attached_files": [
50692,
44878
],
"last_modified": "1970-01-01T00:00:00+00:00"
},
"135288514": {
"id": 135288514,
"user_id": 1242509,
"jobcode_id": 18080900,
"customfields": {
"19142": "Item 1",
"19144": "Item 2"
},
"attached_files": [
50692,
44878
],
"last_modified": "1970-01-01T00:00:00+00:00"
}
}
}
}
We need to access the elements that are inside results --> timesheets --> dynamic id.
Example:
{
"id": 135288482,
"user_id": 1242515,
"jobcode_id": 17288283,
"customfields": {
"19142": "Item 1",
"19144": "Item 2"
},
"attached_files": [
50692,
44878
],
"last_modified": "1970-01-01T00:00:00+00:00"
}
The problem is that the key "135288482": { is dynamic. How do we access what is inside it?
We are trying to create a data flow to parse the data. The keys are dynamic, so accessing them via attribute name is not possible.
AFAIK, given your JSON structure and dynamic keys, it might not be possible to get the desired result using Data flow.
I have reproduced the above and was able to get it done using Set variable and ForEach activities, as below.
In your JSON, the keys and the id values are the same. So I used that to get the list of keys first. Then, using that list of keys, I can access the inner JSON objects.
These are my variables in the pipeline:
First, I have a set variable of type String that stores the delimiter "id": (including the quotes and the colon).
Then I use a Lookup activity to get the above JSON file from blob storage.
I store the timesheets object from the lookup as a string in a Set variable using the dynamic content below:
@string(activity('Lookup1').output.value[0].results.timesheets)
I then split that string on the delimiter and store the resulting array in an array variable:
@split(variables('jsonasstring'), variables('ids'))
This will give an array like the one below.
Now, I use a ForEach over this array but skip the first element. In each iteration, I take the first 9 characters of the string with an Append variable activity; that is the key.
If your id values and keys are not the same, you can instead skip the last element of the split array and take the keys from the end of each string, as per your requirement.
This is my dynamic content for the Append variable activity inside the ForEach: @take(item(), 9)
Then I take another ForEach and give it this array of keys. Inside the ForEach you can access the inner JSON with the dynamic content below:
@string(activity('Lookup1').output.value[0].results.timesheets[item()])
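For readers who want to sanity-check this key-extraction logic outside ADF, here is a plain-Python equivalent; the file name timesheets.json is a hypothetical stand-in for the blob the Lookup reads:
import json

# Hypothetical local copy of the JSON that the Lookup activity reads from blob.
with open("timesheets.json") as f:
    payload = json.load(f)

timesheets = payload["results"]["timesheets"]

# The split/take trick in the pipeline only exists to recover these keys;
# in Python the dynamic keys are directly enumerable.
for key in timesheets:
    inner = timesheets[key]  # same as ...timesheets[item()] in the ADF expression
    print(key, inner["user_id"], inner["last_modified"])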
This is my pipeline JSON:
{
"name": "pipeline1",
"properties": {
"activities": [
{
"name": "Lookup1",
"type": "Lookup",
"dependsOn": [
{
"activity": "for ids",
"dependencyConditions": [
"Succeeded"
]
}
],
"policy": {
"timeout": "0.12:00:00",
"retry": 0,
"retryIntervalInSeconds": 30,
"secureOutput": false,
"secureInput": false
},
"userProperties": [],
"typeProperties": {
"source": {
"type": "JsonSource",
"storeSettings": {
"type": "AzureBlobFSReadSettings",
"recursive": true,
"enablePartitionDiscovery": false
},
"formatSettings": {
"type": "JsonReadSettings"
}
},
"dataset": {
"referenceName": "Json1",
"type": "DatasetReference"
},
"firstRowOnly": false
}
},
{
"name": "JSON as STRING",
"type": "SetVariable",
"dependsOn": [
{
"activity": "Lookup1",
"dependencyConditions": [
"Succeeded"
]
}
],
"userProperties": [],
"typeProperties": {
"variableName": "jsonasstring",
"value": {
"value": "#string(activity('Lookup1').output.value[0].results.timesheets)",
"type": "Expression"
}
}
},
{
"name": "for ids",
"type": "SetVariable",
"dependsOn": [],
"userProperties": [],
"typeProperties": {
"variableName": "ids",
"value": {
"value": "#string('\"id\":')",
"type": "Expression"
}
}
},
{
"name": "after split",
"type": "SetVariable",
"dependsOn": [
{
"activity": "JSON as STRING",
"dependencyConditions": [
"Succeeded"
]
}
],
"userProperties": [],
"typeProperties": {
"variableName": "split_array",
"value": {
"value": "#split(variables('jsonasstring'), variables('ids'))",
"type": "Expression"
}
}
},
{
"name": "ForEach to append keys to array",
"type": "ForEach",
"dependsOn": [
{
"activity": "after split",
"dependencyConditions": [
"Succeeded"
]
}
],
"userProperties": [],
"typeProperties": {
"items": {
"value": "#skip(variables('split_array'),1)",
"type": "Expression"
},
"isSequential": true,
"activities": [
{
"name": "Append variable1",
"type": "AppendVariable",
"dependsOn": [],
"userProperties": [],
"typeProperties": {
"variableName": "key_ids_array",
"value": {
"value": "#take(item(), 9)",
"type": "Expression"
}
}
}
]
}
},
{
"name": "ForEach to access inner object",
"type": "ForEach",
"dependsOn": [
{
"activity": "ForEach to append keys to array",
"dependencyConditions": [
"Succeeded"
]
}
],
"userProperties": [],
"typeProperties": {
"items": {
"value": "#variables('key_ids_array')",
"type": "Expression"
},
"isSequential": true,
"activities": [
{
"name": "Each object",
"type": "SetVariable",
"dependsOn": [],
"userProperties": [],
"typeProperties": {
"variableName": "show_res",
"value": {
"value": "#string(activity('Lookup1').output.value[0].results.timesheets[item()])",
"type": "Expression"
}
}
}
]
}
}
],
"variables": {
"jsonasstring": {
"type": "String"
},
"ids": {
"type": "String"
},
"split_array": {
"type": "Array"
},
"key_ids_array": {
"type": "Array"
},
"show_res": {
"type": "String"
}
},
"annotations": []
}
}
Result:
Use @json() in the dynamic content to convert the string below to an object.
If you want to store these JSON objects in a file, use a ForEach, and inside the ForEach use an SQL script to copy each object as a row into a table on each iteration using JSON_VALUE. Then, outside the ForEach, use a Copy activity to copy that SQL table to your destination as per your requirement.
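If the SQL detour is more than you need, the same flattening can also be sketched in plain Python; timesheets.json is again a hypothetical stand-in for the source blob:
import json

with open("timesheets.json") as f:
    timesheets = json.load(f)["results"]["timesheets"]

# Write one JSON object per line (NDJSON), folding the dynamic key into each row.
with open("timesheets.ndjson", "w") as out:
    for key, fields in timesheets.items():
        out.write(json.dumps({"key": key, **fields}) + "\n")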

Integromat - Dynamically render spec of a collection from an RPC

I am trying to dynamically render a spec (specification) of a collection from an RPC, but I can't get it to work. I have attached the code of both the module's mappable parameters and the remote procedure's communication below.
module -> mappable parameters
[
{
"name": "birdId",
"type": "select",
"label": "Bird Name",
"required": true,
"options": {
"store": "rpc://selectbird",
"nested": [
{
"name": "variables",
"type": "collection",
"label": "Bird Variables",
"spec": [
"rpc://birdVariables"
]
}
]
}
}
]
remote procedure -> communication
{
"url": "/bird/get-variables",
"method": "POST",
"body": {
"birdId": "{{parameters.birdId}}"
},
"headers": {
"Authorization": "Apikey {{connection.apikey}}"
},
"response": {
"iterate":{
"container": "{{body.data}}"
},
"output": {
"name": "{{item.name}}",
"label": "{{item.label}}",
"type": "{{item.type}}"
}
}
}
Thanks in advance.
I just tried the following and it worked. According to Integromat's docs, you can use the wrapper directive for the RPC like so:
{
"url": "/bird/get-variables",
"method": "POST",
"body": {
"birdId": "{{parameters.birdId}}"
},
"headers": {
"Authorization": "Apikey {{connection.apikey}}"
},
"response": {
"iterate":"{{body.data}}",
"output": {
"name": "{{item.name}}",
"label": "{{item.label}}",
"type": "{{item.type}}"
},
"wrapper": [{
"name": "variables",
"type": "collection",
"label": "Bird Variables",
"spec": "{{output}}"
}]
}
}
Your mappable parameters would then look like:
[
{
"name": "birdId",
"type": "select",
"label": "Bird Name",
"required": true,
"options": {
"store": "rpc://selectbird",
"nested": "rpc://birdVariables"
}
}
]
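To make the wrapper directive's behaviour concrete, here is a small Python model of what the RPC response handling effectively does. The sample body.data items are invented for illustration, and Integromat's internals may of course differ:
# Invented sample of what /bird/get-variables might return in body.data.
body_data = [
    {"name": "wingSpan", "label": "Wing Span", "type": "number"},
    {"name": "color", "label": "Color", "type": "text"},
]

# "iterate" maps each item of body.data through the "output" template...
output = [{"name": i["name"], "label": i["label"], "type": i["type"]} for i in body_data]

# ...and "wrapper" embeds that array as the spec of a single collection field.
wrapped = [{
    "name": "variables",
    "type": "collection",
    "label": "Bird Variables",
    "spec": output,
}]
print(wrapped)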
I'm needing this myself. I'm pulling in custom fields that have different types, but I would like them all to show so the user can update custom fields, or update them when creating a contact. I'm not sure if it's best to have them all show, or to have a select drop-down and then let the user use the map for more than one.
Here is my response from a GET for custom fields. Could you show how my code should look? I got a little confused, as I usually look to add a value in the output. And do you need two separate RPCs in Integromat? I noticed your store and nested values were different.
{
"customFields": [
{
"id": "5sCdYXDx5QBau2m2BxXC",
"name": "Your Experience",
"fieldKey": "contact.your_experience",
"dataType": "LARGE_TEXT",
"position": 0
},
{
"id": "RdrFtK2hIzJLmuwgBtAr",
"name": "Assisted by",
"fieldKey": "contact.assisted_by",
"dataType": "MULTIPLE_OPTIONS",
"position": 0,
"picklistOptions": [
"Tom",
"Jill",
"Rick"
]
},
{
"id": "uyjmfZwo0PCDJKg2uqrt",
"name": "Is contacted",
"fieldKey": "contact.is_contacted",
"dataType": "CHECKBOX",
"position": 0,
"picklistOptions": [
"I would like to be contacted"
]
}
]
}

JSONPath to get a value by filtering on another value

I have the JSON below, and I need to validate the status based on a given id in my automation script. For that I require a JSONPath expression.
[
[
{
"id": 9905130204,
"category": {
"id": 0,
"name": "string"
},
"name": "doggie",
"photoUrls": [
"string"
],
"tags": [
{
"id": 0,
"name": "string"
}
],
"status": "available"
},
{
"id": 9905130203,
"name": "Jeffs Doggie11/6/2019 2:44:53 PM",
"photoUrls": [
"string"
],
"tags": [],
"status": "available"
},
{
"id": 9905130217,
"name": "Goot Doggie",
"photoUrls": [
"https://media.karousell.com/media/photos/products/2017/09/14/doggi_door_stopper_1505372529_5cdd1eba0"
],
"tags": [],
"status": "available"
}
]
]
I want to extract "status": "available" based on "id": 9905130217. No clue how to do that, please help.
You can use the following JSONPath expression to find the status where id = n:
$..[?(@.id == n)].status
So, for your specific case:
$..[?(@.id == 9905130217)].status
Note that this assumes id is unique. The @ refers to the current element, and the recursive descent (..) handles the extra level of array nesting in your payload.
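Since JSONPath filter support varies between implementations, here is an equivalent check in plain Python; pets.json is a hypothetical file holding the array from the question:
import json

with open("pets.json") as f:
    data = json.load(f)

# The payload is an array wrapped in another array, hence data[0]. next()
# raises StopIteration if the id is absent, and the expression relies on
# id being unique, as noted above.
status = next(pet["status"] for pet in data[0] if pet.get("id") == 9905130217)
print(status)  # "available"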

Azure Data Factory Copy Activity

I have been working on this for a couple of days and cannot get past this error. I have two activities in this pipeline. The first activity copies data from an ODBC connection to an Azure database, which is successful. The second activity transfers the data from one Azure table to another Azure table, and keeps failing.
The error message is:
Copy activity met invalid parameters: 'UnknownParameterName', Detailed message: An item with the same key has already been added..
I do not see any invalid parameters or unknown parameter names. I have rewritten this multiple times, both with their add-activity code template and by hand, but I do not receive any errors when deploying, only when it is running. Below is the JSON pipeline code.
Only the second activity receives an error.
Thanks.
Source Data set
{
"name": "AnalyticsDB-SHIPUPS_06shp-01src_AZ-915PM",
"properties": {
"structure": [
{
"name": "UPSD_BOL",
"type": "String"
},
{
"name": "UPSD_ORDN",
"type": "String"
}
],
"published": false,
"type": "AzureSqlTable",
"linkedServiceName": "Source-SQLAzure",
"typeProperties": {},
"availability": {
"frequency": "Day",
"interval": 1,
"offset": "04:15:00"
},
"external": true,
"policy": {}
}
}
Destination Data set
{
"name": "AnalyticsDB-SHIPUPS_06shp-02dst_AZ-915PM",
"properties": {
"structure": [
{
"name": "SHIP_SYS_TRACK_NUM",
"type": "String"
},
{
"name": "SHIP_TRACK_NUM",
"type": "String"
}
],
"published": false,
"type": "AzureSqlTable",
"linkedServiceName": "Destination-Azure-AnalyticsDB",
"typeProperties": {
"tableName": "[olcm].[SHIP_Tracking]"
},
"availability": {
"frequency": "Day",
"interval": 1,
"offset": "04:15:00"
},
"external": false,
"policy": {}
}
}
Pipeline
{
"name": "SHIPUPS_FC_COPY-915PM",
"properties": {
"description": "copy shipments ",
"activities": [
{
"type": "Copy",
"typeProperties": {
"source": {
"type": "RelationalSource",
"query": "$$Text.Format('SELECT COMPANY, UPSD_ORDN, UPSD_BOL FROM \"orupsd - UPS interface Dtl\" WHERE COMPANY = \\'01\\'', WindowStart, WindowEnd)"
},
"sink": {
"type": "SqlSink",
"sqlWriterCleanupScript": "$$Text.Format('delete imp_fc.SHIP_UPS_IntDtl_Tracking', WindowStart, WindowEnd)",
"writeBatchSize": 0,
"writeBatchTimeout": "00:00:00"
},
"translator": {
"type": "TabularTranslator",
"columnMappings": "COMPANY:COMPANY, UPSD_ORDN:UPSD_ORDN, UPSD_BOL:UPSD_BOL"
}
},
"inputs": [
{
"name": "AnalyticsDB-SHIPUPS_03shp-01src_FC-915PM"
}
],
"outputs": [
{
"name": "AnalyticsDB-SHIPUPS_03shp-02dst_AZ-915PM"
}
],
"policy": {
"timeout": "1.00:00:00",
"concurrency": 1,
"executionPriorityOrder": "NewestFirst",
"style": "StartOfInterval",
"retry": 3,
"longRetry": 0,
"longRetryInterval": "00:00:00"
},
"scheduler": {
"frequency": "Day",
"interval": 1,
"offset": "04:15:00"
},
"name": "915PM-SHIPUPS-fc-copy->[imp_fc]_[SHIP_UPS_IntDtl_Tracking]"
},
{
"type": "Copy",
"typeProperties": {
"source": {
"type": "SqlSource",
"sqlReaderQuery": "$$Text.Format('select distinct ups.UPSD_BOL, ups.UPSD_BOL from imp_fc.SHIP_UPS_IntDtl_Tracking ups LEFT JOIN olcm.SHIP_Tracking st ON ups.UPSD_BOL = st.SHIP_SYS_TRACK_NUM WHERE st.SHIP_SYS_TRACK_NUM IS NULL', WindowStart, WindowEnd)"
},
"sink": {
"type": "SqlSink",
"writeBatchSize": 0,
"writeBatchTimeout": "00:00:00"
},
"translator": {
"type": "TabularTranslator",
"columnMappings": "UPSD_BOL:SHIP_SYS_TRACK_NUM, UPSD_BOL:SHIP_TRACK_NUM"
}
},
"inputs": [
{
"name": "AnalyticsDB-SHIPUPS_06shp-01src_AZ-915PM"
}
],
"outputs": [
{
"name": "AnalyticsDB-SHIPUPS_06shp-02dst_AZ-915PM"
}
],
"policy": {
"timeout": "1.00:00:00",
"concurrency": 1,
"executionPriorityOrder": "NewestFirst",
"style": "StartOfInterval",
"retry": 3,
"longRetryInterval": "00:00:00"
},
"scheduler": {
"frequency": "Day",
"interval": 1,
"offset": "04:15:00"
},
"name": "915PM-SHIPUPS-AZ-update->[olcm]_[SHIP_Tracking]"
}
],
"start": "2017-08-22T03:00:00Z",
"end": "2099-12-31T08:00:00Z",
"isPaused": false,
"hubName": "adf-tm-prod-01_hub",
"pipelineMode": "Scheduled"
}
}
Have you seen this link?
They get the same error message and suggest using AzureTableSink instead of SqlSink:
"sink": {
"type": "AzureTableSink",
"writeBatchSize": 0,
"writeBatchTimeout": "00:00:00"
}
It would make sense for you too, since your second copy activity is Azure to Azure.
It could be a red herring, but I'm pretty sure "tableName" is a required entry in the typeProperties for a SqlSource. Yours is missing this for the input dataset. I appreciate you have a join in the sqlReaderQuery, so it's probably best to put a dummy (but real) table name in there.
Btw, it's not clear why you are using $$Text.Format and WindowStart/WindowEnd in your queries if you're not transposing these values into the query; you could just put the query between double quotes.

Schema to load JSON data to Google BigQuery

I have a question for the project that we are doing...
I tried to load this JSON into Google BigQuery and was not able to get the votes object fields from the JSON input. I tried both the "record" and the "string" types in the schema.
{
"votes": {
"funny": 10,
"useful": 10,
"cool": 10
},
"user_id": "OlMjqqzWZUv2-62CSqKq_A",
"review_id": "LMy8UOKOeh0b9qrz-s1fQA",
"stars": 4,
"date": "2008-07-02",
"text": "This is what this 4-star bar is all about.",
"type": "review",
"business_id": "81IjU5L-t-QQwsE38C63hQ"
}
Also, I am not able to get the tables populated from the JSON below for the categories and neighborhoods arrays. What should my schema be for these inputs? Unfortunately the docs didn't help much in this case, or maybe I am not looking in the right place.
{
"business_id": "Iu-oeVzv8ZgP18NIB0UMqg",
"full_address": "3320 S Hill St\nSouth East LA\nLos Angeles, CA 90007",
"schools": [
"University of Southern California"
],
"open": true,
"categories": [
"Medical Centers",
"Health and Medical"
],
"neighborhoods": [
"South East LA"
]
}
I am able to get the regular fields, but that's about it... Any help is appreciated!
For business it seems you want schools to be a repeated field. Your schema should be:
"schema": {
"fields": [
{
"name": "business_id",
"type": "string"
},
{
"name": "full_address",
"type": "string"
},
{
"name": "schools",
"type": "string",
"mode": "repeated"
},
{
"name": "open",
"type": "boolean"
}
]
}
For votes it seems you want record. Your schema should be:
"schema": {
"fields": [
{
"name": "name",
"type": "string"
},
{
"name": "votes",
"type": "record",
"fields": [
{
"name": "funny",
"type": "integer",
},
{
"name": "useful",
"type": "integer"
},
{
"name": "cool",
"type": "integer"
}
]
}
]
}
Source
I was also stuck on this problem; the issue I faced was that one has to remember to flag the mode as repeated for the records (source).
Also, please note that these cannot have a null value (source).
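Pulling the schemas together, here is a hedged sketch using the google-cloud-bigquery Python client to load the review JSON with votes as a RECORD; the table id is a placeholder:
from google.cloud import bigquery

client = bigquery.Client()

# Schema for the review JSON: "votes" becomes a RECORD with integer subfields.
schema = [
    bigquery.SchemaField("review_id", "STRING"),
    bigquery.SchemaField("user_id", "STRING"),
    bigquery.SchemaField("business_id", "STRING"),
    bigquery.SchemaField("stars", "INTEGER"),
    bigquery.SchemaField("date", "DATE"),
    bigquery.SchemaField("text", "STRING"),
    bigquery.SchemaField("type", "STRING"),
    bigquery.SchemaField(
        "votes",
        "RECORD",
        fields=[
            bigquery.SchemaField("funny", "INTEGER"),
            bigquery.SchemaField("useful", "INTEGER"),
            bigquery.SchemaField("cool", "INTEGER"),
        ],
    ),
]

row = {
    "review_id": "LMy8UOKOeh0b9qrz-s1fQA",
    "user_id": "OlMjqqzWZUv2-62CSqKq_A",
    "business_id": "81IjU5L-t-QQwsE38C63hQ",
    "stars": 4,
    "date": "2008-07-02",
    "text": "This is what this 4-star bar is all about.",
    "type": "review",
    "votes": {"funny": 10, "useful": 10, "cool": 10},
}

# "project.dataset.reviews" is a placeholder; replace with your table id.
job = client.load_table_from_json(
    [row],
    "project.dataset.reviews",
    job_config=bigquery.LoadJobConfig(schema=schema),
)
job.result()  # block until the load job completes
For the business JSON, categories and neighborhoods would follow the same pattern as schools: STRING fields with mode="REPEATED".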