Best way to wrangle JSON data to remove field labels - google-apps-script

I am trying to make it easier to use Google Analytics data in Google Sheets.
The output that I receive from the API looks like:
[
{
"dimensionValues": [
{
"value": "id2"
},
{
"value": "(not set)"
},
{
"value": "Android"
}
]
},
{
"dimensionValues": [
{
"value": "id1"
},
{
"value": "stream name"
},
{
"value": "iOS"
}
]
}
]
What's the most efficient method to remove the field names to make a "flatter set of arrays", similar to:
[["id2","(not set)","Android"], ["id1","stream name","iOS"]]
It feels like there should be a quick way to do this!

You can use a double map for this one. Map each element to its dimensionValues, and then map those to their value properties to get the desired output.
let array = [{"dimensionValues": [{"value": "id2"},{"value": "(not set)"},{"value": "Android"}]},{"dimensionValues": [{"value": "id1"},{"value": "stream name"},{"value": "iOS"}]}]
console.log(array.map(x => x.dimensionValues.map(y => y.value)))
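If the goal is to get these rows into a sheet, the flattened 2-D array can be written straight into a range. A minimal Apps Script sketch (the target sheet and starting cell are assumptions, not part of the original answer):
// Hypothetical follow-up: write the flattened rows into the active sheet, starting at A1.
const rows = array.map(x => x.dimensionValues.map(y => y.value));
const sheet = SpreadsheetApp.getActiveSpreadsheet().getActiveSheet();
sheet.getRange(1, 1, rows.length, rows[0].length).setValues(rows);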

Related

JSONata: transpose a dynamically sized object

I have a JSON response from a REST API that looks like:
{
"Data": [
{
"Name": "MeasurementID1",
"Samples": [
{
"Time": "2021-12-31T11:15:00.000Z",
"Value": "3.280642033",
},
{
"Time": "2021-12-31T12:15:00.000Z",
"Value": "0.06151203811",
}
]
},
{
"Name": "MeasurementID2",
"Samples": [
{
"Time": "2021-12-31T11:15:00.000Z",
"Value": "53.91226196",
},
{
"Time": "2021-12-31T12:15:00.000Z",
"Value": "56.34856796",
}
]
}
]
}
I would like to transform this data for plotting in Grafana, where the data is an array of table rows like:
[
{ "Time": "2021-12-31T11:15:00.000Z", "MeasurementID1": "3.280642033", "MeasurementID2": "53.91226196".........."MeasurementIDxx": xxx},
{ "Time": "2021-12-31T12:15:00.000Z", "MeasurementID1": "0.06151203811", "MeasurementID2": "56.34856796".........."MeasurementIDxx": xxx}
]
I've hit a stumbling block in that the number of objects in the "Data" array is dynamic, based on the number requested in the REST API GET request.
I'm stuck and don't have enough knowledge to do this transformation.
If you flatten the structure a bit first, you can use the 'group-by' construct:
Data.Samples.{
"Name": %.Name,
"Time": Time,
"Value": Value
}{Time: $} ~> $each(function($v) {
$merge($v.{"Time": Time, Name: Value})
})
See https://try.jsonata.org/NXMIg7e0R
I was able to solve it by using 2 nested $reduce calls. Check it out here: https://stedi.link/egfbW8g
$reduce(Data, function($dataAcc, $dataItem) {(
$reduce($dataItem.Samples, function($samplesAcc, $sampleItem) {(
$existingItemForTime := $lookup($samplesAcc, $sampleItem.Time);
$patchForTime := {"Time": $sampleItem.Time, $dataItem.Name: $sampleItem.Value };
$merge([$samplesAcc, { $sampleItem.Time: $merge([$existingItemForTime, $patchForTime]) }])
)}, $dataAcc);
)}, {}) ~> $each(function($v) { $v })
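To run either of the expressions above outside the JSONata exerciser, a minimal Node.js sketch using the jsonata npm package would look like this (the package usage is an assumption, not part of the original answers; in recent jsonata versions evaluate() returns a Promise, hence the await):
// Evaluate the group-by expression from the first answer against the REST API response.
const jsonata = require("jsonata");

const expression = jsonata(`
  Data.Samples.{
    "Name": %.Name,
    "Time": Time,
    "Value": Value
  }{Time: $} ~> $each(function($v) {
    $merge($v.{"Time": Time, Name: Value})
  })
`);

async function toGrafanaRows(apiResponse) {
  return await expression.evaluate(apiResponse);
}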

jmespath: select JSON object element based on other (array) element in the object

I have this JSON
{
"srv_config": [{
"name": "db1",
"servers": ["srv1", "srv2"],
"prop": [{"source":"aa"},"destination":"bb"},{"source":"cc"},"destination":"cc"},]
}, {
"name": "db2",
"servers": ["srv2", "srv2"],
"prop": [{"source":"dd"},"destination":"dd"},{"source":"ee"},"destination":"ee"},]
}
]
}
I am trying to build a JMESPath expression that selects the prop element of each object in the main array, based on the existence of a string in the servers element.
To select all props, I can do:
*.props [*]
But how do I add a condition that says "select only if srv1 is in the servers list"?
You can use the contains function in order to filter based on whether an array contains a given value.
Given the query:
*[?contains(servers, `srv1`)].prop | [][]
This gives us:
[
{
"source": "aa",
"destination": "bb"
},
{
"source": "cc",
"destination": "cc"
}
]
Note that I am also using a bit of flattening here (the trailing | [][] flattens the nested projection result into a single list of objects).
All of this runs against a corrected version of your JSON:
{
"srv_config":[
{
"name":"db1",
"servers":[
"srv1",
"srv2"
],
"prop":[
{
"source":"aa",
"destination":"bb"
},
{
"source":"cc",
"destination":"cc"
}
]
},
{
"name":"db2",
"servers":[
"srv2",
"srv2"
],
"prop":[
{
"source":"dd",
"destination":"dd"
},
{
"source":"ee",
"destination":"ee"
}
]
}
]
}
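To apply the same query programmatically, here is a minimal Node.js sketch using the jmespath npm package (an assumption for illustration, not part of the original answer):
// `data` is assumed to hold the corrected document above.
const jmespath = require("jmespath");

const result = jmespath.search(data, "*[?contains(servers, `srv1`)].prop | [][]");
// result: [ { source: "aa", destination: "bb" }, { source: "cc", destination: "cc" } ]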

Insert whole JSON into a field in DynamoDB using AWS Step Functions

I am trying to PutItem into DynamoDB using AWS Step Functions.
I managed to save an item with simple string fields (S), but one of the fields should store the whole JSON payload, so it should be a Map (M).
However, my payload also includes nested maps.
Example JSON:
{
"firstMap": {
"field": "something",
},
"secondMap": {
"nestedMap": {
"field": "something",
},
"anotherNestedMap": [
{
"field": "something",
"oneMoreNestedMap": {
"andOneMore": {
"field": "something",
},
"arrayComesHere": [
{
"andAgainNestedMap": {
"field": "something",
},
"andAgain": [
{
"field": "something",
"alsoNestedArray": [
{
"field": "something"
}
]
}
]
}
]
},
"letItBeFinalOne": [
{
"field": "something"
}
]
}
]
...
What I want to do is just say: hey, Step Function, please insert this whole JSON into the item field, like this:
"Item": {
...
"whole_payload": {
"M.$": "$"
},
} ...
But it fails, because it only accepts one Map to be handled.
So I would need to iterate over all the nested maps directly and mark each of them with 'M'.
Is there a way to make it resolve this by itself?
In TypeScript, for example, I can use aws.DynamoDB.DocumentClient() and just put the whole JSON into the field, and it resolves all the maps by itself.
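(For reference, the DocumentClient behaviour described here looks roughly like the sketch below; the table name, key and payload variable are made up for illustration.)
// Illustration only (aws-sdk v2): the DocumentClient marshals nested maps and lists by itself.
const AWS = require("aws-sdk");
const docClient = new AWS.DynamoDB.DocumentClient();

const payload = { firstMap: { field: "something" } /* ...rest of the nested JSON above... */ };

docClient.put({
  TableName: "my-table",                                  // assumed name
  Item: { requestId: "abc-123", whole_payload: payload }  // nested maps stored as M automatically
}).promise();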
I came across a thread with a similar request to the AWS Step Functions team. A new feature enhancement allows something closer to what you are looking for.
Sample snippet:
...
"Parameters": {
"TableName" : "dynamodb-table",
"Item":{
"requestId" : {
"S.$": "$.requestId"
},
"payload": {
"S.$":"States.JsonToString($)"
}
}
...
AWS Reference
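One consequence of this approach is that the payload is stored as a string (S) rather than a Map, so a consumer has to parse it back. A minimal consumer-side sketch with the DocumentClient (the code itself is an assumption, not part of the answer; the table name and key match the snippet above):
// Read the item back and turn the stringified payload into an object again.
const AWS = require("aws-sdk");
const docClient = new AWS.DynamoDB.DocumentClient();

async function readPayload(requestId) {
  const res = await docClient.get({ TableName: "dynamodb-table", Key: { requestId } }).promise();
  return JSON.parse(res.Item.payload);
}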

In Logic Apps, parsing a JSON array throws an error for a single object but works fine for multiple objects

While parsing JSON in an Azure Logic App, my array can contain either a single object or multiple values/objects (Box, as shown in the example below).
Both types of input are valid, but when only a single object comes in, it throws the error "Invalid type. Expected Array but got Object".
Input 1 (throws an error):
{
"MyBoxCollection":
{
"Box":{
"BoxName": "Box 1"
}
}
}
Input 2 (works fine):
{
"MyBoxCollection":
[
{
"Box":{
"BoxName": "Box 1"
},
"Box":{
"BoxName": "Box 2"
}
}]
}
JSON Schema:
"MyBoxCollection": {
"type": "object",
"properties": {
"box": {
"type": "array",
items": {
"type": "object",
"properties": {
"BoxName": {
"type": "string"
},
......
.....
..
}
Error details:
[
{
"message": "Invalid type. Expected Array but got Object .",
"lineNumber": 0,
"linePosition": 0,
"path": "Order.MyBoxCollection.Box",
"schemaId": "#/properties/Root/properties/MyBoxCollection/properties/Box",
"errorType": "type",
"childErrors": []
}
]
I used to use the trick of injecting a couple of dummy rows in the resultset as suggested by the other posts, but I recently found a better way. Kudos to Thomas Prokov for providing the inspiration in his NETWORG blog post.
The Parse JSON schema accepts multiple choices for type, so simply replace
"type": "array"
with
"type": ["array","object"]
and your parse step will happily parse either an array or a single value (or no value at all).
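Outside of Logic Apps you can check the same idea with any JSON Schema validator. A minimal Node.js sketch using the Ajv package (Ajv is an assumption used purely for illustration; its v8 release needs allowUnionTypes when type is an array):
// A schema whose type is ["array", "object"] validates both input shapes.
const Ajv = require("ajv");
const ajv = new Ajv({ allowUnionTypes: true });

const validate = ajv.compile({ type: ["array", "object"] });

console.log(validate({ Box: { BoxName: "Box 1" } }));   // true  (single object)
console.log(validate([{ Box: { BoxName: "Box 1" } }])); // true  (array)
console.log(validate("neither"));                       // false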
You may then need to identify which scenario you're in: 0, 1 or multiple records in the resultset? I'm pasting below how you can create a variable (ResultsetSize) which takes one of 3 values (rs_0, rs_1 or rs_n) for your switch:
"Initialize_ResultsetSize": {
"inputs": {
"variables": [
{
"name": "ResultsetSize",
"type": "string",
"value": "rs_n"
}
]
},
"runAfter": {
"<replace_with_name_of_previous_action>": [
"Succeeded"
]
},
"type": "InitializeVariable"
},
"Check_if_resultset_is_0_or_1_records": {
"actions": {
"Set_ResultsetSize_to_0": {
"inputs": {
"name": "ResultsetSize",
"value": "rs_0"
},
"runAfter": {},
"type": "SetVariable"
}
},
"else": {
"actions": {
"Set_ResultsetSize_to_1": {
"inputs": {
"name": "ResultsetSize",
"value": "rs_1"
},
"runAfter": {},
"type": "SetVariable"
}
}
},
"expression": {
"and": [
{
"equals": [
"#string(body('<replace_with_name_of_Parse_JSON_action>')?['<replace_with_name_of_root_element>']?['<replace_with_name_of_list_container_element>']?['<replace_with_name_of_item_element>']?['<replace_with_non_null_element_or_attribute>'])",
""
]
}
]
},
"runAfter": {
"Initialize_ResultsetSize": [
"Succeeded"
]
},
"type": "If"
},
"Process_resultset_depending_on_ResultsetSize": {
"cases": {
"Case_no_record": {
"actions": {
},
"case": "rs_0"
},
"Case_one_record_only": {
"actions": {
},
"case": "rs_1"
}
},
"default": {
"actions": {
}
},
"expression": "#variables('ResultsetSize')",
"runAfter": {
"Check_if_resultset_is_0_or_1_records": [
"Succeeded",
"Failed",
"Skipped",
"TimedOut"
]
},
"type": "Switch"
}
For this problem, I found another Stack Overflow post describing a similar issue. When there is only one "Box", it is rendered as a {key/value pair} rather than an [array] when the data is converted to JSON. I think this is by design, so one option is simply to add an extra "Box" record at the source of your XML data, such as:
<Box>specific_test</Box>
and then handle (skip) the "specific_test" entry in the next steps.
Another workaround for your reference:
If your JSON data contains only one array, we can use that fact to make a decision: check whether the JSON string contains the "[" character. If it does, indexOf returns the index of the "[" character; if it does not, it returns -1.
The expression is shown below:
indexOf('{"MyBoxCollection":{"Box":[aaa,bbb]}}', '[')
When the string does not contain "[", the expression returns -1.
Then we can add an "If" condition: if the result is >0, do "Parse JSON" with one of the schemas; if it is -1, do "Parse JSON" with the other schema.
Hope this helps with your problem.
We faced a similar issue. The only solution we found was to manipulate the XML before conversion: we updated the XML nodes that need to be an array even when they contain a single element. We used an Azure Function to update the required XML attributes and then returned the XML for conversion in Logic Apps. Hope this helps someone.

Build a JSON document from a SQLite table

I've been looking at the JSON1 extension for SQLite databases as a potential tool for composing a JSON document from a table-valued result.
I have a result set coming out of my SQLite database that resembles something like this:
CLASS|INSTANCE|PROPERTY|FROM|TO
ABC|12345|COLOR|RED|
ABC|12345|COLOR|GREEN|
ABC|12345|WEIGHT|1|10
ABC|56789|COLOR|BLUE|
ABC|56789|HEIGHT|4.5|6.2
DEF|2345|NAME|YOMOMMA|
I've been trying to make a JSON document based on this data that looks like:
{
"ABC": {
"12345": {
"COLOR": [
{ "from": "RED", "to": "" },
{ "from": "GREEN", "to": ""}
],
"WEIGHT": [
{ "from": "1", "to": "10" }
]
},
"56789": {
"COLOR": [
{ "from": "BLUE", "to": "" }
],
"HEIGHT": [
{ "from": "4.5", "to": "6.2" }
]
}
},
"DEF": {
"2345": {
"NAME": [
{ "from": "YOMOMMA", "to": "" }
]
}
}
}
I've been trying to get the JSON1 functions to help me out with this, but it seems they don't really support a multilayered JSON format (or at least, not that I've been able to see). If I wrap my result set in a group by query, I can get the properties to nicely turn into a JSON array, but when I try to wrap that into the next level, all the JSON gets escaped (would have been nice if there was a json_raw() function, but I'd imagine implementing it would be challenging).
Can this be done (relatively) easily using SQLite/JSON1? Is there a different/better way to create my document than using JSON1, or am I just going to have to write code for this?
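If it does come down to writing code, the nesting itself is a small fold over the rows. A minimal Node.js sketch (not a JSON1 solution, just the "write code for this" fallback; it assumes the rows above have already been fetched into objects with CLASS, INSTANCE, PROPERTY, FROM and TO fields):
// Build { CLASS: { INSTANCE: { PROPERTY: [ { from, to }, ... ] } } } from the flat rows.
function nest(rows) {
  const doc = {};
  for (const r of rows) {
    const byClass = doc[r.CLASS] ??= {};
    const byInstance = byClass[r.INSTANCE] ??= {};
    (byInstance[r.PROPERTY] ??= []).push({ from: r.FROM ?? "", to: r.TO ?? "" });
  }
  return doc;
}
// usage: JSON.stringify(nest(rows), null, 2) produces the document shown above.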