jq - parse structure and save values in bash variable - json

I have a JSON input as follows:
{
  "unique": 1924,
  "coordinates": [
    {
      "time": "2015-01-25T00:00:01.683",
      "xyz": [
        {
          "z": 4,
          "y": 2,
          "x": 1,
          "id": 99,
          "inner_arr": [
            { "a": 1, "b": 2 },
            { "a": 3, "b": 4 }
          ]
        },
        {
          "z": 9,
          "y": 9,
          "x": 8,
          "id": 100,
          "inner_arr": [
            { "a": 1, "b": 2 },
            { "a": 3, "b": 4 }
          ]
        },
        {
          "z": 9,
          "y": 6,
          "x": 10,
          "id": 101,
          "inner_arr": [
            { "a": 1, "b": 2 },
            { "a": 3, "b": 4 }
          ]
        }
      ]
    },
    {
      "time": "2015-01-25T00:00:02.790",
      "xyz": [
        {
          "z": 0,
          "y": 3,
          "x": 7,
          "id": 99,
          "inner_arr": [
            { "a": 1, "b": 2 },
            { "a": 3, "b": 4 }
          ]
        },
        {
          "z": 4,
          "y": 6,
          "x": 2,
          "id": 100,
          "inner_arr": [
            { "a": 1, "b": 2 },
            { "a": 3, "b": 4 }
          ]
        },
        {
          "z": 2,
          "y": 9,
          "x": 51,
          "id": 101,
          "inner_arr": [
            { "a": 1, "b": 2 },
            { "a": 3, "b": 4 }
          ]
        }
      ]
    }
  ]
}
I want to parse this input with jq and store values in bash arrays:
#!/bin/bash
z=()
x=()
y=()
id=()
a=()
b=()
jq --raw-output '.coordinates[] | .xyz[] | (.z) as $z, (.y) as $y, (.x) as $x, (.id) as $id, .inner_arr[].a as $a, .inner_arr[].b as $b | $z, $y, $x, $id, $a, $b' <<< "$input"
echo -e "${z}"
Expected output for the above echo command:
4
9
9
0
4
2
echo -e "${a}"
Expected output for the above echo command:
1
3
1
3
1
3
1
3
1
3
1
3
How can I do this with a single jq call that loops through all the arrays in a cascading fashion?
I want to save CPU by calling jq just once to extract all the scalar and array values.

You cannot set shell variables directly from jq (cf. the manual). What you can do is generate a series of bash declarations for the declare builtin. I suggest storing the declarations in an intermediate bash array (with mapfile) that is processed directly by declare, so that you stay away from hazardous commands like eval.
mapfile -t < <(
jq --raw-output '
def m(exp): first(.[0] | path(exp)[-1]) + "=(" + (map(exp) | @sh) + ")";
[ .coordinates[].xyz[] ]
| m(.x), m(.y), m(.z), m(.id), m(.inner_arr[].a), m(.inner_arr[].b)
' input
)
declare -a "${MAPFILE[@]}"
The jq script packs all the xyz objects into a single array and filters it with the m function for each field, given as a path expression. The function returns a string formatted as field=(val1 val2 ... valN), where the field name is the last component of the path expression, i.e. x for .x and a for .inner_arr[].a (extracted from the first item of the array).
Then you can check the shell variables with declare -p var or ${var[@]}. Note that ${var} refers to the first element only.
declare -p MAPFILE
declare -p z
echo a: "${a[@]}" / size = ${#a[@]}
declare -a MAPFILE=([0]="x=(1 8 10 7 2 51)" [1]="y=(2 9 6 3 6 9)" [2]="z=(4 9 9 0 4 2)" [3]="id=(99 100 101 99 100 101)" [4]="a=(1 3 1 3 1 3 1 3 1 3 1 3)" [5]="b=(2 4 2 4 2 4 2 4 2 4 2 4)")
declare -a z=([0]="4" [1]="9" [2]="9" [3]="0" [4]="4" [5]="2")
a: 1 3 1 3 1 3 1 3 1 3 1 3 / size = 12
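Once declared, the x, y, z, and id arrays line up index by index, so a plain bash loop can consume them together (a and b have twice as many entries, so iterate those separately). A minimal sketch, purely illustrative:
# walk the parallel arrays produced by declare above
for i in "${!id[@]}"; do
  printf 'id=%s x=%s y=%s z=%s\n' "${id[i]}" "${x[i]}" "${y[i]}" "${z[i]}"
done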

Related

how to extract and modify inner array objects with parent object data in jq

We are trying to format a JSON similar to this:
[
  {
    "id": 1,
    "type": "A",
    "changes": [
      {"id": 12},
      {"id": 13}
    ],
    "wanted_key": "good",
    "unwanted_key": "aaa"
  },
  {
    "id": 2,
    "type": "A",
    "unwanted_key": "aaa"
  },
  {
    "id": 3,
    "type": "B",
    "changes": [
      {"id": 31},
      {"id": 32}
    ],
    "unwanted_key": "aaa",
    "unwanted_key2": "aaa"
  },
  {
    "id": 4,
    "type": "B",
    "unwanted_key3": "aaa"
  },
  null,
  null,
  {"id": 7}
]
into something like this:
[
  {
    "id": 1,
    "type": "A",
    "wanted_key": true   # every record must have this key/value
  },
  {
    "id": 12,            # note: this was in the "changes" property of record id 1
    "type": "A",         # type should be the same type as record id 1
    "wanted_key": true
  },
  {
    "id": 13,            # note: this was in the "changes" property of record id 1
    "type": "A",         # type should be the same type as record id 1
    "wanted_key": true
  },
  {
    "id": 2,
    "type": "A",
    "wanted_key": true
  },
  {
    "id": 3,
    "type": "B",
    "wanted_key": true
  },
  {
    "id": 31,            # note: this was in the "changes" property of record id 3
    "type": "B",         # type should be the same type as record id 3
    "wanted_key": true
  },
  {
    "id": 32,            # note: this was in the "changes" property of record id 3
    "type": "B",         # type should be the same type as record id 3
    "wanted_key": true
  },
  {
    "id": 4,
    "type": "B",
    "wanted_key": true
  },
  {
    "id": 7,
    "type": "UNKN",      # records without a type should have this type
    "wanted_key": true
  }
]
So far, I've been able to:
remove null records
obtain the keys we need with their default
give records without a type a default type
What we are missing:
from records having a changes key, create new records with the type of their parent record
join all records in a single array
Unfortunately we are not entirely sure how to proceed... Any help would be appreciated.
So far our jq goes like this:
del(..|nulls) | map({id, type: (.type // "UNKN"), wanted_key: (true)}) | del(..|nulls)
Here's our test code:
https://jqplay.org/s/eLAWwP1ha8P
The following should work:
map(select(values))
| map(., .type as $type | (.changes[]? + {$type}))
| map({id, type: (.type // "UNKN"), wanted_key: true})
Only select non-null values
Return the original items followed by their inner changes array (+ outer type)
Extract 3 properties for output
Multiple map calls can usually be combined, so this becomes:
map(
select(values)
| ., (.type as $type | (.changes[]? + {$type}))
| {id, type: (.type // "UNKN"), wanted_key: true}
)
Another option without variables:
map(
select(values)
| ., .changes[]? + {type}
| {id, type: (.type // "UNKN"), wanted_key: true}
)
# or:
map(select(values))
| map(., .changes[]? + {type})
| map({id, type: (.type // "UNKN"), wanted_key: true})
or even with a separate normalization step for the unknown type:
map(select(values))
| map(.type //= "UNKN")
| map(., .changes[]? + {type})
| map({id, type, wanted_key: true})
# condensed to a single line:
map(select(values) | .type //= "UNKN" | ., .changes[]? + {type} | {id, type, wanted_key: true})
Explanation:
Select only non-null values from the array
If type is not set, create the property with value "UNKN"
Produce the original array items, followed by their nested changes elements extended with the parent type
Reshape objects to only contain properties id, type, and wanted_key.
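For completeness, the condensed one-liner can be applied straight to a file; a minimal sketch, assuming the input array is saved as records.json (the file name is an assumption):
jq 'map(select(values) | .type //= "UNKN" | ., .changes[]? + {type} | {id, type, wanted_key: true})' records.json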
Here's one way:
map(
select(values)
| (.type // "UNKN") as $type
| ., .changes[]?
| {id, $type, wanted_key: true}
)
[
{
"id": 1,
"type": "A",
"wanted_key": true
},
{
"id": 12,
"type": "A",
"wanted_key": true
},
{
"id": 13,
"type": "A",
"wanted_key": true
},
{
"id": 2,
"type": "A",
"wanted_key": true
},
{
"id": 3,
"type": "B",
"wanted_key": true
},
{
"id": 31,
"type": "B",
"wanted_key": true
},
{
"id": 32,
"type": "B",
"wanted_key": true
},
{
"id": 4,
"type": "B",
"wanted_key": true
},
{
"id": 7,
"type": "UNKN",
"wanted_key": true
}
]
Demo
Something like the following should work:
map(
select(type == "object") |
( {id}, {id : ( .changes[]? .id )} ) +
{ type: (.type // "UNKN"), wanted_key: true }
)
jq play - demo

Adding more data in array of object in PostgreSQL

I have a cart table with 2 columns (user_num, data).
user_num will have the phone number of the user, and
data will have an array of objects like [{ "id": 1, "quantity": 1 }, { "id": 2, "quantity": 2 }, { "id": 3, "quantity": 3 }], where id is the product id.
user_num | data
----------+--------------------------------------------------------------------------------------
1 | [{ "id": 1, "quantity": 1 }, { "id": 2, "quantity": 2 }, { "id": 3, "quantity": 3 }]
I want to add more product data to the above array of objects in PostgreSQL.
Thanks!
To append the values, use the JSONB concatenation operator ||:
Demo
update test
set data = data || '[{"id": 4, "quantity": 4}, {"id": 5, "quantity": 5}]'
where user_num = 1;
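To verify the append, you can inspect the updated row; a quick check using PostgreSQL's jsonb_array_length (the table name test comes from the demo above, substitute your cart table as needed):
select user_num, jsonb_array_length(data) as item_count, data
from test
where user_num = 1;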

Modify arrays within objects in jq

I have an array of objects, and want to filter the arrays in the b property to only have elements matching the a property of the object.
[
  {
    "a": 3,
    "b": [1, 2, 3]
  },
  {
    "a": 5,
    "b": [3, 5, 4, 3, 5]
  }
]
should produce
[
  {
    "a": 3,
    "b": [3]
  },
  {
    "a": 5,
    "b": [5, 5]
  }
]
Currently, I've arrived at
[.[] | (.a as $a | .b |= [.[] | select(. == $a)])]
That works, but I'm wondering if there's a better (shorter, more readable) way.
I can think of two ways to do this with less code and both are variants of what you have already figured out on your own.
map(.a as $a | .b |= map(select(. == $a)))
del(.[] | .a as $a | .b[] | select(. != $a))
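Either filter can be run as-is from the shell; a quick sketch, assuming the input above is saved as input.json:
jq 'map(.a as $a | .b |= map(select(. == $a)))' input.json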

JQ - Denormalize nested object

I've been trying to convert some JSON to CSV and I have the following problem:
I have the following input JSON:
{"id": 100, "a": [{"t" : 1,"c" : 2 }, {"t": 2, "c" : 3 }] }
{"id": 200, "a": [{"t": 2, "c" : 3 }] }
{"id": 300, "a": [{"t": 1, "c" : 3 }] }
And I expect the following CSV output:
id,t1,t2
100,2,3
200,,3
300,3,
Unfortunately, jq doesn't output anything if one of the selects has no match.
Example:
echo '{ "id": 100, "a": [{"t" : 1,"c" : 2 }, {"t": 2, "c" : 3 }] }' | jq '{t1: (.a[] | select(.t==1)).c , t2: (.a[] | select(.t==2)).c }'
output:
{ "t1": 2, "t2": 3 }
but if one of the selects returns no match, the filter produces no output at all.
Example:
echo '{ "id": 100, "a": [{"t" : 1,"c" : 2 }] }' | jq '{t1: (.a[] | select(.t==1)).c , t2: (.a[] | select(.t==2)).c }'
Expected output:
{ "t1": 2, "t2": null }
Does anyone know how to achieve this with JQ?
EDIT:
Based on a comment made by @peak, I found the solution I was looking for.
jq -r '["id","t1","t2"],[.id, (.a[] | select(.t==1)).c//null, (.a[] | select(.t==2)).c//null ]|#csv'
The alternative operator does exactly what I was looking for.
Alternative Operator
Here's a simple solution that does not assume anything about the ordering of the items in the .a array, and easily generalizes to arbitrarily many .t values:
# Convert an array of {t, c} to a dictionary:
def tod: map({(.t|tostring): .c}) | add;
["id", "t1", "t2"], # header
(inputs
| (.a | tod) as $dict
| [.id, (range(1;3) as $i | $dict[$i|tostring]) ])
| @csv
Command-line options
Use the -n option (because inputs is being used), and the -r option (to produce CSV).
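Putting it together, a hedged invocation (the filter file name tocsv.jq is an assumption; tmp.json holds the three input objects):
jq -nr -f tocsv.jq tmp.json
This should print the CSV from the question, except that @csv quotes the string header fields:
"id","t1","t2"
100,2,3
200,,3
300,3,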
This is an absolute mess, but it works:
$ cat tmp.json
{"id": 100, "a": [{"t" : 1,"c" : 2 }, {"t": 2, "c" : 3 }] }
{"id": 200, "a": [{"t": 2, "c" : 3 }] }
{"id": 300, "a": [{"t": 1, "c" : 3 }] }
$ cat filter.jq
def t(id):
.a |
map({key: "t\(.t)", value: .c}) |
({t1:null, t2:null, id:id} | to_entries) + . | from_entries
;
inputs |
map(.id as $id | t($id)) |
(.[0] | keys) as $hdr |
([$hdr] + map(to_entries |map(.value)))[]|
@csv
$ jq -rn --slurp -f filter.jq tmp.json
"id","t1","t2"
2,3,100
,3,200
3,,300
In short, you produce an object containing the values from your input, then add it to a "default" object to fill in the missing keys.
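The override works because later entries win when from_entries rebuilds the object; the effect is the same as plain object addition, where the right-hand side takes precedence. A tiny stand-alone illustration of that trick:
echo '{"t1": 2}' | jq '{t1: null, t2: null} + .'
{
  "t1": 2,
  "t2": null
}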

Parse JSON Nested SubValues in Powershell to Table

I converted the JSON string to a PowerShell object in v5. The original JSON string is below:
$j = @'
[{
    "id": "1",
    "Members": [
      "A",
      "B",
      "C"
    ]
  }, {
    "id": "2",
    "Members": [
      "A",
      "C"
    ]
  }, {
    "id": "3",
    "Members": [
      "A",
      "D"
    ]
  }]
'@
$json = $j | ConvertFrom-Json
I would like the result set to look like the one below. Eventually I will export it to SQL:
id Members
-- -------
1  A
1  B
1  C
2  A
2  C
3  A
3  D
Try this:
$json | % {
$id = $_.id
$_.members | select @{n='id';e={$id}}, @{n='members';e={$_}}
}
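Since the eventual target is SQL, one common route is to dump that result to a CSV file first; a hedged sketch (the output path is an assumption):
$rows = $json | % {
    $id = $_.id
    $_.members | select @{n='id';e={$id}}, @{n='members';e={$_}}
}
$rows | Export-Csv -Path .\members.csv -NoTypeInformation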