JQ append object after specific object - json

Given the following JSON file:
{
  "ver" : "v2.0",
  "date" : "11 Jul 2022 21:28 WIB",
  "disk" : {},
  "network" : {},
  "bench" : {}
}
I want to append an object after "date", so the resulting file will look like this:
{
  "ver" : "v2.0",
  "date" : "11 Jul 2022 21:28 WIB",
  "hw": {
    "cpu": "intel"
  },
  "disk" : {},
  "network" : {},
  "bench" : {}
}
I found this snippet, jq -S '. |= . + {"hw":{ "cpu" : "intel" }}', which appends the object before the last one. I tried to modify it a bit, but I get jq: error (at main.json:7): Cannot index object with number.
Can anyone provide me with the correct query?

As indicated in they's answer, the ordering of keys doesn't matter to the reading application:
An object is an unsorted collection of keys. The ordering of keys really should not matter to an application reading the JSON document. If you want ordered data, then consider using arrays instead.
But if, for some reason, the ordering needs to be guaranteed, you can achieve it this way:
to_entries |
( map(.key == "date") | index(true) ) as $pos |
.[0:$pos+1] + [{"key":"hw","value":{"cpu":"intel"}}] + .[$pos+1:] |
from_entries
The idea is to convert the object into an array of key/value pairs (to_entries), find the position of the key named "date", and then use slice expressions to insert the required object right after that position, followed by the rest of the pairs (from_entries).
demo - jqplay
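For reference, here is the complete invocation (assuming the input is saved as main.json, the filename that appears in the error message); it produces exactly the desired output shown above:
jq 'to_entries |
  ( map(.key == "date") | index(true) ) as $pos |
  .[0:$pos+1] + [{"key":"hw","value":{"cpu":"intel"}}] + .[$pos+1:] |
  from_entries' main.json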
To pass the JSON in as a variable, use the --argjson option:
jq --argjson n '{"hw":{"cpu":"intel"}}' '
to_entries |
( map(.key == "date") | index(true) ) as $pos |
.[0:$pos+1] + ($n|to_entries) + .[$pos+1:] |
from_entries' json
Or, if the JSON text is stored in a file, e.g. json_file, use the --slurpfile option:
jq --slurpfile n json_file '
to_entries |
( map(.key == "date") | index(true) ) as $pos |
.[0:$pos+1] + ($n[0]|to_entries) + .[$pos+1:] |
from_entries' json
If the content is passed in on standard input, use /dev/stdin as the file argument to --slurpfile. This works with heredocs as well; see the sketch after the following example.
echo '{"hw":{"cpu":"intel"}}' |
jq --slurpfile n /dev/stdin '
to_entries |
( map(.key == "date") | index(true) ) as $pos |
.[0:$pos+1] + ($n[0]|to_entries) + .[$pos+1:] |
from_entries' json
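The same invocation with a heredoc instead of echo might look like this (a sketch; json is still the main input file):
jq --slurpfile n /dev/stdin '
  to_entries |
  ( map(.key == "date") | index(true) ) as $pos |
  .[0:$pos+1] + ($n[0]|to_entries) + .[$pos+1:] |
  from_entries' json <<'EOF'
{"hw":{"cpu":"intel"}}
EOF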

Related

Mapping over a JSON array of objects and processing values using JQ

Just started playing around with jq and cannot for the life of me come to terms with how I should approach this in a cleaner way. I have some data from AWS SSM Parameter Store that I receive as JSON, that I want to process.
The data is structured in the following way
[
  {
    "Name": "/path/to/key_value",
    "Value": "foo"
  },
  {
    "Name": "/path/to/key_value_2",
    "Value": "bar"
  },
  ...
]
I want it output in the following way: key_value=foo key_value_2=bar. My first thought was to process it as follows: map([.Name | split("/") | last, .Value] | join("=")) | join(" ") but then I get the following error: jq: error (at <stdin>:9): Cannot index array with string "Value". It's as if the reference to the Value value is lost after piping the value for the Name parameter.
Of course I could just solve it like this, but it's just plain ugly: map([.Value, .Name | split("/") | last] | reverse | join("=")) | join(" "). How do I process the value for Name without losing reference to Value?
Edit: JQ Play link
map((.Name | split("/") | last) + "=" + .Value) | join(" ")
Will output:
"key_value=foo key_value_2=bar"
Online demo
The 'trick' is to wrap .Name | split("/") | last in parentheses so that .Value remains available.
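To see why, note that in jq the comma binds more tightly than the pipe, so the two versions parse differently (a sketch of the two parses, in jq comment form):
# The failing version
#   [.Name | split("/") | last, .Value]
# parses as
#   [.Name | split("/") | (last, .Value)]
# so .Value is applied to the array returned by split("/"),
# which raises: Cannot index array with string "Value".
# The working version
#   [(.Name | split("/") | last), .Value]
# confines the pipeline to the parentheses, so .Value still
# sees the original object.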
If you prefer string interpolation (\(...)) over concatenating with +, you can rewrite it as:
map("\(.Name | split("/") | last)=\(.Value)") | join(" ")
Online demo

How to convert arbitrary nested JSON to CSV with jq – so you can convert it back?

How do I use jq to convert an arbitrary JSON array of objects to CSV, while objects in this array are nested?
StackOverflow has a sea of questions/answers where specific input or output fields are referenced, but I'd like to have a generic solution that
includes a header row,
works for any JSON input including nested arrays + objects,
allows records that have missing values for keys that are present in other records,
does not hard-code any field names,
allows converting the CSV back into the nested JSON structure if needed, and
uses key paths as header names (see the following description).
Dot notation
Many JSON-using products (like CouchDB, MongoDB, …) and libraries (like Lodash, …) use variations of a syntax that allows access to nested property values / subfields by joining key fragments with a character, often a dot (‘dot notation’).
An example of a key path like this would be "a.b.0.c" to refer to the deeply nested property in this JSON snippet:
{
  "a": {
    "b": [
      {
        "c": 123
      }
    ]
  }
}
Caveat: Using this method is a pragmatic solution for most cases, but it means that either dot characters have to be banned in property names, or a more complex (and definitely never-used) convention has to be invented for escaping dots in property names / accessing nested fields. MongoDB simply banned usage of "." in documents until v5.0; some libraries have workarounds for field access (Lodash example).
Despite this, for simplicity, a solution should use the described dot syntax in the CSV output’s header for nested properties. Bonus if there is a solution variant that solves this problem, e.g. with JSONPath.
Example JSON array as input
[
  {
    "a": {
      "b": [
        {
          "c": 123
        }
      ]
    }
  },
  {
    "a": {
      "b": [
        {
          "c": "foo \" bar",
          "d": "qux"
        }
      ]
    }
  },
  {
    "a": {
      "b": [
        {
          "d": 456
        }
      ]
    }
  }
]
Example CSV output
The output should have a header that includes all fields, even if the first object in the array does not have values for all existing key paths.
To make the output intuitively editable by humans, each row should represent one object in the input array.
The expected output should look like this:
"a.b.0.c","a.b.0.d"
123,
"foo "" bar","qux"
,456
Command line
This is what I need:
cat example.json | jq <MISSING CODE HERE>
Solution 1, using dot notation
Here is the jq call to convert your array of nested JSON objects to CSV:
jq -r '(. | map(leaf_paths) | unique) as $cols | map(. as $row | ($cols | map(. as $col | $row | getpath($col)))) as $rows | ([($cols | map(. | map(tostring) | join(".")))] + $rows) | map(@csv) | .[]'
The fastest way to try this solution out is to use JQPlay.
The CSV output will have a header row. It will contain all properties that exist anywhere in the input objects, including nested ones, in dot notation. Each input array element will be represented as a single row, properties that are missing will be represented as empty CSV fields.
Using solution 1 in bash or a similar shell
Create the JSON input file…
echo '[{"a": {"b": [{"c": 123}]}},{"a": {"b": [{"c": "foo \" bar","d": "qux"}]}},{"a": {"b": [{"d": 456}]}}]' > example.json
Then use this jq command to output the CSV on the standard output:
cat example.json | jq -r '(. | map(leaf_paths) | unique) as $cols | map(. as $row | ($cols | map(. as $col | $row | getpath($col)))) as $rows | ([($cols | map(. | map(tostring) | join(".")))] + $rows) | map(@csv) | .[]'
…or write the output to example.csv:
cat example.json | jq -r '(. | map(leaf_paths) | unique) as $cols | map(. as $row | ($cols | map(. as $col | $row | getpath($col)))) as $rows | ([($cols | map(. | map(tostring) | join(".")))] + $rows) | map(@csv) | .[]' > example.csv
Converting the data from solution 1 back to JSON
Here is a Node.js example that you can try on RunKit. It converts a CSV generated with the method in solution 1 back to an array of nested JSON objects.
Explanation for solution 1
Here is a longer, commented version of the jq filter.
# 1) Find all unique leaf property names of all objects in the input array. Each nested property name is an array with the components of its key path, for example ["a", 0, "b"].
(. | map(leaf_paths) | unique) as $cols |
# 2) Use the found key paths to determine all (nested) property values in the given input records.
map(. as $row | ($cols | map(. as $col | $row | getpath($col)))) as $rows |
# 3) Create the raw output array of rows. Each row is represented as an array of values, one element per existing column.
(
# 3.1) This represents the header row. Key paths are generated here.
[($cols | map(. | map(tostring) | join(".")))]
+ # 3.2) concatenate the header row with all other rows
$rows
)
# 4) Convert each row to an escaped CSV string.
| map(@csv)
# 5) Output each array element directly. Without this, the result would be a JSON array of CSV strings.
| .[]
Solution 2: for input that does have dots in property names
If you do need to support dot characters in property names, you can either use a different separator string for the key path syntax (replace the dot in ".") or replace the map(tostring) | join(".") part with tostring; this yields a JSON array of strings that you can use as key paths, with no dot notation needed. Here is a JQPlay with this solution variant.
Full jq command:
jq -r '(. | map(leaf_paths) | unique) as $cols | map(. as $row | ($cols | map(. as $col | $row | getpath($col)))) as $rows | ([($cols | map(. | tostring))] + $rows) | map(@csv) | .[]'
The output CSV for this variant would then look like this; it's less readable and not useful for cases where you want humans to intuitively understand the CSV's header:
"[""a"",""b"",0,""c""]","[""a"",""b"",0,""d""]"
123,
"foo "" bar","qux"
,456
See below for an idea of how to convert this format back to a representation in your programming language.
Bonus: Converting the generated CSV back to JSON
If the input's nested properties contain no ".", it’s simple to convert the CSV back to JSON, for example with a library that supports dot notation, or with JSONPath.
JavaScript: Use Lodash's _.set()
Other languages: Find a package/library that implements JSONPath and use selectors like $.a.b.0.c or $['a']['b'][0]['c'] to set each nested property of each record.
Solution 2 (with JSON arrays as headers) allows you to interpret the headers as JSON array strings. Then you can generate a JSON Path from each header, and re-create all records/objects:
"[""a"",""b"",0,""c""]" (CSV)
→ ["a","b",0,"c"] (array of key-path components after unescaping and parsing as JSON)
→ $["a"]["b"][0]["c"] (JSONPath)
→ { a: { b: [{c: … }] } } (Nested regenerated object)
I've written an example Node.js script to convert a CSV like this back to JSON. You can try solution 2 in RunKit.
The following tocsv and fromcsv functions provide a solution to the stated problem except for one complication regarding requirement (6) concerning the headers. Essentially, this requirement can be met using the functions given here by adding a matrix transposition step.
Whether or not a transposition step is added, the advantage of the approach taken here is that there are no restrictions on the JSON keys or values. In particular, they may
contain periods (dots), newlines and/or NUL characters.
In the example, an array of objects is given, but in fact any stream of valid JSON documents could be used as input to tocsv; thanks to the magic of jq, the original stream will be recreated by fromcsv (in the sense of entity-by-entity equality).
Of course, since there is no CSV standard, the CSV produced by the
tocsv function might not be understood by all CSV processors. In
particular, please note that the tocsv function defined here maps
embedded newlines in JSON strings or key names to the two-character
string "\n" (i.e., a literal backslash followed by the letter "n");
the inverse operation performs the inverse translation to meet the
"round-trip" requirement.
(The use of tail is just to simplify the presentation; it would be
trivial to modify the solution to make it jq-only.)
The CSV is generated on the assumption that any value can be
included in a field so long as (a) the field is quoted, and (b)
double-quotes within the field are doubled.
Any generic solution that supports "round-trips" is bound to be
somewhat complicated. The main reason why the solution presented here is
more complex than one might expect is because a third column is
added, partly to make it easy to distinguish between integers and
integer-valued strings, but mainly because it makes it easy to
distinguish between the size-1 and size-2 arrays produced by jq's
--stream option. Needless to say, there are other ways
these issues could be addressed; the number of calls to jq could
also be reduced.
The solution is presented as a test script that checks the round-trip requirement on a telling test case:
#!/bin/bash
function json {
  cat <<EOF
[
  {
    "a": 1,
    "b": [
      1,
      2,
      "1"
    ],
    "c": "d\",ef",
    "embed\"ed": "quote",
    "null": null,
    "string": "null",
    "control characters": "a\u0000c",
    "newline": "a\nb"
  },
  {
    "x": 1
  }
]
EOF
}
function tocsv {
  jq -ncr --stream '
    (["path", "value", "stringp"],
     (inputs | . + [.[1]|type=="string"]))
    | map( tostring|gsub("\"";"\"\"") | gsub("\n"; "\\n"))
    | "\"\(.[0])\",\"\(.[1])\",\(.[2])"
  '
}
function fromcsv {
  tail -n +2 | # first duplicate backslashes and deduplicate double-quotes
  jq -rR '"[\(gsub("\\\\";"\\\\") | gsub("\"\"";"\\\"") ) ]"' |
  jq -c '.[2] as $s
    | .[0] |= fromjson
    | .[1] |= if $s then . else fromjson end
    | if $s == null then [.[0]] else .[:-1] end
    # handle newlines
    | map(if type == "string" then gsub("\\\\n";"\n") else . end)' |
  jq -n 'fromstream(inputs)'
}
# Check the roundtrip:
json | tocsv | fromcsv | jq -s '.[0] == .[1]' - <(json)
Here is the CSV that would be produced by json | tocsv, except that SO seems to disallow literal NULs, so I have replaced that character with \0:
"path","value",stringp
"[0,""a""]","1",false
"[0,""b"",0]","1",false
"[0,""b"",1]","2",false
"[0,""b"",2]","1",true
"[0,""b"",2]","false",null
"[0,""c""]","d"",ef",true
"[0,""embed\""ed""]","quote",true
"[0,""null""]","null",false
"[0,""string""]","null",true
"[0,""control characters""]","a\0c",true
"[0,""newline""]","a\nb",true
"[0,""newline""]","false",null
"[1,""x""]","1",false
"[1,""x""]","false",null
"[1]","false",null

Can I output boolean based on values in a list?

Edit: I used the solution provided by @peak to do the following:
$ jq -r --argjson whitelist '["role1", "role2"]' '
select(has("roles") and any(.roles[]; . == "role1" or . == "role2"))
| (reduce ."roles"[] as $r ({}; .[$r]=true)) as $roles
| [.email, .username, .given_name, .family_name, ($roles[$whitelist[]]
| . != null)]
| @csv
' users.json
Added the select() to filter out users who haven't onboarded yet and don't have any roles, and to ensure the users included in the output have at least one of the target roles.
Scenario: user profiles as JSON docs, where each profile has a list object with their assigned roles. Example:
{
"username": "janedoe",
"roles": [
"role1",
"role4",
"role5"
]
}
The actual data file is an ndjson file, one user object as above per line.
I am only interested in specific roles, say role1, role3, and role4. I want to produce a CSV formatted as:
username,role1?,role3?,role4?
e.g.,
janedoe,true,false,true
The part I haven't figured out is how to output booleans or Y / N in response to the values in the list object. Is this something I can do in jq itself?
With your input, the invocation:
jq -r --argjson whitelist '["role1", "role3", "role4"]' '
(["username"] + $whitelist),
[.username, ($whitelist[] as $w | .roles | index([$w]) != null)]
| @csv
'
produces:
"username","role1","role3","role4"
"janedoe",true,false,true
Notes:
The second-to-last line of the jq filter above could be shortened to:
[.username, (.roles | index($whitelist[]) != null)]
Presumably if there were more than one user, you'd only want
the header row once, in which case the above solution
would need to be tweaked.
Using IN/1
Because index/1 is not as efficient as it might be,
you might like to consider this alternative:
(["username"] + $whitelist),
(.roles as $roles | [.username, ($whitelist[] | IN($roles[]) )])
| @csv
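A minimal sketch of a full invocation with this variant (IN/1 is built in as of jq 1.6; the inline input is the sample user object from the question):
jq -r --argjson whitelist '["role1", "role3", "role4"]' '
  (["username"] + $whitelist),
  (.roles as $roles | [.username, ($whitelist[] | IN($roles[]) )])
  | @csv
' <<< '{"username":"janedoe","roles":["role1","role4","role5"]}'
This produces the same two CSV lines as shown above.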
Using a JSON dictionary
If the number of roles was very large, then it would probably be more
efficient to construct a JSON dictionary to avoid repeated linear lookups:
(reduce .roles[] as $r ({}; .[$r]=true)) as $roles
| (["username"] + $whitelist),
[.username, ($roles[$whitelist[]] != null)]
| @csv
With ndjson as input
For efficiency, and to ensure there's just one header, you could use inputs with the -n command-line option. Adding the extra fields mentioned in the revised Q, you might end up with:
jq -nr --argjson whitelist '["role1", "role2"]' '
["email", "username", "given_name", "family_name"] as $greenlist
| ($greenlist + $whitelist),
(inputs
| select(has("roles") and any(.roles[] == $whitelist[]; true))
| (reduce ."roles"[] as $r ({}; .[$r]=true)) as $roles
| [ .[$greenlist[]], ($roles[$whitelist[]] != null) ])
| @csv
' users.json

Create JSON from string with format "key1=value1,key2=value2" using jq

I'm trying to create a JSON file from a string with the following format:
string="key1=value1,key2=value2"
Is there a way to create a json using jq by specifying the = and , symbols as separators for the keys and values?
The output I'm looking for would be:
{"key1": "value1", "key2” :”value2"}
I've tried to use this post as a reference:
Create JSON using jq from pipe-separated keys and values in bash -- however, it expects input that contains a line with only keys, before later lines with only values; here, the keys and values are all interspersed.
Here's a reduce-free solution that assumes string is the shell variable (not part of the string to be parsed), and that parsing of the string can be accomplished by first splitting on ",":
jq -R 'split(",")
| map( index("=") as $i | {(.[0:$i]) : .[$i+1:]})
| add' <<< "$string"
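With string set as above, this prints:
{
  "key1": "value1",
  "key2": "value2"
}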
Notice that this allows "=" to appear within the values.
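For instance, with a hypothetical input whose first value itself contains "=":
string="a=b=c,d=e"
jq -Rc 'split(",")
| map( index("=") as $i | {(.[0:$i]) : .[$i+1:]})
| add' <<< "$string"
This outputs {"a":"b=c","d":"e"}, because index("=") finds only the first occurrence, so everything after it stays in the value.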
The only trickiness here is that when a key name is specified programmatically, it must be enclosed within parentheses.
Supplemental question
string="key1=value1|key2=value2,value3|key3=value4"
In this case, you would first split on "|", and then find the first occurrence of "=":
split("|")
| map( index("=") as $i | {(.[0:$i]) : .[$i+1:]})
| add
| map_values(if index(",") then split(",") else . end)
Output:
{
"key1": "value1",
"key2": [
"value2",
"value3"
],
"key3": "value4"
}
string="key1=value1,key2=value2"
jq -Rc '
split(",")
| [.[] | match( "([^=]*)=(.*)" )]
| reduce .[].captures as $item ({}; .[$item[0].string]=$item[1].string)
' <<<"$string"
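With the original string, this likewise prints (compactly, due to -c):
{"key1":"value1","key2":"value2"}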
echo -n "key1=value1,key2=value2" | \
jq -csR '[split(",")[]|split("=") | {(.[0]): .[1]}]|add'
This gives:
{"key1":"value1","key2":"value2"}

Convert even odd index in array to key value pairs in json using jq

I'm trying to use jq to parse Solr 6.5 metrics into key/value pairs:
{
  "responseHeader": {
    "status": 0,
    "QTime": 7962
  },
  "metrics": [
    "solr.core.shard1",
    "QUERY./select",
    "solr.core.shard2",
    "QUERY./update"
    ...
  ]
}
I'd like to pair up the even- and odd-indexed entries in the metrics array and put them together into a single object as key/value pairs, like this:
{
  "solr.core.shard1": "QUERY./select",
  "solr.core.shard2": "QUERY./update",
  ...
}
So far, I have only been able to come up with:
.metrics | to_entries | .[] | {(select(.key % 2 == 0).value): select(.key % 2 == 1).value}
But this returns an error or no results.
I'd be grateful if someone could point me in the right direction. I feel like the answer is probably in the map operator, but I haven't been able to figure it out.
jq solution:
jq '[ .metrics as $m | range(0; $m | length; 2)
| {($m[.]): $m[(. + 1)]} ] | add' jsonfile
The output:
{
  "solr.core.shard1": "QUERY./select",
  "solr.core.shard2": "QUERY./update"
}
https://stedolan.github.io/jq/manual/v1.5/#range(upto),range(from;upto)range(from;upto;by)
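As a quick illustration of the three-argument form range(from; upto; by) used above:
jq -nc '[range(0; 6; 2)]'
[0,2,4]
These are exactly the even indices that select the key positions in the metrics array.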
Here's a helper function which makes the solution trivial:
# Emit a stream consisting of pairs of items taken from `stream`
def pairwise(stream):
  foreach stream as $i ([];
    if length == 1 then . + [$i] else [$i] end;
    select(length == 2));
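To see what the helper does, here is a quick self-contained check (a sketch; note that a trailing unpaired item is simply dropped):
jq -nc 'def pairwise(stream):
  foreach stream as $i ([];
    if length == 1 then . + [$i] else [$i] end;
    select(length == 2));
[pairwise(1,2,3,4,5)]'
[[1,2],[3,4]]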
From here there are several good options, e.g. we could start with:
.metrics
| [pairwise(.[]) | {(.[0]): .[1]}]
| add
With your input, this produces:
{
  "solr.core.shard1": "QUERY./select",
  "solr.core.shard2": "QUERY./update"
}
So you might want to write:
.metrics |= ([pairwise(.[]) | {(.[0]): .[1]}] | add)
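Putting it all together (a sketch, assuming the input shown above is stored in jsonfile):
jq 'def pairwise(stream):
  foreach stream as $i ([];
    if length == 1 then . + [$i] else [$i] end;
    select(length == 2));
.metrics |= ([pairwise(.[]) | {(.[0]): .[1]}] | add)' jsonfile
This keeps responseHeader intact and replaces metrics with the key/value object:
{
  "responseHeader": {
    "status": 0,
    "QTime": 7962
  },
  "metrics": {
    "solr.core.shard1": "QUERY./select",
    "solr.core.shard2": "QUERY./update"
  }
}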