JQ: combine JSON files matching on a key:value pair

I have two JSON files.
First (file1.json):
{
  "KeyID": 7532173,
  "KeyDetails": "Level 12"
}
Second (file2.json):
{
  "KeyID": 7532173,
  "Level": "Access Level"
}
I would like to combine them by matching on the KeyID key:value pair.
Please advise on how to proceed.

You could use if ... then ... else ... end, or more briefly, select:
jq -s 'select(.[0].KeyID==.[1].KeyID) | add' file1.json file2.json
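Given the sample files above, -s slurps both objects into a single array; select passes that array through only when the two KeyID values match, and add then merges the objects into one:
{
  "KeyID": 7532173,
  "KeyDetails": "Level 12",
  "Level": "Access Level"
}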


Get value of JSON object using jq --stream

I'm trying to extract the value of a JSON object using jq --stream, because the real data can be multiple gigabytes in size.
This is the JSON I'm using for my tests, where I want to extract the value of item:
{
  "other": "content here",
  "item": {
    "A": {
      "B": "C"
    }
  },
  "test": "test"
}
The jq options I'm using:
jq --stream --null-input 'fromstream(inputs | select(.[0][0] == "item"))[]' example.json
However, I don't get any output with this command.
A strange thing I found is that the above command works once I remove the entry that follows "item":
{
  "other": "content here",
  "item": {
    "A": {
      "B": "C"
    }
  }
}
The result looks as expected:
❯ jq --stream --null-input 'fromstream(inputs | select(.[0][0] == "item"))[]' example.json
{
  "A": {
    "B": "C"
  }
}
But as I cannot control the input JSON this is not the solution.
I'm using jq version 1.6 on macOS.
You didn't truncate the stream, so after filtering it to only include the parts below .item, fromstream is missing the final back-tracking item [["item"]]. Either add it manually at the end (not recommended, as this would also include the top-level object in the result) or, much simpler, use 1 | truncate_stream to strip the first level altogether:
jq --stream --null-input '
fromstream(1 | truncate_stream(inputs | select(.[0][0] == "item")))
' example.json
{
  "A": {
    "B": "C"
  }
}
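To see why the original command produced no output, these are the events jq --stream . emits for the full sample; after the select, the depth-1 back-tracking event [["item"]] that fromstream needs never appears, because "item" is not the last key:
[["other"],"content here"]
[["item","A","B"],"C"]
[["item","A","B"]]
[["item","A"]]
[["test"],"test"]
[["test"]]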
Alternatively, you can use reduce and setpath to build up the result object yourself:
jq --stream --null-input '
reduce inputs as $in (null;
if $in | .[0][0] == "item" and has(1) then setpath($in[0];$in[1]) else . end
)
' example.json
{
  "item": {
    "A": {
      "B": "C"
    }
  }
}
To remove the top-level object, either filter for .item at the end or, similar to truncate_stream, strip the first level by removing each path's first element with [1:]:
jq --stream --null-input '
reduce inputs as $in (null;
if $in | .[0][0] == "item" and has(1) then setpath($in[0][1:];$in[1]) else . end
)
' example.json
{
  "A": {
    "B": "C"
  }
}
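Note that all of these --stream variants consume the input event by event, so memory use is bounded roughly by the size of the extracted .item value rather than by the whole file, which is the point for the multi-gigabyte inputs mentioned in the question.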

How would you collect the first few entries of a list from a large json file using jq?

I am trying to process a large JSON file for testing purposes that has a few thousand entries. The JSON contains a long list of data that is too large for me to process in one go. Using jq, is there an easy way to get a valid snippet of the JSON that only contains the first few entries from the data list? For example, is there a query that would look at the whole JSON file and return a valid JSON document containing only the first 4 entries from data? Thank you!
{
  "info": {
    "name": "some-name"
  },
  "data": [
    {...},
    {...},
    {...},
    {...}
  ]
}
Based on your snippet, the relevant jq would be:
.data |= .[:4]
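For example, run against the seven-element input.json used in the streaming example below, this keeps info intact and truncates data in place:
$ jq -c '.data |= .[:4]' input.json
{"info":{"name":"some-name"},"data":[{"a":1},{"b":2},{"c":3},{"d":4}]}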
Here's an example using the --stream option:
$ cat input.json
{
  "info": {"name": "some-name"},
  "data": [
    {"a":1},
    {"b":2},
    {"c":3},
    {"d":4},
    {"e":5},
    {"f":6},
    {"g":7}
  ]
}
jq --stream -n '
reduce (
inputs | select(has(1) and (.[0] | .[0] == "data" and .[1] < 4))
) as $in (
{}; .[$in[0][-1]] = $in[1]
)
' input.json
{
  "a": 1,
  "b": 2,
  "c": 3,
  "d": 4
}
Note: Using limit would have been more efficient in this case, but I tried to be more generic for the purpose of scalability.
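For reference, a minimal sketch of that limit variant; it assumes, as holds for this sample, that every data element contributes exactly one leaf event to the stream, which is the generality the reduce version above preserves:
jq --stream -n '
reduce limit(4;
  inputs | select(has(1) and .[0][0] == "data")
) as $in (
  {}; .[$in[0][-1]] = $in[1]
)
' input.json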

jq combine json files into single array

I have several JSON files I want to combine. Some are arrays of objects and some are single objects. I want to effectively concatenate all of them into a single array.
For example:
[
  { "name": "file1" }
]
{ "name": "file2" }
{ "name": "file3" }
And I want to end up with:
[
  { "name": "file1" },
  { "name": "file2" },
  { "name": "file3" }
]
How can I do this using jq or similar?
The following illustrates an efficient way to accomplish the required task:
jq -n 'reduce inputs as $in (null;
. + if $in|type == "array" then $in else [$in] end)
' $(find . -name '*.json') > combined.json
The -n command-line option is necessary to avoid skipping the first file.
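Given the three sample inputs above (and assuming find lists them in that order), combined.json would contain:
[
  { "name": "file1" },
  { "name": "file2" },
  { "name": "file3" }
]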
This did it:
jq -n '[inputs] | flatten(1)' $(find . -name '*.json') > combined.json
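flatten(1) only unwraps the top-level arrays read from the files; arrays nested inside the individual objects are left untouched. (Plain add would fail here, since jq cannot add an array to an object.)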

JQ to merge new values between 2 json files

I'm a rookie with jq.
I would like to merge two JSON files with jq, but only for the keys present in the first file.
First file (first.json)
{
  "##locale": "en",
  "foo": "bar1"
}
Second file (second.json)
{
  "##locale": "en",
  "foo": "bar2",
  "oof": "rab"
}
I already tried:
jq -s '.[0] * .[1]' first.json second.json
But the returned result is wrong.
{
  "##locale": "en",
  "foo": "bar2",
  "oof": "rab"
}
The "oof" entry should not be present.
Expected merged result:
{
  "##locale": "en",
  "foo": "bar2"
}
Best regards.
And here's a one-liner, which happens to be quite efficient:
jq --argfile first first.json '. as $in | $first | with_entries(.value = $in[.key] )' second.json
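Note that --argfile is deprecated in recent jq releases; the same one-liner with --slurpfile (which binds an array of all values in the file, hence the $first[0]) is the forward-compatible spelling:
jq --slurpfile first first.json '. as $in | $first[0] | with_entries(.value = $in[.key])' second.json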
Consider:
jq -n '.
| input as $first # read first input
| input as $second # read second input
| $first * $second # make the merger of the two the context item
| [ to_entries[] # ...then break it out into key/value pairs
| select($first[.key]) # ...and filter those for whether they exist in the first input
] | from_entries # ...before reassembling into a single object.
' first.json second.json
...which properly emits:
{
  "##locale": "en",
  "foo": "bar2"
}
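One caveat on the select($first[.key]) test: it checks the value for truthiness, so a key whose value in first.json happens to be false or null would be dropped from the result. A sketch, under the same inputs, that tests key membership instead via in:
jq -n '
  input as $first
  | input as $second
  | $first * $second
  | to_entries
  | map(select(.key | in($first)))
  | from_entries
' first.json second.json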

jq: convert array to object indexed by filename?

Using jq, how can I convert an array into an object indexed by filename, or read multiple files into one object indexed by their filenames?
e.g.
jq -s 'map(select(.roles[]? | contains ("mysql")))' -C dir/file1.json dir/file2.json
This gives me the data I want, but I need to know which file they came from.
So instead of
[
  { "roles": ["mysql"] },
  { "roles": ["mysql", "php"] }
]
for output, I want:
{
  "file1": { "roles": ["mysql"] },
  "file2": { "roles": ["mysql", "php"] }
}
If possible, I also want the ".json" extension stripped, keeping just the basename (directory excluded).
Example
file1.json
{ "roles": ["mysql"] }
file2.json
{ "roles": ["mysql", "php"] }
file3.json
{ }
My real files obviously have other stuff in them too, but that should be enough for this example. file3 is simply to demonstrate "roles" is sometimes missing.
In other words: I'm trying to find files that contain "mysql" in their list of "roles". I need the filename and contents combined into one JSON object.
To simplify the problem further:
jq 'input_filename' f1 f2
Gives me all the filenames like I want, but I don't know how to combine them into one object or array.
Whereas,
jq -s 'map(input_filename)' f1 f2
Gives me the same filename repeated once for each file, e.g. [ "f1", "f1" ] instead of [ "f1", "f2" ].
If your jq has inputs (as jq 1.5 does), then the task can be accomplished with just one invocation of jq.
Also, it might be more efficient to use any than to iterate over all the elements of .roles.
The trick is to invoke jq with the -n option, e.g.
jq -n '
[inputs
| select(.roles and any(.roles[]; contains("mysql")))
| {(input_filename | gsub(".*/|\\.json$";"")): .}]
| add' file*.json
jq approach:
jq 'if (.roles[] | contains("mysql")) then {(input_filename | gsub(".*/|\\.json$";"")): .}
else empty end' ./file1.json ./file2.json | jq -s 'add'
The expected output:
{
  "file1": {
    "roles": [
      "mysql"
    ]
  },
  "file2": {
    "roles": [
      "mysql",
      "php"
    ]
  }
}
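Note that this second approach errors out on a file like file3, where roles is absent, because .roles[] cannot iterate over null. Using the optional iterator .roles[]? (as the original question already does) makes such files fall through to empty, so all three sample files can be passed safely:
jq 'if (.roles[]? | contains("mysql")) then {(input_filename | gsub(".*/|\\.json$";"")): .}
else empty end' ./file1.json ./file2.json ./file3.json | jq -s 'add'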