How to remove unnecessary items from a JSON with jq?

I need to go from this (example):
{
  "jobs": [
    {},
    {}
  ],
  "services": {
    "service-1": {
      "version": "master"
    },
    "service-2": {
      "foo": true,
      "version": "master"
    },
    "service-3": {
      "bar": "baz"
    }
  }
}
to this:
{
  "services": {
    "service-1": {
      "version": "master"
    },
    "service-2": {
      "version": "master"
    }
  }
}
So, delete everything except .services.*.version. Please help, I can't manage this with my knowledge of jq.

Translating your expression .services.*.version quite generically, you could use tostream as follows:
reduce (tostream
        | select(length==2 and
                 (.[0] | (.[0] == "services" and
                          .[-1] == "version")))) as $pv (null;
  setpath($pv[0]; $pv[1]) )
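For reference, assuming the filter above is saved as program.jq and the sample document as input.json (both filenames are just placeholders), a plain non-streaming invocation would be:

jq -f program.jq input.json

which emits the desired object shown in the question.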
Using jq's streaming parser
To reduce memory requirements, you could modify the above solution to use jq's streaming parser, either with reduce as above, or with fromstream:
jq -n --stream -f program.jq input.json
where program.jq contains:
fromstream(inputs
  | select( if length == 2
            then .[0] | (.[0] == "services" and
                         .[-1] == "version")
            else . end ) )
.services.*.version.*
Interpreting .services.*.version more broadly so as not to require the terminal component of the path to be .version, simply replace
.[-1] == "version"
with:
index("version")
in the above.
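Putting that substitution in place, the streaming program (same invocation as above) would then read:

fromstream(inputs
  | select( if length == 2
            then .[0] | (.[0] == "services" and
                         index("version"))
            else . end ) )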

If you wish to differentiate between "version" being absent or null:
{services}
| .services |= map_values(select(has("version")) | {version} )
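Applied to the sample input, this produces exactly the desired output: {services} keeps only the top-level services key, and map_values drops service-3 because select(has("version")) emits nothing for it.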

Using jq with 'contains' in a 'select' inside a 'del' is not working

I'm trying to remove some entries from a dict in a JSON file. It works when using ==, but with contains it doesn't.
Jq call working:
jq 'del(.entries[] | select(.var == "foo"))' input.json
Jq call not working:
jq 'del(.entries[] | select(.var | contains("foo")))' input.json
input.json:
{
  "entries": [
    {
      "name": "test1",
      "var": "foo"
    },
    {
      "name": "test2",
      "var": "bar"
    }
  ]
}
Output:
{
  "entries": [
    {
      "name": "test2",
      "var": "bar"
    }
  ]
}
The result of jq '.entries[] | select(.var == "foo")' input.json and jq '.entries[] | select(.var | contains("foo"))' input.json is the same, so I think the two del calls should also behave the same.
Is this a bug in jq, or did I do something wrong?
This must be a bug, as it seems to work perfectly on jq 1.6.
If you're unable to update to jq 1.6, you should be able to use the following command instead, which I've successfully tested on jq 1.5:
jq '.entries |= map(select(.var | contains("foo") | not))' file.json
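This avoids del altogether: it rebuilds .entries, keeping only the objects whose .var does not contain "foo", and on the input above it prints the same output as the == version.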

Update one JSON file values with values from another JSON using JQ (on all levels)

I have two JSON files:
source.json:
{
  "general": {
    "level1": {
      "key1": "x-x-x-x-x-x-x-x",
      "key3": "z-z-z-z-z-z-z-z",
      "key4": "w-w-w-w-w-w-w-w"
    },
    "another": {
      "key": "123456",
      "comments": {
        "one": "111",
        "other": "222"
      }
    }
  },
  "title": "The best"
}
and the target.json:
{
  "general": {
    "level1": {
      "key1": "xxxxxxxx",
      "key2": "yyyyyyyy",
      "key3": "zzzzzzzz"
    },
    "onemore": {
      "kkeeyy": "0000000"
    }
  },
  "specific": {
    "stuff": "test"
  },
  "title": {
    "one": "one title",
    "other": "other title"
  }
}
I need all the values for keys which exist in both files, copied from source.json to target.json, considering all the levels.
I've seen and tested the solution from this post.
It only copies the first level of keys, and I couldn't get it to do what I need.
The result from the solution in that post looks like this:
{
  "general": {
    "level1": {
      "key1": "x-x-x-x-x-x-x-x",
      "key3": "z-z-z-z-z-z-z-z",
      "key4": "w-w-w-w-w-w-w-w"
    },
    "another": {
      "key": "123456",
      "comments": {
        "one": "111",
        "other": "222"
      }
    }
  },
  "specific": {
    "stuff": "test"
  },
  "title": "The best"
}
Everything under the "general" key was copied as is.
What I need, is this:
{
  "general": {
    "level1": {
      "key1": "x-x-x-x-x-x-x-x",
      "key2": "yyyyyyyy",
      "key3": "z-z-z-z-z-z-z-z"
    },
    "onemore": {
      "kkeeyy": "0000000"
    }
  },
  "specific": {
    "stuff": "test"
  },
  "title": {
    "one": "one title",
    "other": "other title"
  }
}
Only "key1" and "key3" should be copied.
Keys in target JSON must not be deleted and new keys should not be created.
Can anyone help?
One approach you could take is to get all the paths to the scalar values of each input and take their set intersection. Then copy values from source to target along those paths.
First we'll need an intersect function (which was surprisingly difficult to craft):
def set_intersect($other):
  (map({ ($other[] | tojson): true }) | add) as $o
  | reduce (.[] | tojson) as $v ({}; if $o[$v] then .[$v] = true else . end)
  | keys_unsorted
  | map(fromjson);
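As a quick sanity check (a hypothetical standalone call, assuming the definition above is in scope):

[1,2,3] | set_intersect([2,3,4])

evaluates to [2,3]; the order of the result follows the left-hand array, because keys_unsorted preserves insertion order.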
Then to do the update:
$ jq --argfile s source.json '
  reduce ([paths(scalars)] | set_intersect([$s | paths(scalars)])[]) as $p (.;
    setpath($p; $s | getpath($p))
  )
' target.json
[Note: this response answers the original question, with respect to the original data. The OP may have had paths in mind rather than keys.]
There is no need to compute the intersection to achieve a reasonably efficient solution.
First, let's hypothesize the following invocation of jq:
jq -n --argfile source source.json --argfile target target.json -f copy.jq
In the file copy.jq, we can begin by defining a helper function:
# emit an array of the distinct terminal keys in the input entity
def keys: [paths | .[-1] | select(type=="string")] | unique;
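For the target.json above, for instance, this helper yields (note that the def shadows the builtin keys within this program):

["general","key1","key2","key3","kkeeyy","level1","one","onemore","other","specific","stuff","title"]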
In order to inspect all the paths to leaf elements of $source, we can use tostream:
($target | keys) as $t
| reduce ($source|tostream|select(length==2)) as [$p,$v]
    ($target;
     if $t|index($p[-1]) then setpath($p; $v) else . end)
Alternatives
Since $t is sorted, it would (at least in theory) make sense to use bsearch instead of index:
bsearch($p[-1]) > -1
Also, instead of tostream we could use paths(scalars).
Putting these alternatives together:
($target | keys) as $t
| reduce ($source|paths(scalars)) as $p
    ($target;
     if $t|bsearch($p[-1]) > -1
     then setpath($p; $source|getpath($p))
     else . end)
Output
{
  "general": {
    "level1": {
      "key1": "x-x-x-x-x-x-x-x",
      "key2": "yyyyyyyy",
      "key3": "z-z-z-z-z-z-z-z"
    },
    "onemore": {
      "kkeeyy": "0000000"
    }
  },
  "specific": {
    "stuff": "test"
  }
}
The following provides a solution to the revised question, which is actually about "paths" rather than "keys".
([$target|paths(scalars)] | unique) as $paths
| reduce ($source|paths(scalars)) as $p
    ($target;
     if $paths | bsearch($p) > -1
     then setpath($p; $source|getpath($p))
     else . end)
unique is called so that binary search can be used subsequently.
Invocation:
jq -n --argfile source source.json --argfile target target.json -f program.jq
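With the sample source.json and target.json, this reproduces the expected output from the question: only key1 and key3 are updated, and no keys are added or removed.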

Change entry in a JSON list that matches a condition without discarding rest of document

I am trying to open a file, look through it, change a value based on its current value, and pass the result either to a file or to a variable.
Below is an example of the JSON
{
  "Par": [
    {
      "Key": "12345L",
      "Value": "https://100.100.100.100:100",
      "UseLastValue": true
    },
    {
      "Key": "12345S",
      "Value": "VAL2CHANGE",
      "UseLastValue": true
    },
    {
      "Key": "12345T",
      "Value": "HAPPY-HELLO",
      "UseLastValue": true
    }
  ],
  "CANCOPY": false,
  "LOGFILE": ["HELPLOG"]
}
I have been using jq, and I have been successful in isolating the object group and changing the value:
cat jsonfile.json | jq '.Par | map(select(.Value=="VAL2CHANGE")) | .[] | .Value="VALHASBEENCHANGED"'
This gives
{
  "Key": "12345S",
  "Value": "VALHASBEENCHANGED",
  "UseLastValue": true
}
What I'd like to achieve is to retain the full JSON output with the changed value:
{
  "Par": [
    {
      "Key": "12345L",
      "Value": "https://100.100.100.100:100",
      "UseLastValue": true
    },
    {
      "Key": "12345S",
      "Value": "VALHASBEENCHANGED",
      "UseLastValue": true
    },
    {
      "Key": "12345T",
      "Value": "HAPPY-HELLO",
      "UseLastValue": true
    }
  ],
  "CANCOPY": false,
  "LOGFILE": ["HELPLOG"]
}
i.e.
jq '.Par | map(select(.Value=="VAL2CHANGE")) | .[] | .Value="VALHASBEENCHANGED"' (NOW PUT IT BACK IN FILE)
OR
open the file, find the value to be changed, change it, and output the result to a file or to the screen.
To add: the JSON file will only contain the value I'm looking for once, as I'm creating it. If any other values need changing, I will name them differently.
jq --arg match "VAL2CHANGE" \
   --arg replace "VALHASBEENCHANGED" \
   '.Par |= map(if .Value == $match then (.Value=$replace) else . end)' \
   <in.json
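The key difference from your original filter is the update operator |=: .Par |= map(...) rewrites .Par inside the whole document and returns the whole document, so CANCOPY, LOGFILE and everything else pass through unchanged.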
To more comprehensively replace a string wherever it may appear in a nested data structure, you can use the walk function -- which will be in the standard library in jq 1.6, but can be defined manually in 1.5:
jq --arg match "VAL2CHANGE" \
   --arg replace "VALHASBEENCHANGED" '
  # taken from jq 1.6; will not be needed here after that version is released.
  # Apply f to composite entities recursively, and to atoms
  def walk(f):
    . as $in
    | if type == "object" then
        reduce keys_unsorted[] as $key
          ( {}; . + { ($key): ($in[$key] | walk(f)) } ) | f
      elif type == "array" then map( walk(f) ) | f
      else f
      end;
  walk(if . == $match then $replace else . end)' <in.json
If you're just replacing based on the values, you could stream the file and replace the values as you rebuild the result.
$ jq --arg change 'VAL2CHANGE' --arg value 'VALHASBEENCHANGED' -n --stream '
  fromstream(inputs | if length == 2 and .[1] == $change then .[1] = $value else . end)
' input.json
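To see why this works: --stream decomposes the document into [path, value] events, so the matching leaf arrives as a single event whose value component is rewritten before fromstream reassembles the document:

[["Par",1,"Value"],"VAL2CHANGE"]          # event emitted by --stream
[["Par",1,"Value"],"VALHASBEENCHANGED"]   # the same event after the filter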

jq: translate array of objects to object

I have a response from curl in a format like this:
[
  {
    "list": [
      {
        "value": 1,
        "id": 12
      },
      {
        "value": 15,
        "id": 13
      },
      {
        "value": -4,
        "id": 14
      }
    ]
  },
  ...
]
Given a mapping between ids like this:
{
  "12": "newId1",
  "13": "newId2",
  "14": "newId3"
}
I want to make this:
[
  {
    "list": {
      "newId1": 1,
      "newId2": 15,
      "newId3": -4
    }
  },
  ...
]
Such that I get a mapping from ids to values (and along the way I'd like to remap the ids).
I've been working at this for a while, and every time I hit a dead end.
Note: I can use the shell or the like to perform loops if necessary.
Edit: here's one version of what I've developed so far:
jq '[].list.id = ($mapping.[] | select(.id == key)) | del(.id)' -M --argjson "mapping" "$mapping"
I don't think it's the best one, but I'm looking to see if I can find an old version that was closer to what I need.
[EDIT: The following response was in answer to the question when it described (a) the mapping as shown below, and (b) the input data as having the form:
[
  {
    "list": [
      {
        "value": 1,
        "id1": 12
      },
      {
        "value": 15,
        "id2": 13
      },
      {
        "value": -4,
        "id3": 14
      }
    ]
  }
]
END OF EDIT]
In the following I'll assume that the mapping is available via the following function, but that is an inessential assumption:
def mapping: {
  "id1": "newId1",
  "id2": "newId2",
  "id3": "newId3"
};
The following jq filter will then produce the desired output:
map( .list
     |= (map( to_entries[]
              | (mapping[.key]) as $mapped
              | select($mapped)
              | {($mapped|tostring): .value} )
         | add) )
There's plenty of ways to skin a cat. I'd do it like this:
.[].list |= reduce .[] as $i ({};
  ($i.id|tostring) as $k
  | (select($mapping | has($k))[$mapping[$k]] = $i.value) // .
)
You would just provide the mapping through a separate file or argument.
$ cat program.jq
.[].list |= reduce .[] as $i ({};
  ($i.id|tostring) as $k
  | (select($mapping | has($k))[$mapping[$k]] = $i.value) // .
)

$ cat mapping.json
{
  "12": "newId1",
  "13": "newId2",
  "14": "newId3"
}

$ jq --argfile mapping mapping.json -f program.jq input.json
[
  {
    "list": {
      "newId1": 1,
      "newId2": 15,
      "newId3": -4
    }
  }
]
Here is a reduce-free solution to the revised problem.
In the following I'll assume that the mapping is available via the following function, but that is an inessential assumption:
def mapping:
  {
    "12": "newId1",
    "13": "newId2",
    "14": "newId3"
  };
map( .list
     |= (map( mapping[.id|tostring] as $mapped
              | select($mapped)
              | {($mapped): .value} )
         | add) )
The "select" is for safety (i.e., it checks that the .id under consideration is indeed mapped). It might also be appropriate to ensure that $mapped is a string by writing {($mapped|tostring): .value}.

jq conditional processing on multiple files

I have multiple json files:
a.json
{
  "status": "passed",
  "id": "id1"
}
{
  "status": "passed",
  "id": "id2"
}
b.json
{
  "status": "passed",
  "id": "id1"
}
{
  "status": "failed",
  "id": "id2"
}
I want to know which ids were passed in a.json and are now failed in b.json.
expected.json
{
  "status": "failed",
  "id": "id2"
}
I tried something like:
jq --slurpfile a a.json --slurpfile b b.json -n '$a[] | reduce select(.status == "passed") as $passed (.; $b | select($a.id == .id and .status == "failed"))'
$passed is supposed to contain the list of passed entries in a.json, and reduce will merge all the objects whose ids match and whose status is failed.
However, it does not produce the expected result, and the documentation is of limited help.
How can I produce expected.json from a.json and b.json?
For me your filter produces the error
jq: error (at <unknown>): Cannot index array with string "id"
I suspect this is because you wrote $b instead of $b[] and $a.id instead of $passed.id. Here is my guess at what you intended to write:
$a[]
| reduce select(.status == "passed") as $passed (.;
    $b[] | select( $passed.id == .id and .status == "failed")
  )
which produces the output
null
{
  "status": "failed",
  "id": "id2"
}
You can filter away the null by adding | values e.g.
$a[]
| reduce select(.status == "passed") as $passed (.;
    $b[] | select( $passed.id == .id and .status == "failed")
  )
| values
However you don't really need reduce here. A simpler way is just:
$a[]
| select(.status == "passed") as $passed
| $b[]
| select( $passed.id == .id and .status == "failed")
If you intend to go further with this I would recommend a different approach: first construct an object combining $a and $b and then project what you want from it. e.g.
reduce (($a[]|{(.id):{a:.status}}),($b[]|{(.id):{b:.status}})) as $v ({};.*$v)
will give you
{
  "id1": {
    "a": "passed",
    "b": "passed"
  },
  "id2": {
    "a": "passed",
    "b": "failed"
  }
}
To convert that back to the output you requested add
| keys[] as $id
| .[$id]
| select(.a == "passed" and .b == "failed")
| {$id, status:.b}
to obtain
{
  "id": "id2",
  "status": "failed"
}
The following solutions to the problem are oriented primarily towards efficiency, but it turns out that they are quite straightforward and concise.
For efficiency, we will construct a "dictionary" of the ids that passed in a.json, to make the required lookup very fast.
Also, if you have a version of jq with inputs, it is easy to avoid "slurping" the contents of b.json.
Solution for jq 1.4 or higher
Here is a generic solution which, however, slurps both files. (The jq program itself only needs jq 1.4; note, though, that the --slurpfile option used in the invocation was introduced in jq 1.5.)
Invocation (note the use of the -s option):
jq -s --slurpfile a a.json -f passed-and-failed.jq b.json
Program:
([$a[] | select(.status=="passed") | {(.id): true}] | add) as $passed
| .[] | select(.status == "failed" and $passed[.id])
That is, first construct the dictionary, and then emit the objects in b.json that satisfy the condition.
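With the sample a.json, for instance, the $passed dictionary is just:

{
  "id1": true,
  "id2": true
}

so each lookup is a single object access rather than a scan.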
Solution for jq 1.5 or higher
Invocation (note the use of the -n option):
jq -n --slurpfile a a.json -f passed-and-failed.jq b.json
INDEX/2 is currently available from the master branch, but is provided here in case your jq does not have it, in which case you might want to add its definition to ~/.jq:
def INDEX(stream; idx_expr):
  reduce stream as $row ({};
    .[$row|idx_expr|
      if type != "string" then tojson
      else .
      end] |= $row);
The solution now becomes a simple two-liner:
INDEX($a[] | select(.status == "passed") | .id; .) as $passed
| inputs | select(.status == "failed" and $passed[.id])
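With the sample a.json and b.json, this emits exactly the object shown in expected.json.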