Using jq I'd like to convert data of the format:
{
  "key": "something-else",
  "value": {
    "value": "bloop",
    "isEncrypted": false
  }
}
{
  "key": "something",
  "value": {
    "value": "blah",
    "isEncrypted": false
  }
}
To the format:
{
  something: "blah",
  something-else: "bloop"
}
Filtering out 'encrypted values' along the way. How can I achieve this? I've gotten as far as the following:
.parameters | to_entries[] | select (.value.isEncrypted == false) | .key + ": " + .value.value
Which produces:
"something-else: bloop"
"something: blah"
Close, but not there just yet. I suspect that there's some clever function for this.
Given the example input, here's a simple solution, assuming the stream of objects is available as an array. (This can be achieved with jq -s if the JSON objects are given directly as input to jq, or, following your example, simply with .parameters | to_entries.)
map( select(.value.isEncrypted == false) | {(.key): .value.value } )
| add
This produces the JSON object:
{
  "something-else": "bloop",
  "something": "blah"
}
The key ideas here are:
- the syntax for object construction: {( KEYNAME ): VALUE}
- add
One way to gain an understanding of how this works is to run the first part of the filter (map(...)) first.
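For example, running just the map(...) part on the two objects collected into an array produces an array of singleton objects, which add then merges into one:
[
  { "something-else": "bloop" },
  { "something": "blah" }
]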
Using keys_unsorted
If you want to avoid the overhead of to_entries, you might want to consider the following approach, which piggy-backs off your implicit description of .parameters:
.parameters
| [ keys_unsorted[] as $k
    | if .[$k].isEncrypted == false
      then { ($k) : .[$k].value }
      else empty
      end ]
| add
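As a sketch of how this would be invoked, assuming (as your use of .parameters | to_entries implies) that .parameters is an object mapping each key to its {value, isEncrypted} record:
echo '{"parameters": {"something-else": {"value": "bloop", "isEncrypted": false}, "something": {"value": "blah", "isEncrypted": false}}}' |
jq '.parameters
    | [ keys_unsorted[] as $k
        | if .[$k].isEncrypted == false
          then { ($k) : .[$k].value } else empty end ]
    | add'
This prints the same merged object as the to_entries version.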
Related
I have a similar question to how-to-use-jq-to-find-all-paths-to-a-certain-key, and I have also checked the accepted answer.
The solution described in that answer works for exactly the same input as in that question, but after I updated the JSON a little (removing the last foo part), I can't get any output from this filter:
fromstream(tostream | select(.[0]|index("foo")))
The updated input JSON:
{
  "A": {
    "A1": {
      "foo": {
        "_": "_"
      }
    },
    "A2": {
      "_": "_"
    }
  },
  "B": {
    "B1": {}
  }
}
The expected output should be:
{
  "A": {
    "A1": {
      "foo": {
        "_": "_"
      }
    }
  }
}
But I got nothing. You can check it here: https://jqplay.org/s/s21FFUeoQz0.
I've also tried the filter without fromstream:
tostream | select(.[0]|index("foo"))
The output is as follows, so the selection itself seems to work:
[["A","A1","foo","_"],"_"]
[["A","A1","foo","_"]]
[["A","A1","foo"]]
So I suspect that fromstream is the problem. Can anyone help me figure this out? Thanks!
The select in the linked answer discards the length-one "closing" events that tostream emits and that fromstream needs in order to know when each container (including the top-level object) ends, so fromstream never produces a completed value. Retaining those events fixes it:
fromstream(tostream | select(length==1 or (.[0]|index("foo"))))
Depending on your requirements, you might be able to make things more efficient by wrapping the above expression in a call to first().
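That is:
first(fromstream(tostream | select(length==1 or (.[0]|index("foo")))))
This stops as soon as the first reconstructed value is emitted.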
FYI, a less obscure approach would be along the lines of:
paths as $p
| select($p[-1]=="foo")
| getpath($p) as $v
| {} | setpath($p;$v)
You're looking for something like this:
. as $in
| reduce (paths | select(.[-1] == "foo")) as $p (
null;
setpath($p; $in | getpath($p))
)
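A sketch of an invocation, with the input above saved in a (hypothetical) file input.json:
jq '. as $in
    | reduce (paths | select(.[-1] == "foo")) as $p (
        null;
        setpath($p; $in | getpath($p))
      )' input.json
This prints the expected output shown earlier.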
Who doesn't like obscure, inefficient solutions to problems?
delpaths(
  [paths | select(index("foo"))] as $foos
  | [paths | select(. as $p | $foos | map(index($p) != 0) | all)]
)
First build a list of all paths that contain a "foo" key. Then collect every path that does not lead to one of these paths. Finally, delete the collected paths.
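To make this concrete with the input above: the first sub-expression evaluates to
[["A","A1","foo"],["A","A1","foo","_"]]
and the second collects every path that is not a prefix of one of these, such as ["A","A2"] and ["B"]; delpaths then removes exactly those, leaving only the branches that lead to "foo".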
I would like to "transpose" (not sure that's the right word) JSON elements.
For example, I have a JSON file like this:
{
  "name": {
    "0": "fred",
    "1": "barney"
  },
  "loudness": {
    "0": "extreme",
    "1": "not so loud"
  }
}
... and I would like to generate a JSON array like this:
[
  {
    "name": "fred",
    "loudness": "extreme"
  },
  {
    "name": "barney",
    "loudness": "not so loud"
  }
]
My original JSON has many more first level elements than just "name" and "loudness", and many more names, features, etc.
For this simple example I could fully specify the transformation like this:
$ echo '{"name":{"0":"fred","1":"barney"},"loudness":{"0":"extreme","1":"not so loud"}}'| \
> jq '[{"name":.name."0", "loudness":.loudness."0"},{"name":.name."1", "loudness":.loudness."1"}]'
[
  {
    "name": "fred",
    "loudness": "extreme"
  },
  {
    "name": "barney",
    "loudness": "not so loud"
  }
]
... but this isn't feasible for the original JSON.
How can jq create the desired output while being key-agnostic for my much larger JSON file?
Yes, transpose is an appropriate word, as the following makes explicit.
The following generic helper function makes for a simple solution that is completely agnostic about the key names, both of the enclosing object and the inner objects:
# Input: an array of values
def objectify($keys):
  . as $in | reduce range(0;length) as $i ({}; .[$keys[$i]] = $in[$i]);
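For example:
["fred", "extreme"] | objectify(["name", "loudness"])
# => {"name":"fred","loudness":"extreme"}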
Assuming consistency of the ordering of the inner keys
Assuming the key names in the inner objects are given in a consistent order, a solution can now be obtained as follows:
keys_unsorted as $keys
| [.[] | [.[]]] | transpose
| map(objectify($keys))
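To see the intermediate steps with the sample input: keys_unsorted yields ["name","loudness"], and [.[] | [.[]]] | transpose produces
[["fred","extreme"],["barney","not so loud"]]
after which objectify zips each row back up with the outer keys.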
Without assuming consistency of the ordering of the inner keys
If the ordering of the inner keys cannot be assumed to be consistent, then one approach would be to order them, e.g. using this generic helper function:
def reorder($keys):
  . as $in | reduce $keys[] as $k ({}; .[$k] = $in[$k]);
or if you prefer a reduce-free def:
def reorder($keys): [$keys[] as $k | {($k): .[$k]}] | add;
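For example:
{"b": 2, "a": 1} | reorder(["a", "b"])
# => {"a":1,"b":2}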
The "main" program above can then be modified as follows:
keys_unsorted as $keys
| (.[$keys[0]]|keys_unsorted) as $inner
| map_values(reorder($inner))
| [.[] | [.[]]] | transpose
| map(objectify($keys))
Caveat
The preceding solution only considers the key names in the first inner object.
Building upon Peak's solution, here is an alternative based on group_by to deal with arbitrary orders of inner keys.
keys_unsorted as $keys
| map(to_entries[])
| group_by(.key)
| map(with_entries(.key = $keys[.key] | .value |= .value))
Using paths is a good idea, as pointed out by Hobbs. You could also do something like this:
[ path(.[][]) as $p | { key: $p[0], value: getpath($p), id: $p[1] } ]
| group_by(.id)
| map(from_entries)
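With the sample input, the first stage builds a flat list of entries tagged with the row index:
[{"key":"name","value":"fred","id":"0"},
 {"key":"name","value":"barney","id":"1"},
 {"key":"loudness","value":"extreme","id":"0"},
 {"key":"loudness","value":"not so loud","id":"1"}]
group_by(.id) then gathers the rows, and from_entries simply ignores the extra id field when assembling each object.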
This is a bit hairy, but it works:
. as $data |
reduce paths(scalars) as $p (
  [];
  setpath(
    [ $p[1] | tonumber, $p[0] ];
    ( $data | getpath($p) )
  )
)
First, capture the top level as $data because . is about to get a new value in the reduce block.
Then, call paths(scalars), which gives a key path to each of the leaf nodes in the input; e.g., for your sample it would give ["name", "0"], then ["name", "1"], then ["loudness", "0"], then ["loudness", "1"].
Run a reduce on each of those paths, starting the reduction with an empty array.
For each path, construct a new path, in the opposite order, with numbers-in-strings turned into real numbers that can be used as array indices, e.g. ["name", "0"] becomes [0, "name"].
Then use getpath to get the value at the old path in $data and setpath to set a value at the new path in . and return it as the next . for the reduce.
At the end, the result will be
[
  {
    "name": "fred",
    "loudness": "extreme"
  },
  {
    "name": "barney",
    "loudness": "not so loud"
  }
]
If your real data structure might be two levels deep then you would need to replace [ $p[1] | tonumber, $p[0] ] with a more appropriate expression to transform the path. Or maybe some of your "values" are objects/arrays that you want to leave alone, in which case you probably need to replace paths(scalars) with something like paths | select(length == 2).
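As a sketch, the two-levels-deep variant mentioned above would be:
. as $data |
reduce (paths | select(length == 2)) as $p (
  [];
  setpath(
    [ $p[1] | tonumber, $p[0] ];
    ( $data | getpath($p) )
  )
)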
I've been looking over several examples of jq parsing of JSON strings, all very helpful, but not conclusive for my particular problem.
Here's my input JSON:
{
  "outcome": "TrBean",
  "result": {
    "TrAct": {
      "executiontime": 16938570,
      "invocations": 133863,
      "waittime": 4981
    }
  }
}
{
  "outcome": "WwwBean",
  "result": {}
}
{
  "outcome": "CRFeatureBean",
  "result": {
    "CRChannels": {
      "executiontime": 78127,
      "invocations": 9983,
      "waittime": 213
    },
    "getCRChannels": {
      "executiontime": 98704,
      "invocations": 10113,
      "waittime": 212
    },
    "getCRToMigrate": {
      "executiontime": 32,
      "invocations": 4,
      "waittime": 0
    },
    "getCRId": {
      "executiontime": 28198633,
      "invocations": 747336,
      "waittime": 19856
    }
  }
}
I'm trying to feed Graphite via the collectd exec plugin (PUTVAL), so I need the info on one line. I tried ./jq '.result|to_entries[]|{"method:" .key, "inv": .value.invocations}|"PUTVAL \(.method)/invoke:\(.invokes)"' ... but I need to have "outcome" in every line too.
Also, I know neither the number nor the names of the result objects.
So, I'd like to end up with:
TrBean_TrAct
WwwBean
CRFeatureBean_CRChannels
CRFeatureBean_getCRChannels
CRFeatureBean_getCRToMigrate
CRFeatureBean_getCRId
The following jq filter produces the desired output when jq is invoked with the -r command-line option:
((.result | keys_unsorted[]) // null) as $key
| if $key == null then .outcome
else [.outcome, $key] | join("_")
end
There are of course many possible variations, e.g.
((.result | keys_unsorted[]) // null) as $key
| [.outcome, ($key // empty)]
| join("_")
or if you want a short one-liner:
.outcome + ("_" + (.result | keys_unsorted[]) // null)
In any case, the key to simplicity here is to generate the keys of .result as a stream. Handling the "edge case" (an empty .result, as with WwwBean) makes the solution slightly more complicated than it would otherwise be, i.e. than plain .outcome + "_" + (.result | keys_unsorted[])
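To illustrate the stream idea: for the CRFeatureBean object, .outcome + "_" + (.result | keys_unsorted[]) expands into four outputs, one per key, which -r prints as:
CRFeatureBean_CRChannels
CRFeatureBean_getCRChannels
CRFeatureBean_getCRToMigrate
CRFeatureBean_getCRId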
Example invocation: jq -r -f program.jq input.json
I've got an array of multiple JSON objects. I would like to get the number of objects which contain a specific value.
Example:
[
  {
    "key": "value1",
    "2ndKey": "2ndValue1"
  },
  {
    "key": "value2",
    "2ndKey": "2ndValue2"
  },
  {
    "key": "value1",
    "2ndKey": "2ndValue3"
  }
]
So if I'm looking for value1 in key, the result should be 2.
I would like a solution using jq. I have already made some attempts, but they did not fully work. The best one yet was the following:
cat /tmp/tmp.txt | jq ' select(.[].key == "value1" ) | length '
I get the correct result, but it is shown multiple times.
Can anybody help me further improve my code? Thanks in advance!
You are pretty close. Try this:
map(select(.key == "value1")) | length
or the equivalent
[ .[] | select(.key == "value1") ] | length
An efficient and convenient way to count is to use 'count' as defined below:
def count(s; cond): reduce s as $x (0; if ($x|cond) then .+1 else . end);
count(.[]; .key == "value1")
I have a JSON data set with around 8.7 million key-value pairs extracted from a Redis store, where each key is guaranteed to be an 8-digit number and each value is an 8-character alphanumeric string, i.e.
[{
  "91201544": "INXX0019",
  "90429396": "THXX0020",
  "20140367": "ITXX0043",
  ...
}]
To reduce Redis memory usage, I want to transform this into a hash of hashes, where the hash prefix key is the first 6 characters of the key (see this link) and then store this back into Redis.
Specifically, I want my resulting JSON data structure (which I'll then parse with some code to create a Redis command file consisting of HSET commands, etc.) to look more like:
[{
  "000000": {
    "00000023": "INCD1234",
    "00000027": "INCF1423",
    ....
  },
  ....
  "904293": {
    "90429300": "THXX0020",
    "90429302": "THXX0024",
    "90429305": "THXY0013"
  }
}]
Since I've been impressed by jq and I'm trying to be more proficient at functional style programming, I wanted to use jq for this task. So far I've come up with the following:
% jq '.[0] | to_entries | map({key: .key, pfx: .key[0:6], value: .value}) | group_by(.pfx)'
This gives me something like
[
  [
    {
      "key": "00000130",
      "pfx": "000001",
      "value": "CAXX3231"
    },
    {
      "key": "00000162",
      "pfx": "000001",
      "value": "CAXX4606"
    }
  ],
  [
    {
      "key": "00000238",
      "pfx": "000002",
      "value": "CAXX1967"
    },
    {
      "key": "00000256",
      "pfx": "000002",
      "value": "CAXX0727"
    }
  ],
  ....
]
I've tried the following:
% jq 'map(map({key: .pfx, value: {key, value}}))
| map(reduce .[] as $item ({}; {key: $item.key, value: [.value[], $item.value]} ))
| map( {key, value: .value | from_entries} )
| from_entries'
which does give me the correct result, but also prints out an error for every reduce (I believe) of
jq: error: Cannot iterate over null
The end result is
{
  "000001": {
    "00000130": "CAXX3231",
    "00000162": "CAXX4606"
  },
  "000002": {
    "00000238": "CAXX1967",
    "00000256": "CAXX0727"
  },
  ...
}
which is correct, but how can I avoid getting this stderr warning thrown as well?
I'm not sure there's enough data here to assess the source of the problem. I find it hard to believe that what you tried produces that result; I'm getting errors with it all the way.
Try this filter instead:
.[0]
| to_entries
| group_by(.key[0:6])
| map({
    key: .[0].key[0:6],
    value: map(.key = .key[6:8]) | from_entries
  })
| from_entries
Given data that looks like this:
[{
  "91201544": "INXX0019",
  "90429396": "THXX0020",
  "20140367": "ITXX0043",
  "00000023": "INCD1234",
  "00000027": "INCF1423",
  "90429300": "THXX0020",
  "90429302": "THXX0024",
  "90429305": "THXY0013"
}]
Results in this:
{
  "000000": {
    "23": "INCD1234",
    "27": "INCF1423"
  },
  "201403": {
    "67": "ITXX0043"
  },
  "904293": {
    "00": "THXX0020",
    "02": "THXX0024",
    "05": "THXY0013",
    "96": "THXX0020"
  },
  "912015": {
    "44": "INXX0019"
  }
}
I understand that this is not what you are asking for but, just for reference, I think it will be MUCH faster to do this with Redis's built-in Lua scripting.
And it turns out that it is a bit more straightforward:
-- Move every top-level string key into a hash bucket named by the
-- first 6 characters of the key (the prefix scheme described above).
for _, key in pairs(redis.call('keys', '*')) do
  local val = redis.call('get', key)
  -- Lua string indices start at 1; take the 6-character prefix.
  local short_key = string.sub(key, 1, 6)
  redis.call('hset', short_key, key, val)
  redis.call('del', key)
end
This runs in place inside Redis, without transferring the data out and back or converting it to/from JSON.
Run it from console as:
$ redis-cli eval "$(cat script.lua)" 0
For the record, jq's group_by relies on sorting, which of course will slow things down noticeably when the input is sufficiently large. The following is about 40% faster even when the input array has just 100,000 items:
def compress:
  . as $in
  | reduce keys[] as $key ({};
      $key[0:6] as $k6
      | $key[6:] as $k2
      | .[$k6] += {($k2): $in[$key]} );
.[0] | compress
Given Jeff's input, the output is identical.
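For completeness, a sketch of a full invocation with the def inlined (input.json being a hypothetical file holding the [{...}] data):
jq 'def compress:
      . as $in
      | reduce keys[] as $key ({};
          $key[0:6] as $k6
          | $key[6:] as $k2
          | .[$k6] += {($k2): $in[$key]} );
    .[0] | compress' input.json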