Create JSON using jq from pipe-separated keys and values in bash

I am trying to create a json object from a string in bash. The string is as follows.
CONTAINER|CPU%|MEMUSAGE/LIMIT|MEM%|NETI/O|BLOCKI/O|PIDS
nginx_container|0.02%|25.09MiB/15.26GiB|0.16%|0B/0B|22.09MB/4.096kB|0
The output is from the docker stats command, and my end goal is to publish custom metrics to AWS CloudWatch. I would like to format this string as JSON:
{
"CONTAINER":"nginx_container",
"CPU%":"0.02%",
....
}
I have used the jq command before and it seems like it should work well in this case, but I have not been able to come up with a good solution yet, other than hardcoding variable names and indexing using sed or awk and then creating the JSON from scratch. Any suggestions would be appreciated. Thanks.

Prerequisite
For all of the below, it's assumed that your content is in a shell variable named s:
s='CONTAINER|CPU%|MEMUSAGE/LIMIT|MEM%|NETI/O|BLOCKI/O|PIDS
nginx_container|0.02%|25.09MiB/15.26GiB|0.16%|0B/0B|22.09MB/4.096kB|0'
What (modern jq)
# thanks to @JeffMercado and @chepner for refinements, see comments
jq -Rn '
  ( input  | split("|") ) as $keys |
  ( inputs | split("|") ) as $vals |
  [[$keys, $vals] | transpose[] | {key: .[0], value: .[1]}] | from_entries
' <<<"$s"
How (modern jq)
This requires a fairly new jq (1.5 or later) to work, and is a dense chunk of code. To break it down:
Using -n prevents jq from reading stdin on its own, leaving the entirety of the input stream available to be read by input and inputs -- the former to read a single line, and the latter to read all remaining lines. (-R, for raw input, causes textual lines rather than JSON objects to be read).
With [$keys, $vals] | transpose[], we're generating [key, value] pairs (in Python terms, zipping the two lists).
With {key:.[0],value:.[1]}, we're making each [key, value] pair into an object of the form {"key": key, "value": value}
With from_entries, we're combining those pairs into objects containing those keys and values.
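To see the last two steps in isolation, you can run them against hard-coded key and value arrays (purely illustrative, using the first two columns from the question):
jq -n '[ [["CONTAINER","CPU%"], ["nginx_container","0.02%"]]
         | transpose[]
         | {key: .[0], value: .[1]}
       ] | from_entries'
{
  "CONTAINER": "nginx_container",
  "CPU%": "0.02%"
}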
What (shell-assisted)
This will work with a significantly older jq than the above, and is an easily adopted approach for scenarios where a native-jq solution can be harder to wrangle:
{
  IFS='|' read -r -a keys          # read first line into an array of strings
  ## read each subsequent line into an array named "values"
  while IFS='|' read -r -a values; do
    # setup: positional arguments to pass in literal variables, query with code
    jq_args=( )
    jq_query='.'
    # copy values into the arguments, reference them from the generated code
    for idx in "${!values[@]}"; do
      [[ ${keys[$idx]} ]] || continue # skip values with no corresponding key
      jq_args+=( --arg "key$idx"   "${keys[$idx]}"   )
      jq_args+=( --arg "value$idx" "${values[$idx]}" )
      jq_query+=" | .[\$key${idx}]=\$value${idx}"
    done
    # run the generated command
    jq "${jq_args[@]}" "$jq_query" <<<'{}'
  done
} <<<"$s"
How (shell-assisted)
The invoked jq command from the above is similar to:
jq --arg key0   'CONTAINER' \
   --arg value0 'nginx_container' \
   --arg key1   'CPU%' \
   --arg value1 '0.02%' \
   --arg key2   'MEMUSAGE/LIMIT' \
   --arg value2 '25.09MiB/15.26GiB' \
   '. | .[$key0]=$value0 | .[$key1]=$value1 | .[$key2]=$value2' \
   <<<'{}'
...passing each key and value out-of-band (such that it's treated as a literal string rather than parsed as JSON), then referring to them individually.
Result
Either of the above will emit:
{
"CONTAINER": "nginx_container",
"CPU%": "0.02%",
"MEMUSAGE/LIMIT": "25.09MiB/15.26GiB",
"MEM%": "0.16%",
"NETI/O": "0B/0B",
"BLOCKI/O": "22.09MB/4.096kB",
"PIDS": "0"
}
Why
In short: Because it's guaranteed to generate valid JSON as output.
Consider the following as an example that would break more naive approaches:
s='key ending in a backslash\
value "with quotes"'
Sure, these are unexpected scenarios, but jq knows how to deal with them:
{
"key ending in a backslash\\": "value \"with quotes\""
}
...whereas an implementation that didn't understand JSON strings could easily end up emitting:
{
"key ending in a backslash\": "value "with quotes""
}

I know this is an old post, but the tool you seek is called jo: https://github.com/jpmens/jo
A quick and easy example:
$ jo my_variable="simple"
{"my_variable":"simple"}
A little more complex
$ jo -p name=jo n=17 parser=false
{
"name": "jo",
"n": 17,
"parser": false
}
Add an array
$ jo -p name=jo n=17 parser=false my_array=$(jo -a {1..5})
{
"name": "jo",
"n": 17,
"parser": false,
"my_array": [
1,
2,
3,
4,
5
]
}
I've made some pretty complex stuff with jo, and the nice thing is that you don't have to roll your own solution and worry about the possibility of producing invalid JSON.

You can ask docker to give you JSON data in the first place
docker stats --format "{{json .}}"
For more on this, see: https://docs.docker.com/config/formatting/
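For example, a one-shot sample per container (assuming a reasonably recent Docker CLI; --no-stream makes docker stats print once and exit) can be fed straight into jq:
docker stats --no-stream --format '{{json .}}' | jq .        # one object per container
docker stats --no-stream --format '{{json .}}' | jq -s .     # slurped into a single array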

JSONSTR=""
declare -a JSONNAMES=()
declare -A JSONARRAY=()
LOOPNUM=0
cat ~/newfile | while IFS=: read CONTAINER CPU MEMUSE MEMPC NETIO BLKIO PIDS; do
if [[ "$LOOPNUM" = 0 ]]; then
JSONNAMES=("$CONTAINER" "$CPU" "$MEMUSE" "$MEMPC" "$NETIO" "$BLKIO" "$PIDS")
LOOPNUM=$(( LOOPNUM+1 ))
else
echo "{ \"${JSONNAMES[0]}\": \"${CONTAINER}\", \"${JSONNAMES[1]}\": \"${CPU}\", \"${JSONNAMES[2]}\": \"${MEMUSE}\", \"${JSONNAMES[3]}\": \"${MEMPC}\", \"${JSONNAMES[4]}\": \"${NETIO}\", \"${JSONNAMES[5]}\": \"${BLKIO}\", \"${JSONNAMES[6]}\": \"${PIDS}\" }"
fi
done
Returns:
{ "CONTAINER": "nginx_container", "CPU%": "0.02%", "MEMUSAGE/LIMIT": "25.09MiB/15.26GiB", "MEM%": "0.16%", "NETI/O": "0B/0B", "BLOCKI/O": "22.09MB/4.096kB", "PIDS": "0" }

Here is a solution which uses the -R and -s options along with transpose:
split("\n") # [ "CONTAINER...", "nginx_container|0.02%...", ...]
| (.[0] | split("|")) as $keys # [ "CONTAINER", "CPU%", "MEMUSAGE/LIMIT", ... ]
| (.[1:][] | split("|")) # [ "nginx_container", "0.02%", ... ] [ ... ] ...
| select(length > 0) # (remove empty [] caused by trailing newline)
| [$keys, .] # [ ["CONTAINER", ...], ["nginx_container", ...] ] ...
| [ transpose[] | {(.[0]):.[1]} ] # [ {"CONTAINER": "nginx_container"}, ... ] ...
| add # {"CONTAINER": "nginx_container", "CPU%": "0.02%" ...
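Assuming the input is in the shell variable s as in the first answer, the invocation would look something like:
jq -R -s '
  split("\n")
  | (.[0] | split("|")) as $keys
  | (.[1:][] | split("|"))
  | select(length > 0)
  | [$keys, .]
  | [ transpose[] | {(.[0]):.[1]} ]
  | add
' <<<"$s"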

json_template='{"CONTAINER":"%s","CPU%":"%s","MEMUSAGE/LIMIT":"%s", "MEM%":"%s","NETI/O":"%s","BLOCKI/O":"%s","PIDS":"%s"}'
json_string=$(printf "$json_template" "nginx_container" "0.02%" "25.09MiB/15.26GiB" "0.16%" "0B/0B" "22.09MB/4.096kB" "0")
echo "$json_string"
This does not use jq, but it makes it possible to substitute shell arguments and environment variables into the values (note that, unlike the jq-based approaches, the values are not JSON-escaped).
CONTAINER=nginx_container
json_template='{"CONTAINER":"%s","CPU%":"%s","MEMUSAGE/LIMIT":"%s", "MEM%":"%s","NETI/O":"%s","BLOCKI/O":"%s","PIDS":"%s"}'
json_string=$(printf "$json_template" "$CONTAINER" "$1" "25.09MiB/15.26GiB" "0.16%" "0B/0B" "22.09MB/4.096kB" "0")
echo "$json_string"

If you're starting with tabular data, I think it makes more sense to use something that works with tabular data natively, like sqawk, to turn it into JSON, and then use jq to work with it further.
echo 'CONTAINER|CPU%|MEMUSAGE/LIMIT|MEM%|NETI/O|BLOCKI/O|PIDS
nginx_container|0.02%|25.09MiB/15.26GiB|0.16%|0B/0B|22.09MB/4.096kB|0' \
| sqawk -FS '[|]' -RS '\n' -output json 'select * from a' header=1 \
| jq '.[] | with_entries(select(.key|test("^a.*")|not))'
{
"CONTAINER": "nginx_container",
"CPU%": "0.02%",
"MEMUSAGE/LIMIT": "25.09MiB/15.26GiB",
"MEM%": "0.16%",
"NETI/O": "0B/0B",
"BLOCKI/O": "22.09MB/4.096kB",
"PIDS": "0"
}
Without jq, sqawk gives a bit too much:
[
{
"anr": "1",
"anf": "7",
"a0": "nginx_container|0.02%|25.09MiB/15.26GiB|0.16%|0B/0B|22.09MB/4.096kB|0",
"CONTAINER": "nginx_container",
"CPU%": "0.02%",
"MEMUSAGE/LIMIT": "25.09MiB/15.26GiB",
"MEM%": "0.16%",
"NETI/O": "0B/0B",
"BLOCKI/O": "22.09MB/4.096kB",
"PIDS": "0",
"a8": "",
"a9": "",
"a10": ""
}
]

jq how to pass json keys from a shell variable

I have a json file I am parsing with jq. This is a sample of the file
[{
"key1":{...},
"key2":{...}
}]
[{
"key1":{...},
"key2":{...}
}]
...
Each line is an array containing a JSON object (which I know is not technically valid JSON as a whole, but jq still works on such a file).
The below jq command works:
cat file.json | jq -r '.[] | [.key1,.key2]'
The above correctly shows:
[
<value_of_key1>,<value_of_key2>
]
[
<value_of_key1>,<value_of_key2>
]
However, I want .key1,.key2 to be dynamic since these keys can change. So I want to pass a variable to jq. Something like:
KEYS='.key1,.key2'
cat file.json | jq -r --arg var "$KEYS" '.[] | [$var]'
But the above is returning the keys themselves:
[
".key1,.key2"
]
[
".key1,.key2"
]
why is this happening? what is the correct command to make this happen?
This answer does not help me; I am not getting any errors like the OP in that question.
Fetching the value of a jq variable doesn't cause it to be executed as jq code.
Furthermore, jq lacks the facility to take a string, compile it as jq code, and evaluate the result. (This is commonly known as eval.)
So, short of a writing a jq parser and evaluator in jq, you will need to impose limits and/or accept a different format.
For example,
keys='[ [ "key1", "childkey" ], [ "key2", "childkey2" ] ]' # JSON
jq --argjson keys "$keys" '.[] | [ getpath( $keys[] ) ]' file.json
or
keys='key1.childkey,key2.childkey2'
jq --arg keys "$keys" '
( ( $keys / "," ) | map( . / "." ) ) as $keys |
.[] | [ getpath( $keys[] ) ]
' file.json
Suppose you have:
cat file
[{
"key1":1,
"key2":2
}]
[{
"key1":1,
"key2":2
}]
You can use a jq command like so:
jq '.[] | [.key1,.key2]' file
[
1,
2
]
[
1,
2
]
You can use -f to execute a filter from a file and nothing keeps you from creating the file separately from the shell variables.
Example:
keys=".key1"
echo ".[] | [${keys}]" >jqf
jq -f jqf file
[
1
]
[
1
]
Or just build the string directly into jq:
# note the double quotes, which let the shell expand ${keys} into the filter
jq ".[] | [${keys}]" file
You can use the --argjson option and destructuring.
file.json
[{"key1":{"a":1},"key2":{"b":2}}]
[{"key1":{"c":1},"key2":{"d":2}}]
$ in='["key1","key2"]' jq -c --argjson keys "$in" '$keys as [$key1,$key2] | .[] | [.[$key1,$key2]]' file.json
output:
[{"a":1},{"b":2}]
[{"c":1},{"d":2}]
Elaborating on ikegami's answer.
To start with here's my version of the answer:
$ in='key1.a,key2.b'; jq -c --arg keys "$in" '($keys/","|map(./".")) as $paths | .[] | [getpath($paths[])]' <<<$'[{"key1":{"a":1},"key2":{"b":2}}] [{"key1":{"a":3},"key2":{"b":4}}]'
This gives output
[1,2]
[3,4]
Let's try it.
We have input
[{"key1":{"a":1},"key2":{"b":2}}]
[{"key1":{"a":3},"key2":{"b":4}}]
And we want to construct array
[["key1","a"],["key2","b"]]
then use it with the getpath(PATHS) builtin to extract the values out of our input.
To start with, we are given the shell variable in with the string value key1.a,key2.b. Inside jq, let's call this $keys.
Then $keys/"," gives
["key1.a","key2.b"]
["key1.a","key2.b"]
After that $keys/","|map(./".") gives what we want.
[["key1","a"],["key2","b"]]
[["key1","a"],["key2","b"]]
Let's call this $paths.
Now if we do .[]|[getpath($paths[])] we get the values from our input equivalent to
[.[] | .key1.a, .key2.b]
which is
[1,2]
[3,4]

Replacing a value in a JSON structure with randomly-chosen strings from a list using jq

There are lots of similar questions but none for dynamically joining 2 files.
What I'm trying to do is to dynamically edit the following structure:
{
"features": [
{
"type": "Feature",
"properties": {
"name": "0",
"height": 0.7
}
},
{
"type": "Feature",
"properties": {
"name": "1",
"height": 0
}
}
]
}
I want to replace only the one field .features[].properties.name with a random value from a 1d-array inside another txt file. There are 8,000 features and around 100 names I've prepared.
This is what I've got now, which fails with errors:
#!/bin/bash
declare -a names=("name1" "name2" "name3")
jq '{
"features" : [
"type" : "Feature",
"properties" : {
"name" : `$names[seq 0 100]`,
"height" : .features[].properties.height
},
.features[].geometry
]
}' < areas.json
Is it even possible to do in a single command, or should I use python or js for such tasks?
Your document (https://echarts.baidu.com/examples/data-gl/asset/data/buildings.json) is actually small enough that we don't need to do any crazy memory-conservation tricks to make it work; the following functions as-is:
# create sample data
[[ -e words.txt ]] || printf '%s\n' 'First Word' 'Second Word' 'Third Word' >words.txt
# actually run the replacements
jq -n --slurpfile buildings buildings.json '
  # define a jq function that changes the current property name with the next input
  def replaceName: (.properties.name |= input);
  # now, for each document in buildings.json, replace each name it contains
  $buildings[] | (.features |= map(replaceName))
' < <(shuf -r words.txt | jq -R .)
This works because shuf -r words.txt creates an unending stream of words randomly chosen from words.txt, and the jq -R . inside the process substitution quotes those as strings. (Because we only call input once per item in buildings.json, we don't try to keep running after that file's contents have been completely consumed).
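To see what the process substitution actually feeds to jq, you can run the inner pipeline on its own (the output will of course vary from run to run, since shuf randomizes it):
$ shuf -r words.txt | head -n 3 | jq -R .
"Second Word"
"First Word"
"Second Word"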
For the tiny two-record document given in the question, the output looks like:
{
"features": [
{
"type": "Feature",
"properties": {
"name": "Third Word",
"height": 0.7
}
},
{
"type": "Feature",
"properties": {
"name": "Second Word",
"height": 0
}
}
]
}
...with the actual words varying each run; it's similarly been smoketested with the full externally-hosted file.
Here's a solution to the problem of choosing the names randomly with replacement, using the very simple PRNG written in jq
copied from https://rosettacode.org/wiki/Random_numbers#jq
Invocation:
jq --argjson names '["name1","name2","name3","name4"]' \
-f areas.jq areas.json
areas.jq
# The random numbers are in [0 -- 32767] inclusive.
# Input: an array of length at least 2 interpreted as [count, state, ...]
# Output: [count+1, newstate, r] where r is the next pseudo-random number.
def next_rand_Microsoft:
  .[0] as $count | .[1] as $state
  | ((214013 * $state) + 2531011) % 2147483648     # mod 2^31
  | [$count+1, ., (. / 65536 | floor)];

# generate a stream of random integers < $n
def randoms($n):
  def r: next_rand_Microsoft | (.[2] % $n), r;
  [0,11] | r;

. as $in
| ($names|length) as $count
| (.features|length) as $n
| [limit($n; randoms($count))] as $randoms
| reduce range(0; $n) as $i (.;
    .features[$i].properties.name = $names[$randoms[$i]] )
Assuming your areas.json is valid JSON, then I believe the following would come close to accomplishing your intended edit:
names='["name1","name2","name3","name4"]'
jq --argjson names "$names" '.features[].properties.name = $names
' < areas.json
However, given your proposed solution, it's not clear to me what you mean by a "random value from a 1d-array". If you mean that the index should be randomly chosen (as by a PRNG), then I would suggest computing it using your favorite PRNG and passing in that random value as another argument to jq, as illustrated in the following section.
So the question becomes how to transform the text
['name1','name2','name3','name4']
into a valid JSON array. There are numerous ways this can be done, whether using jq or not, but I believe that is best left as a separate question or as an exercise, because the selection of the method will probably depend on specific details which are not mentioned in this Q. Personally, I'd use sed if possible; you might also consider using hjson, as also illustrated in the following section.
Illustration using hjson and awk
hjson -j <<< "['name1','name2','name3','name4']" > names.json.tmp
function randint {
awk -v n="$(jq length names.json.tmp)" '
function randint(n) {return int(n * rand())}
BEGIN {srand(); print randint(n)}'
}
jq --argfile names names.json.tmp --argjson n $(randint) '
.features[].properties.name = $names[$n]
' < areas.json
Addendum
Currently, jq does not have a builtin PRNG, but if you want to use jq and if you want a value from the "names" array to be chosen at random (with replacement?) for each occurrence of the .name field, then one option would be to pre-compute an array of the randomly selected names (an array of length features | length) using your favorite PRNG, and passing that array into jq:
jq --argjson randomnames "$randomnames" '
  reduce range(0; .features|length) as $i (.;
    .features[$i].properties.name = $randomnames[$i])
' < areas.json
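For completeness, one way to build $randomnames in the shell might look like the following sketch (it assumes GNU shuf is available; name1 through name4 are placeholders for your real names):
# number of features in the document
n=$(jq '.features | length' areas.json)
# pick n names at random, with replacement, and pack them into a JSON array
randomnames=$(shuf -r -n "$n" -e name1 name2 name3 name4 | jq -R . | jq -s .)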
Another option would be to use a PRNG written in jq, as illustrated elsewhere on this page.

Constructing a JSON object from a bash associative array

I would like to convert an associative array in bash to a JSON hash/dict. I would prefer to use JQ to do this as it is already a dependency and I can rely on it to produce well formed json. Could someone demonstrate how to achieve this?
#!/bin/bash
declare -A dict=()
dict["foo"]=1
dict["bar"]=2
dict["baz"]=3
for i in "${!dict[#]}"
do
echo "key : $i"
echo "value: ${dict[$i]}"
done
echo 'desired output using jq: { "foo": 1, "bar": 2, "baz": 3 }'
There are many possibilities, but given that you already have written a bash for loop, you might like to begin with this variation of your script:
#!/bin/bash
# Requires bash with associative arrays
declare -A dict
dict["foo"]=1
dict["bar"]=2
dict["baz"]=3
for i in "${!dict[#]}"
do
echo "$i"
echo "${dict[$i]}"
done |
jq -n -R 'reduce inputs as $i ({}; . + { ($i): (input|(tonumber? // .)) })'
The result reflects the ordering of keys produced by the bash for loop:
{
"bar": 2,
"baz": 3,
"foo": 1
}
In general, the approach based on feeding jq the key-value pairs, with one key on a line followed by the corresponding value on the next line, has much to recommend it. A generic solution following this general scheme, but using NUL as the "line-end" character, is given below.
Keys and Values as JSON Entities
To make the above more generic, it would be better to present the keys and values as JSON entities. In the present case, we could write:
for i in "${!dict[#]}"
do
echo "\"$i\""
echo "${dict[$i]}"
done |
jq -n 'reduce inputs as $i ({}; . + { ($i): input })'
Other Variations
JSON keys must be JSON strings, so it may take some work to ensure that the desired mapping from bash keys to JSON keys is implemented. Similar remarks apply to the mapping from bash array values to JSON values. One way to handle arbitrary bash keys would be to let jq do the conversion:
printf "%s" "$i" | jq -Rs .
You could of course do the same thing with the bash array values, and let jq check whether the value can be converted to a number or to some other JSON type as desired (e.g. using fromjson? // .).
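For the values, the same idea can be combined with the fromjson? // . fallback mentioned here, so that anything that parses as JSON (numbers, objects, ...) is kept as-is and everything else becomes a string (a small sketch of the idea):
printf '%s' "${dict[$i]}" | jq -Rs 'fromjson? // .'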
A Generic Solution
Here is a generic solution along the lines mentioned in the jq FAQ and advocated by @CharlesDuffy. It uses NUL as the delimiter when passing the bash keys and values to jq, and has the advantage of only requiring one call to jq. If desired, the filter fromjson? // . can be omitted or replaced by another one.
declare -A dict=( [$'foo\naha']=$'a\nb' [bar]=2 [baz]=$'{"x":0}' )
for key in "${!dict[#]}"; do
printf '%s\0%s\0' "$key" "${dict[$key]}"
done |
jq -Rs '
  split("\u0000")
  | . as $a
  | reduce range(0; length/2) as $i
      ({}; . + {($a[2*$i]): ($a[2*$i + 1]|fromjson? // .)})'
Output:
{
"foo\naha": "a\nb",
"bar": 2,
"baz": {
"x": 0
}
}
This answer is from nico103 on freenode #jq:
#!/bin/bash
declare -A dict=()
dict["foo"]=1
dict["bar"]=2
dict["baz"]=3
assoc2json() {
    declare -n v=$1
    printf '%s\0' "${!v[@]}" "${v[@]}" |
        jq -Rs 'split("\u0000") | . as $v | (length / 2) as $n | reduce range($n) as $idx ({}; .[$v[$idx]]=$v[$idx+$n])'
}
assoc2json dict
You can initialize a variable to an empty object {} and add the key/values {($key):$value} for each iteration, re-injecting the result in the same variable :
#!/bin/bash
declare -A dict=()
dict["foo"]=1
dict["bar"]=2
dict["baz"]=3
data='{}'
for i in "${!dict[#]}"
do
data=$(jq -n --arg data "$data" \
--arg key "$i" \
--arg value "${dict[$i]}" \
'$data | fromjson + { ($key) : ($value | tonumber) }')
done
echo "$data"
This has been posted, and credited to nico103 on IRC, which is to say, me.
The thing that scares me, naturally, is that these associative array keys and values need quoting. Here's a start that requires some additional work to dequote keys and values:
function assoc2json {
    typeset -n v=$1
    printf '%q\n' "${!v[@]}" "${v[@]}" |
        jq -Rcn '[inputs] |
                 . as $v |
                 (length / 2) as $n |
                 reduce range($n) as $idx ({}; .[$v[$idx]]=$v[$idx+$n])'
}
$ assoc2json a
{"foo\\ bar":"1","b":"bar\\ baz\\\"\\{\\}\\[\\]","c":"$'a\\nb'","d":"1"}
$
So now all that's needed is a jq function that removes the quotes, which come in several flavors:
if the string starts with a single-quote (ksh) then it ends with a single quote and those need to be removed
if the string starts with a dollar sign and a single-quote and ends in a single-quote, then those need to be removed and internal backslash escapes need to be unescaped
else leave as-is
I leave this last item as an exercise for the reader.
I should note that I'm using printf here as the iterator!
bash 5.2 introduces the @k parameter transformation, which makes this much easier. Like:
$ declare -A dict=([foo]=1 [bar]=2 [baz]=3)
$ jq -n '[$ARGS.positional | _nwise(2) | {(.[0]): .[1]}] | add' --args "${dict[@]@k}"
{
"foo": "1",
"bar": "2",
"baz": "3"
}

Match a key-value pattern in JSON text and get the value for another key

Suppose I have a file with this JSON:
[
{
"label" : "deploy",
"pk" : 2388175,
"key" : "gsfd45"
},
{
"label" : "jenkins",
"key" : "eQtIAwP",
"pk" : 2388165
}
]
I want to get the value for key "pk" if it is in the hash that has label = "deploy".
How can I do this? Do I need to write a script?
To parse JSON in Bash, use jq!
$ jq '.[] | select(.label=="deploy").pk' file
2388175
If you want to store deploy in a variable, use --arg. From jq manual → Invoking jq:
--arg name value
This option passes a value to the jq program as a predefined variable. If you run jq with --arg foo bar, then $foo is available in the program and has the value "bar". Note that value will be treated as a string, so --arg foo 123 will bind $foo to "123".
$ v="deploy"
$ jq --arg var "$v" '.[] | select(.label==$var).pk' file
2388175
$ v="blabla"
$ jq --arg var "$v" '.[] | select(.label==$var).pk' file
# empty!
$ v="jenkins"
$ jq --arg var "$v" '.[] | select(.label==$var).pk' file
2388165
By pieces:
Print everything:
$ jq '.[]' file
{
"key": "gsfd45",
"pk": 2388175,
"label": "deploy"
}
{
"pk": 2388165,
"key": "eQtIAwP",
"label": "jenkins"
}
Print those records where label equals "deploy":
$ jq '.[] | select(.label=="deploy")' file
{
"key": "gsfd45",
"pk": 2388175,
"label": "deploy"
}
Print just the field pk in such case:
$ jq '.[] | select(.label=="deploy").pk' file
2388175
If jq is not available on your server, Python should be there, right? ^_^
#!/bin/python
import json

with open('data.json') as data_file:
    data = json.load(data_file)

for d in data:
    if d['label'] == 'deploy':
        print(d["pk"])
Assume your data file is named data.json. Save the script as id.py, and run it with:
python id.py
It needs python3 installed on your system.
Change the line print(d["pk"]) to print d["pk"] if you only have python2 installed.
The output would be:
2388175
Edit
added the if check, didn't notice OP wanted to check the label.
In awk. It's a bit incomplete but as you didn't have anything to show, you can work on this one:
$ awk -F: '$1~/"label"/{l=$2} l~/deploy/ && $1 ~ /pk/ {sub(/,/,"",$2);print $2}' file
2388175
When awk meets a record with "label" on it, it stores the $2. Once the pk is found and flag l has deploy in it, remove comma and print.
If the elegant solution provided by James Brown does not work (e.g. because of a different ordering of the key/value pairs), here is something that tries to get at least the string between the braces into one record (by setting RS) and then splits that string at the key/value pair whose key is "pk" (by setting FS).
After that setup, the pattern looks for the label/deploy key/value pair in $0 and then, only if there are two fields (i.e. pk was present and a field split took place), deletes the string after the comma in $2, so that only the value of the key pk remains and is printed:
script.awk
BEGIN {
    RS="[{}[\\]]"
    FS="\"pk\"[^:]*:"
}
/"label"[^:]*:[^\"]*"deploy"/ {
    if( NF == 2 ) {
        # "pk" is present in $0, remove everything after comma
        sub(/,.*/, "", $2)
        print $2
    }
}
You use this script with awk like this: awk -f script.awk yourfile.
I have only tried it with GNU awk, but RS and FS should work with other awk implementations, too.

Flatten nested JSON using jq

I'd like to flatten a nested json object, e.g. {"a":{"b":1}} to {"a.b":1} in order to digest it in solr.
I have 11 TB of json files which are both nested and contain dots in field names, meaning neither elasticsearch (dots) nor solr (nested without the _childDocument_ notation) can digest them as is.
The other solution would be to replace dots in the field names with underscores and push it to elasticsearch, but I have far better experience with solr, therefore I prefer the flatten solution (unless solr can digest those nested jsons as is??).
I will prefer elasticsearch only if the digestion process will take far less time than solr, because my priority is digesting as fast as I can (thus I chose jq instead of scripting it in python).
Kindly help.
EDIT:
I think the pair of examples 3&4 solves this for me:
https://lucidworks.com/blog/2014/08/12/indexing-custom-json-data/
I'll try soon.
You can also use the following jq command to flatten nested JSON objects in this manner:
[leaf_paths as $path | {"key": $path | join("."), "value": getpath($path)}] | from_entries
The way it works is: leaf_paths returns a stream of arrays which represent the paths on the given JSON document at which "leaf elements" appear, that is, elements which do not have child elements, such as numbers, strings and booleans. We pipe that stream into objects with key and value properties, where key contains the elements of the path array as a string joined by dots and value contains the element at that path. Finally, we put the entire thing in an array and run from_entries on it, which transforms an array of {key, value} objects into an object containing those key-value pairs.
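Here is a quick demonstration on a small nested input (with map(tostring) added so that numeric array indices can be joined on older jq, as discussed in the next answer):
$ echo '{"a":{"b":1},"c":[2,3]}' |
  jq '[leaf_paths as $path | {"key": $path | map(tostring) | join("."), "value": getpath($path)}] | from_entries'
{
  "a.b": 1,
  "c.0": 2,
  "c.1": 3
}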
This is just a variant of Santiago's jq:
. as $in
| reduce leaf_paths as $path ({};
. + { ($path | map(tostring) | join(".")): $in | getpath($path) })
It avoids the overhead of the key/value construction and destruction.
(If you have access to a version of jq later than jq 1.5, you can omit the "map(tostring)".)
Two important points about both these jq solutions:
Arrays are also flattened.
E.g. given {"a": {"b": [0,1,2]}} as input, the output would be:
{
"a.b.0": 0,
"a.b.1": 1,
"a.b.2": 2
}
If any of the keys in the original JSON contain periods, then key collisions are possible; such collisions will generally result in the loss of a value. This would happen, for example, with the following input:
{"a.b":0, "a": {"b": 1}}
Here is a solution that uses tostream, select, join, reduce and setpath
reduce ( tostream | select(length==2) | .[0] |= [join(".")] ) as [$p,$v] (
{}
; setpath($p; $v)
)
I've recently written a script called jqg that flattens arbitrarily complex JSON and searches the results using a regex; to simply flatten the JSON, your regex would be '.', which matches everything. Unlike the answers above, the script will handle embedded arrays, false and null values, and can optionally treat empty arrays and objects ([] & {}) as leaf nodes.
$ jq . test/odd-values.json
{
"one": {
"start-string": "foo",
"null-value": null,
"integer-number": 101
},
"two": [
{
"two-a": {
"non-integer-number": 101.75,
"number-zero": 0
},
"true-boolean": true,
"two-b": {
"false-boolean": false
}
}
],
"three": {
"empty-string": "",
"empty-object": {},
"empty-array": []
},
"end-string": "bar"
}
$ jqg . test/odd-values.json
{
"one.start-string": "foo",
"one.null-value": null,
"one.integer-number": 101,
"two.0.two-a.non-integer-number": 101.75,
"two.0.two-a.number-zero": 0,
"two.0.true-boolean": true,
"two.0.two-b.false-boolean": false,
"three.empty-string": "",
"three.empty-object": {},
"three.empty-array": [],
"end-string": "bar"
}
jqg was tested using jq 1.6
Note: I am the author of the jqg script.
As it turns out, curl -XPOST 'http://localhost:8983/solr/flat/update/json/docs' -d @json_file does just this:
{
"a.b":[1],
"id":"24e3e780-3a9e-4fa7-9159-fc5294e803cd",
"_version_":1535841499921514496
}
EDIT 1: solr 6.0.1 with bin/solr -e cloud. collection name is flat, all the rest are default (with data-driven-schema which is also default).
EDIT 2: The final script I used: find . -name '*.json' -exec curl -XPOST 'http://localhost:8983/solr/collection1/update/json/docs' -d @{} \;.
EDIT 3: It is also possible to parallelize with xargs and to add the id field with jq: find . -name '*.json' -print0 | xargs -0 -n 1 -P 8 -I {} sh -c "cat {} | jq '. + {id: .a.b}' | curl -XPOST 'http://localhost:8983/solr/collection/update/json/docs' -d @-" where -P is the parallelism factor. I used jq to set an id so multiple uploads of the same document won't create duplicates in the collection (when I searched for the optimal value of -P, it created duplicates in the collection).
As @hraban mentioned, leaf_paths does not work as expected (furthermore, it is deprecated). leaf_paths is equivalent to paths(scalars): it returns the paths of any values for which scalars returns a truthy value. scalars returns its input value if it is a scalar, or null otherwise. The problem with that is that null and false are not truthy values, so they will be removed from the output. The following code does work, by checking the type of the values directly:
. as $in
| reduce paths(type != "object" and type != "array") as $path ({};
. + { ($path | map(tostring) | join(".")): $in | getpath($path) })
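As a quick illustration that false and null survive, running the above filter on a small input gives:
$ echo '{"a": {"b": false, "c": null}}' | jq '. as $in
  | reduce paths(type != "object" and type != "array") as $path ({};
      . + { ($path | map(tostring) | join(".")): $in | getpath($path) })'
{
  "a.b": false,
  "a.c": null
}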