I have a JSON file, groups.json, which has data in the following format:
[
{ "key" : "value" },
{ "key" : "value" },
{ "key" : "value" }
]
I need these key/value pairs in a bash array of strings, like this:
bashArray = [ { "key" : "value" } { "key" : "value" } { "key" : "value" } ]
How can I achieve this on Bash 3.x?
With modern versions of bash, you'd simply use mapfile in conjunction with the -c option of jq (as illustrated in several other SO threads, e.g. Convert a JSON array to a bash array of strings)
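For reference, a minimal sketch of the mapfile approach (assuming groups.json holds the array shown in the question):
mapfile -t bashArray < <(jq -c '.[]' groups.json)
printf '%s\n' "${bashArray[@]}"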
With older versions of bash, you would still use the -c option but would build up the array one item at a time, along the lines of:
while read -r line ; do
ary+=("$line")
done < <(jq -c .......)
Example
#!/bin/bash
function json {
cat<<EOF
[
{ "key" : "value1" },
{ "key" : "value2" },
{ "key" : "value3" }
]
EOF
}
while read -r line ; do
ary+=("$line")
done < <(json | jq -c .[])
printf "%s\n" "${ary[@]}"
Output:
{"key":"value1"}
{"key":"value2"}
{"key":"value3"}
Related
Let's say I have the following JSON
[
{
name : "A",
value : "1"
},
{
name : "B",
value : "5"
},
{
name : "E",
value : "8"
}
]
and I simply want it to be like
{
name : "A",
value : "1"
},
{
name : "B",
value : "5"
},
{
name : "E",
value : "8"
}
I used the normal jq filter, jq '.[]'; however, I get a list of objects separated by newlines, like this:
{
name : "A",
value : "1"
}
{
name : "B",
value : "5"
}
{
name : "E",
value : "8"
}
Notice that the commas between the objects have magically vanished. Using reduce would work only if the objects were indexed by, say, the name; I used the following:
jq 'reduce .[] as $i ({}; .[$i.name] = $i)'
Has anybody run into a similar situation?
Neither the input as shown nor the desired output is valid as JSON or as a JSON stream, so the question seems questionable, and the following responses are accordingly offered with the caveat that they probably should be avoided.
It should also be noted that, except for the sed-only approach, the solutions offered here produce comma-separated-JSON, which may not be what is desired.
They assume that the quasi-JSON input is in a file qjson.txt.
sed-only
< qjson.txt sed -e '1d;$d; s/^ //'
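Here, 1d;$d deletes the first and last lines (the enclosing brackets), and the s/^ // substitution strips a level of leading indentation from each remaining line.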
hjson, jq, and sed
< qjson.txt hjson -j | jq -r '.[] | (.,",")' | sed '$d'
hjson and jq
< qjson.txt hjson -j | jq -r '
foreach .[] as $x (-1; .+1;
if . == 0 then $x else ",", $x end)'
Using jq, how can I convert an array into an object indexed by filename, or read multiple files into one object indexed by their filename?
e.g.
jq -s 'map(select(.roles[]? | contains ("mysql")))' -C dir/file1.json dir/file2.json
This gives me the data I want, but I need to know which file they came from.
So instead of
[
{ "roles": ["mysql"] },
{ "roles": ["mysql", "php"] }
]
for output, I want:
{
"file1": { "roles": ["mysql"] },
"file2": { "roles": ["mysql", "php"] }
}
I do want the ".json" file extension stripped too if possible, and just the basename (dir excluded).
Example
file1.json
{ "roles": ["mysql"] }
file2.json
{ "roles": ["mysql", "php"] }
file3.json
{ }
My real files obviously have other stuff in them too, but that should be enough for this example. file3 is simply to demonstrate "roles" is sometimes missing.
In other words: I'm trying to find files that contain "mysql" in their list of "roles". I need the filename and contents combined into one JSON object.
To simplify the problem further:
jq 'input_filename' f1 f2
Gives me all the filenames like I want, but I don't know how to combine them into one object or array.
Whereas,
jq -s 'map(input_filename)' f1 f2
Gives me the same filename repeated once for each file. e.g. [ "f1", "f1" ] instead of [ "f1", "f2" ]
If your jq has inputs (as does jq 1.5) then the task can be accomplished with just one invocation of jq.
Also, it might be more efficient to use any than to iterate over all the elements of .roles.
The trick is to invoke jq with the -n option, so that inputs sees every input entity (without -n, jq would consume the first input before the filter runs), e.g.
jq -n '
[inputs
| select(.roles and any(.roles[]; contains("mysql")))
| {(input_filename | gsub(".*/|\\.json$";"")): .}]
| add' file*.json
jq approach:
jq 'if (.roles[]? | contains("mysql")) then {(input_filename | gsub(".*/|\\.json$";"")): .}
else empty end' ./file1.json ./file2.json | jq -s 'add'
(The ? after .roles[] guards against inputs like file3.json, where "roles" is absent.)
The expected output:
{
"file1": {
"roles": [
"mysql"
]
},
"file2": {
"roles": [
"mysql",
"php"
]
}
}
I have two JSON files: one contains a mapping of field names to types; the other is a flat JSON file.
e.g. the first file contains something like this:
[ { "field": "col1", "type": "int" }, { "field" : "col2", "type" : "string" }]
the second file is a large file of JSON objects, one per line:
{ "col1":123, "col2": "foo"}
{ "col1":123, "col2": "foo"}
...
can I use JQ to generate an output json like this:
{ "col1":{ "int" : 123 }, "col2": { "string" : "foo"} }
{ "col1":{ "int" : 123 }, "col2": { "string" : "foo"} }
....
Sure. You might want to transform your first file into an easier-to-consume format first: build an object that maps each .field to its .type (to use as a dictionary):
reduce .[] as $i ({}; .[$i.field] = $i.type)
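Applied to the first file above, this produces the dictionary:
{ "col1": "int", "col2": "string" }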
Then you can go through your second file and use this mapping to update the values. Use --argfile to read the contents of the first file into a variable.
$ jq --argfile file1 file1.json '
(reduce $file1[] as $i ({}; .[$i.field] = $i.type)) as $map
| with_entries(.value = { ($map[.key]): .value })
' file2.json
which yields:
{
"col1": {
"int": 123
},
"col2": {
"string": "foo"
}
}
{
"col1": {
"int": 123
},
"col2": {
"string": "foo"
}
}
Yes. You could use the --slurpfile option, but your dictionary is already a single JSON entity (a JSON array in your case), so it would be simpler to read the dictionary using the --argfile option.
Assuming that:
your jq filter is in a file, say merge.jq;
your dictionary is in dictionary.json;
your input stream is in input.json
the jq invocation would look like this:
jq -f merge.jq --argfile dict dictionary.json input.json
With the above, you would of course refer to the dictionary as $dict in merge.jq.
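For concreteness, merge.jq might look something like this sketch, adapted from the filter shown earlier (it assumes $dict holds the array from dictionary.json):
(reduce $dict[] as $i ({}; .[$i.field] = $i.type)) as $map
| with_entries(.value = { ($map[.key]): .value })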
(Of course you could specify the filter on the jq command line, if that's what you prefer.)
Now, over to you!
I have JSON like this that I'm parsing with jq:
{
"data": [
{
"item": {
"name": "string 1"
},
"item": {
"name": "string 2"
},
"item": {
"name": "string 3"
}
}
]
}
...and I'm trying to get "string 1" "string 2" and "string 3" into a Bash array, but I can't find a solution that ignores the whitespace in them. Is there a method in jq that I'm missing, or perhaps an elegant solution in Bash for it?
Current method:
json_names=$(cat file.json | jq ".data[] .item .name")
read -a name_array <<< $json_names
The snippets below assume your JSON text is in a string named s. That is:
s='{
"data": [
{
"item1": {
"name": "string 1"
},
"item2": {
"name": "string 2"
},
"item3": {
"name": "string 3"
}
}
]
}'
Unfortunately, both of the below will misbehave with strings containing literal newlines; since jq doesn't have support for NUL-delimited output, this is difficult to work around.
On bash 4 (with slightly sloppy error handling, but tersely):
readarray -t name_array < <(jq -r '.data[] | .[] | .name' <<<"$s")
...or on bash 3.x or newer (with very comprehensive error handling, but verbosely):
# -d '' tells read to process up to a NUL, and will exit with a nonzero exit status if that
# NUL is not seen; thus, this causes the read to pass through any error which occurred in
# jq.
IFS=$'\n' read -r -d '' -a name_array \
< <(jq -r '.data[] | .[] | .name' <<<"$s" && printf '\0')
This populates a bash array, contents of which can be displayed with:
declare -p name_array
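With the sample input above, that prints something like:
declare -a name_array=([0]="string 1" [1]="string 2" [2]="string 3")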
Arrays are assigned in the form:
NAME=(VALUE1 VALUE2 ... )
where NAME is the name of the variable, and VALUE1, VALUE2, and the rest are fields separated by characters present in the $IFS (internal field separator) variable.
Since jq outputs the string values as lines (sequences separated by the newline character), you can temporarily override $IFS, e.g.:
# Disable globbing, remembering whether this script turned it off
[[ "$-" == *f* ]] || globbing_disabled=1
set -f
# Save $IFS so it can be restored; split only on newlines
old_IFS=$IFS
IFS=$'\n'
a=( $(jq --raw-output '.data[].item.name' file.json) )
IFS=$old_IFS
# Restore globbing
test -n "$globbing_disabled" && set +f
The above will create an array of three items for the following file.json:
{
"data": [
{"item": {
"name": "string 1"
}},
{"item": {
"name": "string 2"
}},
{"item": {
"name": "string 3"
}}
]
}
The following shows how to create a bash array consisting of arbitrary JSON texts produced by a run of jq.
In the following, I'll assume input.json is a file with the following:
["string 1", "new\nline", {"x": 1}, ["one\ttab", 4]]
With this input, the jq filter .[] produces four JSON texts -- two JSON strings, a JSON object, and a JSON array.
The following bash script can then be used to set x to be a bash array of the JSON texts:
#!/bin/bash
x=()
while read -r value
do
x+=("$value")
done < <(jq -c '.[]' input.json)
For example, adding this bash expression to the script:
for a in "${x[@]}" ; do echo a="$a"; done
would yield:
a="string 1"
a="new\nline"
a={"x":1}
a=["one\ttab",4]
Notice how (encoded) newlines and (encoded) tabs are handled properly: the -c option guarantees that each JSON text occupies exactly one line, so the read loop never splits a value.
I am trying to parse JSON data into a variable=value format:
[
{
"Name" : "a",
"Value" : "1"
},
{
"Name" : "b",
"Value" : "2"
},
{
"Name" : "c",
"Value" : "3"
}
]
The output should be like:
a=1
b=2
c=3
This is what I tried, but it is not giving the expected result:
jq '.[].Value' file.txt
Since you're only printing out two values, it might just be easier to print out the strings directly:
$ jq -r '.[] | "\(.Name)=\(.Value)"' file.txt
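Output:
a=1
b=2
c=3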
You can use the following jq command:
jq -r '.[]|[.Name,.Value]|join("=")' file.json
Output:
a=1
b=2
c=3
Using jq:
jq 'map({(.Name):.Value})|add|.//={}' < data.json
Produces:
{
"a": "1",
"b": "2",
"c": "3"
}
If you have jq version 1.5+, you can use from_entries instead.
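A hedged sketch of that approach (the capitalized Name/Value fields are first mapped to the key/value names that from_entries expects):
jq 'map({key: .Name, value: .Value}) | from_entries' data.json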
This does it:
python3 -c 'import json; print("\n".join(["{}={}".format(x["Name"], x["Value"]) for x in json.load(open("file.json"))]))'
Result:
a=1
b=2
c=3