jq is supposed to process/filter JSON inputs and produce the filter's results as JSON.
However, I found that after running an input through a jq filter, the output is no longer in JSON format.
For example, from https://stedolan.github.io/jq/tutorial/#result5:
$ curl -s 'https://api.github.com/repos/stedolan/jq/commits?per_page=5' | jq '.[] | {message: .commit.message, name: .commit.committer.name}'
{
"message": "Merge pull request #162 from stedolan/utf8-fixes\n\nUtf8 fixes. Closes #161",
"name": "Stephen Dolan"
}
{
"message": "Reject all overlong UTF8 sequences.",
"name": "Stephen Dolan"
}
...
Is there any workaround?
UPDATE:
How can I wrap the whole result into a JSON structure of the form:
{ "Commits": [ {...}, {...}, {...} ] }
I've tried:
jq '.[] | Commits: [{message: .commit.message, name: .commit.committer.name}]'
jq 'Commits: [.[] | {message: .commit.message, name: .commit.committer.name}]'
but neither works.
Found it, on the same page,
https://stedolan.github.io/jq/tutorial/#result6
If you want to get the output as a single array, you can tell jq to “collect” all of the answers by wrapping the filter in square brackets:
jq '[.[] | {message: .commit.message, name: .commit.committer.name}]'
Technically speaking, unless otherwise instructed (notably with the -r command-line option), jq produces a stream of JSON entities.
One way to convert an input stream of JSON entities into a JSON array containing them is to use the -s command-line option.
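A minimal sketch of the `-s` behavior, using made-up sample objects (the `-c` flag is only added to keep the result on one line):

```shell
# jq emits a stream of JSON entities by default; -s ("slurp") collects
# the whole input stream into a single JSON array before filtering.
printf '{"a":1}\n{"a":2}\n' | jq -cs '.'
```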
Response to UPDATE
To produce a JSON object of the form:
{ "Commits": [ {...}, {...}, {...} ] }
you could write something like:
jq '{Commits: [.[] | {message: .commit.message, name: .commit.committer.name}]}'
(jq understands the '{Commits: _}' shorthand.)
Related
I am using jq to parse a JSON file. This is the current output from jq:
[
"key1.childk2",
"key2.childk3"
]
I would like to turn this into a list of lists, itself in readable JSON format, like below:
[
["key1","childk2"],
["key2","childk3"]
]
I would ideally prefer to do this with jq, however any other shell tool that can work on shell variables is fair game.
You can use jq's split filter:
jq '[.[] | split(".")]'
[
[
"key1",
"childk2"
],
[
"key2",
"childk3"
]
]
Apply it at the appropriate point in the original jq command that produces this output.
With jq you can structurally turn the input into your desired JSON document using a simple filter like map(./"."), but as for the requested readability, the output won't have exactly the formatting you asked for.
Without any further flags, jq would pretty-print the output as:
[
[
"key1",
"childk2"
],
[
"key2",
"childk3"
]
]
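For reference, the full invocation might look like this, with the question's input inlined for illustration:

```shell
# In jq, dividing a string by a string splits it, so ./"." is
# shorthand for split(".") applied to each element via map.
echo '["key1.childk2","key2.childk3"]' | jq 'map(./".")'
```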
Using the --compact-output or -c flag would compress the whole JSON document into one line, not just the elements of the outer array:
[["key1","childk2"],["key2","childk3"]]
So, if you really wanted to, you could also glue the parts together from within jq, formatted however you like them to be. Honestly, though, I would discourage this: by circumventing jq's internal JSON composer, you might end up outputting invalid JSON.
jq -r '"[", " " + (map(./"." | tojson) | .[:-1][] += ",")[], "]"'
[
["key1","childk2"],
["key2","childk3"]
]
Here is a Ruby one-liner to do that:
ruby -r json -e 'puts JSON.parse($<.read).map{|e| e.split(".")}.to_json' file
[["key1","childk2"],["key2","childk3"]]
Or if you want it pretty:
ruby -r json -e 'puts JSON.pretty_generate(
JSON.parse($<.read).map{|e| e.split(".")})
' file
[
[
"key1",
"childk2"
],
[
"key2",
"childk3"
]
]
Or you can produce your precise format:
ruby -r json -e '
l=[]
JSON.parse($<.read).map{|e| l << e.split(".").to_s}
puts "[\n\t#{l.join(",\n\t")}\n]"
' file
[
["key1", "childk2"],
["key2", "childk3"]
]
With the following input file:
{
"events": [
{
"mydata": {
"id": "123456",
"account": "21234"
}
},
{
"mydata": {
"id": "123457",
"account": "21234"
}
}
]
}
When I run it through this jq filter,
jq ".events[] | [.mydata.id, .mydata.account]" events.json
I get a set of arrays:
[
"123456",
"21234"
]
[
"123457",
"21234"
]
When I put this output through the @csv filter to create CSV output:
jq ".events[] | [.mydata.id, .mydata.account] | @csv" events.json
I get one JSON-encoded string per row:
"\"123456\",\"21234\""
"\"123457\",\"21234\""
I would like a CSV file with two fields per row, like this:
"123456","21234"
"123457","21234"
What am I doing wrong?
Use the -r flag.
Here is the explanation in the manual:
--raw-output / -r: With this option, if the filter's result is a string then it will be written directly to standard output rather than being formatted as a JSON string with quotes.
jq -r '.events[] | [.mydata.id, .mydata.account] | @csv'
Yields
"123456","21234"
"123457","21234"
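A self-contained repro, writing the question's sample data to a file first (the file name events.json is taken from the question):

```shell
# Write the sample input, then apply @csv with -r so the CSV rows are
# printed raw instead of being re-encoded as quoted JSON strings.
printf '%s' '{"events":[{"mydata":{"id":"123456","account":"21234"}},{"mydata":{"id":"123457","account":"21234"}}]}' > events.json
jq -r '.events[] | [.mydata.id, .mydata.account] | @csv' events.json
```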
I have a list of IPv4 addresses output one per line, separated by \n. The program I would like to import these into expects this format:
{
"data":[
{ "IP":"127.0.0.1" },
{ "IP":"192.168.0.1" }
]
}
Input data for the above would have been this:
127.0.0.1
192.168.0.1
I've looked in the jq cookbook for ideas, but the closest I've been able to string together uses [] instead of {}, isn't nested inside data, and only has the value without the key.
jq -sR '[sub("\n$";"") | splits("\n") | sub("^ +";"") | [splits(" +")]]'
Outputs:
[
[
"127.0.0.1"
],
[
"192.168.0.1"
]
]
Here is a solution:
jq -Rn '{data: [ {IP: inputs} ] }' input.txt
If this seems a bit magical, you might like to use the more mundane variant:
jq -Rn '{data: [ inputs | {IP: .} ] }' input.txt
Of course, in practice, you might also want to remove extraneous whitespace in the input, filter out comments, perform validity checking or filter out invalid input ...
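Those refinements might look like the following sketch, assuming whitespace-padded lines and '#'-prefixed comments (both made up for illustration):

```shell
# Create a hypothetical input file containing stray whitespace, a
# comment, and a blank line, then build the {data: [...]} structure
# from only the lines that survive trimming and filtering.
printf '  127.0.0.1\n# a comment\n192.168.0.1\n\n' > input.txt
jq -Rn '{data: [inputs
                | sub("^\\s+";"") | sub("\\s+$";"")
                | select(length > 0 and (startswith("#") | not))
                | {IP: .}]}' input.txt
```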
Using jq how can I convert an array into object indexed by filename, or read multiple files into one object indexed by their filename?
e.g.
jq -s 'map(select(.roles[]? | contains ("mysql")))' -C dir/file1.json dir/file2.json
This gives me the data I want, but I need to know which file they came from.
So instead of
[
{ "roles": ["mysql"] },
{ "roles": ["mysql", "php"] }
]
for output, I want:
{
"file1": { "roles": ["mysql"] },
"file2": { "roles": ["mysql", "php"] }
}
I do want the ".json" file extension stripped too if possible, and just the basename (dir excluded).
Example
file1.json
{ "roles": ["mysql"] }
file2.json
{ "roles": ["mysql", "php"] }
file3.json
{ }
My real files obviously have other stuff in them too, but that should be enough for this example. file3 is simply to demonstrate that "roles" is sometimes missing.
In other words: I'm trying to find files that contain "mysql" in their list of "roles". I need the filename and contents combined into one JSON object.
To simplify the problem further:
jq 'input_filename' f1 f2
Gives me all the filenames like I want, but I don't know how to combine them into one object or array.
Whereas,
jq -s 'map(input_filename)' f1 f2
Gives me the same filename repeated once for each file. e.g. [ "f1", "f1" ] instead of [ "f1", "f2" ]
If your jq has inputs (as does jq 1.5) then the task can be accomplished with just one invocation of jq.
Also, it might be more efficient to use any than to iterate over all the elements of .roles.
The trick is to invoke jq with the -n option, e.g.
jq -n '
[inputs
| select(.roles and any(.roles[]; contains("mysql")))
| {(input_filename | gsub(".*/|\\.json$";"")): .}]
| add' file*.json
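To see just the filename-cleaning step in isolation, you can run the gsub on a sample string (the path here is made up; in the full command it comes from input_filename):

```shell
# gsub strips everything up to the last "/" and a trailing ".json",
# leaving just the basename without its extension.
echo '"dir/file1.json"' | jq -r 'gsub(".*/|\\.json$";"")'
```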
An alternative jq approach:
jq 'if (.roles[] | contains("mysql")) then {(input_filename | gsub(".*/|\\.json$";"")): .}
else empty end' ./file1.json ./file2.json | jq -s 'add'
The expected output:
{
"file1": {
"roles": [
"mysql"
]
},
"file2": {
"roles": [
"mysql",
"php"
]
}
}
I'm trying to figure out how to use jq to load the content of one json file into the hash of another file. E.g.:
this file:
{
"annotations": {...},
"rows": [ {...}, {...}]
}
should be inserted into this file at the hash dashboard:
{
"dashboard": { },
"overwrite": true,
"message": "new commit"
}
so the resulting file should be
{
"dashboard": {
"annotations": {...},
"rows": [ {...}, {...}]
},
"overwrite": true,
"message": "new commit"
}
I was thinking of doing it with a pipe | or the |= operator, but I can't figure out how to take the content of one file and assign it to a selected key of the other file.
jq solution:
jq --slurpfile annot annot.json '.dashboard |= $annot[0]' dashb.json
--slurpfile variable-name filename:
This option reads all the JSON texts in the named file and binds an
array of the parsed JSON values to the given global variable.
If your jq does not have --slurpfile, then you could, for example, run jq as follows:
jq -s '.[0] as $fragment | .[1] | (.dashboard |= $fragment)' fragment.json dashboard.json
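To try the --slurpfile variant end-to-end, here is a runnable sketch with minimal stand-in files (empty values replace the {...} placeholders from the question):

```shell
# fragment.json holds the content to embed; dashboard.json holds the
# target document whose "dashboard" key should receive it.
printf '%s' '{"annotations":{},"rows":[]}' > fragment.json
printf '%s' '{"dashboard":{},"overwrite":true,"message":"new commit"}' > dashboard.json
jq --slurpfile annot fragment.json '.dashboard |= $annot[0]' dashboard.json
```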