Pretty-print valid JSONs mixed with string keys

I have a Redis hash whose entries map string keys to serialized JSON values.
The corresponding redis-cli query (hgetall some_redis_hash) is dumped to a file:
redis_key1
{"value1__key1": "value1__value1", "value1__key2": "value1__value2" ...}
redis_key2
{"value2__key1": "value2__value1", "value2__key2": "value2__value2" ...}
...
and so on.
So the question is: how do I pretty-print these values enclosed in braces? (Note that the key lines in between make the file invalid JSON if you try to parse it as a whole.)
The first thought is to fetch particular pairs from Redis, strip the stray key lines, and use jq on the remaining valid JSON, as shown below:
redis-cli hget some_redis_hash redis_key1 > file && tail -n +2 file
(file now contains the valid JSON value; the first line holding the Redis key is stripped by tail)
cat file | jq .
(produces the pretty-printed value)
So the question is: how can I pretty-print without such preprocessing?
Or (even better in this particular case): how can I merge keys and values into one big JSON object, where the Redis keys sit at the top level, each mapped to the dict of its values?
Like this:
redis-cli hgetall some_redis_hash > file
cat file | cool_parser
(prints { "redis_key1": {"value1__key1": "value1__value1", ...}, "redis_key2": ... })

A simple way for just pretty-printing would be the following:
cat file | jq --raw-input --raw-output '. as $raw | try fromjson catch $raw'
It tries to parse each line as JSON with fromjson, and just outputs the original line (via $raw) if it can't.
(--raw-input is there so that we can invoke fromjson inside a try instead of running it on every line directly, and --raw-output is there so that non-JSON lines are not enclosed in quotes in the output.)
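For illustration, here is that filter applied to a trimmed two-line sample of the dump above (the "..." removed so the value line is valid JSON):
$ cat file
redis_key1
{"value1__key1": "value1__value1", "value1__key2": "value1__value2"}
$ jq --raw-input --raw-output '. as $raw | try fromjson catch $raw' file
redis_key1
{
  "value1__key1": "value1__value1",
  "value1__key2": "value1__value2"
}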
A solution for the second part of your question using only jq:
cat file \
| jq --raw-input --null-input '[inputs] | _nwise(2) | {(.[0]): .[1] | fromjson}' \
| jq --null-input '[inputs] | add'
--null-input combined with [inputs] reads the whole input as a single array
which _nwise(2) then chunks into groups of two (_nwise is an internal jq builtin)
which {(.[0]): .[1] | fromjson} then transforms into a stream of single-key objects
which | jq --null-input '[inputs] | add' then merges into one JSON object
Or in a single jq invocation:
cat file | jq --raw-input --null-input \
'[ [inputs] | _nwise(2) | {(.[0]): .[1] | fromjson} ] | add'
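On the same trimmed sample, now with both entries (redis_key1 and redis_key2), either variant prints the merged object:
{
  "redis_key1": {
    "value1__key1": "value1__value1",
    "value1__key2": "value1__value2"
  },
  "redis_key2": {
    "value2__key1": "value2__value1",
    "value2__key2": "value2__value2"
  }
}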
...but by that point you might be better off writing an easier-to-understand Python script.

Related

Bash: Ignore key value pairs from a JSON that failed to parse using jq

I'm writing a bash script to read a JSON file and export the key-value pairs as environment variables. I can extract the key-value pairs, but I'm struggling to skip the entries that jq fails to parse.
JSON (KEY3 should fail to parse)
{
  "KEY1":"ABC",
  "KEY2":"XYZ",
  "KEY3":"---ABC---\n
dskfjlksfj"
}
Here is what I tried:
for pair in $(cat test.json | jq -r -R '. as $line | try fromjson catch $line | to_entries | map("\(.key)=\(.value)") | .[]' ); do
  echo $pair
  export $pair
done
And this is the error:
jq: error (at <stdin>:1): string ("{") has no keys
jq: error (at <stdin>:2): string (" \"key1...) has no keys
My code is based on these posts:
How to convert a JSON object to key=value format in jq?
How to ignore broken JSON line in jq?
Ignore Unparseable JSON with jq
Here's a response to the revised question. Unfortunately, it will only be useful in certain limited cases, not including the example you give. (Basically, it depends on jq's parser being able to recover before the end of file.)
while read -r line ; do
  echo export "$line"
done < <(< test.json jq -rn '
  def do:
    try inputs catch null
    | objects
    | to_entries[]
    | "\(.key)=\"\(.value|@sh)\"" ;
  recurse(do) | select(.)
')
Note that further refinements may be warranted, especially if there is potentially something fishy about the key names being used as shell variable names.
[Note: this response was made to the original question, which has since been changed. The response essentially assumes the input consists of JSON Lines interspersed with other lines.]
Since the goal seems to be to ignore lines that don't have valid key-value pairs, you can simply use catch empty:
while read -r line ; do
  echo export "$line"
done < <(< test.json jq -r -R '
  try fromjson catch empty
  | objects
  | to_entries[]
  | "\(.key)=\"\(.value|@sh)\""
')
Note also the use of @sh and of the shell's read, and the fact that .value (in jq) and $line (in the shell) are both quoted. These all matter for robustness, though further refinements might still be necessary.
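As a quick illustration on the kind of input this approach does handle (one JSON object per line mixed with junk lines; env.jsonl is a made-up file for the example):
$ cat env.jsonl
this line is not JSON
{"KEY1":"ABC","KEY2":"XYZ"}
$ jq -rR 'try fromjson catch empty | objects | to_entries[] | "\(.key)=\"\(.value|@sh)\""' env.jsonl
KEY1="'ABC'"
KEY2="'XYZ'"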
Perhaps there is an algorithm that will repair the broken JSON produced by the upstream system. If not, the following is a horrible but possibly useful "hack" that will at least capture KEY1 and KEY2 from the example in the question:
jq -Rr '
  capture("\"(?<key>[^\"]*)\"[ \t]*:[ \t]*(?<value>[^}]+)")
  | (.value |= sub("[ \t]+$"; "") )                               # trailing whitespace
  | if .value|test("^\".*\"") then .value |= sub("\"[ \t]*[,}[ \t]*$"; "\"") else . end
  | select(.value | test("^\".*\"$") or (contains("\"")|not) )    # a string or not a string
  | "\(.key)=\(.value|@sh)"
'
The broken JSON in the example could be repaired in a number of ways, e.g.:
sed '/\\n$/{N; s/\\n\n/\\n/;}'
produces:
{
  "KEY1":"ABC",
  "KEY2":"XYZ",
  "KEY3":"---ABC---\ndskfjlksfj"
}
At least that's JSON :-)
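And once repaired, the straightforward per-object filter from above works on it. Note that KEY3's value now contains a real newline, so its @sh-quoted form spans two lines, and a plain read loop would need adjusting:
$ sed '/\\n$/{N; s/\\n\n/\\n/;}' test.json | jq -r 'to_entries[] | "\(.key)=\"\(.value|@sh)\""'
KEY1="'ABC'"
KEY2="'XYZ'"
KEY3="'---ABC---
dskfjlksfj'"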

Can this jq map be simplified?

Given this JSON:
{
  "key": "/books/OL1000072M",
  "source_records": [
    "ia:daywithtroubadou00pern",
    "bwb:9780822519157",
    "marc:marc_loc_2016/BooksAll.2016.part25.utf8:103836014:1267"
  ]
}
Can the following jq code be simplified?
jq -r '.key as $olid | .source_records | map([$olid, .])[] | @tsv'
The use of variable assignment feels like cheating and I'm wondering if it can be eliminated. The goal is to map the key value onto each of the source_records values and output a two-column TSV.
Instead of mapping into an array and then iterating over it (map(…)[]), just create each row array directly ([…]). Also, you can get rid of the variable binding (as) by moving the second part into its own context using parentheses:
jq -r '[.key] + (.source_records[] | [.]) | @tsv'
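This works because + distributes over the stream produced by its right-hand operand, which a minimal demonstration makes visible:
$ jq -nc '[1] + ([2,3][] | [.])'
[1,2]
[1,3]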
Alternatively, instead of using @tsv you could build your tab-separated output string yourself, either by concatenation (… + …) or by string interpolation ("\(…)"):
jq -r '.key + "\t" + .source_records[]'
jq -r '"\(.key)\t\(.source_records[])"'
Output:
/books/OL1000072M ia:daywithtroubadou00pern
/books/OL1000072M bwb:9780822519157
/books/OL1000072M marc:marc_loc_2016/BooksAll.2016.part25.utf8:103836014:1267
It's not much shorter, but I think it's clearer than the original and clearer than the other shorter answers.
jq -r '.key as $olid | .source_records[] | [ $olid, . ] | @tsv'

Convert value of json from int to string using jq

Given JSON that looks something like:
[{"id":1,"firstName":"firstName1","lastName":"lastName1"},
{"id":2,"firstName":"firstName2","lastName":"lastName2"},
{"id":3,"firstName":"firstName3","lastName":"lastName3"}]
What would be the best way to convert the id value from an int to a string and then save the file?
I have tried:
echo "$(jq -r '[.[] | .id = .id|tostring]' test.json)" > test.json
But that seems to put each entry into a string and add backslashes:
[
  "{\"id\":1,\"firstName\":\"firstName1\",\"lastName\":\"lastName1\"}",
  "{\"id\":2,\"firstName\":\"firstName2\",\"lastName\":\"lastName2\"}",
  "{\"id\":3,\"firstName\":\"firstName3\",\"lastName\":\"lastName3\"}"
]
| has a lower priority than the assignment (=). The expression .id = .id | tostring is interpreted as (.id = .id) | tostring.
The assignment does not change anything and can be removed. The script then becomes [ .[] | tostring ], which explains the output (each object is serialized as JSON into a string).
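A minimal pair showing the two readings (made-up one-object input):
$ jq -nc '{"id":1} | .id = .id | tostring'
"{\"id\":1}"
$ jq -nc '{"id":1} | .id = (.id | tostring)'
{"id":"1"}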
The solution is to use parentheses to enforce the desired order of execution.
The command is:
jq '[ .[] | .id = (.id | tostring) ]' test.json
Do not use command substitution ($(...)) with echo to write the result. It is inefficient and not needed.
Redirect the output of jq directly to a file, and use a different file than the input file (otherwise the redirection truncates the input before jq reads it, destroying your data).
jq '[ .[] | .id = (.id | tostring) ]' test.json > output.json
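If the result must end up back in test.json, one safe pattern (a sketch using a temporary file, since jq has no in-place mode) is:
tmp=$(mktemp)
jq '[ .[] | .id = (.id | tostring) ]' test.json > "$tmp" && mv "$tmp" test.json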

unescape backslash in jq output

https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/cid/5317139/property/IsomericSMILES/JSON
For the above JSON, the following jq program prints 5317139 CCC/C=C\\1/C2=C(C3C(O3)CC2)C(=O)O1.
.PropertyTable.Properties
| .[]
| [.CID, .IsomericSMILES]
| @tsv
But there are two \ before the first 1. Is that wrong, should there be just one \? How do I get the correct number of backslashes?
The extra backslash in the output is the result of the request to produce TSV: "\" has a special role to play in jq's TSV output (e.g. "\t" signifies the tab character), so @tsv escapes a literal backslash in the data by doubling it.
By contrast, consider:
jq -r '
.PropertyTable.Properties
| .[]
| [.CID, .IsomericSMILES]
| join("\t")' smiles.json
5317139 CCC/C=C\1/C2=C(C3C(O3)CC2)C(=O)O1
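You can observe @tsv's escaping in isolation with a contrived string containing a single literal backslash:
$ jq -rn '["a\\b"] | @tsv'
a\\b
$ jq -rn '["a\\b"] | join("\t")'
a\b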

Can't put JSON output into CSV format with jq

I'm building a list of AWS EBS volume attributes so I can store it as CSV in a variable, using jq. I'm going to output the variable to a spreadsheet.
The first command gives the values I'm looking for using jq:
aws ec2 describe-volumes | jq -r '.Volumes[] | .VolumeId, .AvailabilityZone, .Attachments[].InstanceId, .Attachments[].State, (.Tags // [] | from_entries.Name)'
This gives output that I want, like this:
MIAPRBcdm0002_test_instance
vol-0105a1678373ae440
us-east-1c
i-0403bef9c0f6062e6
attached
MIAPRBcdwb00000_app1_vpc
vol-0d6048ec6b2b6f1a4
us-east-1c
MIAPRBcdwb00001 /carbon
vol-0cfcc6e164d91f42f
us-east-1c
i-0403bef9c0f6062e6
attached
However, if I put it into CSV format so I can output the variable to a spreadsheet, the command blows up and doesn't work:
aws ec2 describe-volumes | jq -r '.Volumes[] | .VolumeId, .AvailabilityZone, .Attachments[].InstanceId, .Attachments[].State, (.Tags // [] | from_entries.Name) | @csv'
jq: error (at <stdin>:4418): string ("vol-743d1234") cannot be csv-formatted, only array
Even putting the top level of the JSON into CSV format fails for EBS volumes:
aws ec2 describe-volumes | jq -r '.Volumes[].VolumeId | @csv'
jq: error (at <stdin>:4418): string ("vol-743d1234") cannot be csv-formatted, only array
Here is the AWS EBS Volumes JSON FILE that I am working with, with these commands (the file has been cleaned of company identifiers, but is valid json).
How can I get this json into CSV format using jq?
@csv can only be applied to an array, so just enclose your filter in [..] as below:
jq -r '[.Volumes[] | .VolumeId, .AvailabilityZone, .Attachments[].InstanceId, .Attachments[].State, (.Tags // [] | from_entries.Name)] | @csv'
The above will still retain the double quotes around strings, so using join() would also be appropriate here:
jq -r '[.Volumes[] | .VolumeId, .AvailabilityZone, .Attachments[].InstanceId, .Attachments[].State, (.Tags // [] | from_entries.Name)] | join(",")'
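Note that one outer [..] around the whole filter puts every value of every volume onto a single CSV line. For one line per volume, move the brackets inside .Volumes[] (the same bracket-placement trick the next answer illustrates); a sketch, with the caveat that volumes with zero or multiple attachments will yield rows with varying column counts:
aws ec2 describe-volumes | jq -r '.Volumes[] | [.VolumeId, .AvailabilityZone, .Attachments[].InstanceId, .Attachments[].State, (.Tags // [] | from_entries.Name)] | @csv'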
The accepted answer resolves another obscure jq error:
string ("xxx") cannot be csv-formatted, only array
In my case I did not want the entire output of jq on one line, but rather each Elasticsearch document I supplied to jq to be printed as a CSV string on a line of its own. To accomplish this I simply moved the brackets to enclose only the items to be included on each line.
First, by placing my brackets only around items to be included on each line of output, I produced:
jq -r '.hits.hits[]._source | [.syscheck.path, .syscheck.size_after]'
[
  "/etc/group-",
  "783"
]
[
  "/etc/gshadow-",
  "640"
]
[
  "/etc/group",
  "795"
]
[
  "/etc/gshadow",
  "652"
]
[
  "/etc/ssh/sshd_config",
  "3940"
]
Piping this to | @csv prints each document's values of .syscheck.path and .syscheck.size_after, quoted and comma-separated, on a separate line:
$ jq -r '.hits.hits[]._source | [.syscheck.path, .syscheck.size_after] | @csv'
"/etc/group-","783"
"/etc/gshadow-","640"
"/etc/group","795"
"/etc/gshadow","652"
"/etc/ssh/sshd_config","3940"
Or to omit quotation marks, following the pattern noted in the accepted answer:
$ jq -r '.hits.hits[]._source | [.syscheck.path, .syscheck.size_after] | join(",")'
/etc/group-,783
/etc/gshadow-,640
/etc/group,795
/etc/gshadow,652
/etc/ssh/sshd_config,3940