jq string manipulation on domain names and DNS records (JSON)

I am attempting to learn some jq but am running into trouble.
I am working with a dataset of DNS records like this:
{"timestamp":"1592145252","name":"0.127.9.109.rev.sfr.net","type":"a","value":"109.9.127.0"}
I cannot figure out how to:
1. strip the subdomain details out of the name field (in this example I just want sfr.net)
2. print the name backwards, e.g. 0.127.9.109.rev.sfr.net would become ten.rfs.ver.901.9.721.0
My end goal is to print lines like this:
0.127.9.109.rev.sfr.net,ten.rfs.ver.901.9.721.0,a,sfr.net
Thanks SO!

To extract the "domain" part, simple string manipulation will do. Assuming everything after the .rev. part is the domain, you could do this:
split(".rev.")[1]
jq cannot reverse a string directly, but it can reverse an array. So convert the string to an array of characters, reverse it, and join it back together:
split("") | reverse | join("")
To put it all together for your input, stay at the top level so .type remains accessible (piping through .name first would lose it, and your goal line includes the type):
[
  .name,
  (.name | split("") | reverse | join("")),
  .type,
  (.name | split(".rev.")[1])
] | join(",")
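Run with jq -r so the joined string is printed as a raw line; for the sample record this yields exactly the goal line:
0.127.9.109.rev.sfr.net,ten.rfs.ver.901.9.721.0,a,sfr.net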

Here's one approach using reverse and capture (the regex simply takes the last two dot-separated labels as the domain):
jq -r '
  .type as $type
  | .name
  | "\(.),\(explode | reverse | implode),\($type),"
    + capture("(?<subdomain>[^.]+[.][^.]+)$").subdomain'

Like this:
$ jq -r '.name' file.json | grep -oE '\w+\.\w+$'
sfr.net
$ jq -r '.name' file.json | rev
ten.rfs.ver.901.9.721.0

Can this jq map be simplified?

Given this JSON:
{
  "key": "/books/OL1000072M",
  "source_records": [
    "ia:daywithtroubadou00pern",
    "bwb:9780822519157",
    "marc:marc_loc_2016/BooksAll.2016.part25.utf8:103836014:1267"
  ]
}
Can the following jq code be simplified?
jq -r '.key as $olid | .source_records | map([$olid, .])[] | @tsv'
The use of variable assignment feels like cheating and I'm wondering if it can be eliminated. The goal is to map the key value onto each of the source_records values and output a two column TSV.
Instead of mapping into an array and then iterating over it (map(…)[]), just create each row array directly ([…]). Also, you can get rid of the variable binding (as) by moving the second part into its own context using parentheses:
jq -r '[.key] + (.source_records[] | [.]) | @tsv'
Alternatively, instead of using @tsv you could build the tab-separated output string yourself, either by concatenation (… + …) or by string interpolation ("\(…)"):
jq -r '.key + "\t" + .source_records[]'
jq -r '"\(.key)\t\(.source_records[])"'
Output:
/books/OL1000072M ia:daywithtroubadou00pern
/books/OL1000072M bwb:9780822519157
/books/OL1000072M marc:marc_loc_2016/BooksAll.2016.part25.utf8:103836014:1267
It's not much shorter, but I think it's clearer than the original and clearer than the other shorter answers.
jq -r '.key as $olid | .source_records[] | [ $olid, . ] | @tsv'

Pretty-print valid JSONs mixed with string keys

I have a Redis hash whose keys are plain strings and whose values are serialized JSON.
The corresponding redis-cli query (hgetall some_redis_hash) is dumped to a file:
redis_key1
{"value1__key1": "value1__value1", "value1__key2": "value1__value2" ...}
redis_key2
{"value2__key1": "value2__value1", "value2__key2": "value2__value2" ...}
...
and so on.
So the question is: how do I pretty-print these values enclosed in brackets? (Note that the key lines in between make the file invalid JSON if you try to parse the whole thing at once.)
The first thought is to fetch particular pairs from Redis, strip the parasitic key lines, and use jq on the remaining valid JSON, as shown below:
redis-cli hget some_redis_hash redis_key1 > file && tail -n +2 file
(file now contains the value as valid JSON; the first line holding the Redis key is stripped by tail)
cat file | jq
(produces the pretty-printed value)
So the question is: how can I pretty-print without such preprocessing?
Or (even better in this particular case) how can I merge keys and values into one big JSON document, where the Redis keys, accessible at the top level, map to dicts of their values?
Like that:
rediscli hgetall some_redis_hash > file
cat file | cool_parser
(prints { "redis_key1": {"value1__key1": "value1__value1", ...}, "redis_key2": ... })
A simple way for just pretty-printing would be the following:
cat file | jq --raw-input --raw-output '. as $raw | try fromjson catch $raw'
It tries to parse each line as JSON with fromjson, and just outputs the original line (with $raw) if it can't.
(--raw-input is there so that we can invoke fromjson inside a try instead of running it on every line directly, and --raw-output is there so that non-JSON lines are not enclosed in quotes in the output.)
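Given the sample dump above, the key lines pass through unchanged and each JSON line is pretty-printed, along these lines:
redis_key1
{
  "value1__key1": "value1__value1",
  "value1__key2": "value1__value2"
}
redis_key2
...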
A solution for the second part of your question using only jq:
cat file \
| jq --raw-input --null-input '[inputs] | _nwise(2) | {(.[0]): .[1] | fromjson}' \
| jq --null-input '[inputs] | add'
--null-input combined with [inputs] reads the whole input as one array,
which _nwise(2) then chunks into groups of two (see the note on _nwise below),
which {(.[0]): .[1] | fromjson} then turns into a stream of single-key objects,
which the second jq invocation ([inputs] | add) finally merges into a single object.
Or in a single jq invocation:
cat file | jq --raw-input --null-input \
'[ [inputs] | _nwise(2) | {(.[0]): .[1] | fromjson} ] | add'
...but by that point you might be better off writing an easier-to-understand Python script.
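One caveat on the above: _nwise is an underscore-prefixed internal helper rather than a documented builtin, so it could in principle change between jq releases. If you would rather not rely on it, a minimal drop-in definition (mirroring the builtin's own) is:
def nwise($n):
  def chunk: if length <= $n then . else .[0:$n], (.[$n:] | chunk) end;
  chunk;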

Can't put JSON output into CSV format with jq

I'm building a list of AWS EBS volume attributes so I can store it as CSV in a variable, using jq. I'm going to output the variable to a spreadsheet.
The first command gives the values I'm looking for using jq:
aws ec2 describe-volumes | jq -r '.Volumes[] | .VolumeId, .AvailabilityZone, .Attachments[].InstanceId, .Attachments[].State, (.Tags // [] | from_entries.Name)'
It gives the output I want, like this:
MIAPRBcdm0002_test_instance
vol-0105a1678373ae440
us-east-1c
i-0403bef9c0f6062e6
attached
MIAPRBcdwb00000_app1_vpc
vol-0d6048ec6b2b6f1a4
us-east-1c
MIAPRBcdwb00001 /carbon
vol-0cfcc6e164d91f42f
us-east-1c
i-0403bef9c0f6062e6
attached
However, if I pipe it through @csv so I can output the variable to a spreadsheet, the command blows up and doesn't work:
aws ec2 describe-volumes | jq -r '.Volumes[] | .VolumeId, .AvailabilityZone, .Attachments[].InstanceId, .Attachments[].State, (.Tags // [] | from_entries.Name) | @csv'
jq: error (at <stdin>:4418): string ("vol-743d1234") cannot be csv-formatted, only array
Even putting the top level of the JSON into CSV format fails for EBS volumes:
aws ec2 describe-volumes | jq -r '.Volumes[].VolumeId | @csv'
jq: error (at <stdin>:4418): string ("vol-743d1234") cannot be csv-formatted, only array
Here is the AWS EBS volumes JSON file that I am working with using these commands (the file has been cleaned of company identifiers, but is valid JSON).
How can I get this json into CSV format using jq?
You can only apply @csv to an array, so just enclose your filter within [..] as below:
jq -r '[.Volumes[] | .VolumeId, .AvailabilityZone, .Attachments[].InstanceId, .Attachments[].State, (.Tags // [] | from_entries.Name)] | @csv'
The above will still retain the quotes around strings, so using join() would also be appropriate here:
jq -r '[.Volumes[] | .VolumeId, .AvailabilityZone, .Attachments[].InstanceId, .Attachments[].State, (.Tags // [] | from_entries.Name)] | join(",")'
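Note that both variants above collect every volume into a single array, so all values land on one CSV line. To get one line per volume instead, build the array inside .Volumes[]; here is a sketch (the // "" fallbacks for unattached volumes and missing Name tags are my assumption, not part of the original answer):
jq -r '.Volumes[]
  | [.VolumeId, .AvailabilityZone,
     (.Attachments[0].InstanceId // ""),
     (.Attachments[0].State // ""),
     (.Tags // [] | from_entries.Name // "")]
  | @csv'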
The accepted answer also resolves this rather obscure jq error:
string ("xxx") cannot be csv-formatted, only array
In my case I did not want the entire output of jq on one line, but rather each Elasticsearch document I supplied to jq printed as a CSV string on a line of its own. To accomplish this, I simply moved the brackets to enclose only the items to be included on each line.
First, by placing my brackets only around items to be included on each line of output, I produced:
jq -r '.hits.hits[]._source | [.syscheck.path, .syscheck.size_after]'
[
  "/etc/group-",
  "783"
]
[
  "/etc/gshadow-",
  "640"
]
[
  "/etc/group",
  "795"
]
[
  "/etc/gshadow",
  "652"
]
[
  "/etc/ssh/sshd_config",
  "3940"
]
Piping this to | @csv prints each document's values of .syscheck.path and .syscheck.size_after, quoted and comma-separated, on a separate line:
$ jq -r '.hits.hits[]._source | [.syscheck.path, .syscheck.size_after] | #csv'
"/etc/group-","783"
"/etc/gshadow-","640"
"/etc/group","795"
"/etc/gshadow","652"
"/etc/ssh/sshd_config","3940"
Or, to omit quotation marks, following the pattern noted in the accepted answer:
$ jq -r '.hits.hits[]._source | [.syscheck.path, .syscheck.size_after] | join(",")'
/etc/group-,783
/etc/gshadow-,640
/etc/group,795
/etc/gshadow,652
/etc/ssh/sshd_config,3940

Optimize JSON denormalization using JQ - "cartesian product" from 1:N

I have a JSON database change log, output of wal2json. It looks like this:
{"xid":1190,"timestamp":"2018-07-19 17:18:02.905354+02","change":[
{"kind":"update","table":"mytable2","columnnames":["id","name","age"],"columnvalues":[401,"Update AA",20],"oldkeys":{"keynames":["id"],"keyvalues":[401]}},
{"kind":"update","table":"mytable2","columnnames":["id","name","age"],"columnvalues":[401,"Update BB",20],"oldkeys":{"keynames":["id"],"keyvalues":[401]}}]}
...
Each top level entry (xid) is a transaction, each item in change is, well, a change. One row may change multiple times.
To import to an OLAP system with limited feature set, I need to have the order explicitly stated. So I need to add a sn for each change in a transaction.
Also, each change must be a top level entry - the OLAP can't iterate sub-items within one entry.
{"xid":1190, "sn":1, "kind":"update", "data":{"id":401,"name":"Update AA","age":20} }
{"xid":1190, "sn":2, "kind":"update", "data":{"id":401,"name":"Update BB","age":20} }
{"xid":1191, "sn":1, "kind":"insert", "data":{"id":625,"name":"Inserted","age":20} }
{"xid":1191, "sn":2, "kind":"delete", "data":{"id":625} }
(The reason is that the OLAP has limited ability to transform the data during import, and also doesn't have the order as a parameter.)
So, I do this using jq:
function transformJsonDataStructure {
    ## First let's reformat it to XML, then transform using XPATH, then back to JSON.
    ## Example input:
    # {"xid":1074,"timestamp":"2018-07-18 17:49:54.719475+02","change":[
    #   {"kind":"update","table":"mytable2","columnnames":["id","name","age"],"columnvalues":[401,"Update AA",20],"oldkeys":{"keynames":["id"],"keyvalues":[401]}},
    #   {"kind":"update","table":"mytable2","columnnames":["id","name","age"],"columnvalues":[401,"Update BB",20],"oldkeys":{"keynames":["id"],"keyvalues":[401]}}]}
    cat "$1" | while read -r LINE ; do
        XID=`echo "$LINE" | jq -c '.xid'`;
        export SN=0;
        #serr "{xid: $XID, changes: $CHANGES}";
        echo "$LINE" | jq -c '.change[]' | while read -r CHANGE ; do
            SN=$((SN+=1))
            KIND=`echo "$CHANGE" | jq -c --raw-output .kind`;
            TABLE=`echo "$CHANGE" | jq -c --raw-output .table`;
            DEST_FILE="$TARGET_PATH-$TABLE.json";
            case "$KIND" in
                update|insert)
                    MAP=$(convertTwoArraysToMap "$(echo "$CHANGE" | jq -c ".columnnames")" "$(echo "$CHANGE" | jq -c ".columnvalues")") ;;
                delete)
                    MAP=$(convertTwoArraysToMap "$(echo "$CHANGE" | jq -c ".oldkeys.keynames")" "$(echo "$CHANGE" | jq -c ".oldkeys.keyvalues")") ;;
            esac
            #echo "{\"xid\":$XID, \"table\":\"$TABLE\", \"kind\":\"$KIND\", \"data\":$MAP }" >> "$DEST_FILE";
            echo "{\"xid\":$XID, \"sn\":$SN, \"kind\":\"$KIND\", \"data\":$MAP }" | tee --append "$DEST_FILE";
        done;
    done;
    return;
}
The problem is performance: I am calling jq several times per entry, which makes this around 1000× slower than processing without the transformation.
How can I perform the transformation above in just one pass? (jq is not a must; another tool can be used too, but it should be available in CentOS packages. I want to avoid coding an extra tool for this.)
From man jq it seems it should be capable of processing the whole file (one JSON entry per row) in one go. I could do it in XSLT, but I can't wrap my head around jq. Especially the iteration over the change array, and combining columnnames and columnvalues into a map.
For the iteration, I think map or map_values could be used.
For the two-arrays-to-map part, I see the from_entries and with_entries functions, but can't get them to work.
Any jq master around to advise?
The following helper function converts the incoming array into an object using headers as the keys:
def objectify(headers):
  [headers, .] | transpose | map({(.[0]): .[1]}) | add;
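For instance, with the sample column arrays from the log:
[401, "Update AA", 20] | objectify(["id","name","age"])
# => {"id": 401, "name": "Update AA", "age": 20}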
The trick now is to use range(0;length) to generate .sn:
{xid} +
(.change
 | range(0; length) as $i
 | .[$i]
 | .columnnames as $header
 | {sn: ($i + 1),
    kind,
    data: (.columnvalues | objectify($header))})
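Saving the filter to a file (transform.jq and changelog.json are just illustrative names here) lets you process the whole log in a single pass, with -c emitting each change as one compact line:
jq -c -f transform.jq changelog.json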
Output
For the given log entry, the output would be:
{"xid":1190,"sn":1,"kind":"update","data":{"id":401,"name":"Update AA","age":20}}
{"xid":1190,"sn":2,"kind":"update","data":{"id":401,"name":"Update BB","age":20}}
Moral
If a solution looks too complicated, it probably is.

Parse JSON field with jq and output repeatedly for every child

I have the following JSON:
{
  "field1": "foo",
  "array": [
    {
      "child_field1": "c1_1",
      "child_field2": "c1_2"
    },
    {
      "child_field1": "c2_1",
      "child_field2": "c2_2"
    }
  ],
  ...
}
and using jq, I would like to return the following output, where the value of field1 is repeated for every child element:
foo,c1_1,c1_2
foo,c2_1,c2_2
...
I can access each field separately, but I am having trouble returning the desired result above.
Can this be done with jq?
jq -r '.array[] as $a | [.field1, $a.child_field1, $a.child_field2] | @csv'
Does the right thing for the sample data you provided, but I freely admit there are lots of ways to do that kind of thing in jq, and that was only the first one which sprang to mind.
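With your sample input it prints quoted CSV:
"foo","c1_1","c1_2"
"foo","c2_1","c2_2"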
I fed it through @csv because it seemed like that was what you wanted, but if you prefer the actual output, exactly as you have written, then:
jq -r '.array[] as $a | "\(.field1),\($a.child_field1),\($a.child_field2)"'
will produce it.
In cases like this, there's no need for reduce or to_entries, or to list the fields explicitly -- one can simply exploit jq's backtracking behavior:
.field1 as $f
| .array[]
| [$f, .[]]
| @csv
As pointed out by @MatthewLDaniel, there are many alternatives to using @csv here.
jq solution:
jq -r '.field1 as $f1 | .array[]
| [$f1, .[]]
| join(",")' input.json
The output:
foo,c1_1,c1_2
foo,c2_1,c2_2