I have a JSON file, example.json:
[
[
"126",
1522767000
],
[
"122",
1522859400
],
[
"126",
1523348520
]
]
...and would like to add multiple parent keys, with the desired output:
{
"target": "Systolic",
"datapoints": [
[
"126",
1522767000
],
[
"122",
1522859400
],
[
"126",
1523348520
]
]
}
I'm having trouble. Attempting things like:
cat example.json | jq -s '{target:.[]}'
adds the one key, but I don't understand how to give target a value and add another key, datapoints.
With a straightforward jq expression:
jq '{target: "Systolic", datapoints: .}' example.json
The output:
{
"target": "Systolic",
"datapoints": [
[
"126",
1522767000
],
[
"122",
1522859400
],
[
"126",
1523348520
]
]
}
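If the target name shouldn't be hardcoded in the filter, it can be passed in from the shell with --arg (a minimal sketch; the variable name $target is my choice):
jq --arg target "Systolic" '{target: $target, datapoints: .}' example.json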
I have JSON like this:
[
{
"name": "hosts",
"ipaddress": "1.2.3.4",
"status": "UP",
"randomkey": "randomvalue"
},
{
"name": "hosts",
"ipaddress": "5.6.7.8",
"status": "DOWN",
"newkey": "newvalue"
},
{
"name": "hosts",
"ipaddress": "9.10.11.12",
"status": "RESTART",
"anotherkey": "anothervalue"
}
]
I want to merge the objects and am looking for output like this:
[
{
"name": "hosts", //doesn't matter if it is ["hosts"]
"ipaddress": ["1.2.3.4", "5.6.7.8", "9.10.11.12"],
"status": ["UP", "DOWN", "RESTART"],
"randomkey": ["randomvalue"],
"newkey": ["newvalue"],
"anotherkey": ["anothervalue"]
}
]
I can hardcode each and every key and do something like this: { ipaddress: (map(.ipaddress) | unique ) } + { status: (map(.status) | unique ) } + { randomkey: (map(.randomkey) | unique ) }
The important ask here is that the keys are random and cannot be hardcoded.
Is there a way I can merge all the keys without hardcoding them?
Using reduce, then unique would be one way:
jq '[
reduce (.[] | to_entries[]) as {$key, $value} ({}; .[$key] += [$value])
| map_values(unique)
]'
[
{
"name": [
"hosts"
],
"ipaddress": [
"1.2.3.4",
"5.6.7.8",
"9.10.11.12"
],
"status": [
"DOWN",
"RESTART",
"UP"
],
"randomkey": [
"randomvalue"
],
"newkey": [
"newvalue"
],
"anotherkey": [
"anothervalue"
]
}
]
Using group_by and map, then unique again, would be another (group_by sorts by the grouping key, which is why the keys below come out alphabetically):
jq '[
map(to_entries[]) | group_by(.key)
| map({key: first.key, value: map(.value) | unique})
| from_entries
]'
[
{
"anotherkey": [
"anothervalue"
],
"ipaddress": [
"1.2.3.4",
"5.6.7.8",
"9.10.11.12"
],
"name": [
"hosts"
],
"newkey": [
"newvalue"
],
"randomkey": [
"randomvalue"
],
"status": [
"DOWN",
"RESTART",
"UP"
]
}
]
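The question notes the array wrapping doesn't matter for name; still, if singleton arrays are undesirable, a post-processing step appended to either filter could unwrap them (a sketch):
| map(map_values(if length == 1 then first else . end))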
Can somebody help me extract the following with jq:
{
"status": "success",
"data": {
"resultType": "matrix",
"result": [
{
"metric": {
"pod": "dev-cds-5c97cf7f78-sw6b9"
},
"values": [
[
1588204800,
"0.3561394483796914"
],
[
1588215600,
"0.3607968456046861"
],
[
1588226400,
"0.3813882532417868"
],
[
1588237200,
"0.6264355815408573"
]
]
},
{
"metric": {
"pod": "uat-cds-66ccc9685-b5tvh"
},
"values": [
[
1588204800,
"0.9969746974696218"
],
[
1588215600,
"0.7400881057270005"
],
[
1588226400,
"1.2298959318837195"
],
[
1588237200,
"0.9482296838254507"
]
]
}
]
}
}
I need to obtain each matching item individually by the given word dev-cds, without spelling out the full name dev-cds-5c97cf7f78-sw6b9.
Result desired:
{
"metric": {
"pod": "dev-cds-5c97cf7f78-sw6b9"
},
"values": [
[
1588204800,
"0.3561394483796914"
],
[
1588215600,
"0.3607968456046861"
],
[
1588226400,
"0.3813882532417868"
],
[
1588237200,
"0.6264355815408573"
]
]
}
You should first iterate over the result array, then check whether the pod field inside the metric object has a value that contains "dev-cds".
.data.result[] | if .metric.pod | contains("dev-cds") then . else empty end
https://jqplay.org/s/54OH83qHKP
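A more compact equivalent uses select, which passes its input through only when the given condition holds; startswith could be swapped in to anchor the match at the beginning of the pod name:
.data.result[] | select(.metric.pod | contains("dev-cds"))
# or, anchored at the start of the name:
.data.result[] | select(.metric.pod | startswith("dev-cds"))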
I have been trying to extract a CSV from the JSON file below using jq, but have not gotten far. Can any experts out here help?
{
"values": [
{
"resourceId": "xxxx-xxxx-xxx-8b16-xxxxxx",
"property-contents": {
"property-content": [
{
"statKey": "config|name",
"timestamps": [
1517591034069
],
"values": [
"somebname.UNIVERSE.test.com"
]
},
{
"statKey": "summary|guest|ipAddress",
"timestamps": [
1517591034069
],
"values": [
"100.xx.5.xx"
]
},
{
"statKey": "summary|parentCluster",
"timestamps": [
1551120506024
],
"values": [
"UFO-UFO"
]
},
{
"statKey": "summary|parentDatacenter",
"timestamps": [
1551120806021
],
"values": [
"GALAXY-D123"
]
},
{
"statKey": "summary|parentVcenter",
"timestamps": [
1517591334271
],
"values": [
"X-RAY123"
]
},
{
"statKey": "summary|runtime|powerState",
"timestamps": [
1517591034069
],
"values": [
"Powered On"
]
}
]
}
},
..
...
Expected output is:
xxx-xxxx-xxx-8b16-xxxxxx,somebname.UNIVERSE.test.com,100.xx.5.xx,UFO-UFO,GALAXY-D123,X-RAY123,Powered On
Your expected output leaves some things unclear:
The second CSV column contains somebname.UNIVERSE.test.com, which was presumably derived from the section "property-content": [ { ..., "values": [ "somebname.UNIVERSE.test.com" ], ... }. How do you determine which element in the "property-content" list to pick for the second column? Is it because it's the first element? Is it because of its "statKey": "config|name"?
What if the "property-content" list is empty? What if it doesn't have the "statKey" entry you're looking for? What if the "values" list has zero or more than one element? A CSV cell can only contain one scalar value. The same questions apply to the subsequent columns.
Making a wild guess here,
$ jq -r '.values[] | [ .resourceId, (."property-contents"."property-content"[] | .values[]) ] | join(",")' your.json
xxxx-xxxx-xxx-8b16-xxxxxx,somebname.UNIVERSE.test.com,100.xx.5.xx,UFO-UFO,GALAXY-D123,X-RAY123,Powered On
I cannot guarantee (and somewhat doubt) that this works in the general case, but I've been unable to extract a general case from your one example.
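Under the same guess, if the consumers of the CSV need proper quoting, @csv could replace join(","); note that string fields will then come out double-quoted:
jq -r '.values[] | [ .resourceId, (."property-contents"."property-content"[] | .values[]) ] | @csv' your.json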
I have JSON data of the form below. I want to transform it in a streaming fashion, making the key of each record into a field of that record. My problem: I don't know how to do that without truncating the key and losing it. I have inferred the required structure of the stream; see the bottom.
Question: how do I transform the input data into a stream without losing the key?
Data:
{
"foo" : {
"a" : 1,
"b" : 2
},
"bar" : {
"a" : 1,
"b" : 2
}
}
A non-streaming transformation uses:
jq 'with_entries(.value += {key}) | .[]'
yielding:
{
"a": 1,
"b": 2,
"key": "foo"
}
{
"a": 1,
"b": 2,
"key": "bar"
}
Now, if my data file is very very large, I'd prefer to stream:
jq -ncr --stream 'fromstream(1|truncate_stream(inputs))'
The problem: this truncates the keys "foo" and "bar". On the other hand, not truncating the stream and just calling fromstream(inputs) is pretty meaningless: this makes the whole --stream part a no-op and jq reads everything into memory.
The structure of the stream is the following, using . | tostream:
[
[
"foo",
"a"
],
1
]
[
[
"foo",
"b"
],
2
]
[
[
"foo",
"b"
]
]
[
[
"bar",
"a"
],
1
]
[
[
"bar",
"b"
],
2
]
[
[
"bar",
"b"
]
]
[
[
"bar"
]
]
while with truncation, . as $dot | (1|truncate_stream($dot | tostream)), the structure is:
[
[
"a"
],
1
]
[
[
"b"
],
2
]
[
[
"b"
]
]
[
[
"a"
],
1
]
[
[
"b"
],
2
]
[
[
"b"
]
]
So it looks like, in order to construct a stream the way I need it, I will have to generate the following structure (I have inserted a [["foo"]] after the first record finishes):
[
[
"foo",
"a"
],
1
]
[
[
"foo",
"b"
],
2
]
[
[
"foo",
"b"
]
]
[
[
"foo"
]
]
[
[
"bar",
"a"
],
1
]
[
[
"bar",
"b"
],
2
]
[
[
"bar",
"b"
]
]
[
[
"bar"
]
]
Making this into a string jq can consume, I indeed get what I need (see also the snippet here: https://jqplay.org/s/iEkMfm_u92):
fromstream([ [ "foo", "a" ], 1 ],[ [ "foo", "b" ], 2 ],[ [ "foo", "b" ] ],[["foo"]],[ [ "bar", "a" ], 1 ],[ [ "bar", "b" ], 2 ],[ [ "bar", "b" ] ],[ [ "bar" ] ])
yielding:
{
"foo": {
"a": 1,
"b": 2
}
}
{
"bar": {
"a": 1,
"b": 2
}
}
The final result (see https://jqplay.org/s/-UgbEC4BN8) would be:
fromstream([ [ "foo", "a" ], 1 ],[ [ "foo", "b" ], 2 ],[ [ "foo", "b" ] ],[["foo"]],[ [ "bar", "a" ], 1 ],[ [ "bar", "b" ], 2 ],[ [ "bar", "b" ] ],[ [ "bar" ] ]) | with_entries(.value += {key}) | .[]
yielding
{
"a": 1,
"b": 2,
"key": "foo"
}
{
"a": 1,
"b": 2,
"key": "bar"
}
A generic function, atomize(s), for converting objects to key-value objects is provided in the jq Cookbook. Using it, the solution to the problem here is simply:
atomize(inputs) | to_entries[] | .value + {key}
({key} is shorthand for {key: .key}.)
For reference, here is the def:
# Convert an object (presented in streaming form as the stream s) into
# a stream of single-key objects
# Example:
# atomize(inputs) (used in conjunction with "jq -n --stream")
def atomize(s):
fromstream(foreach s as $in ( {previous:null, emit: null};
if ($in | length == 2) and ($in|.[0][0]) != .previous and .previous != null
then {emit: [[.previous]], previous: ($in|.[0][0])}
else { previous: ($in|.[0][0]), emit: null}
end;
(.emit // empty), $in
) ) ;
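As a usage sketch, assuming the def above plus the line atomize(inputs) | to_entries[] | .value + {key} are saved as atomize.jq (a filename of my choosing) and the input shown earlier as data.json, this should print:
$ jq -nc --stream -f atomize.jq data.json
{"a":1,"b":2,"key":"foo"}
{"a":1,"b":2,"key":"bar"}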
I'm trying to extract the sids, ll, state, name, and smry values in my JSON file using jq and export them to a CSV.
JSON File (out.json):
{
"data": [
{
"meta": {
"uid": 74529,
"ll": [
-66.9333,
47.0667
],
"sids": [
"CA008102500 6"
],
"state": "NB",
"elev": 1250,
"name": "LONG LAKE"
},
"smry": [
[
"42",
"1955-02-23"
]
]
},
{
"meta": {
"uid": 74534,
"ll": [
-67.2333,
45.9667
],
"sids": [
"CA008103425 6"
],
"state": "NB",
"elev": 150.9,
"name": "NACKAWIC"
},
"smry": [
[
"40",
"1969-02-23"
]
]
},
{
"meta": {
"uid": 74549,
"ll": [
-67.4667,
47.4667
],
"sids": [
"CA008104933 6"
],
"state": "NB",
"elev": 794,
"name": "ST QUENTIN"
},
"smry": [
[
"M",
"M"
]
]
},
{
"meta": {
"uid": 74550,
"ll": [
-67.2667,
45.1833
],
"sids": [
"CA008104936 6"
],
"state": "NB",
"elev": 36.1,
"name": "ST STEPHEN"
},
"smry": [
[
"48",
"1900-02-23"
]
]
},
{
"meta": {
"uid": 74554,
"ll": [
-67.25,
47.2667
],
"sids": [
"CA008105000 6"
],
"state": "NB",
"elev": 915.4,
"name": "SISSON DAM"
},
"smry": [
[
"35",
"1955-02-23"
]
]
}
]
}
Terminal Code:
jq '.data | [ {sids, ll, state, name, smry} ]' out.json
I am getting the following errors:
assertion "cb == jq_util_input_next_input_cb" failed: file "/usr/src/ports/jq/jq-1.5-3.x86_64/src/jq-1.5/util.c", line 371, function: jq_util_input_get_position
Aborted (core dumped)
Example Expected Output:
sids, ll, state, name, smry
CA008102500, -66.9333, 47.0667, NB, LONG LAKE, 42,1955-02-23
CA008103425, -67.2333, 45.9667, NB, NACKAWIC, 35,1955-02-23
What am I doing wrong?
It's a bit more complex because you need to flatten sids, ll and smry before you can flatten the whole record. I recommend creating a jq file:
foo.jq:
.data[]|{
"sids":(.meta.sids[0]|split(" ")[0]),
"ll":(.meta.ll|map(tostring)|join(",")),
"state":.meta.state,
"name":.meta.name,
"smry":(.smry[]|join(","))
}|join(",")
# or, for robust CSV output, replace the final join(",") with:
# }|[.[]]|@csv
And then call:
jq -rf foo.jq file.json
Output:
CA008102500,-66.9333,47.0667,NB,LONG LAKE,42,1955-02-23
CA008103425,-67.2333,45.9667,NB,NACKAWIC,40,1969-02-23
CA008104933,-67.4667,47.4667,NB,ST QUENTIN,M,M
CA008104936,-67.2667,45.1833,NB,ST STEPHEN,48,1900-02-23
CA008105000,-67.25,47.2667,NB,SISSON DAM,35,1955-02-23
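If robust quoting matters (say, a station name containing a comma), a variant of the same filter could spread every scalar into one array and let @csv handle the escaping; this assumes, like the original, a single smry row per record, and @csv double-quotes string fields while leaving numbers bare:
.data[]|[
  (.meta.sids[0]|split(" ")[0]),
  .meta.ll[],
  .meta.state,
  .meta.name,
  .smry[][]
]|@csv
Run the same way with jq -rf, this should yield rows like:
"CA008102500",-66.9333,47.0667,"NB","LONG LAKE","42","1955-02-23"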