I have a file with 30,000 newline-delimited JSON lines. I am using jq to process it.
Below is the schema of each line (new.json):
{
"indexed": {
"date-parts": [
[
2020,
8,
13
]
],
"date-time": "2020-08-13T06:27:26Z",
"timestamp": 1597300046660
},
"reference-count": 42,
"publisher": "American Chemical Society (ACS)",
"issue": "3",
"content-domain": {
"domain": [],
"crossmark-restriction": false
},
"short-container-title": [
"Org. Lett."
],
"published-print": {
"date-parts": [
[
2005,
2
]
]
},
"DOI": "10.1021/ol047829t",
"type": "journal-article",
"created": {
"date-parts": [
[
2005,
1,
27
]
],
"date-time": "2005-01-27T05:53:29Z",
"timestamp": 1106805209000
},
"page": "383-386",
"source": "Crossref",
"is-referenced-by-count": 38,
"title": [
"Liquid-Crystalline [60]Fullerene-TTF Dyads"
],
"prefix": "10.1021",
"volume": "7",
"author": [
{
"given": "Emmanuel",
"family": "Allard",
"affiliation": []
},
{
"given": "Frédéric",
"family": "Oswald",
"affiliation": []
},
{
"given": "Bertrand",
"family": "Donnio",
"affiliation": []
},
{
"given": "Daniel",
"family": "Guillon",
"affiliation": []
}
],
"member": "316",
"container-title": [
"Organic Letters"
],
"original-title": [],
"link": [
{
"URL": "https://pubs.acs.org/doi/pdf/10.1021/ol047829t",
"content-type": "unspecified",
"content-version": "vor",
"intended-application": "similarity-checking"
}
],
"deposited": {
"date-parts": [
[
2020,
4,
7
]
],
"date-time": "2020-04-07T13:39:55Z",
"timestamp": 1586266795000
},
"score": null,
"subtitle": [],
"short-title": [],
"issued": {
"date-parts": [
[
2005,
2
]
]
},
"references-count": 42,
"alternative-id": [
"10.1021/ol047829t"
],
"URL": "http://dx.doi.org/10.1021/ol047829t",
"relation": {},
"ISSN": [
"1523-7060",
"1523-7052"
],
"issn-type": [
{
"value": "1523-7060",
"type": "print"
},
{
"value": "1523-7052",
"type": "electronic"
}
],
"subject": [
"Physical and Theoretical Chemistry",
"Organic Chemistry",
"Biochemistry"
]
}
For every DOI, I need the values of the given and family keys gathered into a single cell on that DOI's row, in CSV/TSV format.
The expected output for the above JSON is (in CSV/TSV format):
|DOI|givenName|familyName|
|10.1021/ol047829t|Emmanuel; Frédéric; Bertrand; Daniel|Allard; Oswald; Donnio; Guillon|
I am using the command line below, but it throws an error, and when I try to alter it I am unable to get CSV/TSV output at all:
cat new.json | jq -r "[.DOI, .publisher, .author[] | .given] | @tsv" > manage.tsv
The same logic applies to the subject key. I am using the command line below to output the values of the subject key to CSV, but it emits only the first element (in this case just "Physical and Theoretical Chemistry"):
cat new.json | jq -c -r "[.DOI, .publisher, .subject[0]] | @csv" > manage.csv
Any pointers to the right jq command line would be of great help.
Join the given and family names with semicolons separately, then pass the resulting strings as fields to the @tsv filter:
["DOI", "givenName", "familyName"],
(inputs | [.DOI, (.author | map(.given), map(.family) | join("; "))])
| #tsv
Note that you need to invoke jq with the -r and -n flags for this to work and produce valid TSV output.
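Putting it together as one command (a minimal sketch; the output file name manage.tsv is carried over from your attempt):
jq -rn '
["DOI", "givenName", "familyName"],
(inputs | [.DOI, (.author | map(.given), map(.family) | join("; "))])
| @tsv
' new.json > manage.tsv
The same join trick answers your subject question: join the whole array instead of picking .subject[0]. A sketch, assuming every line carries a subject array and that you still want the publisher column:
jq -rn '(inputs | [.DOI, .publisher, (.subject | join("; "))]) | @csv' new.json > manage.csv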
I have a JSON file that I am trying to convert to CSV using jq, but I've been having a lot of problems. This is the JSON:
{
"nhits": 2,
"parameters": {
"dataset": "real-time-bezettingen-fietsenstallingen-gent",
"rows": 10,
"start": 0,
"facet": [
"facilityname"
],
"format": "json",
"timezone": "UTC"
},
"records": [
{
"datasetid": "real-time-bezettingen-fietsenstallingen-gent",
"recordid": "d471594688a931ba8d81f8d883874a08cee84775",
"fields": {
"id": "48-2",
"freeplaces": 71,
"facilityname": "Braunplein",
"geo_point_2d": [
51.05406845807926,
3.723722319130363
],
"time": "2022-11-10T12:18:01+00:00",
"totalplaces": 116,
"occupiedplaces": 45,
"bezetting": 38
},
"geometry": {
"type": "Point",
"coordinates": [
3.723722319130363,
51.05406845807926
]
},
"record_timestamp": "2022-11-10T12:18:04.838Z"
},
{
"datasetid": "real-time-bezettingen-fietsenstallingen-gent",
"recordid": "d0121748cf31c7e1c02d99712bdf07cb33156689",
"fields": {
"id": "48-1",
"freeplaces": 65,
"facilityname": "Korenmarkt",
"geo_point_2d": [
51.05388288288933,
3.7214177570400473
],
"time": "2022-11-10T12:18:01+00:00",
"totalplaces": 235,
"occupiedplaces": 170,
"bezetting": 72
},
"geometry": {
"type": "Point",
"coordinates": [
3.7214177570400473,
51.05388288288933
]
},
"record_timestamp": "2022-11-10T12:18:04.838Z"
}
],
"facet_groups": [
{
"name": "facilityname",
"facets": [
{
"name": "Braunplein",
"count": 1,
"state": "displayed",
"path": "Braunplein"
},
{
"name": "Korenmarkt",
"count": 1,
"state": "displayed",
"path": "Korenmarkt"
}
]
}
]
}
I only want the columns facilityname, time, totalplaces, occupiedplaces and bezetting. I tried converting it with the following command:
jq -r '["naam", "tijd", "totaalAantalPlaatsen", "bezettePlaatsen", "bezetting"] , .records[] | (.fields[] | [.facilityname, .time, .totalplaces, .occupiedplaces, .bezetting]) | #csv' data.json
But I get the error:
jq: error (at data.json:0): Cannot index array with string "fields"
Does anyone know what I'm doing wrong?
You just need some parentheses around the .records[] ... part. Without them, the comma binds tighter than the pipe, so the header array is also piped through the .fields lookup, which produces the error Cannot index array with string "fields". Note also that .records[].fields (indexing the object) replaces .fields[] (which iterates its values):
jq -r '
["name", "tijd", "totaalAantalPlaatsen", "bezettePlaatsen", "bezetting"],
(.records[].fields | [.facilityname, .time, .totalplaces, .occupiedplaces, .bezetting])
| @csv
' file.json
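Against the sample above, this should print something like the following (@csv double-quotes strings and leaves numbers bare):
"name","tijd","totaalAantalPlaatsen","bezettePlaatsen","bezetting"
"Braunplein","2022-11-10T12:18:01+00:00",116,45,38
"Korenmarkt","2022-11-10T12:18:01+00:00",235,170,72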
I am using an aws ecs query to get the list of properties used by the currently running task.
Command:
cft="aws ecs describe-tasks --cluster arn:aws:ecs:us-west-2:4984314772:cluster/secrets --tasks arn:aws:ecs:us-west-2:4984314772:task/secrets/86855757eec4487f9d4475a1f7c4cb0b"
I am storing its result in an output variable:
output=$(eval $cft)
Output:
"tasks": [
{
"attachments": [
{
"id": "da8a1312-8278-46d5-8e3b-6b6a1d96f820",
"type": "ElasticNetworkInterface",
"status": "ATTACHED",
"details": [
{
"name": "subnetId",
"value": "subnet-0a151f2eb959ad4"
},
{
"name": "networkInterfaceId",
"value": "eni-081948e3666253f"
},
{
"name": "macAddress",
"value": "02:2a:9i:5c:4a:77"
},
{
"name": "privateDnsName",
"value": "ip-172-56-17-177.us-west-2.compute.internal"
},
{
"name": "privateIPv4Address",
"value": "172.56.17.177"
}
]
}
],
"availabilityZone": "us-west-2a",
"clusterArn": "arn:aws:ecs:us-west-2:4984314772:cluster/secrets",
"containers": [
{
"taskArn": "arn:aws:ecs:us-west-2:4984314772:task/secrets/86855757eec4487f9d4475a1f7c4cb0b",
"name": "nginx",
"image": "nginx",
"lastStatus": "PENDING",
"networkInterfaces": [
{
"attachmentId": "da8a1312-8278-46d5-6b6a1d96f820",
"privateIpv4Address": "172.31.17.176"
}
],
"healthStatus": "UNKNOWN",
"cpu": "0"
}
],
"cpu": "256",
"createdAt": "2020-12-10T18:00:16.320000+05:30",
"desiredStatus": "RUNNING",
"group": "family:nginx",
"healthStatus": "UNKNOWN",
"lastStatus": "PENDING",
"launchType": "FARGATE",
"memory": "512",
"overrides": {
"containerOverrides": [
{
"name": "nginx"
}
],
"inferenceAcceleratorOverrides": []
},
"platformVersion": "1.4.0",
"tags": [],
"taskArn": "arn:aws:ecs:us-west-2:4984314772:task/secrets/86855757eec4487f9d4475a1f7c4cb0b",
"taskDefinitionArn": "arn:aws:ecs:us-west-2:4984314772:task-definition/nginx:17",
"version": 2
}
],
"failures": []
}
Now if I do an echo of $output.tasks[0].containers[0], it just prints the entire thing again. I want to store the result in a variable and reference individual parameters, the way we would with JSON.
You will need to use a JSON parser such as jq, like so:
eval $cft | jq '.tasks[].containers[]'
To avoid using eval you could simply pipe the aws command into jq, like so:
aws ecs describe-tasks --cluster arn:aws:ecs:us-west-2:4984314772:cluster/secrets --tasks arn:aws:ecs:us-west-2:4984314772:task/secrets/86855757eec4487f9d4475a1f7c4cb0b | jq '.tasks[].containers[]'
or:
cft=$(aws ecs describe-tasks --cluster arn:aws:ecs:us-west-2:4984314772:cluster/secrets --tasks arn:aws:ecs:us-west-2:4984314772:task/secrets/86855757eec4487f9d4475a1f7c4cb0b)
echo "$cft" | jq '.tasks[].containers[]'
Here the variable holds the raw JSON, and the jq filter is applied when reading it back out.
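To pull out a single value rather than a whole object, add -r so jq prints the raw string; a minimal sketch, assuming $cft holds the raw JSON as in the last variant:
echo "$cft" | jq -r '.tasks[0].containers[0].name'        # nginx
echo "$cft" | jq -r '.tasks[0].containers[0].lastStatus'  # PENDING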
I have been playing around with jq to reshape a JSON file, but I am having trouble with one particular transformation. Given a test.json file in this format:
[
{
"name": "A", // This would be the first key
"number": 1,
"type": "apple",
"city": "NYC" // This would be the second key
},
{
"name": "A",
"number": "5",
"type": "apple",
"city": "LA"
},
{
"name": "A",
"number": 2,
"type": "apple",
"city": "NYC"
},
{
"name": "B",
"number": 3,
"type": "apple",
"city": "NYC"
}
]
I was wondering, how can I format it this way using jq?
[
{
"key": "A",
"values": [
{
"key": "NYC",
"values": [
{
"number": 1,
"type": "a"
},
{
"number": 2,
"type": "b"
}
]
},
{
"key": "LA",
"values": [
{
"number": 5,
"type": "b"
}
]
}
]
},
{
"key": "B",
"values": [
{
"key": "NYC",
"values": [
{
"number": 3,
"type": "apple"
}
]
}
]
}
]
I have followed the thread "Using jq, convert array of name/value pairs to object with named keys" and tried to group the JSON using this expression:
jq '. | group_by(.name) | group_by(.city)' ./test.json
but I have not been able to add the keys to the output.
You'll want to group the items at the different levels, building out your result objects as you go:
group_by(.name) | map({
key: .[0].name,
values: (group_by(.city) | map({
key: .[0].city,
values: map({number,type})
}))
})
Just keep in mind that group_by/1 yields its groups in sorted order. If you need to preserve the input order instead, you'll want an implementation that avoids the sort:
def group_by_unsorted(key_selector):
reduce .[] as $i ({};
.["\($i|key_selector)"] += [$i]
)|[.[]];
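To use it, swap it in for each group_by call; a sketch over the same input, keeping names and cities in order of first appearance:
group_by_unsorted(.name) | map({
  key: .[0].name,
  values: (group_by_unsorted(.city) | map({
    key: .[0].city,
    values: map({number,type})
  }))
})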
I want to convert a complex JSON file into a simpler one using jq. However, the query I'm using generates incorrect output.
My (cut down) JSON file:
[
{
"id": 100,
"foo": [
{
"bar": [
{"type": "read"},
{"type": "write"}
],
"users": ["admin_1"],
"groups": []
},
{
"bar": [
{"type": "execute"},
{ "type": "read"}
],
"users": [],
"groups": ["admin_2"]
}
]
},
{
"id": 101,
"foo": [
{
"bar": [
{"type": "read"}
],
"users": [
"admin_3"
],
"groups": []
}
]
}
]
I need to generate a flatter JSON file and combine the users and groups into one field, similar to this:
[
{
"id": 100,
"users_groups": [
"admin_1",
"admin_2"
],
"bar": ["read"]
},
{
"id": 100,
"users_groups": ["admin_1"],
"bar": ["write"]
},
{
"id": 100,
"users_groups": ["admin_2"],
"bar": ["execute"]
},
{
"id": 101,
"users_groups": ["admin_3"],
"bar": ["read"]
}
]
Everything I try in jq gives me incorrect output (where admin_1 incorrectly has bar=execute and admin_2 incorrectly has bar=write), similar to the following:
[
{
"id": 100,
"users_groups": [
"admin_1",
"admin_2"
],
"bar": ["read", "write", "execute"]
},
{
"id": 101,
"users_groups": ["admin_3"],
"bar": ["read"]
}
]
I have tried many variants of this query; any idea what I should be doing instead?
cat file.json | jq -r '[.[] | select(has("foo")) |{"id", "users":(.foo[] | .users), "groups":(.foo[] | .groups), "bar":([.foo[].bar[] | .type])} ] '
The following filter groups by "type" as the question seems to require:
map(.id as $id
| [.foo[]
| {id: $id, bar: .bar[].type} +
{"users_groups": (.users + .groups)[]} ]
| group_by(.bar)
| map(.[0] + {"users_groups": [.[].users_groups]}) )
Output:
[
[
{
"id": 100,
"bar": "execute",
"users_groups": [
"admin_2"
]
},
{
"id": 100,
"bar": "read",
"users_groups": [
"admin_1",
"admin_2"
]
},
{
"id": 100,
"bar": "write",
"users_groups": [
"admin_1"
]
}
],
[
{
"id": 101,
"bar": "read",
"users_groups": [
"admin_3"
]
}
]
]
Variations
To achieve the array-of-objects output format, simply tack on | [.[][]]; it would similarly be easy to ensure that .bar is array-valued, though that might be pointless given that the grouping is by .type. A combined sketch follows.
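A hedged sketch of the complete command producing the flat array from the question, with .bar wrapped back into an array (note that group_by sorts, so the rows come out in a different order than the question lists them):
jq 'map(.id as $id
  | [.foo[]
    | {id: $id, bar: .bar[].type} +
      {"users_groups": (.users + .groups)[]} ]
  | group_by(.bar)
  | map(.[0] + {"users_groups": [.[].users_groups]} | .bar |= [.]))
  | [.[][]]' file.json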
I'm trying to extract the sids, ll, state, name, smry values in my JSON file using jq and export to a csv.
JSON File (out.json):
{
"data": [
{
"meta": {
"uid": 74529,
"ll": [
-66.9333,
47.0667
],
"sids": [
"CA008102500 6"
],
"state": "NB",
"elev": 1250,
"name": "LONG LAKE"
},
"smry": [
[
"42",
"1955-02-23"
]
]
},
{
"meta": {
"uid": 74534,
"ll": [
-67.2333,
45.9667
],
"sids": [
"CA008103425 6"
],
"state": "NB",
"elev": 150.9,
"name": "NACKAWIC"
},
"smry": [
[
"40",
"1969-02-23"
]
]
},
{
"meta": {
"uid": 74549,
"ll": [
-67.4667,
47.4667
],
"sids": [
"CA008104933 6"
],
"state": "NB",
"elev": 794,
"name": "ST QUENTIN"
},
"smry": [
[
"M",
"M"
]
]
},
{
"meta": {
"uid": 74550,
"ll": [
-67.2667,
45.1833
],
"sids": [
"CA008104936 6"
],
"state": "NB",
"elev": 36.1,
"name": "ST STEPHEN"
},
"smry": [
[
"48",
"1900-02-23"
]
]
},
{
"meta": {
"uid": 74554,
"ll": [
-67.25,
47.2667
],
"sids": [
"CA008105000 6"
],
"state": "NB",
"elev": 915.4,
"name": "SISSON DAM"
},
"smry": [
[
"35",
"1955-02-23"
]
]
}
]
}
Terminal Code:
jq '.data | [ {sids, ll, state, name, smry} ]' out.json
I am getting the following errors:
assertion "cb == jq_util_input_next_input_cb" failed: file "/usr/src/ports/jq/jq-1.5-3.x86_64/src/jq-1.5/util.c", line 371, function: jq_util_input_get_position
Aborted (core dumped)
Example Expected Output:
sids, ll, state, name, smry
CA008102500, -66.9333, 47.0667, NB, LONG LAKE, 42,1955-02-23
CA008103425, -67.2333, 45.9667, NB, NACKAWIC, 35,1955-02-23
What am I doing wrong?
It's a bit more complex because you need to flatten sids, ll and smry before you can flatten the whole record. I recommend creating a jq file:
foo.jq:
.data[]|{
"sids":(.meta.sids[0]|split(" ")[0]),
"ll":(.meta.ll|map(tostring)|join(",")),
"state":.meta.state,
"name":.meta.name,
"smry":(.smry[]|join(","))
}|join(",")
# Note: join here iterates the object's values in insertion order.
# For robustly quoted CSV, build an array of fields and pipe it to @csv
# instead (see the sketch after the output below).
And then call:
jq -rf foo.jq out.json
Output:
CA008102500,-66.9333,47.0667,NB,LONG LAKE,42,1955-02-23
CA008103425,-67.2333,45.9667,NB,NACKAWIC,40,1969-02-23
CA008104933,-67.4667,47.4667,NB,ST QUENTIN,M,M
CA008104936,-67.2667,45.1833,NB,ST STEPHEN,48,1900-02-23
CA008105000,-67.25,47.2667,NB,SISSON DAM,35,1955-02-23
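For properly quoted CSV (safe if a station name ever contains a comma), a sketch of a foo.jq variant; splitting ll into two columns and taking only the first smry pair are assumptions based on the sample data:
.data[] | [
  (.meta.sids[0] | split(" ")[0]),
  .meta.ll[0],
  .meta.ll[1],
  .meta.state,
  .meta.name,
  .smry[0][0],
  .smry[0][1]
] | @csv
Invoke it the same way: jq -rf foo.jq out.json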