I'm trying to get the total count for a specific value in an array.
The input would be something like:
[{
"name": "mars",
"runner": [
{
"foo": null,
"idle": true
},
{
"foo": null,
"idle": true
},
{
"foo": null,
"idle": false
}
],
"name": "june",
"runner": [
{
"foo": null,
"idle": true
},
{
"foo": null,
"idle": true
},
{
"foo": null,
"idle": true
}
]
}]
The desired output:
[ {"name": "mars", "idle": 2}, {"name": "june", "idle": 3} ]
I tried using select and map, but I don't fully understand how jq works. For instance, I tried the following query:
jq ' .[] | select(.runner[].idle == true) | {name: .name}'
The result was:
{ "name": "mars" }
{ "name": "mars" }
{ "name": "june" }
{ "name": "june" }
{ "name": "june" }
(3x true in june and 2x in mars.) I could keep parsing the JSON until I get the result I want, but that doesn't seem right.
Your input data doesn't seem to correspond exactly with your posting. I'll assume you meant:
[{"name":"mars",
"runner":[{"foo":null,"idle":true},{"foo":null,"idle":true},{"foo":null,"idle":false}]},
{"name":"june",
"runner":[{"foo":null,"idle":true},{"foo":null,"idle":true},{"foo":null,"idle":true}]}]
length
A reasonable approach would be to rely on the builtin length function:
map( { name, idle: (.runner | map(select(.idle)) | length)} )
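If it is unclear what the inner pipeline is doing, it may help to run it on a single element first. For example, against the corrected input above (input.json is just a placeholder filename):
jq '.[0].runner | map(select(.idle))' input.json            # => [{"foo":null,"idle":true},{"foo":null,"idle":true}]
jq '.[0].runner | map(select(.idle)) | length' input.json   # => 2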
count
A better (e.g. more efficient) solution would be to define a function that can count:
def count(s): reduce s as $i (0; .+1);
Here s is any filter that produces a stream of values. A solution to the problem at hand could then be written as follows:
map( {name, idle: count(.runner[] | select(.idle))} )
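For reference, a complete invocation could look like this (again, input.json is just a placeholder for a file holding the corrected input):
jq 'def count(s): reduce s as $i (0; .+1);
    map( {name, idle: count(.runner[] | select(.idle))} )' input.json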
Output
The output in both cases:
[
{
"name": "mars",
"idle": 2
},
{
"name": "june",
"idle": 3
}
]
Related
Please find below the Groovy code I am using to get a particular field from the response body.
Query 1:
It retrieves the result when I use an explicit index value: data.RenewalDetails[0] gives Value1 as output, and data.RenewalDetails[1] gives Value2.
But in my real case I will never know the number of blocks in the response, so I want to get all the values that satisfy the condition. I tried data.RenewalDetails[*], but it is not working. Can you please help?
Query 2:
Apart from the above condition, I want to add one more filter: "FamilyCode": "PREMIUM" in the Itemdetails. Can you help with that as well?
def BoundId = new groovy.json.JsonSlurper().parseText('{"data":{"RenewalDetails":[{"ExpiryDetails":{"duration":"xxxxx","destination":"LHR","from":"AUH","value":2,"segments":[{"valudeid":"xxx-xx6262-xxxyyy-1111-11-11-1111"}]},"Itemdetails":[{"BoundId":"Value1","isexpired":true,"FamilyCode":"PREMIUM","availabilityDetails":[{"travelID":"AAA-AB1234-AAABBB-2022-11-10-1111","quota":"X","scale":"XXX","class":"X"}]}]},{"ExpiryDetails":{"duration":"xxxxx","destination":"LHR","from":"AUH","value":2,"segments":[{"valudeid":"xxx-xx6262-xxxyyy-1111-11-11-1111"}]},"Itemdetails":[{"BoundId":"Value2","isexpired":true,"FamilyCode":"PREMIUM","availabilityDetails":[{"travelID":"AAA-AB1234-AAABBB-2022-11-10-1111","quota":"X","scale":"XXX","class":"X"}]}]}]},"warnings":[{"code":"xxxx","detail":"xxxxxxxx","title":"xxxxxxxx"}]}')
.data.RenewalDetails[0].Itemdetails.find { itemDetail ->
itemDetail.availabilityDetails[0].travelID.length() == 33
}?.BoundId
println "Hello " + BoundId
Something like this:
def txt = '''\
{
"data": {
"RenewalDetails": [
{
"ExpiryDetails": {
"duration": "xxxxx",
"destination": "LHR",
"from": "AUH",
"value": 2,
"segments": [
{
"valudeid": "xxx-xx6262-xxxyyy-1111-11-11-1111"
}
]
},
"Itemdetails": [
{
"BoundId": "Value1",
"isexpired": true,
"FamilyCode": "PREMIUM",
"availabilityDetails": [
{
"travelID": "AAA-AB1234-AAABBB-2022-11-10-1111",
"quota": "X",
"scale": "XXX",
"class": "X"
}
]
}
]
},
{
"ExpiryDetails": {
"duration": "xxxxx",
"destination": "LHR",
"from": "AUH",
"value": 2,
"segments": [
{
"valudeid": "xxx-xx6262-xxxyyy-1111-11-11-1111"
}
]
},
"Itemdetails": [
{
"BoundId": "Value2",
"isexpired": true,
"FamilyCode": "PREMIUM",
"availabilityDetails": [
{
"travelID": "AAA-AB1234-AAABBB-2022-11-10-1111",
"quota": "X",
"scale": "XXX",
"class": "X"
}
]
}
]
}
]
},
"warnings": [
{
"code": "xxxx",
"detail": "xxxxxxxx",
"title": "xxxxxxxx"
}
]
}'''
def json = new groovy.json.JsonSlurper().parseText txt
List<String> BoundIds = json.data.RenewalDetails.Itemdetails*.find { itemDetail ->
itemDetail.availabilityDetails[0].travelID.size() == 33 && itemDetail.FamilyCode == 'PREMIUM'
}?.BoundId
assert BoundIds.toString() == '[Value1, Value2]'
Note that you will get the BoundIds as a List.
If you amend your code like this:
def json = new groovy.json.JsonSlurper().parse(prev.getResponseData())
you would be able to access the number of returned items as:
def size = json.data.RenewalDetails.size()
as RenewalDetails represents a List
Just add as many conditions as you want using Groovy's && operator:
find { itemDetail ->
itemDetail.availabilityDetails[0].travelID.length() == 33 &&
itemDetail.FamilyCode.equals('PREMIUM')
}
More information:
Apache Groovy - Parsing and producing JSON
Apache Groovy: What Is Groovy Used For?
I could not find how to count occurrences of "title" grouped by "member_id".
The JSON file is:
[
{
"member_id": 123,
"loans":[
{
"date": "123",
"media": [
{ "title": "foo" },
{ "title": "bar" }
]
},
{
"date": "456",
"media": [
{ "title": "foo" }
]
}
]
},
{
"member_id": 456,
"loans":[
{
"date": "789",
"media": [
{ "title": "foo"}
]
}
]
}
]
With this query I get the loan entries for users with title == "foo":
jq '.[] | (.member_id) as $m | .loans[].media[] | select(.title=="foo") | {id: $m, title: .title}' member.json
{
"id": 123,
"title": "foo"
}
{
"id": 123,
"title": "foo"
}
{
"id": 456,
"title": "foo"
}
But I could not find how to get a count per user (group by) for a title, to get a result like:
{
"id": 123,
"title": "foo",
"count": 2
}
{
"id": 456,
"title": "foo",
"count": 1
}
I got errors like jq: error (at member.json:31): object ({"title":"f...) and array ([[123]]) cannot be sorted, as they are not both arrays, or similar.
When the main goal is to count, it is usually more efficient to avoid constructing an array if determining its length is the only reason for doing so. In the present case you could, for example, write:
def count(s): reduce s as $x (null; .+1);
"foo" as $title | .[] | {
id: .member_id,
$title,
count: count(.loans[].media[] | select(.title == $title))
}
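Assuming the filter above is saved as count.jq (the filename is only for illustration), running it against the sample file yields exactly the requested result:
$ jq -f count.jq member.json
{
  "id": 123,
  "title": "foo",
  "count": 2
}
{
  "id": 456,
  "title": "foo",
  "count": 1
}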
group_by has its uses, but it is well to be aware that it is inefficient even for grouping, because its implementation involves a sort, which is not strictly necessary if the goal is to "group by" some criterion. A completely generic sort-free "group by" function is a bit tricky to implement, but often a simple but non-generic version is sufficient, such as:
# sort-free variant of group_by/1
# f must always evaluate to an integer or always to a string, which
# could be achieved by using `tostring`.
# Output: an array in the former case, or an object in the latter case
def GROUP_BY(f): reduce .[] as $x (null; .[$x|f] += [$x] );
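Applied to the problem at hand, GROUP_BY could be used along these lines (a sketch; tostring is needed because member_id is a number, and this variant expects a string-valued key):
def GROUP_BY(f): reduce .[] as $x (null; .[$x|f] += [$x] );

[ .[] | (.member_id) as $m | .loans[].media[] | select(.title=="foo") | {id: $m, title} ]
| GROUP_BY(.id | tostring)[]
| .[0] + { count: length }
This produces the same objects as the count-based solution, without any sorting.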
Using group_by:
jq 'map(
(.member_id) as $m
| .loans[].media[]
| select(.title=="foo")
| {id: $m, title: .title}
)
|group_by(.id)[]
|.[0] + { count: length }
' input-file
I'm trying to convert the inspection_date field from string to date for every object inside my db.
Every object is built like this one:
"name": "$1 STORE",
"address": "5573 ROSEMEAD BLVD",
"city": "TEMPLE CITY",
"zipcode": "91780",
"state": "California",
"violations": [{
"inspection_date": "2015-09-29",
"description": " points ... violation_status\n62754 1 ... OUT OF COMPLIANCE\n62755 1 ... OUT OF COMPLIANCE\n62756 2 ... OUT OF COMPLIANCE\n\n[3 rows x 5 columns]",
"risk": "Risk 3 (Low)"
}, {
"inspection_date": "2016-08-18",
"description": " points ... violation_status\n338879 2 ... OUT OF COMPLIANCE\n\n[1 rows x 5 columns]",
"risk": "Risk 3 (Low)"
} //could be more than 2 or less then 2 object inside violations array//]}
How can I convert all of the inspection_date fields without doing it by hand one by one?
As suggested by @turivishal, you have to make use of the $map and $dateFromString operators.
db.collection.aggregate([
{
"$addFields": {
"violations": {
"$map": {
"input": "$violations",
"in": {
"$mergeObjects": [
"$$this",
{
"inspection_date": {
"$dateFromString": {
"dateString": "$$this.inspection_date",
"format": "%Y-%m-%d",
"onError": null,
"onNull": null
}
}
}
],
},
}
}
}
},
])
Mongo Playground Sample Execution
I have a JSON file that I need to convert to a CSV file, but I am a little wary of trusting a JSON-to-CSV converter site, as the output data seems to be incorrect... so I was hoping to get some help here!
I have the following json file structure:
{
"GroupName": "GrpName13",
"Number": 3,
"Notes": "Test Group ",
"Units": [
{
"UnitNumber": "TestUnit13",
"DataSource": "Factory",
"ContractNumber": "TestContract13",
"CarNumber": "2",
"ControllerTypeMessageId" : 4,
"NumberOfLandings": 4,
"CreatedBy": "user1",
"CommissionModeMessageId": 2,
"Details": [
{
"DetailName": "TestFloor13",
"DetailNumber": "5"
}
],
"UnitDevices": [
{
"DeviceTypeMessageId": 1,
"CreatedBy": "user1"
}
]
}
]
}
The issue I think I'm seeing is that the converters don't seem to be able to handle the many nested data values. And the reason I think the converters are wrong is that when I try to convert back to JSON using them, I don't get the same structure back.
Does anyone know how to manually format this JSON into CSV, or know of a reliable converter that can handle nested values?
Try
www.json-buddy.com/convert-json-csv-xml.htm
If that is not working for you, then you can try this tool:
http://download.cnet.com/JSON-to-CSV/3000-2383_4-76680683.html
It should be helpful!
I have tried your JSON on this URL:
http://www.convertcsv.com/json-to-csv.htm
As a result:
UnitNumber,DataSource,ContractNumber,CarNumber,ControllerTypeMessageId,NumberOfLandings,CreatedBy,CommissionModeMessageId,Details/0/DetailName,Details/0/DetailNumber,UnitDevices/0/DeviceTypeMessageId,UnitDevices/0/CreatedBy
TestUnit13,Factory,TestContract13,2,4,4,user1,2,TestFloor13,5,1,user1
This converter preserves the path of each key: for example, 'DeviceTypeMessageId' inside the 'UnitDevices' list becomes the column name 'UnitDevices/0/DeviceTypeMessageId'. That avoids collisions between identically named keys, and you can work out the column names from the converter's rules.
Hope that helps.
Here is a solution using jq
If the file filter.jq contains
def denormalize:
  # emit a stream of column names, descending into arrays (via their first element)
  # and joining the key path with "_"
  def headers($p):
    keys_unsorted[] as $k
    | if .[$k]|type == "array" then (.[$k]|first|headers("\($p)\($k)_"))
      else "\($p)\($k)"
      end
  ;
  # turn the object into a nested array of values that mirrors its structure
  def setup:
    [
      keys_unsorted[] as $k
      | if .[$k]|type == "array" then [ .[$k][]| setup ]
        else .[$k]
        end
    ]
  ;
  # expand the nested value array into a stream of flat rows,
  # one per combination of nested-array elements (a cartesian product)
  def iter:
    if length == 0 then []
    elif .[0]|type != "array" then
      [.[0]] + (.[1:] | iter)
    else
      (.[0][] | iter) as $x
      | (.[1:] | iter) as $y
      | [$x[]] + $y
    end
  ;
  # the header row, followed by one row per denormalized record
  [ headers("") ], (setup | iter)
;

denormalize | @csv
and data.json contains (note extra samples added)
{
"GroupName": "GrpName13",
"Notes": "Test Group ",
"Number": 3,
"Units": [
{
"CarNumber": "2",
"CommissionModeMessageId": 2,
"ContractNumber": "TestContract13",
"ControllerTypeMessageId": 4,
"CreatedBy": "user1",
"DataSource": "Factory",
"Details": [
{
"DetailName": "TestFloor13",
"DetailNumber": "5"
}
],
"NumberOfLandings": 4,
"UnitDevices": [
{
"CreatedBy": "user1",
"DeviceTypeMessageId": 1
},
{
"CreatedBy": "user10",
"DeviceTypeMessageId": 10
}
],
"UnitNumber": "TestUnit13"
},
{
"CarNumber": "99",
"CommissionModeMessageId": 99,
"ContractNumber": "Contract99",
"ControllerTypeMessageId": 99,
"CreatedBy": "user99",
"DataSource": "Another Factory",
"Details": [
{
"DetailName": "TestFloor99",
"DetailNumber": "99"
}
],
"NumberOfLandings": 99,
"UnitDevices": [
{
"CreatedBy": "user99",
"DeviceTypeMessageId": 99
}
],
"UnitNumber": "Unit99"
}
]
}
then the command
jq -M -r -f filter.jq data.json
will produce
"GroupName","Notes","Number","Units_CarNumber","Units_CommissionModeMessageId","Units_ContractNumber","Units_ControllerTypeMessageId","Units_CreatedBy","Units_DataSource","Units_Details_DetailName","Units_Details_DetailNumber","Units_NumberOfLandings","Units_UnitDevices_CreatedBy","Units_UnitDevices_DeviceTypeMessageId","Units_UnitNumber"
"GrpName13","Test Group ",3,"2",2,"TestContract13",4,"user1","Factory","TestFloor13","5",4,"user1",1,"TestUnit13"
"GrpName13","Test Group ",3,"2",2,"TestContract13",4,"user1","Factory","TestFloor13","5",4,"user10",10,"TestUnit13"
"GrpName13","Test Group ",3,"99",99,"Contract99",99,"user99","Another Factory","TestFloor99","99",99,"user99",99,"Unit99"
Given the input of sizes:
[
{
"stock": 1,
"sales": 0,
"sizes": [
{
"countries": ["at", "be", "ch", "cy", "de", "ee", "es", "fi", "gr", "ie", "lu", "lv", "nl", "pl", "pt", "se", "si", "sk"],
"size": "EU 45,5"
},
{
"countries": ["it"],
"size": "EU 45,5"
},
{
"countries": ["fr"],
"size": "EU 45,5"
},
{
"countries": ["gb"],
"size": "EU 45,5"
}
]
}
]
I would like to get the same structure, but without the entries whose countries array doesn't contain "de" (Germany), and with the countries field removed completely. Expected output, something like this:
[
{
"stock": 1,
"sizes": [
{
"size": "EU 45,5"
}
]
}
]
I tried this:
map(.sizes[] |= select(.countries | join(",") | contains("de"))) | map({ stock, sizes })
But the filter is not working properly, throwing jq: error (at <stdin>:48): Cannot iterate over null (null).
I tried has, in, contains, and inside, and nothing seems to work.
Also, how can I filter which fields appear? With map({ stock, sizes }), countries is still there. Can I do something like map({ stock, sizes: { size } })?
Here's a one-liner that answers your main question -- if you can't see how it works, try breaking it up into separate pieces:
map( .sizes |= map( select(.countries | index("de") ) | del(.countries) ))
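For instance, the same filter broken up and commented (just a reformatting, nothing new):
map(
  .sizes |= map(
      select(.countries | index("de"))   # keep only the sizes whose countries list contains "de"
    | del(.countries)                    # then drop the countries field from the kept entries
  )
)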
Regarding the selection of fields, you can use del/1 as above, or sometimes simply using an expression such as {key1, key2} will do the trick. Consider also this function and the following example:
def query(queryobject):
with_entries( select( .key as $key | queryobject | has( $key ) ));
Example:
$ jq -c -n '{"a": 1, "b": null, "c":3} | query( {a,b,d} )'
{"a":1,"b":null}