I have the following 3 JSON documents in a MongoDB collection:
{
'title': 'Best',
'array' : [
{'name' : '1',
'value': '2'
},
{'name' : '3',
'value': '4'
}
]
}
and :
{
'title': 'Best',
'array' : [
{'name' : '5',
'value': '6'
},
{'name' : '7',
'value': '8'
}
]
}
and:
{
'title': 'Worst',
'array' : [
{'name' : 'Not_needed',
'value': 'Not_needed'
},
{'name' : 'Not_needed',
'value': 'Not_needed'
}
]
}
I need a query that gives me:
{[
{'name' : '1',
'value': '2'
},
{'name' : '3',
'value': '4'
},
{'name' : '5',
'value': '6'
},
{'name' : '7',
'value': '8'
}
]}
How can I do that? Is that what people refer to as aggregation? Could you please provide me with a MongoDB query for that?
Here is a query using the aggregation framework that nearly generates the document you want.
db.test.aggregate([
    { "$unwind": "$array" },
    { "$group": {
        "_id": "$title",
        "array": { "$push": "$array" }
    }},
    { "$project": {
        "array": 1,
        "_id": 0
    }}
])
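The pipeline above produces one document per title, so the 'Worst' document comes back as its own group as well; prepending a $match stage keeps only the 'Best' documents. Below is a minimal sketch of the same pipeline driven from Python, assuming pymongo, a locally running server, and hypothetical database/collection names:

from pymongo import MongoClient

client = MongoClient()          # connection details are an assumption
coll = client["mydb"]["test"]   # database and collection names are assumptions

pipeline = [
    {"$match": {"title": "Best"}},    # keep only the 'Best' documents
    {"$unwind": "$array"},            # one document per array element
    {"$group": {"_id": "$title", "array": {"$push": "$array"}}},
    {"$project": {"array": 1, "_id": 0}},
]

for doc in coll.aggregate(pipeline):
    print(doc)  # {'array': [{'name': '1', 'value': '2'}, ..., {'name': '7', 'value': '8'}]}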
New to jq here.
I just want to ask how to add the key of an object to each item in its value, converting each value into an array of objects instead. I have the following JSON format:
{
"key1" : [
"key1item1",
"key1item2",
"key1item3",
"key1item4",
...
],
"key2" : [
"key2item1",
"key2item2",
...
]
}
What I want to achieve is this:
{
"key1" : [
{
'parent': 'key1',
'key': 'key1_key1item1',
'value': 'key1_item1',
},
{
'parent': 'key1',
'key': 'key1_key1item2',
'value': 'key1_item2',
},
{
'parent': 'key1',
'key': 'key1_key1item3',
'value': 'key1_item3',
}
],
"key2" : [
{
'parent': 'key2',
'key': 'key2_key2item1',
'value': 'key2_item1',
},
{
'parent': 'key2',
'key': 'key2_key2item2',
'value': 'key2_item2',
},
{
'parent': 'key2',
'key': 'key2_key2item3',
'value': 'key2_item3',
}
]
}
This should do it:
with_entries(
  .key as $key
  | .value |= map(
      { parent: $key,
        key: ($key + "_" + tostring),
        value: . }
    )
)
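with_entries applies the filter to each top-level {key, value} pair (it is shorthand for to_entries | map(...) | from_entries), so $key captures the object key while |= rewrites that key's array in place; running jq 'with_entries(...)' input.json prints the expanded objects. Note that value here is the original item itself, not the underscored form shown in the expected output.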
I have nested JSON of arbitrary depth:
json_list = [
{
'class': 'Year 1',
'room': 'Yellow',
'students': [
{'name': 'James', 'sex': 'M', 'grades': {}},
]
},
{
'class': 'Year 2',
'info': {
'teachers': {
'math': 'Alan Turing',
'physics': []
}
},
'students': [
{ 'name': 'Tony', 'sex': 'M', 'age': ''},
{ 'name': 'Jacqueline', 'sex': 'F' },
],
'other': []
}
]
I want to remove any element whose value meets certain criteria.
For example:
values_to_drop = ({}, (), [], '', ' ')
filtered_json = clean_json(json_list, values_to_drop)
filtered_json
Expected Output of clean_json:
[
{
'class': 'Year 1',
'room': 'Yellow',
'students': [
{'name': 'James', 'sex': 'M'},
]
},
{
'class': 'Year 2',
'info': {
'teachers': {
'math': 'Alan Turing',
}
},
'students': [
{ 'name': 'Tony', 'sex': 'M'},
{ 'name': 'Jacqueline', 'sex': 'F'},
]
}
]
I thought of first converting the object to a string with json.dumps, then searching the string and replacing each value that meets the criteria with some kind of flag so I could filter it out before reading it back with json.loads, but I couldn't figure it out and I don't know if this is the right approach.
I managed to get the desired output by tweaking this answer a bit:
def clean_json(json_obj, values_to_drop):
    if isinstance(json_obj, dict):
        json_obj = {
            key: clean_json(value, values_to_drop)
            for key, value in json_obj.items()
            if value not in values_to_drop}
    elif isinstance(json_obj, list):
        json_obj = [clean_json(item, values_to_drop)
                    for item in json_obj
                    if item not in values_to_drop]
    return json_obj
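For a quick check against the sample data in the question (this is just the call shown above, plus a print):

import json

values_to_drop = ({}, (), [], '', ' ')
filtered_json = clean_json(json_list, values_to_drop)
print(json.dumps(filtered_json, indent=2))  # matches the expected output above

Note that the function makes a single pass: a container that only becomes empty because all of its children were dropped is kept, so if you need those removed as well, re-apply clean_json until the result stops changing.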
I have the big JSON file below:
{
"sections": [
{
"facts": [
{
"name": "Server",
"value": "<https://xxxxxxx:18443/collector/pipeline/v1_allagents>"
},
{
"name": "Environment",
"value": "dev"
},
{
"name": "Issue",
"value": "Server is [EDITED]"
}
]
},
{
"facts": [
{
"name": "Server",
"value": "<https://xxxxx:18443/collector/pipeline/customer-characterstics-v1>"
},
{
"name": "Environment",
"value": "dev"
},
{
"name": "Issue",
"value": "Server is [STOPPED]"
}
]
},
{'facts':
[
{'name': 'Server', 'value': u'<https://xxxxxx:18443/collector/pipeline/soap-post-v1_relations>'},
{'name': 'Environment', 'value': u'dev'}, {'name': 'Issue', 'value': u' status is [STOPPED]'}
]
},
{'facts':
[
{'name': 'Server', 'value': u'<https://xxxxxxx.134:18443/collector/pipeline/characterstics-v1_allagents>'},
{'name': 'Environment', 'value': u'dev'}, {'name': 'Issue', 'value': u' status is [EDITED]'}
]
},
{'facts':
[
{'name': 'Server', 'value': u'<https://xxxxxxx:18443/collector/pipeline/ab23-8128b7c9fcf2>'},
{'name': 'Environment', 'value': u'dev'}, {'name': 'Issue', 'value': u'status is [EDITED]'}
]
}
]
}
....
Now I'm struggling to split the file above as shown below and dump each piece into a new file:
{
"text": "Status",
"themeColor": "#FF0000",
"sections": [
{
"facts": [
{
"name": "Server",
"value": "<https://xxxxxxx:18443/collector/pipeline/v1_allagents>"
},
{
"name": "Environment",
"value": "dev"
},
{
"name": "Issue",
"value": "Server is [EDITED]"
}
]
}
]
}
What I've been able to achieve so far is printing each of the tags under facts, but not in the shape I expect above.
So I'm having trouble adding those extra header lines before the final ones and then dumping the result to another file.
How should I approach this, without using jq?
Each split file should have the same header and then exactly the same pattern for the sections and facts keys.
Edit:
Andrej's solution works perfectly for one split at a time.
But how do I split the file into chunks of size n? Say my original big file contains 5 blocks of facts and I want 2 blocks per file (n = 2).
It should then create 3 JSON files: the first 2 contain 2 blocks of facts each, and the last one contains the single block that is left over.
The final output should then be:
{'text': ' Status', 'themeColor': '#FF0000', 'sections':
[
{'facts':
[
{'name': 'Server', 'value': u'<https://xxxxxx:18443/collector/pipeline/soap-post-v1>'},
{'name': 'Environment', 'value': u'dev'},
{'name': 'Issue', 'value': u' status is [STOPPED]'}
]
},
{'facts':
[
{'name': 'Server', 'value': u'<https://xxxxx:18443/collector/pipeline/be9694085a70>'},
{'name': 'Environment', 'value': u'dev'},
{'name': 'Issue', 'value': u' status is [STOPPED]'}
]
}
]
}
and
{'text': ' Status', 'themeColor': '#FF0000', 'sections':
[
{'facts':
[
{'name': 'Server', 'value': u'<https://xxxxxx:18443/collector/pipeline/soap-post-v1_relations>'},
{'name': 'Environment', 'value': u'dev'}, {'name': 'Issue', 'value': u' status is [STOPPED]'}
]
},
{'facts':
[
{'name': 'Server', 'value': u'<https://xxxxxxx.134:18443/collector/pipeline/characterstics-v1_allagents>'},
{'name': 'Environment', 'value': u'dev'}, {'name': 'Issue', 'value': u' status is [EDITED]'}
]
}
]}
As per the above, one block of facts is left over from the original file, hence it gets its own JSON file:
{'text': ' Status', 'themeColor': '#FF0000', 'sections':
[
{'facts':
[
{'name': 'Server', 'value': u'<https://xxxxxxx:18443/collector/pipeline/ab23-8128b7c9fcf2>'},
{'name': 'Environment', 'value': u'dev'}, {'name': 'Issue', 'value': u'status is [EDITED]'}
]
}
]}
You can load the big JSON file into a dictionary using the json module, then treat the loaded data as a regular Python dict.
If your file contains the JSON shown in the question, then this example:
import json

with open('YOUR_JSON_FILE.json', 'r') as f_in:
    data = json.load(f_in)

for i, fact in enumerate(data['sections'], 1):
    with open('data_out_{}.json'.format(i), 'w') as f_out:
        d = {}
        d['text'] = 'Status'
        d['themeColor'] = '#FF0000'
        d['sections'] = fact
        json.dump(d, f_out, indent=4)
This creates two files data_out_1.json and data_out_2.json containing:
{
"text": "Status",
"themeColor": "#FF0000",
"sections": {
"facts": [
{
"name": "Server",
"value": "<https://xxxxxxx:18443/collector/pipeline/v1_allagents>"
},
{
"name": "Environment",
"value": "dev"
},
{
"name": "Issue",
"value": "Server is [EDITED]"
}
]
}
}
and
{
"text": "Status",
"themeColor": "#FF0000",
"sections": {
"facts": [
{
"name": "Server",
"value": "<https://xxxxx:18443/collector/pipeline/customer-characterstics-v1>"
},
{
"name": "Environment",
"value": "dev"
},
{
"name": "Issue",
"value": "Server is [STOPPED]"
}
]
}
}
EDIT:
To chunk the JSON file, you can use this example:
import json

def chunk(lst, n):
    for i in range(0, len(lst), n):
        yield lst[i:i + n]

with open('YOUR_JSON_FILE.json', 'r') as f_in:
    data = json.load(f_in)

for i, fact in enumerate(chunk(data['sections'], 2), 1):  # <-- change 2 to your chunk size
    with open('data_out_{}.json'.format(i), 'w') as f_out:
        d = {}
        d['text'] = 'Status'
        d['themeColor'] = '#FF0000'
        d['sections'] = fact
        json.dump(d, f_out, indent=4)
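The chunk helper just slices the sections list into pieces of at most n items. A quick illustration of the generator on its own:

list(chunk([1, 2, 3, 4, 5], 2))  # -> [[1, 2], [3, 4], [5]]

So with 5 blocks of facts and a chunk size of 2, the loop writes two files containing 2 blocks each and a third file with the single leftover block, and each d['sections'] is now a list of sections, matching the shape the question asks for.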
import json

with open('/tmp/json_response_output.json') as datafile:
    datastore = json.load(datafile)

for n, details in enumerate(datastore['sections']):
    # copy the whole top-level document so text, themeColor, etc. are preserved,
    # then replace 'sections' with a single-element list for this output file
    split_json = datastore.copy()
    split_json['sections'] = [details]
    with open(f'json_response_output_part{n}.json', 'w') as f:
        json.dump(split_json, f, indent=4, ensure_ascii=False)
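This variant copies the whole top-level document, so text, themeColor and any other header keys are carried over automatically, and each output file gets sections as a one-element list; the file numbering starts at 0 because enumerate is used without a start argument.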
I want to give a routerLink, a path, or any other action to the tree submenu, but I failed and I don't know how to do it. Please, can someone help me find a solution for this?
Here is the .ts file code:
public treeData: Object[] = [
{
nodeId: '1', nodeText: 'DashBoard'
},
{
nodeId: '2', nodeText: 'Trip Management',
nodeChild: [
{ nodeId: '21', nodeText: 'Location' },
{ nodeId: '22', nodeText: 'Routes' },
{ nodeId: '23', nodeText: 'Ticket Price' }
]
},
{
nodeId: '3', nodeText: 'Booking',
nodeChild: [
{ nodeId: '31', nodeText: 'Add Booking' },
{ nodeId: '32', nodeText: 'List' }
]
},
{
nodeId: '4', nodeText: 'Users',
},
{
nodeId: '5', nodeText: 'Reports',
nodeChild: [
{ nodeId: '51', nodeText: 'User' },
{ nodeId: '52', nodeText: 'Booking' }
]
},
{
nodeId: '6', nodeText: 'settings',
nodeChild: [
{ nodeId: '61', nodeText: 'General Setting' },
{ nodeId: '62', nodeText: 'Pages' },
{ nodeId: '63', nodeText: 'Email Formate' },
]
},
];
public treeFields: Object = {
dataSource: this.treeData,
id: 'nodeId',
text: 'nodeText',
Link:'routerLink',
child: 'nodeChild',
};
The .html file is:
<ejs-treeview id="myTree" [fields]="treeFields"
  (click)="event($event)"></ejs-treeview>
I have JSON data that looks something like this:
'1':
{ code: '1',
type: 'car',
},
'2':
{ code: '2',
type: 'bike'
},
...
I don't want the '1' and '2' keys from the parent. I only need to have:
[{code: '1',
type: 'car',
},
{ code: '2',
type: 'bike'
},
...]
How can I do that?
var jsonArray = [];
for (var i in jsonData) {
    jsonArray.push(jsonData[i]);
}
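If you are targeting an ES2017+ environment, Object.values(jsonData) returns the same array of values directly, without the loop.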