I have an Elasticsearch index with documents like these:
{
"title": "A",
"comments": [
{
"id": "1"
},
{
"id": "2"
}
]
},
{
"title": "B",
"comments": [
{
"id": "1"
},
{
"id": "3"
}
]
},
{
"title": "C",
"comments": [
{
"id": "7"
},
{
"id": "3"
}
]
}
I want to collapse, i.e. group, the results by the nested object. In the JSON above, I want to group by the comment id, so the output would look like this:
{
"hits": [{
"title": "A",
"comments": [
{
"id": "1"
},
{
"id": "2"
}
]
}],
"inner_hits": [
{
"title": "A",
"comments": [
{
"id": "1"
},
{
"id": "2"
}
]
},
{
"title": "B",
"comments": [
{
"id": "1"
},
{
"id": "3"
}
]
}
]
}
Basically, I need to collapse based on a nested object property.
I tried this:
/_search?track_total_hits=true
{
"collapse": {
"field": "comments.id",
"inner_hits": {
"name": "id",
"size": 10
},
"max_concurrent_group_searches": 3
}
}
But it always returns only the first object in the inner hits.
Within the mapping of the comments object, you should remove the nested type.
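For illustration, a mapping sketch along those lines; the index name is a placeholder and the keyword type for comments.id is an assumption, since collapse needs a doc_values-enabled keyword or numeric field rather than a field inside a nested object:
PUT /articles
{
  "mappings": {
    "properties": {
      "title": { "type": "text" },
      "comments": {
        "properties": {
          "id": { "type": "keyword" }
        }
      }
    }
  }
}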
In my current JSON, I am getting an empty JSON object {} inside dummy_var5. The empty object is inside an array which is inside an object itself.
{
"dummy_var1": "abc",
"dummy_var2": [
{
"item": {
"action": "test",
"po": {
"id": "abc"
},
"ot": "test1",
"id": "1"
}
}
],
"dummy_var3": {
"dummy_var4": [
{
"name": "test",
"value": "test1"
},
{
"name": "test",
"value": "test1"
}
],
"name": "test2"
},
"dummy_var5": [
{
"ref": "test",
"name": "test1",
"type": null
},
{}
],
"dummy_var6": [
{
"role": "test",
"ref": "test1",
"partyDescription": "test2"
}
]
}
Considering this structure does not change, is there any way to remove this via Jolt? We tried the third-party tool AtlasMap but couldn't achieve the desired result.
Expected output:
{
"dummy_var1": "abc",
"dummy_var2": [
{
"item": {
"action": "test",
"po": {
"id": "abc"
},
"ot": "test1",
"id": "1"
}
}
],
"dummy_var3": {
"dummy_var4": [
{
"name": "test",
"value": "test1"
},
{
"name": "test",
"value": "test1"
}
],
"name": "test2"
},
"dummy_var5": [
{
"ref": "test",
"name": "test1",
"type": null
}
],
"dummy_var6": [
{
"role": "test",
"ref": "test1",
"partyDescription": "test2"
}
]
}
You can use this single shift transformation spec
[
{
"operation": "shift",
"spec": {
"*": "&", // the attributes other than "dummy_var5"
"dummy_var5": {
"*": {
"*": "&2.[&1].&"
}
}
}
}
]
In the line "*": "&2.[&1].&", the inner "*" matches each attribute of an array element; "&2" goes back up to the grandparent key (dummy_var5), "[&1]" keeps the element's array index, and "&" keeps the attribute name, so non-empty elements are rebuilt unchanged. The empty object {} has no attributes, so nothing matches and nothing is written for it, which removes the innermost empty object from the output.
I have the following data frame, df1:
A B C
123 B1 C1
456 B2 C2
And data frame df2:
A
[
{
"id": "123",
"details": {
"id": "123",
"color": null,
"param_1": {
"name": "mike"
},
"location": "US",
"items": [
{
"item_1": "#227858",
"offer_id": null,
"item_details": {
"detials_1": [{ "notes": "other:", "quantity": 1 }]
}
}
],
"version": 1,
}
}
]
[
{
"id": "456",
"details": {
"id": "456",
"color": null,
"param_1": {
"name": "james"
},
"location": "KR",
"items": [
{
"item_1": "#2221",
"offer_id": null,
"item_details": {
"detials_1": [{ "notes": "other", "quantity": 1 }]
}
}
],
"version": 2,
}
}
]
I want to find each value of df1[A] inside the JSON stored in df2[A], matching on the first occurrence of the id parameter. Once a match is found, I want to replace the null value of color with df1[B] and the null offer_id with df1[C].
The output should be a new column, df2[B], with the updated values:
[
{
"id": "123",
"details": {
"id": "123",
"color": B1,
"param_1": {
"name": "mike"
},
"location": "US",
"items": [
{
"item_1": "#227858",
"offer_id": C1,
"item_details": {
"detials_1": [{ "notes": "other:", "quantity": 1 }]
}
}
],
"version": 1,
}
}
]
[
{
"id": "456",
"details": {
"id": "456",
"color": B2,
"param_1": {
"name": "james"
},
"location": "KR",
"items": [
{
"item_1": "#2221",
"offer_id": C2,
"item_details": {
"detials_1": [{ "notes": "other", "quantity": 1 }]
}
}
],
"version": 2,
}
}
]
I just started researching how to approach this, but I need guidance on the most efficient way. Any insight would be greatly appreciated.
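A minimal sketch of one approach in Python, assuming each cell of df2["A"] holds the JSON text as a string and that the top-level id matches df1["A"] exactly; column and key names are taken from the example, and df1 is rebuilt inline just to keep the snippet self-contained:
import json
import pandas as pd

# rebuild df1 from the example so the snippet runs on its own
df1 = pd.DataFrame({"A": ["123", "456"], "B": ["B1", "B2"], "C": ["C1", "C2"]})
lookup = df1.set_index("A")  # maps id -> B, C

def fill_nulls(json_text):
    # parse the JSON stored in a df2["A"] cell, patch color/offer_id, re-serialize
    docs = json.loads(json_text)
    for doc in docs:
        key = doc.get("id")
        if key not in lookup.index:
            continue
        doc["details"]["color"] = lookup.at[key, "B"]
        for item in doc["details"].get("items", []):
            item["offer_id"] = lookup.at[key, "C"]
    return json.dumps(docs)

# with df2 loaded, the new column is just:
# df2["B"] = df2["A"].apply(fill_nulls)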
I want to search within a JSON object, and when I find a matching element I want it returned together with its parents.
[
{
"name": "level1",
"children": [
{
"name": "level2",
"children": [
{
"name": "level3a",
"children": [
{
"name": "text",
"more": "info"
},
{
"name": "text Abc",
"more": "info"
}
]
},
{
"name": "level3b",
"children": [
{
"name": "text-C",
"more": "info"
},
{
"name": "search",
"more": "info"
}
]
},
{
"name": "level3c",
"children": [
{
"name": "info-C",
"more": "info"
},
{
"name": "search",
"more": "info"
}
]
}
]
}
]
},
{
"name": "level1",
"children": [
{
"name": "level2a",
"children": [
{
"name": "level3",
"children": [
{
"name": "text A",
"more": "info"
}
]
}
]
},
{
"name": "level2b",
"children": [
{
"name": "level3",
"children": [
{
"name": "text X",
"more": "info"
}
]
}
]
}
]
}
]
For example, if I search for "text" I want a result like this:
[
{
"name": "level1",
"children": [
{
"name": "level2",
"children": [
{
"name": "level3a",
"children": [
{
"name": "text",
"more": "info"
},
{
"name": "text Abc",
"more": "info"
}
]
},
{
"name": "level3b",
"children": [
{
"name": "text-C",
"more": "info"
}
]
}
]
}
]
},
{
"name": "level1",
"children": [
{
"name": "level2a",
"children": [
{
"name": "level3",
"children": [
{
"name": "text A",
"more": "info"
}
]
}
]
},
{
"name": "level2b",
"children": [
{
"name": "level3",
"children": [
{
"name": "text X",
"more": "info"
}
]
}
]
}
]
}
]
Things to consider:
only search the level-3 children, by the name property
if one or more children match the search, return all of them
Basically, I need to filter the last level of objects by the elements matching the search and return them together with their parents.
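For what it's worth, here is a sketch of the filtering logic in Python (the same recursion translates directly to JavaScript). The three-level depth and the "name contains the search term" rule come from the example, and tree.json is a placeholder for the array shown above:
import json

def filter_tree(nodes, term, depth=1):
    # keep only branches whose level-3 children have a name containing the term
    result = []
    for node in nodes:
        if depth == 3:
            kept = [c for c in node.get("children", []) if term in c.get("name", "")]
        else:
            kept = filter_tree(node.get("children", []), term, depth + 1)
        if kept:
            result.append({**node, "children": kept})
    return result

with open("tree.json") as f:  # placeholder file holding the array shown above
    data = json.load(f)
print(json.dumps(filter_tree(data, "text"), indent=2))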
I have the following Node.js code where I want to extract the value "abc" chosen by the user as the name. I get result as undefined when I run this code:
let input= [
{
"param": [
{
"id": "name",
"choice": [
{
"label": "abc",
"value": "abc",
"valueId": "abc"
}
]
},
{
"id": "alias",
"choice": [
{
"label": "dsf",
"value": "dsf",
"valueId": "dsf"
}
]
},
{
"id": "description",
"choice": [
{
"label": "",
"value": "",
"valueId": ""
}
]
},
{
"id": "Key",
"choice": [
{
"label": "K",
"value": "K",
"valueId": "K"
}
]
},
{
"id": "tagKey",
"choice": [
{
"label": "",
"value": "",
"valueId": ""
}
]
},
{
"id": "tagValue",
"choice": [
{
"label": "",
"value": "",
"valueId": ""
}
]
},
{
"id": "multiquantity",
"choice": [
{
"label": "1",
"valueId": "1",
"value": "1"
}
]
}
],
"old": [],
"current": null
}
]
let result = (_.find(input.param, {id: "name"})).choice[0].valueId;
console.log("value"+result);
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.11/lodash.min.js"></script>
It seems the logic is not able to extract the correct value. I need the output to be abc. Please help.
Just checking before I reinvent the wheel.
I need a hack job to present an API endpoint in a database. It doesn't need to do anything fancy, just convert what the REST API spits out into a single column of JSON. A new row for each element at the root/top level would be nice, but a single varchar or whatever would be OK too.
Does an ODBC wrapper for this exist anywhere? Googling just brings up hits for doing the opposite (exposing databases as an API). I'm not interested in the Simba etc. paid offerings. The consumer is SQL Server, so I can just use xp_cmdshell with curl as a last resort.
So, for instance, the output of http://jsonapiplayground.reyesoft.com/v2/authors could come out as a table like the one below (a row for each author); a sketch of one way to load it follows the example.
|data |
---------------------------------------------------
|{
"type": "authors",
"id": "1",
"attributes": {
"name": "Madge Mohr DVM 2",
"date_of_birth": "1977-08-21",
"date_of_death": "2009-09-14"
},
"relationships": {
"photos": {
"data": []
},
"books": {
"data": [
{
"type": "books",
"id": "41"
}
]
}
},
"links": {
"self": "/v2/authors/1"
}
} |
---------------------------------------------------
|{
"type": "authors",
"id": "3",
"attributes": {
"name": "Zelma Ortiz DDS",
"date_of_birth": "1992-09-06",
"date_of_death": "2000-12-19"
},
"relationships": {
"photos": {
"data": [
{
"type": "photos",
"id": "3"
}
]
},
"books": {
"data": [
{
"type": "books",
"id": "36"
},
{
"type": "books",
"id": "48"
}
]
}
},
"links": {
"self": "/v2/authors/3"
}
}|
----------
|{
"type": "authors",
"id": "4",
"attributes": {
"name": "Fermin Barrows Sr.",
"date_of_birth": "1991-03-18",
"date_of_death": "1975-11-07"
},
"relationships": {
"photos": {
"data": [
{
"type": "photos",
"id": "4"
}
]
},
"books": {
"data": [
{
"type": "books",
"id": "1"
},
{
"type": "books",
"id": "26"
},
{
"type": "books",
"id": "44"
},
{
"type": "books",
"id": "46"
}
]
}
},
"links": {
"self": "/v2/authors/4"
}
}|
----------
|{
"type": "authors",
"id": "5",
"attributes": {
"name": "Terry Durgan",
"date_of_birth": "2011-03-06",
"date_of_death": "2017-04-13"
},
"relationships": {
"photos": {
"data": [
{
"type": "photos",
"id": "5"
}
]
},
"books": {
"data": [
{
"type": "books",
"id": "6"
},
{
"type": "books",
"id": "16"
},
{
"type": "books",
"id": "50"
}
]
}
},
"links": {
"self": "/v2/authors/5"
}
}|
----------
|{
"type": "authors",
"id": "6",
"attributes": {
"name": "Annalise Walsh",
"date_of_birth": "2004-11-27",
"date_of_death": "1997-07-20"
},
"relationships": {
"photos": {
"data": [
{
"type": "photos",
"id": "6"
}
]
},
"books": {
"data": [
{
"type": "books",
"id": "4"
},
{
"type": "books",
"id": "5"
},
{
"type": "books",
"id": "21"
}
]
}
},
"links": {
"self": "/v2/authors/6"
}
}|
---------
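If nothing off the shelf turns up, a minimal hack-job sketch in Python rather than a ready-made ODBC wrapper: it assumes pyodbc with a SQL Server ODBC driver, and the connection string and the dbo.api_dump table name are placeholders. It pulls the endpoint and writes one JSON row per top-level element of data:
import json
import requests
import pyodbc  # assumes an ODBC driver for SQL Server is installed

URL = "http://jsonapiplayground.reyesoft.com/v2/authors"
CONN_STR = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes"  # placeholder

def load_authors():
    payload = requests.get(URL, timeout=30).json()
    rows = payload.get("data", [])  # one output row per top-level element

    with pyodbc.connect(CONN_STR) as conn:
        cur = conn.cursor()
        cur.execute("IF OBJECT_ID('dbo.api_dump') IS NULL CREATE TABLE dbo.api_dump (data NVARCHAR(MAX))")
        cur.executemany(
            "INSERT INTO dbo.api_dump (data) VALUES (?)",
            [(json.dumps(row),) for row in rows],
        )
        conn.commit()

if __name__ == "__main__":
    load_authors()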