Elasticsearch API: parse nested JSON string from a dashboard - json

I am using an Elasticsearch GET request to retrieve the JSON document of a dashboard, for example:
http://ES_IP:9200/kibana-int/dashboard/my_Dashboard/
This returns a JSON document like:
{"_index":"kibana-int","_type":"dashboard","_id":"my_Dashboard","_version":5,"found":true,"_source":{ "user":"guest", "group":"guest", "title":"my_Dashboard", "dashboard":"{ \"title\": \"My Dashboard\", \"services\": { \"query\": { \"list\": { \"0\": { \"id\": 0, \"type\": \"lucene\", \"query\": \"type:dh AND severity:ERROR AND (response.baseUrl:\"/rm/recordings/*\" OR request.baseUrl:\"/rm/recordings/*\")\", \"alias\": \"DH errors rcc\",.......
Here is where I need your help: how can I extract the value of the "dashboard" key as real JSON, removing the escaping backslashes around the key/value pairs without also removing the escapes that are part of the values themselves?
The output that I need should be something like:
{ "title": "My Dashboard", "services": { "query": { "list": { "0": { "id": 0, "type": "lucene", "query": "type:dh AND severity:ERROR AND (response.baseUrl:\"/rm/recordings/*\" OR request.baseUrl:\"/rm/recordings/*\")", "alias": "DH errors rcc",.......
Pay attention to the query key: its value contains some \" sequences that should not be affected, since they are part of the value.
I need that output so that I can then parse the JSON with jq in a bash script I have.
Does the Elasticsearch API have some filter that can provide that output?
Or do you know another, external method to get what I need?
Thanks a lot for the help.

fromjson is your friend. For example:
def data: {
  "_index": "kibana-int",
  "_type": "dashboard",
  "_id": "my_Dashboard",
  "_version": 5,
  "found": true,
  "_source": {
    "user": "guest",
    "group": "guest",
    "title": "my_Dashboard",
    "dashboard": "{ \"title\": \"My Dashboard\", \"services\": { \"query\": { \"list\": { \"0\": \"foobar\" }}}}"
  }
};
data | ._source.dashboard | fromjson
Output:
$ jq -n -f elastic.jq
{
  "title": "My Dashboard",
  "services": {
    "query": {
      "list": {
        "0": "foobar"
      }
    }
  }
}
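Applied to the real API response, the same idea works end to end. A minimal sketch, assuming the document is fetched with curl (the host, index and document id below are the placeholders from the question):
# Fetch the dashboard document and decode the string-encoded
# "dashboard" field in one pipeline.
curl -s "http://ES_IP:9200/kibana-int/dashboard/my_Dashboard/" \
  | jq '._source.dashboard | fromjson'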

Related

JSON Extractor in JMeter

I am using the JSON Extractor in JMeter. Below is my response body. I am using a JSON path expression to capture a value, and that is working fine.
Apart from the above, I need to add one more condition:
I need to get the BoundID only if the length of "travelID" is exactly 33.
Example: AAA-AB1234-AAABBB-2022-11-10-1111
The total length of the travelID above is 33, but sometimes I get 31 or 32 characters; I need to capture the BoundID only when the length is 33. Is that feasible? Please help with this.
Please find a sample response body below.
{
  "data": {
    "RenewalDetails": [
      {
        "ExpiryDetails": {
          "duration": "xxxxx",
          "destination": "XXX",
          "from": "XXX",
          "value": 2,
          "segments": [
            {
              "valudeid": "xxx-xx6262-xxxyyy-1111-11-11-1111"
            }
          ]
        },
        "Itemdetails": [
          {
            "BoundId": "xxx-1-xxx1-111111111111-1",
            "isexpired": true,
            "FamilyCode": "PREMIUM",
            "availabilityDetails": [
              {
                "travelID": "AAA-AB1234-AAABBB-2022-11-10-1111",
                "quota": "X",
                "scale": "XXX",
                "class": "X"
              }
            ]
          }
        ]
      }
    ]
  },
  "warnings": [
    {
      "code": "xxxx",
      "detail": "xxxxxxxx",
      "title": "xxxxxxxx"
    }
  ]
}
I don't think it's possible with the JSON Extractor; I would rather suggest going for a JSR223 PostProcessor and the following Groovy code:
def BoundId = new groovy.json.JsonSlurper().parse(prev.getResponseData())
    .data.RenewalDetails[0].Itemdetails.find { itemDetail ->
        itemDetail.availabilityDetails[0].travelID.length() == 33
    }?.BoundId
vars.put('BoundId', BoundId ?: 'Not Found')
You will then be able to refer to the extracted value as ${BoundId} wherever required.

jq map object fields from TeemIP IPAM database to fields for Kea DHCP server

How can I transform data about PCs from a TeemIP IPAM database with jq, so it can be fed to a Kea DHCP server as reservations?
This is the data I would like to transform:
{
  "objects": {
    "PC::8": {
      "code": 0,
      "message": "",
      "class": "PC",
      "key": "8",
      "fields": {
        "name": "ntb",
        "macaddress": "50:74:9d:b5:5f:5d",
        "ipaddress_id_friendlyname": "10.1.1.6"
      }
    },
    "PC::7": {
      "code": 0,
      "message": "",
      "class": "PC",
      "key": "7",
      "fields": {
        "name": "pc",
        "macaddress": "00:11:c0:92:ab:0e",
        "ipaddress_id_friendlyname": "10.1.70.70"
      }
    }
  },
  "code": 0,
  "message": "Found: 2"
}
into this output:
{
  "hostname": "ntb",
  "hw-address": "50:74:9d:b5:5f:5d",
  "ip-address": "10.1.1.6"
},
{
  "hostname": "pc",
  "hw-address": "00:11:c0:92:ab:0e",
  "ip-address": "10.1.70.70"
}
I've tried something like
cat pcs.json |jq '[.[]]|.[]|.[]|.[]|.[]|map(.)|{hostname: .name, hw-address: .macaddress, ip-address: .ipaddress_id_friendlyname}'
But I was not successful by any means. I'm a total noob with JSON. Please help.
Navigate to and iterate over the target items using .objects[].fields, and construct your objects:
jq '
  .objects[].fields | {
    hostname: .name,
    "hw-address": .macaddress,
    "ip-address": .ipaddress_id_friendlyname
  }
'
{
  "hostname": "ntb",
  "hw-address": "50:74:9d:b5:5f:5d",
  "ip-address": "10.1.1.6"
}
{
  "hostname": "pc",
  "hw-address": "00:11:c0:92:ab:0e",
  "ip-address": "10.1.70.70"
}
This produces a so-called stream of objects (no commas in between). If you would rather have an array of objects (enclosed in square brackets, with its items delimited by commas), just surround the whole filter with a pair of brackets.
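For instance, to get a single JSON array of reservations out of pcs.json (the filename from your attempt), the bracketed variant would look like this:
jq '[ .objects[].fields | {
  hostname: .name,
  "hw-address": .macaddress,
  "ip-address": .ipaddress_id_friendlyname
} ]' pcs.json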

How to feed a value into a field in a JSON array in Gatling?

I am using Gatling to test an API that accepts a JSON body like the one below:
{
  "data": {
    "fields": [
      {
        "rank": 1
      },
      {
        "name": "Jack"
      }
    ]
  }
}
I have created a file feeder.json that contains an array of JSON objects like the one above.
Below is feeder.json:
[
  {
    "data": {
      "fields": [
        {
          "rank": 1
        },
        {
          "name": "Jack"
        }
      ]
    }
  }
]
I have created another file template.txt that contains the template of the above JSON.
Below is template.txt:
{
  "data": {
    "fields": [
      {
        "rank": ${data.fields[0].rank} //this is not working
      },
      {
        "name": "Jack"
      }
    ]
  }
}
val jsonFeeder = jsonFile("feeder.json").circular

scenario("Test scenario")
  .feed(jsonFeeder)
  .exec(http("API call test")
    .post("/data")
    .body(ElFileBody("template.txt"))
    .asJson
    .check(status is 200))
I am feeding feeder.json and also sending the JSON body from template.txt. The 'rank' property value should be set from the feeder into the JSON body, but I am getting the error 'Map named 'data' does not contain key 'fields[0]''. I am stuck with this.
The access-by-index syntax uses parentheses, not square brackets:
#{data.fields(0).rank}
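In other words, assuming the rest of template.txt stays unchanged, only the rank line needs to change:
{
  "data": {
    "fields": [
      {
        "rank": #{data.fields(0).rank}
      },
      {
        "name": "Jack"
      }
    ]
  }
}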

Creating a new element in a JSON object using jq

Is there a way to create a new element in an existing JSON object using jq? Example below.
Let's say I have this JSON object and would like to add a new element to foo:
json='{
  "id": "<id>>",
  "name": "<name>",
  "properties": {
    "State": "<state>",
    "requests": [],
    "foo": [
      {
        "id": "<id1>",
        "bar1": [
          {
            "baz1": "*"
          }
        ]
      },
      {
        "id": "<id2>",
        "bar2": [
          {
            "baz2": "*"
          }
        ]
      }
    ]
  }
}'
This command works to do that:
json2=$($json1 | jq '.properties.foo += [ { "id": "<id3>", "bar3": [ { "baz3": "*"} ] } ]')
However, running that same command without a preexisting foo element fails (example object below):
json3='{
  "id": "<id>>",
  "name": "<name>",
  "properties": {
    "State": "<state>",
    "requests": []
  }
}'
Is there a way in jq to create that element in the JSON object if it does not already exist?
Thanks!
There is nothing wrong with your jq program, which can be seen by running:
jq '.properties.foo += [ { "id": "<id3>", "bar3": [ { "baz3": "*"} ] } ]' <<< "$json3"
It looks like the problem is with your invocation, but since it's not clear what $json1 is, I'll just guess that the above is sufficient for you to resolve the issue.
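For completeness, a sketch of an invocation that applies the filter and captures the result in a new shell variable (the name json4 is arbitrary):
# += creates .properties.foo if it does not exist yet, so the same
# filter works for both $json and $json3.
json4=$(jq '.properties.foo += [ { "id": "<id3>", "bar3": [ { "baz3": "*"} ] } ]' <<< "$json3")
echo "$json4"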

Add or Update a field in one JSON file from another JSON file based on matching field

I have two JSON files, a.json and b.json. The content of a.json is a JSON object, while b.json contains an array. I want to add/update a status field in each mapping in a.json by retrieving the value from b.json.
a.json:
{
  "title": 25886,
  "data": {
    "request": {
      "c": 46369,
      "t1": 1562050127.376641
    }
  },
  "rs": {
    "mappings": {
      "12345": {
        "id": "12345",
        "name": "test",
        "customer_id": "11228"
      },
      "45678": {
        "id": "45678",
        "name": "abc",
        "customer_id": "11206"
      }
    }
  }
}
b.json:
[
  {
    "status": "pending",
    "extra": {
      "name": "test"
    },
    "enabled": true,
    "id": "12345"
  },
  {
    "status": "not_started",
    "extra": {
      "name": "abc"
    },
    "enabled": true,
    "id": "45678"
  }
]
Below is my expected output:
{
  "title": 25886,
  "data": {
    "request": {
      "c": 46369,
      "t1": 1562050127.376641
    }
  },
  "rs": {
    "mappings": {
      "12345": {
        "id": "12345",
        "name": "test",
        "customer_id": "11228",
        "status": "pending"
      },
      "45678": {
        "id": "45678",
        "name": "abc",
        "customer_id": "11206",
        "status": "not_started"
      }
    }
  }
}
In this expected output, each mapping has a status field whose value is retrieved from b.json based on the matching id value. How can I do this using jq?
For the purposes of this problem, b.json essentially defines a dictionary, so for simplicity, efficiency and perhaps elegance,
it makes sense to start by using the builtin function INDEX to create the relevant dictionary:
INDEX( $b[] | {id, status}; .id )
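With the b.json shown in the question, that INDEX call yields a lookup object keyed by id, roughly:
{
  "12345": { "id": "12345", "status": "pending" },
  "45678": { "id": "45678", "status": "not_started" }
}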
This assumes an invocation of jq along the lines of:
jq --argfile b b.json -f update.jq a.json
(Yes, I know --argfile has been deprecated. Feel free to choose another way to set $b to the contents of b.json.)
Now, to perform the update, it will be simplest to use the "update" operator, |=, in conjunction with map_values. (Feel free to check the jq manual :-)
Putting everything together:
INDEX( $b[] | {id, status}; .id ) as $dict
| .rs.mappings |= map_values( .status = $dict[.id].status )
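Since --argfile is deprecated, one alternative (just a sketch, not part of the original answer) is --slurpfile, which binds $b to an array wrapping the contents of b.json, so the filter has to index into $b[0]:
# Invocation:
#   jq --slurpfile b b.json -f update.jq a.json
# update.jq, adjusted for --slurpfile ($b[0] is the array from b.json):
INDEX( $b[0][] | {id, status}; .id ) as $dict
| .rs.mappings |= map_values( .status = $dict[.id].status )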