I'm trying to get the values of these keys from this JSON response:
{
"wlan": {
"channel": 1,
"ssid": "WLAN-25UR7J",
"mac": "00:17:91:80:22:96",
"inet": [
{
"netmask": "255.255.255.0",
"ip": "192.168.2.112",
"family": "IPv4"
}
],
"stationinfo": {
"signal": -48,
"channelwidth": 0,
"bitrate": 72.2
},
"mode": "station",
"encryption": "WPA2-PSK"
}
}
I could get the values under the wlan key by using this method:
json.decode(response.body)['wlan']['channel']
But it didn't work for the rest of the values, like netmask or bitrate for example.
For netmask, you can access it like this:
json.decode(response.body)['wlan']['inet'][0]['netmask']
For bitrate, you can access it by:
json.decode(response.body)['wlan']['stationinfo']['bitrate']
If the value of any key is an array, either access its elements by index, like value[0], or simply iterate over the array and perform the necessary action.
I have a JSON response that looks like this:
{
"results": [
{
"entityType": "PERSON",
"id": 679,
"graphId": "679.PERSON",
"details": [
{
"entityType": "PERSON",
"id": 679,
"graphId": "679.PERSON",
"parentId": 594,
"role": "Unspecified Person",
"relatedEntityType": "DOCUMENT",
"relatedId": 058,
"relatedGraphId": "058.DOCUMENT",
"relatedParentId": null
}
]
},
{
"entityType": "PERSON",
"id": 69678,
"graphId": "69678.PERSON",
"details": [
{
"entityType": "PERSON",
"id": 678,
"graphId": "678.PERSON",
"parentId": 594,
"role": "UNKNOWN",
"relatedEntityType": "DOCUMENT",
"relatedId": 145,
"relatedGraphId": "145.DOCUMENT",
"relatedParentId": null
}
]
}
]
}
The problem with this JSON response is that $.results[0] is not always the same, and there can be dozens of results. I know I can do individual JSON Assertion calls where I use a JSON Path expression with a wildcard:
$.results[*].details[0].entityType
$.results[*].details[0].relatedEntityType
etc
However, I need to verify that both "PERSON" and "DOCUMENT" match correctly in the same path in one API call, since the results come back in a different path each time.
Is there a way to do multiple calls in one JSON Assertion or am I using the wrong tool?
Thanks in advance for any help.
-Grav
I don't think the JSON Assertion is flexible enough; consider switching to the JSR223 Assertion, which gives you full flexibility in defining whatever pass/fail criteria you need.
Example code which checks that:
all attribute values which match the $.results[*].details[0].entityType query are equal to PERSON
and all attribute values which match the $.results[*].details[0].relatedEntityType query are equal to DOCUMENT
would be:
def entityTypes = com.jayway.jsonpath.JsonPath.read(prev.getResponseDataAsString(), '$.results[*].details[0].entityType').collect().findAll { !it.equals('PERSON') }
def relatedEntityTypes = com.jayway.jsonpath.JsonPath.read(prev.getResponseDataAsString(), '$.results[*].details[0].relatedEntityType').collect().findAll { !it.equals('DOCUMENT') }
// findAll collects every value that does NOT match the expectation, so both lists should be empty
if (entityTypes.size() != 0) {
    SampleResult.setSuccessful(false)
    SampleResult.setResponseMessage('Entity type mismatch, one or more entries are not "PERSON": ' + entityTypes)
}
if (relatedEntityTypes.size() != 0) {
    SampleResult.setSuccessful(false)
    SampleResult.setResponseMessage('Related entity type mismatch, one or more entries are not "DOCUMENT": ' + relatedEntityTypes)
}
More information:
SampleResult class JavaDoc
Groovy: Working with collections
Scripting JMeter Assertions in Groovy - A Tutorial
{
"metadata": {
"id": "2",
"uri": "3",
"type": "2"
},
"Number": "2323600002913",
"Date": "04/21/2009",
"postingDate": "00/00/0000",
"ata": {
"results": [
{
"metadata": {
"id": "r",
"uri": "e2",
"type": "s2"
},
"item": "000010",
"data":"ad"
}
]
}
}
I want to remove the metadata property from the above JSON message, and the output should look like the below:
{
"Number": "2323600002913",
"Date": "04/21/2009",
"postingDate": "00/00/0000",
"ata": {
"results": [
{
"item": "000010",
"data":"ad"
}
]
}
}
I tried removeProperty(), which works for the root-level metadata, but the metadata inside is not removed.
How can I use replace() in this case, or anything else, to remove only the metadata?
The simplest way is to use inline code, because even if you use a removeProperty() expression to remove the metadata under results, it will return only the results array data, not the whole JSON. You would then have to combine them again, which is not convenient.
With inline code, the variable json is the value from the trigger body; just delete the node or key and return the json variable. With this approach, even if you want to delete many metadata entries in the array, you can add a for loop to delete them, treating it as plain JS code.
Update: if you want to get the value from a variable, there is no supported expression for that, so use the expression below.
var json = workflowContext.actions.Initialize_variable.inputs.variables[0].value;
As for how to loop over the array in the JSON, use the same approach: iterate the results array and delete the metadata key from each entry.
I have the following flow in NiFi, and the JSON has 1000+ objects in it.
InvokeHTTP -> SplitJson -> PutMongo
The flow works fine until I receive some keys in the JSON with "." in the name, e.g. "spark.databricks.acl.dfAclsEnabled".
My current solution is not optimal: I have jotted down the bad keys and am using multiple ReplaceText processors to replace "." with "_". I am not using regex, just literal string find/replace. So each time I get a failure in the PutMongo processor, I insert a new ReplaceText processor.
This is not maintainable. I am wondering if I can use JOLT for this? A couple of notes regarding the input JSON:
1) There is no set structure; the only thing that is confirmed is that everything will be in an events array. The event objects themselves are free-form.
2) Maximum list size = 1000.
3) It is third-party JSON, so I can't ask for a change in format.
Also, keys with "." can appear anywhere, so I am looking for a JOLT spec that can cleanse keys at all levels and then rename them.
{
"events": [
{
"cluster_id": "0717-035521-puny598",
"timestamp": 1531896847915,
"type": "EDITED",
"details": {
"previous_attributes": {
"cluster_name": "Kylo",
"spark_version": "4.1.x-scala2.11",
"spark_conf": {
"spark.databricks.acl.dfAclsEnabled": "true",
"spark.databricks.repl.allowedLanguages": "python,sql"
},
"node_type_id": "Standard_DS3_v2",
"driver_node_type_id": "Standard_DS3_v2",
"autotermination_minutes": 10,
"enable_elastic_disk": true,
"cluster_source": "UI"
},
"attributes": {
"cluster_name": "Kylo",
"spark_version": "4.1.x-scala2.11",
"node_type_id": "Standard_DS3_v2",
"driver_node_type_id": "Standard_DS3_v2",
"autotermination_minutes": 10,
"enable_elastic_disk": true,
"cluster_source": "UI"
},
"previous_cluster_size": {
"autoscale": {
"min_workers": 1,
"max_workers": 8
}
},
"cluster_size": {
"autoscale": {
"min_workers": 1,
"max_workers": 8
}
},
"user": ""
}
},
{
"cluster_id": "0717-035521-puny598",
"timestamp": 1535540053785,
"type": "TERMINATING",
"details": {
"reason": {
"code": "INACTIVITY",
"parameters": {
"inactivity_duration_min": "15"
}
}
}
},
{
"cluster_id": "0717-035521-puny598",
"timestamp": 1535537117300,
"type": "EXPANDED_DISK",
"details": {
"previous_disk_size": 29454626816,
"disk_size": 136828809216,
"free_space": 17151311872,
"instance_id": "6cea5c332af94d7f85aff23e5d8cea37"
}
}
]
}
I created a template using ReplaceText and RouteOnContent to perform this task. The loop is required because the regex only replaces the first . in a JSON key on each pass. You might be able to refine this to perform all substitutions in a single pass, but after fuzzing the regex with look-ahead and look-behind groups for a few minutes, re-routing was faster. I verified this works with the JSON you provided, and also with JSON where the keys and values are on different lines (with the : on either line):
...
"spark_conf": {
"spark.databricks.acl.dfAclsEnabled":
"true",
"spark.databricks.repl.allowedLanguages"
: "python,sql"
},
...
You could also use an ExecuteScript processor with Groovy to ingest the JSON, quickly filter all JSON keys that contain ".", perform a collect operation to do the replacement, and re-insert the keys into the JSON data, if you want a single processor to do this in a single pass.
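If you go the ExecuteScript route, a minimal Groovy sketch of that single-pass idea could look like the following. This is only an illustration, not the template described above: it assumes the whole flowfile content fits in memory and simply replaces every "." in every key with "_" at all nesting levels.
import org.apache.nifi.processor.io.StreamCallback
import groovy.json.JsonSlurper
import groovy.json.JsonOutput
import java.nio.charset.StandardCharsets

def flowFile = session.get()
if (!flowFile) return

// Recursively rebuild maps and lists, replacing "." with "_" in every key
def sanitize
sanitize = { node ->
    if (node instanceof Map) {
        node.collectEntries { k, v -> [(k.toString().replace('.', '_')): sanitize(v)] }
    } else if (node instanceof List) {
        node.collect { sanitize(it) }
    } else {
        node
    }
}

flowFile = session.write(flowFile, { inputStream, outputStream ->
    def json = new JsonSlurper().parse(inputStream)
    outputStream.write(JsonOutput.toJson(sanitize(json)).getBytes(StandardCharsets.UTF_8))
} as StreamCallback)

session.transfer(flowFile, REL_SUCCESS)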
Is it best practice in JSON to give objects in an array an id, similar to below? I'm trying to decide on a JSON format for a RESTful service I'm implementing and whether to include it or not. If it is to be modified by CRUD operations, is it a good idea?
{
"tables": [
{
"id": 1,
"tablename": "Table1",
"columns": [
{
"name": "Col1",
"data": "-5767703747778052096"
},
{
"name": "Col2",
"data": "-5803732544797016064"
}
]
},
{
"id": 2,
"tablename": "Table2",
"columns": [
{
"name": "Col1",
"data": "-333333"
},
{
"name": "Col2",
"data": "-44444"
}
]
}
]
}
Client-Generated IDs
A server MAY accept a client-generated ID along with a request to
create a resource. An ID MUST be specified with an "id" key, the value
of which MUST be a universally unique identifier. The client SHOULD
use a properly generated and formatted UUID as described in RFC 4122
[RFC4122].
jsonapi.org
I am trying to create a domain and upload some sample data, which looks like this:
[
{
"type": "add",
"id": "1371964",
"version": 1,
"lang": "eng",
"fields": {
"id": "1371964",
"uid": "1200983280",
"time": "2013-12-23 13:00:26",
"orderid": "1200983280",
"callerid": "66580662",
"is_called": "1",
"is_synced": "1",
"is_sent": "1",
"allcaller": [
{
"sno": "1085770",
"uid": "1387783883.30547",
"lastfun": null,
"callduration": "00:00:46",
"request_id": "1371964"
}
]
}
}]
When I upload the sample data while creating the domain, CloudSearch does not accept it.
If I remove the allcaller array, then it accepts it smoothly.
If CloudSearch does not allow object arrays, then how should I format this JSON?
Just found after searching the AWS forums: CloudSearch does not allow nested JSON (object arrays) :(
https://forums.aws.amazon.com/thread.jspa?messageID=405879
Time to try Elasticsearch.
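For anyone who still wants to make the sample work with CloudSearch itself, the usual workaround (my assumption, not something stated in that AWS thread) is to flatten the nested objects into parallel literal-array fields before upload, e.g. allcaller_sno, allcaller_uid and so on. A rough Groovy pre-processing sketch (the batch.json file name is just a placeholder):
import groovy.json.JsonSlurper
import groovy.json.JsonOutput

// Flatten each document's nested "allcaller" objects into parallel top-level
// fields (allcaller_sno, allcaller_uid, ...) so no nested objects remain.
def batch = new JsonSlurper().parseText(new File('batch.json').text)

batch.each { doc ->
    def callers = doc.fields.remove('allcaller') ?: []
    callers.each { caller ->
        caller.findAll { k, v -> v != null }.each { k, v ->
            def key = "allcaller_${k}".toString()
            doc.fields[key] = (doc.fields[key] ?: []) + v.toString()
        }
    }
}

println JsonOutput.prettyPrint(JsonOutput.toJson(batch))
Each flattened field would then be defined as a literal-array index field in the domain configuration.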