Is updating an object nested inside an array via JsonPatchDocument possible?

I'm using Microsoft.AspNetCore.JsonPatch V2.1.1.
I have an object structure like this:
{
"key" : "value",
"nested" : [
{ "key" : "value" }
]
}
Now I want to update the key in the nested object within the array:
[
{"op" : "replace", "path" : "/nested/0/key", "value" : "test" }
]
But I get this exception:
JsonPatchException: The target location specified by path segment '0' was not found.
Do I have to explicitly make an endpoint for the inner object with its own PATCH method?
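For context, here is roughly the shape of the code involved (class and variable names are illustrative, not my real model; the nested property is a List<T>):

using System.Collections.Generic;
using Microsoft.AspNetCore.JsonPatch;
using Newtonsoft.Json;

public class Inner
{
    public string Key { get; set; }
}

public class Outer
{
    public string Key { get; set; }
    public List<Inner> Nested { get; set; } = new List<Inner>();
}

// ... later, applying the incoming patch to an existing instance:
var outer = new Outer { Key = "value", Nested = { new Inner { Key = "value" } } };
var patchJson = @"[ { ""op"": ""replace"", ""path"": ""/nested/0/key"", ""value"": ""test"" } ]";
var patch = JsonConvert.DeserializeObject<JsonPatchDocument<Outer>>(patchJson);
patch.ApplyTo(outer);   // the JsonPatchException is thrown here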

Related

Groovy JSON property Rename

I am using Groovy JSON and want to rename the key field shown below. Please help me with how to do it.
{
"ID": "a3a98abd-e4a8-4667-81e4-123f641fd772",
"name" : "test",
"subProjects": [
{
"ID": "0c0f1a7d-4f5f-491f-a53b-7bdccd97407f",
"name" : "subproject"
}]
}
How can I rename the first "ID" key to "ProjectID" while keeping its position unchanged?
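For reference, a sketch of one possible approach with JsonSlurper (the file name is made up; this rebuilds the top-level map so the renamed key stays in the same position):

import groovy.json.JsonSlurper
import groovy.json.JsonOutput

def text = new File('project.json').text   // hypothetical input file
def json = new JsonSlurper().parseText(text)

// rebuild the map, swapping the key name as we go;
// only the top-level "ID" is renamed, the one inside subProjects keeps its name
def renamed = json.collectEntries { k, v ->
    [(k == 'ID' ? 'ProjectID' : k): v]
}

println JsonOutput.prettyPrint(JsonOutput.toJson(renamed))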

Stop Sending Requests By Checking the Response Body in JMeter

I want to keep calling a REST API until the response body contains 100 elements.
Here is the example:
request : someurl/get/data
response :
1st API call: (2 elements included)
{
"items" : [
{
"name": "abc",
"id" : "ajdiw123"
},
{
"name": "abc",
"id" : "ajdiw123345"
}
]
}
2nd API call : (4 elements)
{
"items" : [
{
"name": "abc",
"id" : "ajdiw123"
},
{
"name": "def",
"id" : "ajdiw145"
},
{
"name": "afc",
"id" : "ajdiw113"
},
{
"name": "bbc",
"id" : "ajdiw199"
}
]
}
The number of elements in the response body changes like this. At some point it will return 100 elements with 100 different ids. How can I detect that and stop sending requests to the endpoint using JMeter?
It can be achieved in multiple ways depending on your test plan.
Add a JSON Extractor and a JSR223 Assertion to your request.
JSON Extractor Settings will be like:
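(The original screenshot is not reproduced here; the settings below are an assumption, chosen to match the id_matchNr variable used in the assertion code.)
Names of created variables: id
JSON Path expressions: $..id
Match No.: -1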
JSR223 Assertion code will be like:
// "id_matchNr" holds how many ids the JSON Extractor found (Match No. set to -1)
String totalIDs = vars.get("id_matchNr");
Integer result = Integer.valueOf(totalIDs);
// fail the assertion once 100 ids are present so the Result Status Action Handler can stop the thread
if (result == 100){
    AssertionResult.setFailure(true);
}
After that, just add a Result Status Action Handler to that request so it will stop the execution for the specific thread.
Try this:
Add a post-processor to your request: a JSON Extractor that extracts one of the unique array attributes, say id. The JSON Extractor should have its Match No. field set to -1, its variable name set to requestid, and its expression set to $..id.
Add a Debug Sampler.
All of this should sit inside a While Controller with the condition
${__jexl3(${requestid_matchNr} != 100)}. "requestid_matchNr" will come from the Debug Sampler response.

How can I match fields with wildcards using jq?

I have a JSON object of the following form:
{
"Task11c-0-20181209-12:59:30-65611" : {
"attributes" : {
"configname" : "Task11c",
"datetime" : "20181209-12:59:30",
"experiment" : "Task11c",
"inifile" : "lab1.ini",
"iterationvars" : "",
"iterationvarsf" : "",
"measurement" : "",
"network" : "Manhattan1_1C",
"processid" : "65611",
"repetition" : "0",
"replication" : "#0",
"resultdir" : "results",
"runnumber" : "0",
"seedset" : "0"
},
......
},
......
"Task11b-12-20181209-13:03:17-65612" : {
....
....
},
.......
}
I've shown only the first part, but in general there are many other sub-objects whose keys look like Task11c-0-20181209-12:59:30-65611. They all start with the word Task. I want to extract the processid from each sub-object. I tried to use a wildcard as in bash, but that doesn't seem to be possible.
I also read about the match() function, but it works on strings, not JSON objects.
Thanks for the support.
Filter the keys that start with Task and get the attribute of your choice using a select() expression:
jq 'to_entries[] | select(.key|startswith("Task")).value.attributes.processid' json
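Against the sample above this prints the processid of every key that starts with Task, one value per line (e.g. "65611"). Add jq's -r flag if you want the values without the surrounding quotes.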

read.json only reading the first object in Spark

I have a multi-line JSON file and I am using Spark's read.json to read it. The problem is that it only reads the first object from the file.
val dataFrame = spark.read.option("multiLine", true).option("mode", "PERMISSIVE").json(path)
dataFrame.rdd.saveAsTextFile("DataFrame")
Sample json:
{
"_id" : "589895e123c572923e69f5e7",
"thing" : "54eb45beb5f1e061454c5bf4",
"timeline" : [
{
"reason" : "TRIP_START",
"timestamp" : "2017-02-06T17:20:18.007+02:00",
"type" : "TRIP_EVENT",
"location" : [
11.1174091,
69.1174091
],
"endLocation" : [],
"startLocation" : []
},
"reason" : "TRIP_END",
"timestamp" : "2017-02-06T17:25:26.026+02:00",
"type" : "TRIP_EVENT",
"location" : [
11.5691428,
48.1122443
],
"endLocation" : [],
"startLocation" : []
}
],
"__v" : 0
}
{
"_id" : "589895e123c572923e69f5e8",
"thing" : "54eb45beb5f1e032241c5bf4",
"timeline" : [
{
"reason" : "TRIP_START",
"timestamp" : "2017-02-06T17:20:18.007+02:00",
"type" : "TRIP_EVENT",
"location" : [
11.1174091,
50.1174091
],
"endLocation" : [],
"startLocation" : []
},
"reason" : "TRIP_END",
"timestamp" : "2017-02-06T17:25:26.026+02:00",
"type" : "TRIP_EVENT",
"location" : [
51.1174091,
69.1174091
],
"endLocation" : [],
"startLocation" : []
}
],
"__v" : 0
}
I get only the first entry with id = 589895e123c572923e69f5e7.
Is there something that I am doing wrong?
Are you sure multiple multi-line JSON objects in one file are supported?
Each line must contain a separate, self-contained valid JSON object... For a regular multi-line JSON file, set the multiLine option to true
http://spark.apache.org/docs/latest/sql-programming-guide.html#json-datasets
Where a "regular JSON file" means the entire file is a singular JSON object / array, however, simply putting {} around your data won't work because you need a key for every object, and so you'd need a top level key, maybe say "objects". Similarly, you can try an array, but wrapping with []. Either way, these will only work if every object in that array or object is separated by commas.
tl;dr - the whole file needs to be one valid JSON object when multiline=true
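For example, wrapping the records in a top-level array turns the whole file into one valid JSON document that multiLine=true can parse (a sketch of the layout, assuming you can edit the file; the inner fields are elided):
[
{ "_id" : "589895e123c572923e69f5e7", "thing" : "54eb45beb5f1e061454c5bf4", "timeline" : [ ... ], "__v" : 0 },
{ "_id" : "589895e123c572923e69f5e8", "thing" : "54eb45beb5f1e032241c5bf4", "timeline" : [ ... ], "__v" : 0 }
]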
You're only getting one object because it parses the first set of brackets, and that's it.
If you have full control over the JSON file, the indented layout is purely for human consumption. Just flatten the objects and let Spark parse them as the API is intended to be used.
Keep one JSON value per line in the file and remove .option("multiLine", true).
like this:
{"name":"Michael"}
{"name":"Andy", "age":30}
{"name":"Justin", "age":19}

Avoid repetition in JSON file

I am not familiar with JSON objects and I want to use them with Python. I have a JSON object like this:
{
"a" : {"value" : 20200212, "conversion" : {"fun":["strptime"], "module":["datetime"], "extra_args":["%Y%m%d"]} },
"b" : {"value" : "something here"},
"c" : {"value" : 20211121, "conversion" : {"fun":["strptime"], "module":["datetime"], "extra_args":["%Y%m%d"]} }
}
My question: is it possible to avoid repeating this block in the file?
{"fun":["strptime"], "module":["datetime"], "extra_args":["%Y%m%d"]}