I need to check whether or not an entry is present in the data output from a REST call. The JSON output looks something like this:
{
"entity": {
"entries":[
{
"ID": "1",
"Pipeline": "Pipeline_1",
"State":"Completed"
}
],
"duration":1074,
"create_time":"2010-10-10"
}
}
I want to check whether, for example, Pipeline_1 is missing; if it is, the pipeline should print out 'Pipeline_1 is missing', otherwise null. I have tried using the ternary (?:) expression:
!$Pipeline.contains ("Pipeline_1") ? "Pipeline_1 is missing" : null && !$Pipeline.contains ("Pipeline_2") ? "Pipeline_2 is missing" : null
I'm having problems with the syntax and I just can't get it right using this method, because it only processes the first query.
I have also tried using the match method, but haven't had success with it either:
match $Pipeline {
$Pipeline!=("Pipeline_1") => 'Pipeline_1 is missing',
$Pipeline!=("Pipeline_2") => 'Pipeline_2 is missing',
_ => 'All of the pipelines have been executed successfully'
}
I have to check for multiple conditions. Any suggestions on how I should nest the conditional expressions? Thank you in advance.
Assuming that you are not splitting the array $entity.entries[*] and are processing the incoming document as is, the following is a possible solution.
Test Pipeline:
Input:
{
"entity": {
"entries": [
{
"ID": "1",
"Pipeline": "Pipeline_1",
"State": "Completed"
}
],
"duration": 1074,
"create_time": "2010-10-10"
}
}
Expression:
{
"Pipeline_1": $entity.entries.reduce((a, c) => c.Pipeline == "Pipeline_1" || a, false),
"Pipeline_2": $entity.entries.reduce((a, c) => c.Pipeline == "Pipeline_2" || a, false)
}.values().reduce((a, c) => c && a, true) ? "All pipelines executed successfully" : "Pipeline(s) missing"
Output (for the sample input above, where only Pipeline_1 is present):
"Pipeline(s) missing"
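If it helps to see what the expression is doing, here is the same two-step check written out in plain Python (an illustration of the logic only, not SnapLogic expression language; the list of expected pipelines is taken from the question):

from functools import reduce

# Incoming document, trimmed to the part the expression looks at.
doc = {
    "entity": {
        "entries": [
            {"ID": "1", "Pipeline": "Pipeline_1", "State": "Completed"}
        ]
    }
}

entries = doc["entity"]["entries"]
expected = ["Pipeline_1", "Pipeline_2"]

# Step 1: for each expected pipeline, reduce the entries to a single boolean
# saying whether any entry names it (same idea as the two reduce() calls above).
present = {
    name: reduce(lambda a, c: c["Pipeline"] == name or a, entries, False)
    for name in expected
}

# Step 2: AND the booleans together (same as .values().reduce(...) above).
all_present = reduce(lambda a, c: c and a, present.values(), True)

print("All pipelines executed successfully" if all_present else "Pipeline(s) missing")
# -> Pipeline(s) missing

# The intermediate map also tells you which specific pipelines are absent,
# which is what the original question asked for:
print([name + " is missing" for name, ok in present.items() if not ok])
# -> ['Pipeline_2 is missing']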
If you don't want to do it in a single expression, you can use a Conditional snap instead and then process its output as you please.
Given the query
#.records[*].issues[].[type, message]
on the JSON
{
"records":[
{
"id":"db7bb828-60e2-5fa8-048c-06542abd98d2",
"parentId":"3dc8fd7e-4368-5a92-293e-d53cefc8c4b3",
"type":"Task",
"name":"PublishBuildArtifacts",
"startTime":"2022-09-28T14:06:41.3266667Z",
"finishTime":"2022-09-28T14:06:41.3266667Z",
"currentOperation":null,
"percentComplete":null,
"state":"completed",
"result":"skipped",
"resultCode":"Evaluating: SucceededNode()\r\nResult: False\r\n",
"changeId":29,
"lastModified":"0001-01-01T00:00:00",
"workerName":"AgentSalam7WithPat",
"order":19,
"details":null,
"errorCount":0,
"warningCount":0,
"url":null,
"log":null,
"task":{
"id":"2ff763a7-ce83-4e1f-bc89-0ae63477cebe",
"name":"PublishBuildArtifacts",
"version":"1.158.3"
},
"attempt":1,
"identifier":null
},
{
"id":"d56f7c92-f706-53be-685b-17b89c98baa6",
"parentId":"3dc8fd7e-4368-5a92-293e-d53cefc8c4b3",
"type":"Task",
"name":"SonarQubePublish",
"startTime":"2022-09-28T14:06:31.7066667Z",
"finishTime":"2022-09-28T14:06:41.31Z",
"currentOperation":null,
"percentComplete":null,
"state":"completed",
"result":"failed",
"resultCode":null,
"changeId":31,
"lastModified":"0001-01-01T00:00:00",
"workerName":"AgentSalam7WithPat",
"order":11,
"details":null,
"errorCount":1,
"warningCount":0,
"url":null,
"log":{
"id":14,
"type":"Container",
"url":"https://azuredevops2k19.salam.net/Sierac-Utilities/6f9f1b22-cd2b-4ed4-a2c9-37822128b7c6/_apis/build/builds/201/logs/14"
},
"task":{
"id":"291ed61f-1ee4-45d3-b1b0-bf822d9095ef",
"name":"SonarQubePublish",
"version":"5.0.1"
},
"attempt":1,
"identifier":null,
"issues":[
{
"type":"error",
"category":"General",
"message":"[SQ] Task failed with status FAILED, Error message: Fail to extract report AYOEa2gdtfNdJFd6edM9 from database",
"data":{
"type":"error",
"logFileLineNumber":"9"
}
},
{
"type":"warning",
"category":"General",
"message":"Unable to get default branch, defaulting to 'master': Error: enable to verify the first certificate",
"data":{
"type":"warning",
"logFileLineNumber":"10"
}
}
]
}
]
}
I get the resulting JSON:
[
[
"error",
"[SQ] Task failed with status FAILED, Error message: Fail to extract report AYOEa2gdtfNdJFd6edM9 from database"
],
[
"warning",
"Unable to get default branch, defaulting to 'master': Error: enable to verify the first certificate"
]
]
Now I need to add a filter like [type = error], so I only get the messages of type error.
How can this be achieved? In the documentation, this is not very clear to me.
Filter expressions do need a question mark inside the array notation brackets – [?this > `that`] – and the equality test is a double equal sign – ==.
So your query should be:
#.records[*].issues[?type == `error`].[type, message]
Which gives the resulting JSON:
[
[
[
"error",
"[SQ] Task failed with status FAILED, Error message: Fail to extract report AYOEa2gdtfNdJFd6edM9 from database"
]
]
]
Should you need to flatten the multiple arrays of arrays, you can use the flatten operator, and with the query:
#.records[*].issues[?type == `error`].[type, message][][]
You will, then, end up with this resulting JSON:
[
"error",
"[SQ] Task failed with status FAILED, Error message: Fail to extract report AYOEa2gdtfNdJFd6edM9 from database"
]
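If you want to sanity-check these expressions outside your tool: the part after the leading #. is plain JMESPath, so (assuming the #. prefix simply denotes the root document in your tool) you can run them against the reference Python jmespath package. A minimal sketch with a trimmed-down copy of the document from the question:

import jmespath  # pip install jmespath

# Trimmed-down version of the document from the question.
data = {
    "records": [
        {"name": "PublishBuildArtifacts", "result": "skipped"},
        {
            "name": "SonarQubePublish",
            "result": "failed",
            "issues": [
                {"type": "error", "message": "[SQ] Task failed with status FAILED, ..."},
                {"type": "warning", "message": "Unable to get default branch, ..."},
            ],
        },
    ]
}

# Filtered query (with the #. prefix dropped):
print(jmespath.search("records[*].issues[?type == `error`].[type, message]", data))
# -> [[['error', '[SQ] Task failed with status FAILED, ...']]]

# Filtered and flattened query:
print(jmespath.search("records[*].issues[?type == `error`].[type, message][][]", data))
# -> ['error', '[SQ] Task failed with status FAILED, ...']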
So I have a JSON:
{
"code": "Q0934X",
"name": "PIDBA",
"longlat": "POINT(23.0 33.0)",
"altitude": 33
}
And I want to change the key code to Identifier.
The desired output is this:
{
"Identifier": "Q0934X",
"name": "PIDBA",
"longlat": "POINT(23.0 33.0)",
"altitude": 33
}
How can I do this in the shortest way? Thanks.
It appears that both "the json" you have and your desired result are JSON strings. If the one you have is json_str you can write:
json = JSON.parse(json_str).tap { |h| h["Identifier"] = h.delete("code") }.to_json
puts json
#=> {"name":"PIDBA","longlat":"POINT(23.0 33.0)","altitude":33,"Identifier":"Q0934X"}
Note that Hash#delete returns the value of the key being removed.
Perhaps transform_keys is an option.
The following seems to work for me (ruby 2.6):
json = JSON.parse(json_str).transform_keys { |k| k == 'code' ? 'Identifier' : k }.to_json
But this may work for Ruby 3.0 onwards (if I've understood the docs); note that the mapping needs string keys, because JSON.parse produces string keys:
json = JSON.parse(json_str).transform_keys({ 'code' => 'Identifier' }).to_json
I would like to get the value of "currentApproversStr" based on the condition "status": "Ready for Review" from the below JSON response body of an HTTP sampler, and pass it to the following HTTP sampler.
I tried the below, but it's not working:
Names of created variables: currentApproversStr
JSON Path expressions: $.[?((#.currentApproversStr == "Validation, Civa" || #.currentApproversStr == "Validation, Darla" || #.currentApproversStr == "Validation, Bittl" || #.currentApproversStr == "Validation, Cha" || #.currentApproversStr == "Validation, Barnett" ) && #.status== "Ready for Review")]
Match No: -1 OR 1
But the Dummy Sampler returns the results shown below.
We can't guarantee the position of the "timecardId" block that has "status": "Ready for Review"; sometimes it's in second place, sometimes last (in this sample it's the second block). So I'm not sure what I should give for Match No.
[
{
"timecardId": 170803,
"entryHeaderId": "db9341a9-32e8-4d45-a858-a88b75a42cef",
"startsOn": "2021-10-24T00:00:00",
"endsOn": "2021-10-30T00:00:00",
"worksightStatus": "SignedOff",
"projectId": 1977,
"userId": 60874,
"status": "Submitted for Approval",
"batchId": 39814,
"emergencyType": "",
"htgDealMemoId": "0d0ff42b-5c4b-4695-b527-34dfc64585e5",
"unionId": "1c77c660-28fc-4e40-b557-132f3da39597",
"currentApproversStr": "Perf, PA",
"commentStr": "",
"commentUserName": "",
"commentCreatedAt": "1900-01-01T00:00:00",
"occupationCode": "TECHNICIAN",
"activeApprovalFlowId": 166669,
"isAllowanceOnly": false,
"departmentId": null,
"datePosted": null
},
{
"timecardId": 170807,
"entryHeaderId": "c9809446-b01f-4f42-add6-9b441c3d0114",
"startsOn": "2021-10-17T00:00:00",
"endsOn": "2021-10-23T00:00:00",
"worksightStatus": "Outstanding",
"projectId": 1977,
"userId": 60874,
"status": "Ready for Review",
"batchId": 39815,
"emergencyType": "",
"htgDealMemoId": "0d0ff42b-5c4b-4695-b527-34dfc64585e5",
"unionId": "1c77c660-28fc-4e40-b557-132f3da39597",
"currentApproversStr": "Validation, Civa",
"commentStr": "",
"commentUserName": "",
"commentCreatedAt": "1900-01-01T00:00:00",
"occupationCode": "TECHNICIAN",
"activeApprovalFlowId": 166674,
"isAllowanceOnly": false,
"departmentId": null,
"datePosted": null
},
{
"timecardId": 170802,
"entryHeaderId": "db9341a9-32e8-4d45-a858-a88b75a42cef",
"startsOn": "2021-10-24T00:00:00",
"endsOn": "2021-10-30T00:00:00",
"worksightStatus": "SignedOff",
"projectId": 1977,
"userId": 60874,
"status": "Submitted for Approval",
"batchId": 39814,
"emergencyType": "",
"htgDealMemoId": "0d0ff42b-5c4b-4695-b527-34dfc64585e5",
"unionId": "1c77c660-28fc-4e40-b557-132f3da39597",
"currentApproversStr": "Perf, PA",
"commentStr": "",
"commentUserName": "",
"commentCreatedAt": "1900-01-01T00:00:00",
"occupationCode": "TECHNICIAN",
"activeApprovalFlowId": 166669,
"isAllowanceOnly": false,
"departmentId": null,
"datePosted": null
}
]
PROBLEM:
The reason is that you misunderstand the way the JSON Extractor works. This feature allows you to extract many variables in one setting, but the number of Names of created variables = number of JSON Path expressions = number of Default Values.
For example, if you want to extract 2 variables:
Names of created variables: var_name_1; var_name_2
JSON Path expressions: json_expression_1; json_expression_2
Default Values: default_1; default_2
(Note: remember using semicolon (;) to separate values)
But you set 1 variable and 1 JSON expression with MANY default values --> mismatch.
SOLUTION:
You can setup like this:
Names of created variables: currentApproversStr
JSON Path expressions: $.[?(@.status == "Ready for Review")].currentApproversStr
Match No: -1
Default Values: NOT_FOUND
Result:
currentApproversStr_1=Validation, Civa
currentApproversStr_matchNr=1
"Match No" works as follows: if your query returns more than 1 result:
0 - returns random result
-1 - returns ALL results in form of:
currentApproversStr_1 - first match
currentApproversStr_2 - second match
etc.
currentApproversStr_matchNr - total number of matches
any positive integer - returns the given match
It applies not only to JSON Extractor but to all other JMeter PostProcessors which extract values from responses.
You can see the generated JMeter variables using the Debug Sampler and View Results Tree listener combination.
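In the following HTTP sampler you can then reference the extracted value as ${currentApproversStr_1} (when Match No is -1), or simply as ${currentApproversStr} if you use a positive Match No such as 1.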
I have a table called api_details where I dump the below JSON value into the JSON column raw_data.
Now I need to make a report from this JSON string, and the expected output is something like below:
action_name                sent_timestamp                            Sent     Delivered
campaign_2475              1600416865.928737 - 1601788183.440805    7504     7483
campaign_d_1084_SUN15_ex   1604220248.153903 - 1604222469.087918    63095    62961
Below is the sample JSON output:
{
"header": [
"#0 action_name",
"#1 sent_timestamp",
"#0 Sent",
"#1 Delivered"
],
"name": "campaign - lifetime",
"rows": [
[
"campaign_2475",
"1600416865.928737 - 1601788183.440805",
7504,
7483
],
[
"campaign_d_1084_SUN15_ex",
"1604220248.153903 - 1604222469.087918",
63095,
62961
],
[
"campaign_SUN15",
"1604222469.148829 - 1604411016.029794",
63303,
63211
]
],
"success": true
}
I tried like below, but it is not getting the results. I can do it using Python by looping through all the elements in the rows list.
But is there an easy solution in PostgreSQL (version 11)?
SELECT raw_data->'rows'->0
FROM api_details
You can use the JSONB_ARRAY_ELEMENTS() function such as
SELECT (j.value)->>0 AS action_name,
(j.value)->>1 AS sent_timestamp,
(j.value)->>2 AS Sent,
(j.value)->>3 AS Delivered
FROM api_details
CROSS JOIN JSONB_ARRAY_ELEMENTS(raw_data->'rows') AS j
P.S. in this case the data type of raw_data is assumed to be JSONB, otherwise the argument within the function raw_data->'rows' should be replaced with raw_data::JSONB->'rows' in order to perform explicit type casting.
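Note also that ->> returns text; if you need Sent and Delivered as numbers in the report, cast them, e.g. ((j.value)->>2)::INT. Against the sample document the query returns one row per element of "rows", i.e. one row each for campaign_2475, campaign_d_1084_SUN15_ex and campaign_SUN15.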
I have been looking for over 4 days now, but I haven't been able to find much support for a Lua-based JSON schema compiler. Mainly I have been dealing with:
ljsonschema (https://github.com/jdesgats/ljsonschema)
rjson (https://luarocks.org/modules/romaboy/rjson)
But neither of the above has been straightforward to use.
After dealing with issues on LuaRocks, I finally got ljsonschema working, but the schema syntax it expects looks different from a normal JSON structure: for example, equals signs in place of colons, no double quotes around key names, etc.
ljsonschema supports
{
  type = 'object',
  properties = {
    foo = { type = 'string' },
    bar = { type = 'number' },
  },
}
I require:
{
  "type": "object",
  "properties": {
    "foo": { "type": "string" },
    "bar": { "type": "number" }
  }
}
With rjson there is an issue with the installation location itself. Though the installation goes fine, it is never able to find the .so file while running the Lua code. Plus there is not much development support that I could find.
Please help point me in the right direction, in case I am missing something.
I have the JSON schema and a sample JSON; I just need Lua code to help write a program around it.
This is to write a custom JSON Validation Plugin for Kong CE.
UPDATED:
I would like the below code to work with ljsonschema:
local jsonschema = require 'jsonschema'
-- Note: do cache the result of schema compilation as this is a quite
-- expensive process
local myvalidator = jsonschema.generate_validator{
"type" : "object",
"properties" : {
"foo" : { "type" : "string" },
"bar" : { "type" : "number" }
}
}
print(myvalidator { "foo":"hello", "bar":42 })
But I get the error: '}' expected (to close '{' at line 5) near ':'
It looks like the arguments to generate_validator and myvalidator are Lua tables, not raw JSON strings. You'll want to parse the JSON first:
> jsonschema = require 'jsonschema'
> dkjson = require('dkjson')
> schema = [[
>> { "type" : "object",
>> "properties" : {
>> "foo" : { "type" : "string" },
>> "bar" : { "type" : "number" }}}
>> ]]
> s = dkjson.decode(schema)
> myvalidator = jsonschema.generate_validator(s)
>
> json = '{ "foo": "bar", "bar": 42 }'
> print(myvalidator(json))
false wrong type: expected object, got string
> print(myvalidator(dkjson.decode(json)))
true
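In other words, for the Kong plugin you would decode the schema once (and cache the compiled validator, as the comment in your snippet already suggests) and decode each incoming request body before handing it to the validator.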
Ok, I think rapidjson turned out to be helpful here; refer to the lua-rapidjson module's documentation for details.
Here is a sample of working code:
local rapidjson = require('rapidjson')
function readAll(file)
local f = assert(io.open(file, "rb"))
local content = f:read("*all")
f:close()
return content
end
local jsonContent = readAll("sampleJson.txt")
local sampleSchema = readAll("sampleSchema.txt")
local sd = rapidjson.SchemaDocument(sampleSchema)
local validator = rapidjson.SchemaValidator(sd)
local d = rapidjson.Document(jsonContent)
local ok, message = validator:validate(d)
if ok then
print("json OK")
else
print(message)
end
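Here sampleSchema.txt would presumably hold the standard-JSON schema from the question and sampleJson.txt the document to validate; rapidjson accepts both as plain JSON strings, so no hand conversion to Lua table syntax is needed.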