The query I run returns a response that I validate against two schemas:
* def tagsSchema =
"""
{
"lifecycle-status": "#string",
"infrastructure-environment": "#string",
"managed-by": "#string",
"supported-by": "#string",
"operated-by": "#string"
}
"""
and this schema is embedded in my content schema:
* def contentSchema =
"""
{
"phase": "##string",
"managedBy": "##string",
"assetId":"##string",
"isValid": ##boolean,
"name": "#string",
"supportedBy": "##string",
"links": '#[] linksSchema',
"ownedBy": "##string",
"cmdbInstanceId":"#string",
"tags": "##object? karate.match(_,tagsSchema).tags",
}
"""
The tagsSchema is optional, which I have covered with ##object. When I run the query now it fails, because the actual tags contain additional values that are not part of tagsSchema.
getList.feature:159 - path: $[0].tags, actual: {"technicalreferant":"email1","billingowner":"xyz","responsibleManager":"email1","environment":"abc","application":"tbd","consumer":"cdr","cr":"12345678"}, expected: '##object? karate.match(_,tagsSchema).tags', reason: did not evaluate to 'true'
The issue comes from karate.match, but there is no karate.contains. How do I have to modify the schema to avoid this error? The values in tagsSchema are mandatory, while the other tags can be created by users at any time and we have no policy for them. I don't want to adjust the code for every run; I only want to rely on the mandatory values.
I'm not sure why you see the need to use karate.match(); you need to read the documentation. Here's a simple example:
* def innerSchema = { foo: '#string' }
* def outerSchema = { bar: '#string', baz: '##(innerSchema)' }
* def response1 = { bar: 'x' }
* match response1 == outerSchema
* def response2 = { bar: 'x', baz: { foo: 'y' } }
* match response2 == outerSchema
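For the original tags question, the same nesting idea could be combined with Karate's "contains" short-cut (the ^ prefix inside an embedded expression), so only the mandatory keys are asserted and any extra user-defined tags are ignored. This is a minimal sketch, not tested against the original service, with tagsSchema trimmed to two keys for brevity:
* def tagsSchema = { 'lifecycle-status': '#string', 'managed-by': '#string' }
* def contentSchema = { name: '#string', tags: '##(^tagsSchema)' }
* def response1 = { name: 'x' }
* match response1 == contentSchema
* def response2 = { name: 'x', tags: { 'lifecycle-status': 'live', 'managed-by': 'team-a', cr: '12345678' } }
* match response2 == contentSchema
Here ## keeps tags optional, and ^ switches the nested match to "contains" semantics so the extra cr key does not cause a failure.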
Related
Want to validate in Karate framework
For the JSON below, what I want to validate is:
if "isfilter_regex": 0 then "msgtype": "##regex ^[A-Za-z0-9_.]-/*"
or if "isfilter_regex": 1 then "msgtype": "#string"
(when isfilter_regex = 1, the msgtype value is itself a regular expression)
In my case the number of candidates in the candidates array is 180+.
I tried a lot of things but kept failing. Can anybody help me here?
{
"candidates":[
{
"candidate":{
"name":"Alex",
"category":[
{
"category_name":"APCMRQ",
"filters":[
{
"isfilter_regex":0,
"msgtype":"APCMRQ"
}
]
},
{
"category_name":"BIDBRQ",
"filters":[
{
"isfilter_regex":1,
"msgtype":"'(AMSCNQ(_[A-Za-z0-9]{1,3}){0,3})'"
}
]
}
]
}
}
]
}
I tried the approach below, which works only when an index is specified, but what do I do if I want to apply this check to the entire array?
* def msgRegexValue = response.candidates[150].candidate.category[0].filters[0].isfilter_regex
* def actualFilter = response.candidates[150].candidate.category[0].filters[0]
* def expectedFilter = actualFilter
* def withRegex =
"""
{
"isfilter_regex": 0,
"msgtype": '##regex ^[A-Za-z0-9_.]*-*/*'
}
"""
* def withoutRegex =
"""
{
"isfilter_regex": 1,
"msgtype": '##string'
}
"""
* def expected = msgRegexValue == 0 ? withRegex: withoutRegex
* match actualFilter == expected
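Not from the original post, and untested, but one way to run the same check over every filter in all 180+ candidates could be a small JS predicate combined with a deep-scan JsonPath; a rough sketch reusing the regex from the attempt above:
* def isValidFilter =
"""
function(f) {
  if (f.isfilter_regex == 0) {
    // plain message type: must match the naming pattern
    return karate.match(f.msgtype, '#regex ^[A-Za-z0-9_.]*-*/*').pass;
  }
  // isfilter_regex == 1: msgtype is itself a regex, so just check it is a string
  return typeof f.msgtype == 'string';
}
"""
* def allFilters = karate.jsonPath(response, '$..filters[*]')
* match each allFilters == '#? isValidFilter(_)'
The deep-scan path '$..filters[*]' collects every filter object regardless of which candidate or category it sits in, so no index needs to be hard-coded.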
How do you capture a JSON object as a prettified string using a Jenkins declarative-syntax pipeline?
pipeline {
agent any
stages {
stage( "Set up" ) {
steps {
script {
hostname = "bld-machine"
reply_email = "jenkins#${hostname}.company.com"
actor_email = "user#company.com"
status_json = initStatusJson()
}
}
}
/** Try to figure out the difference between "global" and "env." variables. */
stage( "Capture variables" ) {
steps {
script {
status_json.env["var"] = "${env.var}" as String
status_json.env["var2"] = "${var}" as String
}
}
}
}
post {
always {
script {
def pretty_json = writeJSON( returnText: true, json: status_json )
}
emailext( subject: "CI/CD | ${currentBuild.currentResult}",
from: "${reply_email}",
to: "${actor_email}",
mimeType: "text/plain",
body: "${pretty_json}" )
}
}
}
def initStatusJson() {
def json_obj = readJSON text: '{}'
json_obj.job = readJSON text: '{}'
json_obj.env = [:]
json_obj.job.name = "${JOB_BASE_NAME}" as String
json_obj.job.number = "${BUILD_ID}" as String
json_obj.job.server = "${JENKINS_URL}" as String
json_obj.job.visualization = "${JENKINS_URL}/blue/organizations/jenkins/${JOB_BASE_NAME}/detail/${JOB_BASE_NAME}/${BUILD_ID}/pipeline" as String
return json_obj
}
The def pretty_json = ... statement in the above Jenkinsfile triggers the following error:
WARNING: Unknown parameter(s) found for class type 'org.jenkinsci.plugins.pipeline.utility.steps.json.WriteJSONStep': returnText
[Pipeline] }
[Pipeline] // script
Error when executing always post condition:
java.lang.IllegalArgumentException: You have to provided a file for writeJSON.
at org.jenkinsci.plugins.pipeline.utility.steps.json.WriteJSONStepExecution.run(WriteJSONStepExecution.java:61)
at org.jenkinsci.plugins.pipeline.utility.steps.json.WriteJSONStepExecution.run(WriteJSONStepExecution.java:43)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
What I have tried:
The def pretty_json = writeJSON( returnText: true, json: status_json ) statement is inspired by these resources:
Jenkinsfile pipeline construct JSON object and write to file
https://www.jenkins.io/doc/pipeline/steps/pipeline-utility-steps/#writejson-write-json-to-a-file-in-the-workspace
I also tried def pretty_json = writeJSON returnText: true, json: status_json which resulted in an identical error.
status_json.toString() returns a valid, but non-prettified JSON string.
I tried def pretty_json = JsonOutput.toJson(status_json) based on Create JSON strings from Groovy variables in Jenkins Pipeline, and it generates this error:
Error when executing always post condition:
groovy.lang.MissingPropertyException: No such property: JsonOutput for class: groovy.lang.Binding
I also tried def pretty_json = groovy.json.JsonOutput.prettyPrint(status_json) based on https://gist.github.com/osima/1161966, and it generated this error:
Error when executing always post condition:
groovy.lang.MissingMethodException: No signature of method: java.lang.Class.prettyPrint() is applicable for argument types: (net.sf.json.JSONObject)
Update: I attempted @daggett's solution as follows:
post {
always {
script {
def pretty_json = status_json.toString(2)
}
emailext( subject: "CI/CD | ${currentBuild.currentResult}",
from: "${reply_email}",
to: "${actor_email}",
mimeType: "text/plain",
body: "${pretty_json}" )
}
}
...and I also tried some variations, like pretty_json = ... (instead of def pretty_json = ...) and moving the pretty_json assignment outside the script{...} scope, but none worked.
Inside the script{...} context, the .toString(2) generated this error:
Scripts not permitted to use method net.sf.json.JSON toString int.
Outside the script{...} context, it generated what I interpret to be a "syntax error":
WorkflowScript: 79: Expected a step @ line 79, column 7.
pretty_json = status_json.toString(2)
According to the last error message:
groovy.lang.MissingMethodException:
No signature of method: java.lang.Class.prettyPrint()
is applicable for argument types: (net.sf.json.JSONObject)
you have a net.sf.json.JSONObject in the status_json variable.
That's really strange; it seems you are not getting status_json in a standard way for Jenkins.
However, according to the documentation of this class
http://json-lib.sourceforge.net/apidocs/jdk15/net/sf/json/JSONObject.html#toString(int)
just do the following to produce pretty JSON:
def pretty_json = status_json.toString(2)
If you get a Scripts not permitted to use method XYZ exception:
for security reasons a lot of non-standard methods are disabled in Jenkins by default.
Refer to this answer to resolve this kind of issue: https://stackoverflow.com/a/39412951/1276664
And finally, almost every approach from your question should work:
writeJSON( returnText: true, json: status_json ):
update the pipeline-utility-steps Jenkins plugin to the latest version to get support for the returnText parameter.
writeJSON returnText: true, json: status_json:
the same as above.
JsonOutput.toJson(status_json): the JsonOutput class is located in the groovy.json package. You could import it at the beginning of the script (import groovy.json.*) or call it with the fully qualified name: groovy.json.JsonOutput.toJson(status_json). Note that this method returns non-formatted JSON.
groovy.json.JsonOutput.prettyPrint(status_json): check the documentation for JsonOutput.prettyPrint; it can be called on a string, not on an object. So this could work: groovy.json.JsonOutput.prettyPrint(status_json.toString()), but only if status_json.toString() returns valid JSON and a Jenkins admin has allowed JsonOutput.prettyPrint to be called.
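Putting the prettyPrint variant above into the original post block could look roughly like this. It is a sketch, not a tested pipeline; it assumes script approval for the JsonOutput call has been granted, and it moves emailext inside script so pretty_json stays in scope:
post {
    always {
        script {
            // prettyPrint() expects a String, hence the toString()
            def pretty_json = groovy.json.JsonOutput.prettyPrint(status_json.toString())
            emailext( subject: "CI/CD | ${currentBuild.currentResult}",
                      from: "${reply_email}",
                      to: "${actor_email}",
                      mimeType: "text/plain",
                      body: "${pretty_json}" )
        }
    }
}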
I just did a test and it gave the expected result:
def pretty_json = writeJSON( returnText: true, json: status_json , pretty: 4)
Note: ensure you have the Pipeline Utility Steps plugin installed (or reinstall/update it).
Below is the script example:
#!groovy
import hudson.model.Result
import groovy.json.*
pipeline
{
agent any
stages
{
stage ('Set up')
{
steps
{
script
{
hostname = "bld-machine"
reply_email = "jenkins#${hostname}.company.com"
actor_email = "user#company.com"
status_json = initStatusJson()
println (status_json)
}
}
}
stage ('Capture variables')
{
steps
{
script
{
// Added just for test
status_json.env["var"] = "Alt" as String
status_json.env["var2"] = "su" as String
println (status_json)
}
}
}
}
post {
always {
script {
def pretty_json = writeJSON( returnText: true, json: status_json , pretty: 4)
println (pretty_json)
emailext( subject: "CI/CD | ${currentBuild.currentResult}",
from: "${reply_email}",
to: "${actor_email}",
mimeType: "text/plain",
body: "${pretty_json}" )
}
}
}
}
def initStatusJson() {
def json_obj = readJSON text: '{}'
json_obj.job = readJSON text: '{}'
json_obj.env = [:]
json_obj.job.name = "${JOB_BASE_NAME}" as String
json_obj.job.number = "${BUILD_ID}" as String
json_obj.job.server = "${JENKINS_URL}" as String
json_obj.job.visualization = "${JENKINS_URL}/blue/organizations/jenkins/${JOB_BASE_NAME}/detail/${JOB_BASE_NAME}/${BUILD_ID}/pipeline" as String
return json_obj
}
Is it possible to match each element of a nested array in a response (using contains) with just one schema?
I have a set of yml files with request params and response schemas, like this one:
response:
appId: '#string'
attributes: '#array'
login: '#string'
permissions: '#array'
metadata:
roles: '##array'
userData:
description: '#string'
employeeId: '#string'
employeeNumber: '##string'
id: '#string'
login: '#string'
mail: '#string'
name: '#string'
and then, in a reusable feature:
* def req = read(<testDataFile>)
* match response contains req.response
I can match nested objects with just one schema, but I'm not sure if it's possible to use the schema to match nested arrays too, maybe something like:
response:
appId: '##string'
attributes: '##array'
attributes[*]:
key: '#string'
or any other expression.
Thanks a lot.
You can't do this with a single "schema" and you have to declare the repeating element separately, as a Karate variable: https://github.com/intuit/karate#schema-validation
* def foo = { a: '#number' }
* def bar = { baz: [{ a: 1 }, { a: 2 }, { a: 3 }] }
* match each bar.baz == foo
* match bar == { baz: '#[] foo' }
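Applying that to the yml example above, the repeating element of the attributes array could be declared as its own variable and the array matched against it; a small sketch, where the key/value fields of attributeSchema are invented for illustration:
* def attributeSchema = { key: '#string', value: '#string' }
* match response.attributes == '#[] attributeSchema'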
I would like to check the response from a GET /birds request against a JSON schema. In my feature:
* def abs = read('birds.json')
* match response == abs.birdsSchema
I need to put the schema in a JSON file and not in the feature.
I have to check additional values depending on gender. For example: if gender is male, check that the color is blue and the tail is long or short; if gender is female, check that "sings" is true or false and check the number of eggs.
So I put in birds.json:
"birdsSchema":{
"id": "#string",
"owner": "#number",
"town": "#? _ == 'New York' || _ == 'Washington'",
"type": "object",
"oneOf": [
{
"properties": {
"gender": {"enum": ["male"]},
"color":"blue",
"tail": "#? _ == 'long' || _ == 'short'"
}
},
{
"properties": {
"gender": {"enum": ["female"]},
"sings" : "#? _ == true || _ == false"
"eggs": "##number"
}
}
]
}
But it doesn't work. Error: com.intuit.karate.exception.KarateException: path: $[0].type, actual: 'female', expected: 'object', reason: not equal.
How can I do this conditional check in my JSON file?
Let's acknowledge that this is extra hard because if I understand your question correctly, the JSON keys you are looking for are dynamic.
Part of the fun of Karate is that there are at least 5 different ways I can think of to solve this elegantly. Here is just one:
* def response = { color: 'black', aaa: 'foo' }
* def schema = { color: '#? _ == "black" || _ == "white"' }
* def extra = (response.color == 'black' ? { aaa: '#string' } : { fff: '#string' })
* match response contains schema
* match response contains extra
If you create a JS function based on the hint above, you can probably get a better solution. Keep in mind that in a JS function you can use methods like karate.set to dynamically create keys. So there are many possibilities :)
Edit: it looks like my reading above is wrong and the keys are not dynamic. Then it is easy; keep in mind that $ refers to the JSON root:
* def response = { color: 'black', extra: 'foo' }
* def schema = { color: '#? _ == "black" || _ == "white"', extra: '#($.color == "black" ? "foo" : "bar")' }
* match response == schema
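Translating the first approach back to the birds case could look like this; a sketch that assumes the response here is a single bird object, with the field names taken from the question:
* def birdSchema = { id: '#string', owner: '#number', town: '#? _ == "New York" || _ == "Washington"' }
* def genderChecks = (response.gender == 'male' ? { color: 'blue', tail: '#? _ == "long" || _ == "short"' } : { sings: '#boolean', eggs: '##number' })
* match response contains birdSchema
* match response contains genderChecks
If the response is actually a list of birds (as the $[0] in the error suggests), the same checks could be wrapped in a JS function and applied with match each.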
I have been experimenting with the Groovy JsonBuilder, as you can see below, trying out different ways to build JSON objects and arrays. After things started to make sense, I tried expanding to what is shown below. The question I have is: why does "content" show up in the pretty-printed JSON output? I also have another JSON object that displays this.class information in its JSON string output.
Any ideas? I'm new to this, so it could definitely be an obvious one.
def tt = ["test", "test1"]
def jjj = "jason"
def js3 = new groovy.json.JsonBuilder()
def js2 = new groovy.json.JsonBuilder(tt);
js3 hello: "$jjj", "$jjj": tt
def js4 = new groovy.json.JsonBuilder()
def result = js4([sdn: js3, openflow: js2, type: 3])
println js4.toPrettyString();
outputs:
{
"sdn": {
"content": {
"hello": "jason",
"jason": [
"test",
"test1"
]
}
},
"openflow": {
"content": [
"test",
"test1"
]
},
"type": 3
}
The problem can be restated as...
why does this:
import groovy.json.*
def js3 = new JsonBuilder(["test", "test1"])
def js4 = new JsonBuilder(js3)
println js4.toString()
print:
{"content":["test","test1"]}
and this:
import groovy.json.*
def js3 = new JsonBuilder(["test", "test1"])
def js4 = new JsonBuilder(js3.content)
println js4.toString()
prints this (?) :
["test","test1"]
The short answer is that JsonBuilder has a member named content, which represents the payload. When one JsonBuilder absorbs another, we want to replace the payload, and not nest it. This line is the way to replace the payload:
def js4 = new JsonBuilder(js3.content)
Ultimately, this stems from the fact that JsonBuilder.toString() (code here) calls JsonOutput.toJson(object) (code here).
An exercise for the reader is to experiment with:
import groovy.json.JsonOutput

class MyBuilder {
def content
}
def myB = new MyBuilder(content: ["test", "test1"])
println JsonOutput.toJson(myB)
println JsonOutput.toJson(myB.content)
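Assuming standard JsonOutput behaviour, those two println calls should produce output along these lines, mirroring the js3/js4 behaviour above:
{"content":["test","test1"]}
["test","test1"]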