JSON datatype constant in Swagger

I have a swagger spec like this:
loading_method:
  type: "object"
  title: "Loading Method"
  oneOf:
    - title: "Standard Inserts"
      additionalProperties: false
      required:
        - "method"
      properties:
        method:
          type: "string"
          const: "Standard"
    - title: "ABC XYZ"
      additionalProperties: false
      required:
        - "method"
      properties:
        method:
          type: "string"
          const: "ABC XYZ"
          order: 0
...
Now when I pass my JSON in the request payload,
'loading_method': {'method': 'ABC XYZ'}, the JSON validator keeps failing with the error below:
Errors: json schema validation failed when comparing the data to the json schema. \\nErrors: $.loading_method.method: must be a constant value Standard, $.loading_method.method: must be a constant value ABC XYZ
Any idea how I should pass the JSON payload? I cannot control the Swagger schema, as it is third-party. It seems to be validating against both of the methods.
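One way to see what the validator is doing (a sketch, not from the original post, using the Python jsonschema package; the trimmed-down schema below is an assumption) is to run the payload against the oneOf in isolation. oneOf requires exactly one branch to match, so any mismatch in the const string, including stray whitespace or casing, makes both branches fail and produces exactly this pair of errors:

import jsonschema  # pip install jsonschema

# Trimmed-down version of the third-party schema above
schema = {
    "oneOf": [
        {"required": ["method"], "properties": {"method": {"const": "Standard"}}},
        {"required": ["method"], "properties": {"method": {"const": "ABC XYZ"}}},
    ]
}

# Passes: exactly one branch (the second) matches
jsonschema.validate({"method": "ABC XYZ"}, schema)

# Raises ValidationError: neither const matches, so both branches report a failure
jsonschema.validate({"method": "abc xyz"}, schema)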

Define exact custom properties in OpenAPI 3.1

I have a JSON schema I am trying to describe: a JSON object which has an additionalProperties node containing an array of key/value pairs.
{
  "additionalProperties": [
    {
      "key": "optionA",
      "value": "1"
    },
    {
      "key": "optionB",
      "value": "0"
    },
    {
      "key": "optionC",
      "value": "1"
    }
  ]
}
While I can use quite a generic schema for this, like so:
additionalProperties:
  properties:
    key:
      type: string
    value:
      type: string
  required:
    - key
    - value
  type: object
Ideally, I wish to explain which keys can appear and what they mean, i.e. optionA means this and optionB means that. Is there a way I can describe the exact options which will appear in the array?
The description field is used when you want to provide additional information or context to the reader that isn't necessarily explained by the schema alone.
additionalProperties:
  description: Your explanation goes here. Note that you can use markdown formatting if desired.
  properties:
    key:
      type: string
    value:
      type: string
  required:
    - key
    - value
  type: object
You can also describe your options more accurately in the schema, if they are all known values, using oneOf, allOf, or anyOf (documentation here).
additionalProperties:
  items:
    anyOf:
      - $ref: '#/components/schemas/optionA'
      - $ref: '#/components/schemas/optionB'
      - $ref: '#/components/schemas/optionC'
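For illustration, one of those referenced components might pin the key to a single known value with enum (OpenAPI 3.1 also allows const) and document its meaning; the shape below is a sketch with assumed names, not from the original answer:

components:
  schemas:
    optionA:
      type: object
      description: Explain here what optionA controls and what its values mean.
      properties:
        key:
          type: string
          enum: [optionA]   # pins this schema to the optionA key
        value:
          type: string
      required:
        - key
        - value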

Neither JsonOutput.toJson nor writeJSON is producing strings with quotes (JSON)

I'm trying to convert a map in a Jenkinsfile to JSON, for which I'm using:
def pullRequestData = [
    title: "Deployment-to-SIT",
    description: "Pull request from",
    state: "OPEN",
    open: "true",
    closed: "false",
    fromRef: [
        id: "refs/heads/${branchjsonObj.displayId}",
        repository: [
            slug: "${reposlug}",
            name: "null",
            project: [
                key: "${projectkey}"
            ]
        ]
    ]
]
def jsonmap = JsonOutput.toJson(pullRequestData)
echo "${jsonmap}"
This gives output that is JSON-formatted but without quoted strings, like:
{title:Deployment-to-SIT,description:Pull request from,state:OPEN,open:true,closed:false,fromRef:{id:refs/heads/deployment-to-SIT,repository:{slug:argocd-sample-chart,name:null,project:{key:TD}}}}
but the output I need is:
{"title":"Deployment-to-SIT","description":"Pull request from","state":"OPEN","open":"true","closed":"false","fromRef":{"id":"refs/heads/deployment-to-SIT","repository":{"slug":"argocd-sample-chart","name":"null","project":{"key":"TD"}}}
I also tried:
def jsonpullRequestData = this.steps.writeJSON(file: 'jsonmap.json', json: pullRequestData)
But the behavior is the same: the output has no quotes. Any help will be much appreciated.
Instead of using the Groovy JSON library, as per your requirement you can directly use the pullRequestData.toString() method to get a string representation.
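For reference, a quick check in plain Groovy (outside Jenkins; a sketch, not from the original answer) shows that JsonOutput.toJson does emit quoted JSON, so if the echoed output lacks quotes it is worth verifying which object is actually being printed:

import groovy.json.JsonOutput

def pullRequestData = [
    title: "Deployment-to-SIT",
    state: "OPEN"
]

def jsonmap = JsonOutput.toJson(pullRequestData)

// Prints {"title":"Deployment-to-SIT","state":"OPEN"} (note the quotes)
println jsonmap

// Prints [title:Deployment-to-SIT, state:OPEN] (Groovy map toString, no quotes)
println pullRequestData.toString()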

Schema validation: how to enforce the existence of a top-level property based on a condition at a lower level

I'm trying to write a schema to validate a YAML file after parsing it into JSON.
Suppose this is my .yml file, with two top-level properties, cars and garage.
cars is optional, while garage is required.
However, one of garage's sub-properties is also cars. If cars under garage is defined, I want the schema to make sure that cars at the top level is also defined; otherwise, the schema is not valid.
cars:
  - BMW
  - Mercedes-Benz
  - Audi
garage:
  location: Miami
  cars:
    - BMW
    - Audi
My Schema:
{
  properties: {
    cars: {
      type: 'array',
      items: {
        type: 'string'
      }
    },
    garage: {
      type: 'object',
      properties: {
        location: {
          type: 'string'
        },
        cars: {
          type: 'array'
        }
      },
      required: ['garage']
    }
  }
}
So I tried an if/then at the top level:
{
  if: { properties: { garage: { cars: { type: 'array' } } } },
  then: { required: ['cars'] },
  properties: {
    cars: {
      type: 'array',
      items: {
        type: 'string'
      }
    },
    garage: {
      type: 'object',
      properties: {
        location: {
          type: 'string'
        },
        cars: {
          type: 'array'
        }
      },
      required: ['garage']
    }
  }
}
But it seems that I'm doing it wrong, or it doesn't serve that purpose.
Using anyOf at the top level to match sub-schemas didn't work for me either.
Any help?
You can specify the referential integrity constraint (together with all the other requirements) using the "JSON Extended Structure Schema" language, JESS.
Here is the complete JESS schema presented as a single JSON document:
[ "&",
["&",
{"::>=": {"garage": {"location": "string", "cars": [ "string" ] } } }
],
{"ifcond": { "has": "cars" },
"then": ["&", { "forall": ".[cars]", "schema": ["string"] } ]
},
{"setof": ".[garage]|.[cars][]", "subsetof": ".[cars][]"}
]
The first "&" introduces the conjunction of the three requirements, the last of which is the referential integrity constraint.
The JESS repository has a schema conformance checker, which I used to verify your sample (expressed as JSON) against the above schema.
The value of if must be a JSON Schema.
If you take the value of if as a JSON Schema on its own and test the result of applying it to the relevant location in your JSON instance, it can help you debug this type of issue.
In your if block, you need to nest cars under properties, just as you've done in your main schema.
You may also want to make both garage and cars required in your if block.
You cannot, however, define that you want the values from garage.cars to be included in your top-level cars array.
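Putting those corrections together, a minimal sketch of the conditional (assuming a draft-07-capable validator; not taken verbatim from the answer) could look like this:

{
  "if": {
    "required": ["garage"],
    "properties": {
      "garage": { "required": ["cars"] }
    }
  },
  "then": { "required": ["cars"] },
  "properties": {
    "cars": {
      "type": "array",
      "items": { "type": "string" }
    },
    "garage": {
      "type": "object",
      "properties": {
        "location": { "type": "string" },
        "cars": { "type": "array" }
      }
    }
  },
  "required": ["garage"]
}

Here the if subschema matches whenever garage is present and itself contains cars, and the then branch then requires cars at the top level.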

How to document multiple content types in a successful GET response in Swagger

Let's say we have an example JSON Swagger spec:
{
  "swagger": "2.0",
  "info": {
    "version": "1.0.0",
    "title": "Some API"
  },
  "basePath": "/api/v1",
  "consumes": [
    "application/json"
  ],
  "produces": [
    "application/json",
    "text/csv"
  ],
  "paths": {
    "/some/endpoint": {
      "get": {
        "parameters": [
          {
            "in": "body",
            "name": "body",
            "required": false,
            "schema": {
              "$ref": "#/definitions/BodyParamsDefinition"
            }
          }
        ],
        "responses": {
          "200": { ?? } ...
There are two content types that can be produced:
application/json
text/csv
Default response for GET /some/endpoint is a csv file, but if the format query param is used like this /some/endpoint?format=json, the response would be in json format.
I'm having trouble figuring out how to finish my specification with proper responses.
When I use this approach: https://swagger.io/docs/specification/describing-responses/ I get a validation error: ...get.responses['200'] should NOT have additional properties
You are almost there, you just need to define a schema for the response. This schema defines the response structure for all content types associated with this status code.
For example, if the operation returns this JSON:
[
  {
    "petType": "dog",
    "name": "Fluffy"
  },
  {
    "petType": "cat",
    "name": "Crookshanks"
  }
]
and this CSV:
petType,name
dog,Fluffy
cat,Crookshanks
you would use:
# YAML
responses:
  200:
    description: OK
    schema:
      type: array
      items:
        type: object
        properties:
          petType:
            type: string
          name:
            type: string
More info: Describing Responses
In OpenAPI 3.0, content type definitions were improved and schemas can vary by content type:
openapi: 3.0.0
...
paths:
  /some/endpoint:
    get:
      responses:
        '200':
          description: OK
          content:
            # JSON data is an object
            application/json:
              schema:
                type: object
                properties:
                  message:
                    type: string
            # CSV data is a string of text
            text/csv:
              schema:
                type: string
Default response for GET /some/endpoint is a csv file, but if the format query param is used like this /some/endpoint?format=json, the response would be in json format.
There's currently no way to map specific responses to specific operation parameters, but there are several related proposals in the OpenAPI Specification repository:
Accommodate legacy APIs by allowing query parameters in the path
Querystring in Path Specification
Support an operation to have multiple specifications per path
Overloading
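In the meantime, the format switch itself can still be documented as an ordinary query parameter, even though it cannot be linked to a specific response; a sketch in OpenAPI 3.0 terms (the parameter name is taken from the question):

parameters:
  - name: format
    in: query
    required: false
    description: Set to json to receive the response as JSON instead of the default CSV.
    schema:
      type: string
      enum: [json, csv]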

Convert JSON to JSON Schema draft 4 compatible with Swagger 2.0

I've been given some JSON files generated by a REST API with plenty of properties.
I've created a Swagger 2.0 definition for this API and need to give it the corresponding schema for the response.
The main problem: this JSON file has loads of properties. It would take a long time and I would make many mistakes if I wrote the schema manually. And it's not the only API I need to describe.
I know there are some tools to convert JSON to JSON schemas, but, if I'm not mistaken, Swagger only has $refs to other object definitions and thus only one level, whereas the tools I've found only produce tree-structured schemas.
My question: is there any tool to convert a JSON (or JSON Schema) to a Swagger 2.0 compatible one?
Note: I'm working in YAML, but it wouldn't be an issue, would it?
For example, what I need:
List of Movements:
  type: "array"
  items:
    $ref: "#/definitions/Movement"
Movement:
  properties:
    dateKey:
      type: "string"
    movement:
      $ref: "#/definitions/Stock"
  additionalProperties: false
Stock:
  properties:
    stkUnitQty:
      type: "string"
    stkDateTime:
      type: "string"
    stkUnitType:
      type: "string"
    stkOpKey:
      type: "string"
  additionalProperties: false
For my JSON document:
[
  {
    "dateKey": "20161110",
    "stkLvls": [
      {
        "stkOpKey": "0",
        "stkUnitType": "U",
        "stkDateTime": "20161110T235010.240+0100",
        "stkUnitQty": 30
      }
    ]
  },
  {
    "dateKey": "20161111",
    "stkLvls": [
      {
        "stkOpKey": "0",
        "stkUnitType": "U",
        "stkDateTime": "20161111T231245.087+0100",
        "stkUnitQty": 21
      }
    ]
  }
]
But here is what http://jsonschema.net/#/ gives me:
---
"$schema": http://json-schema.org/draft-04/schema#
type: array
items:
  type: object
  properties:
    dateKey:
      type: string
    stkLvls:
      type: array
      items:
        type: object
        properties:
          stkOpKey:
            type: string
          stkUnitType:
            type: string
          stkDateTime:
            type: string
          stkUnitQty:
            type: integer
        required:
          - stkOpKey
          - stkUnitType
          - stkDateTime
          - stkUnitQty
  required:
    - dateKey
    - stkLvls
I'm new to this but curious, so don't hesitate to explain in depth.
Thank you in advance for your help!
I also needed a converter tool and came across this. So far it seems to work pretty well. It does both JSON and YAML formats.
https://swagger-toolbox.firebaseapp.com/
Given this JSON (their sample):
{
  "id": 1,
  "name": "A green door",
  "price": 12,
  "testBool": false,
  "tags": [
    "home",
    "green"
  ]
}
it generated this:
{
  "required": [
    "id",
    "name",
    "price",
    "testBool",
    "tags"
  ],
  "properties": {
    "id": {
      "type": "number"
    },
    "name": {
      "type": "string"
    },
    "price": {
      "type": "number"
    },
    "testBool": {
      "type": "boolean"
    },
    "tags": {
      "type": "array",
      "items": {
        "type": "string"
      }
    }
  }
}
I know there are some tools to convert JSON to JSON schemas but, if I'm not mistaken, Swagger only has $refs to other objects definitions thus only has one level
You are mistaken. Swagger will respect any valid v4 JSON schema, as long as it only uses the supported subset.
The Schema Object...is based on the JSON Schema Specification Draft 4 and uses a predefined subset of it. On top of this subset, there are extensions provided by this specification to allow for more complete documentation.
It goes on to list the parts of JSON Schema which are supported, the parts which are not, and the parts which are extended by Swagger.
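In practical terms, that means the nested, tree-structured output from a generator like the one above can usually be pasted into a definition as-is (minus the $schema key), with no need to flatten it into one $ref per level. A sketch against the movements sample (the definition name is arbitrary):

definitions:
  Movements:
    type: array
    items:
      type: object
      properties:
        dateKey:
          type: string
        stkLvls:
          type: array
          items:
            type: object
            properties:
              stkOpKey:
                type: string
              stkUnitQty:
                type: integer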
You can go directly to https://bikcrum.github.io/Swagger-JSON-Schema-In-YAML_webversion/ for online conversion.
I wrote the following Python script to generate a JSON schema in YAML format (preserving key order), as used in Swagger.
import json

# input file containing the JSON document
with open('data.json') as f:
    json_data = json.load(f)

# output file: JSON schema in YAML format
out = open('out.yaml', 'w')

def gettype(type):
    # map Python type names (str, bool, int) to schema type names
    for i in ['string', 'boolean', 'integer']:
        if type in i:
            return i
    return type

def write(string):
    print(string)
    out.write(string + '\n')
    out.flush()

def parser(json_data, indent):
    if type(json_data) is dict:
        write(indent + 'type: object')
        if len(json_data) > 0:
            write(indent + 'properties:')
            for key in json_data:
                write(indent + '  %s:' % key)
                parser(json_data[key], indent + '    ')
    elif type(json_data) is list:
        write(indent + 'type: array')
        write(indent + 'items:')
        # infer the item schema from the first element
        if len(json_data) != 0:
            parser(json_data[0], indent + '  ')
        else:
            write(indent + '  type: object')
    else:
        write(indent + 'type: %s' % gettype(type(json_data).__name__))

parser(json_data, '')
Update: if you want YAML with sorted keys (which is the default), use the YAML library:
import json
import yaml

# input file containing the JSON document
with open('data.json') as f:
    json_data = json.load(f)

def gettype(type):
    # map Python type names (str, bool, int) to schema type names
    for i in ['string', 'boolean', 'integer']:
        if type in i:
            return i
    return type

def parser(json_data):
    d = {}
    if type(json_data) is dict:
        d['type'] = 'object'
        # nest child schemas under 'properties' so the output is a valid schema
        d['properties'] = {}
        for key in json_data:
            d['properties'][key] = parser(json_data[key])
        return d
    elif type(json_data) is list:
        d['type'] = 'array'
        # infer the item schema from the first element
        if len(json_data) != 0:
            d['items'] = parser(json_data[0])
        else:
            d['items'] = {'type': 'object'}
        return d
    else:
        d['type'] = gettype(type(json_data).__name__)
        return d

p = parser(json_data)
with open('out.yaml', 'w') as outfile:
    yaml.dump(p, outfile, default_flow_style=False)