Here is the desired schema and JSON, for illustration purposes. Please see the link below.
JSON Schema and JSON
{
  "id": "123",
  "ts": "1234567890",
  "complex_rules": [
    {
      "type": "admin",
      "rule": {
        "rights": "all",
        "remarks": "some admin remarks"
      }
    },
    {
      "type": "guest",
      "rights": "limited"
    },
    {
      "type": "anonymous",
      "rights": "blocked"
    }
  ]
}
The 'complex_rules' property is an array of JSON objects. Each object in the array can have its own structure, but the 'type' attribute is MANDATORY and its value must be one of "admin", "guest", or "anonymous"; no other type value is acceptable.
The conditions to evaluate:
1. A given type cannot re-occur in the array. (I know this seems not to be possible, so we can ignore this.)
2. If the "rights" attribute is present in the {type=admin} object with any value, then the {type=guest} object must not have a "rights" attribute ("rights": "limited" or any other value). The JSON Schema validation must complain about this.
3. Another twist: either the {"type": "guest"} object or the {"type": "anonymous"} object can exist alongside the other types, but the two cannot coexist.
----Update
The above link is the solution to this question.
Regarding 1 and 2:
You need to use a combination of the if, then, and not keywords to construct the logic you require, with the correct level of applicability.
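A minimal sketch of what such a combination could look like under draft-07 (if/then requires draft-07; contains and const require draft-06 or later). Note one assumption: this sketch treats "rights" as a top-level attribute of each rule object, whereas the admin object in the example nests it under "rule" — adjust the subschemas to your real layout:

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "complex_rules": {
      "type": "array",
      "items": {
        "type": "object",
        "required": ["type"],
        "properties": {
          "type": { "enum": ["admin", "guest", "anonymous"] }
        }
      },
      "allOf": [
        {
          "if": {
            "contains": {
              "required": ["type", "rights"],
              "properties": { "type": { "const": "admin" } }
            }
          },
          "then": {
            "not": {
              "contains": {
                "required": ["type", "rights"],
                "properties": { "type": { "const": "guest" } }
              }
            }
          }
        },
        {
          "not": {
            "allOf": [
              {
                "contains": {
                  "required": ["type"],
                  "properties": { "type": { "const": "guest" } }
                }
              },
              {
                "contains": {
                  "required": ["type"],
                  "properties": { "type": { "const": "anonymous" } }
                }
              }
            ]
          }
        }
      ]
    }
  }
}
```

The first allOf branch covers condition 2 (an admin object with "rights" forbids a guest object with "rights"); the second covers condition 3 (guest and anonymous may not both be contained in the array).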
Regarding 3:
The type of object in the array cannot re-occur in the array. (I know
this seems to be not possible, so we can ignore this)
Right, that's correct, it's not possible as of draft-7 JSON Schema.
Related
I'm trying to create a JSON Schema for something very dynamic. Say I have two pieces of data, and I want one (the source) to determine the validity of the other (the target). Both can change over time, but both will always be an array of objects with known properties. For example:
source.json
[
{ "id": 23, "active": true },
{ "id": 9, "active": false },
{ "id": 6, "active": true }
]
target.json
[
{ "identifier": 6 }
]
The schema I'm trying to create is this: For each active object in the source array, there should be an equivalent object in the target array. A little more formally, given an object in the source array where "active" equals true and "id" equals x, there should be an object in the target array where "identifier" equals x.
In the example above, the target would be invalid because it's missing an object like { "identifier": 23 }.
However, I want to statically define this schema (or something capable of generating it) in a JSON file ahead of time, and this feels pretty tough since the source array can change. I'm using Ajv, and I'm aware that it supports the $data reference, but I'm not sure that's enough to help me here. The other option I could see is creating some kind of schema-generator definition? In concept, it too would be a JSON object I define ahead of time, but at runtime it would be used to safely generate arbitrary schemas based on runtime data such as the source array. However, if a mechanism like this doesn't already exist, trying to implement it myself sounds like a great way to give myself a code-injection vulnerability.
Thanks for your time!
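To make the "schema-generator definition" idea concrete: for the source.json above, a generator could emit one contains clause per active source object (ids 23 and 6), so that the target array must hold a matching entry for each. Because the generator only copies data values into a fixed schema template, there is nothing to inject. A hypothetical sketch of the generated schema (contains and const need draft-06 or later):

```json
{
  "type": "array",
  "allOf": [
    {
      "contains": {
        "type": "object",
        "required": ["identifier"],
        "properties": { "identifier": { "const": 23 } }
      }
    },
    {
      "contains": {
        "type": "object",
        "required": ["identifier"],
        "properties": { "identifier": { "const": 6 } }
      }
    }
  ]
}
```

Against this schema, the target.json above would fail, since no element satisfies the clause requiring "identifier": 23.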
Problem:
The requirement: allow only those property names in an object which appear in the array value of another property in the same JSON instance (property names dependent on another property's value).
Detailed Explanation:
I have the following JSON:
{
  "validResources": ["ip", "domain", "url"],
  "resources": {
    "ip": "192.168.1.1",
    "domain": "www.example.com"
  }
}
I would want to write a JSON schema that allows only those keys in "resources" which are part of the array list value of "validResources".
The above JSON is a valid JSON as the "ip" and "domain" keys are actually part of the array items which is a value of the property "validResources".
However, the JSON below should return an error, as "file" is not a valid resource: it is not part of the "validResources" array.
{
  "validResources": ["ip", "domain", "url"],
  "resources": {
    "ip": "192.168.1.1",
    "file": "file://etc/passwd"  <= No such resource in "validResources"
  }
}
What have I tried?
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "validResources": {
      "type": "array",
      "minItems": 1,
      "uniqueItems": true
    },
    "resources": {
      "type": "object"
      === Unable to proceed beyond this ===
    }
  }
}
Other Searches:
I checked propertyNames, but its value is itself a schema (for example, one with a regex pattern), so it can only check names against rules fixed when the schema is written. In this case, the property names/keys within "resources" depend on the values of the "validResources" property, which are not known beforehand, and "resources" should allow only those strings as property names that appear in the "validResources" array.
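For comparison, if the set of valid names were fixed at schema-authoring time, propertyNames with an enum would be enough. A sketch of that static case — it just cannot track a runtime "validResources" value:

```json
{
  "type": "object",
  "properties": {
    "resources": {
      "type": "object",
      "propertyNames": { "enum": ["ip", "domain", "url"] }
    }
  }
}
```

With this schema, the second example above would fail because the key "file" is not in the enum; the dynamic requirement, however, remains out of reach.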
There is a pending issue somewhat similar to this question here.
Such a problem cannot be solved with JSON Schema as of today, draft-07 being the latest draft at the time of writing.
This question also relates to several issues already raised on the json-schema spec's GitHub. A proposal for addressing this kind of issue is being actively tracked there.
Since this requires a lookup into the values of the JSON instance, rather than structural validation alone (which has been the primary purpose of JSON Schema to date), it will probably have to be handled differently for now, until a future draft lands or the mentioned issue is taken up.
I have a struct such as this one:
type Data struct {
	Id        string
	Value     string
	Custom    customtype1
	Special   customtype2
	TimeStamp Time
}

var model Data
I am reading data from a JSON object. Because the JSON is structured very differently, I can't just directly unmarshal it into the struct. So I am trying to "match" the fields from the JSON object to those of the struct one by one. I don't actually need to properly unmarshal the JSON data into the struct; all I really need is to be able to assign, for each field, the proper type to its value.
So I unmarshal the JSON into a generic interface, then convert it to a map[string]interface{} and iterate over that. For each field, I try to find a match among the field names in the model variable, which I get using reflect.
Now this all works fine, but the problem arises when I try to get the right type for the values.
I can get the Type of a certain field from the model using reflect, but then I can't use that to cast the value I get from the JSON, because it is not a type. I can't use a switch statement either, because this is a simplified version of the situation and in reality I'm dealing with 1000+ different possible types. How can I convert the values I have for each field into their proper types?
The only way I can think of to solve this would be to recreate a JSON string that matches the format of the struct and then unmarshal that into the proper struct, but that seems way too convoluted. Surely there must be a simpler way?
Here's a sample JSON (I cannot change this structure, unless I rework it within my Go program):
{
  "requestId": 101901,
  "userName": "test",
  "options": [1, 4],
  "request": {
    "timeStamp": {
      "Value1": "11/02/2018",
      "Value2": "11/03/2018"
    },
    "id": {
      "Value1": "123abcd",
      "Value2": "0987acd",
      "Value3": "a9c003"
    },
    "custom": {
      "Value1": "customtype1_value",
      "Value2": "customtype1_value"
    }
  }
}
I'd advise against your current approach. You haven't provided enough context to tell us why you're unmarshalling things one by one, but Go's JSON support is good enough that I'd guess it is capable of doing what you want.
Are you aware of encoding/json's support for struct tags? Those might serve the purpose you're looking for. Your struct would then look something more like:
type Data struct {
	Id        string      `json:"id"`
	Value     string      `json:"value"`
	Custom    customtype1 `json:"custom_type"`
	Special   customtype2 `json:"special_type"`
	TimeStamp Time        `json:"timestamp"`
}
If your problem is that the custom types don't know how to be unmarshalled, you can define custom unmarshalling functions for them.
This would then enable you to unmarshal an object like the following:
{
  "id": "foo",
  "value": "bar",
  "custom_type": "2342-5234-4b24-b23a",
  "special_type": "af23-af2f-rb32-ba23",
  "timestamp": "2018-05-01 12:03:41"
}
According to the specification (http://json-schema.org/schema) there is no mutual exclusion among schema keywords.
For example I could create the following schema:
{
  "properties": {
    "foo": { "type": "string" }
  },
  "items": [
    { "type": "integer" },
    { "type": "number" }
  ]
}
Would this schema validate both objects and arrays?
If so, that would imply an "OR" relationship between keywords.
But if we consider the following schema:
{
  "anyOf": [
    { "type": "string" },
    { "type": "integer" }
  ],
  "not": {
    "type": "string",
    "maxLength": 5
  }
}
The most practical way to interpret this would be an "AND" relationship between the anyOf and not keywords.
I could not find any indication in draft v4 of how keywords logically interact. Can anyone point me to documentation or a standard that answers this question?
Keywords are always an "AND" relationship. Data must satisfy all keywords from a schema.
The properties and items keywords don't specify the type of the object (you have to use type for that). Instead, they only have meaning for particular types, and are ignored otherwise. So properties actually means:
If the data is an object, then the following property definitions apply...
This means that {"properties":{...}} will match any string, because properties is ignored for values that aren't objects. And items actually means:
If the data is an array, then the following item definitions apply...
So the AND combination looks like:
(If the data is an object, then properties applies) AND (if the data is an array, then items applies)
As the spec clearly dictates, some keywords are only relevant for one particular type of JSON value, while others apply to all types.
So, for instance, properties only applies if the JSON value you validate is a JSON Object. On any JSON value which is NOT an object, it will not apply (another way to understand it is that if the JSON value to validate is not a JSON Object, validation against this keyword will always succeed).
Similarly, items will only apply if the JSON value is a JSON Array.
Now, some other keywords apply for all types; among these are enum, allOf, anyOf, oneOf, type. In each case, the validation rules are clearly defined in the specification.
In short: you should consider what type of value is expected. The easiest way to force a value to be of a given type in a schema is to use type, as in:
"type": "integer"
BUT this keyword will nevertheless be applied INDEPENDENTLY of all others in the validation process. So, this is a legal schema:
{
  "type": "integer",
  "minItems": 1
}
If an empty JSON Array is passed for validation, it will fail for both keywords:
type, because the value is an array, not an integer;
minItems, because the value is an array with zero elements, which is illegal for this particular keyword since it requires at least one element in the array.
Note that the result of validation is totally independent of the order in which you evaluate keywords. That is a fundamental property of JSON Schema. And it is pretty much a requirement that it be so, since the order of members in a JSON Object is irrelevant ({ "a": 1, "b": 2 } is the same JSON Object as { "b": 2, "a": 1 }).
And of course, if only ONE keyword causes validation to fail, the whole JSON value is invalid against the schema.
We are currently investigating JSON as a potential API data transfer language for our system and a question about using JSON Reference came up.
Consider the following example:
{
"invoice-address" : { "street": "John Street", "zip": "12345", "city": "Someville" },
"shipping-address": { "$ref": "#/invoice-address" }
}
According to our research, this is a valid usage of JSON Reference. We replace the instance of an object with another object containing the reference pointing to a different object using a JSON Pointer fragment.
Now, a JSON Reference always consists of a key-value pair and thus has to be enclosed in an object. This would mean that in order to reference a non-object data type (e.g. the zip and city strings in the example above) you would have to do the following:
{
"invoice-address" : { "street": "John Street", "zip": "12345", "city": "Someville" },
"shipping-address": { "street": "Doe Street", "zip": { "$ref": "#/invoice-address/zip" }, "city": { "$ref": "#/invoice-address/city" } }
}
Even though the JSON Pointers now correctly point to string values, we had to change the data type of zip and city from string to object, which makes them fail validation against our JSON Schema, because it declares them as strings.
However, the JSON Reference draft states:
Implementations MAY choose to replace the reference with the referenced value.
Does that mean that we are allowed to "preprocess" the file and replace the JSON Reference object with the resolved string value before validating against the JSON Schema? Or are references limited to object types only?
Thanks to anyone who can shed some light onto this.
I wouldn't expect most validators to resolve JSON References before validation. You could either:
resolve JSON References before validation
adapt the JSON Schemas to allow for JSON Reference objects in certain places
Personally, I think the first option is much neater.
You could end up with circular references I suppose - I don't know which validator/language you're using, but tv4 can definitely handle it.
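Concretely, resolving the references in the second example before validation would turn zip and city back into plain strings, which then pass the original schema unchanged:

```json
{
  "invoice-address" : { "street": "John Street", "zip": "12345", "city": "Someville" },
  "shipping-address": { "street": "Doe Street", "zip": "12345", "city": "Someville" }
}
```

This is the "preprocess, then validate" workflow the first option describes: the schema never needs to know that references were ever present.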