I'm trying to create a JSON Schema for something very dynamic. Say I have two pieces of data, and I want one (the source) to determine the validity of the other (the target). Both can change over time, but both will always be an array of objects with known properties. For example:
source.json
[
{ "id": 23, "active": true },
{ "id": 9, "active": false },
{ "id": 6, "active": true }
]
target.json
[
{ "identifier": 6 }
]
The schema I'm trying to create is this: For each active object in the source array, there should be an equivalent object in the target array. A little more formally, given an object in the source array where "active" equals true and "id" equals x, there should be an object in the target array where "identifier" equals x.
In the example above, the target would be invalid because it's missing an object like { "identifier": 23 }.
However, I want to statically define this schema (or something capable of generating it) in a JSON file ahead of time, and this feels pretty tough since the source array can change. I'm using Ajv, and I'm aware that it supports the $data reference, but I'm not sure that's enough to help me here. The other option I could see is creating some kind of schema-generator definition? In concept, it too would be a JSON object I define ahead of time, but at runtime it would be used to safely generate arbitrary schemas based on runtime data such as the source array. However, if a mechanism like this doesn't already exist, trying to implement it myself sounds like a great way to give myself a code-injection vulnerability.
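To make that generator idea concrete, here is a minimal TypeScript sketch of what it could look like with Ajv. The function and type names are mine, purely illustrative:

import Ajv from "ajv";

interface SourceItem { id: number; active: boolean; }

// For each active source item, require a matching { "identifier": <id> }
// object somewhere in the target array.
function buildTargetSchema(source: SourceItem[]): object {
  const clauses = source
    .filter(item => item.active)
    .map(item => ({
      contains: {
        type: "object",
        required: ["identifier"],
        properties: { identifier: { const: item.id } },
      },
    }));
  // "allOf" must be non-empty, so fall back to a bare array schema.
  return clauses.length > 0 ? { type: "array", allOf: clauses } : { type: "array" };
}

const source: SourceItem[] = [
  { id: 23, active: true },
  { id: 9, active: false },
  { id: 6, active: true },
];
const target = [{ identifier: 6 }];

const validate = new Ajv().compile(buildTargetSchema(source));
console.log(validate(target)); // false: no { "identifier": 23 } in the target

Since a generator like this only copies ids into const clauses and never builds code from strings, it avoids the injection concern, though it does mean compiling a fresh schema whenever the source changes.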
Thanks for your time!
Here is the desired schema and JSON for illustration purposes. Please see the link below.
JSON Schema and JSON
{
  "id": "123",
  "ts": "1234567890",
  "complex_rules": [
    {
      "type": "admin",
      "rule": {
        "rights": "all",
        "remarks": "some admin remarks"
      }
    },
    {
      "type": "guest",
      "rights": "limited"
    },
    {
      "type": "anonymous",
      "rights": "blocked"
    }
  ]
}
The 'complex_rules' property is an array of JSON objects. The 'type' attribute is MANDATORY, and its value can only be "admin", "guest", or "anonymous"; no other 'type' value is acceptable. Beyond that, each object in the array can have its own structure.
The conditions to evaluate:
1. If the "rights" attribute appears (with any value) in the {type=admin} object, then we cannot have "rights": "limited", or any other "rights" value, in the {type=guest} object. The JSON Schema validation must complain about this.
2. Another twist: either the {"type": "guest"} object or the {"type": "anonymous"} object can exist; both types cannot coexist along with the other types.
3. The type of object in the array cannot re-occur in the array. (I know this seems to be not possible, so we can ignore this.)
----Update
The link above is the solution to this question.
In regards to 1 and 2:
You need to use a combination of if, then, and not keywords to construct the logic you require with the correct level of applicability.
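For instance, here is a minimal draft-07 sketch of condition 1; condition 2 can follow the same pattern. This is one possible shape, not the only one, and it assumes the admin rights live under "rule" as in the sample:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "complex_rules": {
      "type": "array",
      "items": {
        "type": "object",
        "required": ["type"],
        "properties": {
          "type": { "enum": ["admin", "guest", "anonymous"] }
        }
      },
      "if": {
        "contains": {
          "required": ["type", "rule"],
          "properties": {
            "type": { "const": "admin" },
            "rule": { "required": ["rights"] }
          }
        }
      },
      "then": {
        "not": {
          "contains": {
            "required": ["type", "rights"],
            "properties": { "type": { "const": "guest" } }
          }
        }
      }
    }
  }
}

With this in place, the sample document above fails validation because the admin object carries "rights" under "rule" while the guest object also carries "rights".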
In regards to 3:
The type of object in the array cannot re-occur in the array. (I know
this seems to be not possible, so we can ignore this)
Right, that's correct, it's not possible as of draft-7 JSON Schema.
I'm reading a JSON response from a third party, and I'm finding that some properties are returned as a single object when there is only one object to return, but as an array of objects when the property has multiple objects.
Example of a single object in the response
{
"data": {
"property1":"value",
"property2":"value",
"property3":"value"
}
}
Example of an array of objects in the response
{
"data": [
{
"property1":"value",
"property2":"value",
"property3":"value"
},
{
"property1":"value",
"property2":"value",
"property3":"value"
},
{
"property1":"value",
"property2":"value",
"property3":"value"
},
{
"property1":"value",
"property2":"value",
"property3":"value"
}
]
}
Why would the two different response formats be acceptable from the same endpoint?
This question bothered me as well whenever I saw it happening. I never really liked having to check the value in order to know how to access it.
One could argue that doing this saves some space in the payload: you save two bytes by omitting the [] when there's only a single value. But that's a weak argument IMHO, and manipulating the data is harder, as we already know.
But looking at it a different way, this seems to make some sense: it's optimizing for the more common result, a single value. I've seen my fair share of data formats where the structure was very strict. For example, a recursive dictionary-like structure where any property that would contain an object must be an array of that object. So in a deeply nested object, accessing a value may look like this:
root.data[0].aparent[0].thechild[0].myvalue
vs:
root.data.aparent.thechild.myvalue
If there were actually multiple values, then using an array would be appropriate.
I don't necessarily buy this, since you still have to do a check; you'd have to do some tests before consuming the data anyway (like checking whether a response came back at all). This type of response might make more sense in languages that have some form of pattern matching.
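In the meantime, consumers usually normalize the shape at the boundary. A minimal TypeScript sketch (the helper name is mine, purely illustrative):

type Item = { property1: string; property2: string; property3: string };

// Wrap non-array values so downstream code can always iterate.
function asArray<T>(value: T | T[]): T[] {
  return Array.isArray(value) ? value : [value];
}

const payload = '{"data": {"property1":"value","property2":"value","property3":"value"}}';
const response: { data: Item | Item[] } = JSON.parse(payload);

for (const item of asArray(response.data)) {
  console.log(item.property1); // works whether "data" was one object or many
}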
I'm trying to convert a CSV (or XLS) file into JSON. I've come across a number of closed/resolved posts about this, but I am yet to find a solution to what I'm specifically trying to do - which is as follows.
If a column is called 'Colour' then I want to export a single JSON property, but if the column is called 'Features_Count' then I want to create a 'Features' property that is an object instead of a string, and that object contains a property called 'Count'. In other words, I want to be able to have nested properties. The JSON should therefore look something like this:
[
  {
    "id": 1,
    "colour": "blue",
    "features": {
      "count": 1
    }
  },
  {
    "id": 2,
    "colour": "red",
    "features": null
  }
]
Does anyone have any ideas how this can be done? Please do bear in mind I'm pretty much a beginner when it comes to this...
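For what it's worth, here is a minimal TypeScript sketch of the mapping described above. It assumes a plain comma-separated file with a header row, no quoted fields, and an underscore in a column name (like Features_Count) signalling one level of nesting; all names are illustrative:

// Parse a cell: empty -> null, numeric -> number, otherwise keep the string.
function parseCell(raw: string | undefined): string | number | null {
  if (raw === undefined || raw === "") return null;
  const n = Number(raw);
  return Number.isNaN(n) ? raw : n;
}

function csvToJson(csv: string): Record<string, unknown>[] {
  const [headerLine, ...rows] = csv.trim().split("\n");
  const headers = headerLine.split(",");
  return rows.map(row => {
    const cells = row.split(",");
    const obj: Record<string, any> = {};
    headers.forEach((header, i) => {
      const value = parseCell(cells[i]);
      const parts = header.split("_");
      if (parts.length === 1) {
        obj[header.toLowerCase()] = value; // "Colour" -> "colour"
      } else {
        // "Features_Count" -> { "features": { "count": ... } }, or null if empty
        const [outer, inner] = parts.map(p => p.toLowerCase());
        obj[outer] = value === null ? null : { ...(obj[outer] ?? {}), [inner]: value };
      }
    });
    return obj;
  });
}

const csv = "id,Colour,Features_Count\n1,blue,1\n2,red,";
console.log(JSON.stringify(csvToJson(csv), null, 2));

Running this on the two-row sample produces exactly the JSON shown above: the first row gets a nested features object, and the second row's empty cell becomes "features": null.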
I'm quite new to AppSync (and GraphQL), in general, but I'm running into a strange issue when hooking up resolvers to our DynamoDB tables. Specifically, we have a nested Map structure for one of our item's attributes that is arbitrarily constructed (its complexity and form depends on the type of parent item) — a little something like this:
"item" : {
"name": "something",
"country": "somewhere",
"data" : {
"nest-level-1a": {
"attr1a" : "foo",
"attr1b" : "bar",
"nest-level-2" : {
"attr2a": "something else",
"attr2b": [
"some list element",
"and another, for good measure"
]
}
}
},
"cardType": "someType"
}
Our accompanying GraphQL type is the following:
type Item {
name: String!
country: String!
cardType: String!
data: AWSJSON! ## note: it was originally String!
}
When we query the item we get the following response:
{
  "data": {
    "genericItemQuery": {
      "name": "info/en/usa/bra/visa",
      "country": "USA:BRA",
      "cardType": "visa",
      "data": "{\"tourist\":{\"reqs\":{\"sourceURL\":\"https://travel.state.gov/content/passports/en/country/brazil.html\",\"visaFree\":false,\"type\":\"eVisa required\",\"stayLimit\":\"30 days from date of entry\"},\"pages\":\"One page per stamp required\"}}"
    }
  }
}
The problem is we can't seem to get the Item.data field resolver to return a JSON object (even when we attach a separate field-level resolver to it on top of the general Query resolver). It always returns a String and, weirdly, if we change the expected field type to String!, the response will replace all : in data with =. We've tried everything with our response resolvers, including suggestions like How return JSON object from DynamoDB with appsync?, but we're completely stuck at this point.
Our current response resolver for our query has been reverted back to the standard response after none of the suggestions in the aforementioned post worked:
## 'Before' response mapping template on genericItemQuery query; same result as the 'After' listed below
#set($result = $ctx.result)
#set($result.data = $util.parseJson($ctx.result.data))
$util.toJson($result)

## 'After' response mapping template
$util.toJson($ctx.result)
We're trying to avoid a situation where we need to include supporting types for each nest level in data (since it changes based on parent Item type and in cases like the example I gave it can have three or four tiers), and we thought changing the schema type to AWSJSON! would do the trick. I'm beginning to worry there's no way to get around rebuilding our base schema, though. Any suggestions to the contrary would be helpful!
P.S. I've noticed in the CloudWatch logs that the appropriate JSON response exists under the context.result.data response field, but somehow there's the following transformedTemplate (which, again, I find very unusual considering we're not applying any mapping template except to transform the result into valid JSON):
"arn": ...
"transformedTemplate": "{data={tourist={reqs={sourceURL=https://travel.state.gov/content/passports/en/country/brazil.html, visaFree=false, type=eVisa required, stayLimit=30 days from date of entry}, pages=One page per stamp required}}, resIds=USA:BRA, cardType=visa, id=info/en/usa/bra/visa}",
"context": ...
Apologies for the lengthy question, but I'm stumped.
AWSJSON is a JSON string type, so you will always get back a string value (this is what your type definition must adhere to).
You could try to make a type for the data field that contains all possible fields and then resolve each field according to the parent type, or alternatively you could try to implement GraphQL interfaces.
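Concretely, since AWSJSON comes back as a string, one workaround is simply to parse it on the client after the query returns. A TypeScript sketch using the response shape from the question:

// The "data" field arrives as a JSON-encoded string (shape from the question).
const queryResult = {
  genericItemQuery: {
    name: "info/en/usa/bra/visa",
    country: "USA:BRA",
    cardType: "visa",
    data: "{\"tourist\":{\"pages\":\"One page per stamp required\"}}",
  },
};

// Parse once at the boundary; from here on it is a real object.
const itemData = JSON.parse(queryResult.genericItemQuery.data);
console.log(itemData.tourist.pages); // "One page per stamp required"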
I'm using the JSONProvider from FSharp-Data to automatically create types for a webservice that I'm consuming using sample responses from the service.
However, I'm a bit confused when it comes to types that are reused in the service. For example, one API method returns a single item of type X while another returns a list of X, and so on. Do I really have to generate multiple definitions for this, and won't that mean I will have duplicate types for the same thing?
So, I guess what I'm really asking, is there a way to create composite types from types generated from JSON samples?
If you call JsonProvider separately with separate samples, then you will get duplicate types for the same things in the sample. Sadly, there is not much that the F# Data library can do about this.
One option that you have would be to pass multiple samples to the JsonProvider at the same time (using the SampleIsList parameter). In that case, it tries to find one type for all the samples you provide - but it will also share types with the same structure among all the samples.
I assume you do not want to get one type for all your samples - in that case, you can wrap the individual samples with additional JSON object like this (here, the real samples are the records nested under "one" and "two"):
type J = JsonProvider<"""
[ { "one": { "person": {"name": "Tomas"} } },
{ "two": { "num": 42, "other": {"name": "Tomas"} } } ]""", SampleIsList=true>
Now, you can run the Parse method and wrap the samples in a new JSON object using "one" or "two", depending on which sample you are processing:
let j1 = """{ "person": {"name": "Tomas"} }"""
let o1 = J.Parse("""{"one":""" + j1 + "}").One.Value
let j2 = """{ "num": 42, "other": {"name": "Tomas"} }"""
let o2 = J.Parse("""{"two":""" + j2 + "}").Two.Value
The "one" and "two" records are completely arbitrary (I just added them to have two separate names). We wrap the JSON before parsing it and then we access it using the One or Two property. However, it means that o1.Person and o2.Other are now of the same type:
o1.Person = o2.Other
This returns false because we do not implement equality on JSON values in F# Data, but it type checks - so the types are the same.
This is fairly complicated, so I would probably look for other ways of doing what you need - but it is one way to get shared types among multiple JSON samples.