Newtonsoft JSON Schema - $ref is resolved, but required is ignored

I externalized a portion of my JSON schema into a separate schema file, referenced for example as:
"$ref": "http://schema.company.com/boroughschema.json"
Within that schema I specify required properties, but when validating a known-bad JSON file, validation doesn't complain that a required property is missing.
"required": [
"Name",
"Representative",
"District"
]
I purposely leave off "District" in the source JSON, and there are no complaints when validating.
Using Newtonsoft.Json.Schema 3.0.11.
The original single-file schema validates just fine, and if I move the schema portion into definitions, that works as well.
private bool ValidateViaExternalReferences(string jsonstring)
{
    JSchemaPreloadedResolver resolver = new JSchemaPreloadedResolver();

    // Load the main schema
    var schemaText = System.IO.File.ReadAllText(COMPLEXSCHEMAFILE);

    // Rather than rely on 100% HTTP access, use a resolver and
    // preload the schema for http://schema.company.com/boroughschema.json
    var schemaTextBorough = System.IO.File.ReadAllText(BOROUGHSCHEMAFILE);
    resolver.Add(new Uri("http://schema.company.com/boroughschema.json"), schemaTextBorough);

    JSchema schema = JSchema.Parse(schemaText, resolver);
    JToken json = JToken.Parse(jsonstring);

    // Validate the JSON against the schema
    IList<ValidationError> errors;
    bool valid = json.IsValid(schema, out errors);
    if (!valid)
    {
        foreach (var validationerr in errors)
        {
            Append2Log(validationerr.ToString());
        }
    }
    return valid;
}
Missing "District" yields no errors, I expect the same correct behavior when using the original schema.

I had placed the
"required": [
  "Name",
  "Representative",
  "District"
]
fragment in the wrong place. I apologize; with the fragment in the correct location, validation behaves as expected.
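For anyone making the same mistake: required belongs at the object level, alongside properties, not nested inside an individual property definition. A minimal sketch of what the referenced borough schema might look like (property names from the question; everything else is an assumption):

{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "type": "object",
  "properties": {
    "Name": { "type": "string" },
    "Representative": { "type": "string" },
    "District": { "type": "string" }
  },
  "required": [ "Name", "Representative", "District" ]
}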

Creating valid JSON for container_definitions in Terraform

I'm creating an aws_ecs_task_definition resource. Within that resource I need a container_definitions, which must be a JSON string. I'd like to add multiple secrets to that definition from a list of strings, e.g. ["var1", "var2"].
The output I need looks like:
"secrets": [
  {
    "name": "var1",
    "valueFrom": "arn:somestuffvar1"
  },
  {
    "name": "var2",
    "valueFrom": "arn:somestuffvar2"
  }
],
I have tried string interpolation and templatefile; this is the relevant section from my .tftpl:
  "secrets": [
    %{ for myvar in myvars ~}
    {
      "name": "${myvar}",
      "valueFrom": "arn:somestuff${myvar}"
    }
    %{ endfor }
  ],
The problem is the commas. The above gives me:
[
  {
    "name": "var1",
    "valueFrom": "arn:somestuffvar1"
  }
  {
    "name": "var2",
    "valueFrom": "arn:somestuffvar2"
  }
],
with no comma between the two objects. If I add a comma, then I get a trailing comma:
[
  {
  },
  {
  },
],
I've tried a zillion syntax variations, I've tried jsonencode on the interpolated string, I've tried stripping the trailing comma. Nothing gives me valid JSON. What am I missing?
The templatefile function documentation has a section specifically about Generating JSON or YAML from a template, which explicitly discourages using string templating to try to build valid JSON from string fragments like this.
Instead, you should use the jsonencode function as the entire definition of your template, and thus let Terraform be the one to worry about generating valid JSON syntax. You then only need to worry about writing an expression that describes the data structure that remote system expects.
In your case, a template generating a JSON object with just this "secrets" property would look like this:
${jsonencode({
  secrets = [
    for myvar in myvars : {
      name      = myvar
      valueFrom = "arn:somestuff${myvar}"
    }
  ],
})}
Notice that the entire template consists of a single interpolation sequence ${ ... } and the expression inside it is one big call to the jsonencode function, with the argument describing the data structure to serialize. Therefore inside that argument we're using normal Terraform expression syntax (like you'd write in a resource argument in a .tf file) rather than the special template interpolation/repetition syntaxes. In particular, the value of "secrets" is defined using a for expression.
With your { myvars = ["var1", "var2"] } template variables, this will produce a minified version of the following JSON structure, which I'm showing with manually-added indentation and newlines just so you can read it:
{
  "secrets": [
    {
      "name": "var1",
      "valueFrom": "arn:somestuffvar1"
    },
    {
      "name": "var2",
      "valueFrom": "arn:somestuffvar2"
    }
  ]
}
I understand that you're only showing a fragment of the template here and so the above won't include all of the other properties included in your template, but hopefully you can see how to use Terraform expression syntax to describe those properties as Terraform object attributes too, so that the overall result of this template will be a valid JSON serialization of the total data structure.
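For completeness, here is a sketch of how such a template might be wired into the resource itself; the file name, family, and the omitted task-definition arguments are assumptions for illustration:

resource "aws_ecs_task_definition" "example" {
  family = "example"

  # Render the template above, passing in the list of secret names
  container_definitions = templatefile("${path.module}/containers.json.tftpl", {
    myvars = ["var1", "var2"]
  })

  # ... other required arguments omitted ...
}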

Node-RED parse json

I am trying to pull out the value 533 from the following payload:
{
  "d": {
    "ItemValue_1": 533
  },
  "ts": "2021-01-20T10:59:41.958591"
}
This does not work:
var ItemValue_1 = msg.payload.ItemValue_1;
msg.payload = ItemValue_1;
return msg;
My result is unsuccessful.
I was able to solve this on my own; the following works:
var sensorMeasurement = JSON.parse(msg.payload);
msg.payload = sensorMeasurement.d;
var msgout = { payload: msg.payload.ItemValue_1 };
return msgout;
The better way to do this is as follows:
Add a JSON node before the function node; this will turn a string payload into a JSON object (assuming the string actually represents a JSON object).
Then, if you are using a function node, the following:
msg.payload = msg.payload.d.ItemValue_1;
return msg;
It is bad practice to create a new object as you did in your answer, because it throws away any metadata attached to the original object that may be needed later.
Rather than use a function node, it would also be cleaner to use a Change node with the Move mode to shift msg.payload.d.ItemValue_1 to msg.payload, as sketched below.
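For reference, this is roughly what that Change node looks like when exported as flow JSON (the id, name, and wires values are placeholders; the rules entry is the part that expresses the Move):

{
  "id": "abc123",
  "type": "change",
  "name": "move ItemValue_1",
  "rules": [
    {
      "t": "move",
      "p": "payload.d.ItemValue_1",
      "pt": "msg",
      "to": "payload",
      "tot": "msg"
    }
  ],
  "wires": [[]]
}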

Return nested JSON in AWS AppSync query

I'm quite new to AppSync (and GraphQL) in general, but I'm running into a strange issue when hooking up resolvers to our DynamoDB tables. Specifically, we have a nested Map structure for one of our item's attributes that is arbitrarily constructed (its complexity and form depend on the type of parent item), a little something like this:
"item" : {
"name": "something",
"country": "somewhere",
"data" : {
"nest-level-1a": {
"attr1a" : "foo",
"attr1b" : "bar",
"nest-level-2" : {
"attr2a": "something else",
"attr2b": [
"some list element",
"and another, for good measure"
]
}
}
},
"cardType": "someType"
}
Our accompanying GraphQL type is the following:
type Item {
  name: String!
  country: String!
  cardType: String!
  data: AWSJSON! ## note: it was originally String!
}
When we query the item we get the following response:
{
  "data": {
    "genericItemQuery": {
      "name": "info/en/usa/bra/visa",
      "country": "USA:BRA",
      "cardType": "visa",
      "data": "{\"tourist\":{\"reqs\":{\"sourceURL\":\"https://travel.state.gov/content/passports/en/country/brazil.html\",\"visaFree\":false,\"type\":\"eVisa required\",\"stayLimit\":\"30 days from date of entry\"},\"pages\":\"One page per stamp required\"}}"
    }
  }
}
The problem is we can't seem to get the Item.data field resolver to return a JSON object (even when we attach a separate field-level resolver to it on top of the general Query resolver). It always returns a String and, weirdly, if we change the expected field type to String!, the response will replace all : in data with =. We've tried everything with our response resolvers, including suggestions like How return JSON object from DynamoDB with appsync?, but we're completely stuck at this point.
Our current response resolver for our query has been reverted back to the standard response after none of the suggestions in the aforementioned post worked:
## 'Before' response mapping template on genericItemQuery query; same result as the 'After' listed below
#set($result = $ctx.result)
#set($result.data = $util.parseJson($ctx.result.data))
$util.toJson($result)

## 'After' response mapping template
$util.toJson($ctx.result)
We're trying to avoid a situation where we need to include supporting types for each nest level in data (since it changes based on parent Item type and in cases like the example I gave it can have three or four tiers), and we thought changing the schema type to AWSJSON! would do the trick. I'm beginning to worry there's no way to get around rebuilding our base schema, though. Any suggestions to the contrary would be helpful!
P.S. I've noticed in the CloudWatch logs that the appropriate JSON response exists under the context.result.data response field, but somehow there's the following transformedTemplate (which, again, I find very unusual considering we're not applying any mapping template except to transform the result into valid JSON):
"arn": ...
"transformedTemplate": "{data={tourist={reqs={sourceURL=https://travel.state.gov/content/passports/en/country/brazil.html, visaFree=false, type=eVisa required, stayLimit=30 days from date of entry}, pages=One page per stamp required}}, resIds=USA:BRA, cardType=visa, id=info/en/usa/bra/visa}",
"context": ...
Apologies for the lengthy question, but I'm stumped.
AWSJSON is a JSON string type so you will always get back a string value (this is what your type definition must adhere to).
You could try to make a type for the data field which contains all possible fields and then resolve each field according to the parent type, or alternatively you could try to implement GraphQL interfaces.
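Since AWSJSON always arrives serialized, the practical workaround is to parse the string on the client once the query returns. A minimal sketch against the response shape from the question (the response variable name is an assumption):

// response of the genericItemQuery from the question
const item = response.data.genericItemQuery;

// AWSJSON is delivered as a string, so deserialize it client-side
const data = JSON.parse(item.data);
console.log(data.tourist.reqs.visaFree); // false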

Avro Json.ObjectWriter - "Not the Json schema" error

I'm writing a tool to convert data from a homegrown format to Avro, JSON and Parquet, using Avro 1.8.0. Conversion to Avro and Parquet is working okay, but JSON conversion throws the following error:
Exception in thread "main" java.lang.RuntimeException: Not the Json schema:
{"type":"record","name":"Torperf","namespace":"converTor.torperf",
"fields":[{"name":"descriptor_type","type":"string","
[... rest of the schema omitted for brevity]
Irritatingly, this is the very schema that I passed in and which I indeed want the converter to use. I have no idea what Avro is complaining about.
This is the relevant snippet of my code:
// parse the schema file
Schema.Parser parser = new Schema.Parser();
Schema mySchema;
// tried two ways to load the schema
// like this
File schemaFile = new File("myJsonSchema.avsc");
mySchema = parser.parse(schemaFile);
// and also the way Json.class loads its schema
mySchema = parser.parse(Json.class.getResourceAsStream("myJsonSchema.avsc"));

// initialize the writer
Json.ObjectWriter jsonDatumWriter = new Json.ObjectWriter();
jsonDatumWriter.setSchema(mySchema);
OutputStream out = new FileOutputStream(new File("output.avro"));
Encoder encoder = EncoderFactory.get().jsonEncoder(mySchema, out);

// append a record created by way of a specific mapping
jsonDatumWriter.write(specificRecord, encoder);
I replaced myJsonSchema.avsc with the one returned from the exception without success (and except whitespace and linefeeds they are the same). Initializing the jsonEncoder with org.apache.avro.data.Json.SCHEMA instead of mySchema didn't change anything either. Replacing the schema passed to Json.ObjectWriter with org.apache.avro.data.Json.SCHEMA leads to a NullPointerException at org.apache.avro.data.Json.write(Json.java:183) (which is a deprecated method).
From staring at org.apache.avro.data.Json.java it seems to me that Avro is checking my record schema against its own schema of a Json record (line 58) for equality (line 73).
58  SCHEMA = Schema.parse(Json.class.getResourceAsStream("/org/apache/avro/data/Json.avsc"));

72  public void setSchema(Schema schema) {
73    if (!Json.SCHEMA.equals(schema))
74      throw new RuntimeException("Not the Json schema: " + schema);
75  }
The referenced Json.avsc defines the field types of a record:
{"type": "record", "name": "Json", "namespace":"org.apache.avro.data",
"fields": [
{"name": "value",
"type": [
"long",
"double",
"string",
"boolean",
"null",
{"type": "array", "items": "Json"},
{"type": "map", "values": "Json"}
]
}
]
}
equals is implemented in org.apache.avro.Schema, line 346:
public boolean equals(Object o) {
  if (o == this) {
    return true;
  } else if (!(o instanceof Schema)) {
    return false;
  } else {
    Schema that = (Schema) o;
    return this.type != that.type ? false : this.equalCachedHash(that) && this.props.equals(that.props);
  }
}
I don't fully understand what's going on in the third check (especially equalCachedHash()), but I only see trivial equality checks, which doesn't explain the failure.
Also I can't find any examples or notes about usage of Avro's Json.ObjectWriter on the InterWebs. I wonder if I should go with the deprecated Json.Writer instead because there are at least a few code snippets online to learn and glean from.
The full source is available at https://github.com/tomlurge/converTor
Thanks,
Thomas
A little more debugging proved that passing org.apache.avro.data.Json.SCHEMA to Json.ObjectWriter is indeed the right thing to do. The object I get back, written to System.out, prints the JSON that I expect. The NullPointerException, though, did not go away.
Probably I would not have had to call setSchema() on Json.ObjectWriter at all, since omitting the call altogether leads to the same NullPointerException.
I finally filed a bug with Avro, and it turned out that in my code I was handing an object of type "specific" to ObjectWriter, which it couldn't handle. It did return silently though, and an error was thrown only at a later stage. That was fixed in Avro 1.8.1 - see https://issues.apache.org/jira/browse/AVRO-1807 for details.
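For anyone on an older Avro who just needs JSON output for a record, the standard DatumWriter/JsonEncoder route sidesteps Json.ObjectWriter entirely. A hedged sketch, reusing the schema and file names from the question (exception handling omitted):

import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;
import org.apache.avro.Schema;
import org.apache.avro.io.Encoder;
import org.apache.avro.io.EncoderFactory;
import org.apache.avro.specific.SpecificDatumWriter;

// Serialize the record as JSON using its own schema, instead of
// Json.ObjectWriter (which expects plain Java objects, not specific records)
Schema mySchema = new Schema.Parser().parse(new File("myJsonSchema.avsc"));
SpecificDatumWriter<Object> writer = new SpecificDatumWriter<>(mySchema);
OutputStream out = new FileOutputStream(new File("output.json"));
Encoder encoder = EncoderFactory.get().jsonEncoder(mySchema, out);
writer.write(specificRecord, encoder);
encoder.flush();
out.close();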

Can I get MOXy to not output an element when generating json?

An instance of my JAXB object model contains an element that I want output when I generate XML for the instance, but not when I generate JSON.
I.e. I want:
<release-group>
  <type>Album</type>
  <title>Fred</title>
</release-group>
and
"release-group" : {
"title" : "fred",
},
but instead I get:
"release-group" : {
"type" : "Album",
"title" : "fred"
},
Can I do this using the oxml.xml mapping file?
This answer (Can I get MOXy to not output an attribute when generating json?) shows how to do it for attributes using xml-transient, but I cannot get that to work for an element.
Sorry, problem solved; it was a bit of confusion on my part.
The example I gave above didn't accurately match the real situation: type was actually output as an attribute for XML, but use of xml-transient didn't work because the attribute had been renamed in the JAXB model:
@XmlAttribute(name = "target-type", required = true)
@XmlSchemaType(name = "anyURI")
protected String targetType;
So adding
<java-type name="ReleaseGroup">
  <java-attributes>
    <xml-transient java-attribute="targetType"/>
  </java-attributes>
</java-type>
worked; previously I was incorrectly using the XML name instead of the Java attribute name:
<java-type name="ReleaseGroup">
  <java-attributes>
    <xml-transient java-attribute="target-type"/>
  </java-attributes>
</java-type>
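Note that xml-transient takes the Java attribute name (targetType), not the XML name (target-type). For reference, a hedged sketch of how a JSON-only oxm file like this might be plugged in when building the MOXy context; the package path, file name, and class are assumptions:

import java.util.HashMap;
import java.util.Map;
import javax.xml.bind.JAXBContext;
import org.eclipse.persistence.jaxb.JAXBContextFactory;
import org.eclipse.persistence.jaxb.JAXBContextProperties;

// Build a JSON-specific context that layers oxml.xml over the annotations,
// so target-type stays in the XML output but is suppressed for JSON
Map<String, Object> props = new HashMap<>();
props.put(JAXBContextProperties.OXM_METADATA_SOURCE, "mypackage/oxml.xml");
props.put(JAXBContextProperties.MEDIA_TYPE, "application/json");
JAXBContext jsonContext =
    JAXBContextFactory.createContext(new Class[] { ReleaseGroup.class }, props);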