I have to upgrade a JavaScript application that validates JSON with JSON Schema. The old version uses tv4 to validate against JSON Schema draft-4, and I need to use draft-7 in the new software.
I simply dropped a draft-7 schema file into the current code. It worked fine at first, but later the app started to show errors coming from tv4.
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "lastName": {
      "type": "string"
    },
    ...
  }
}
My question is: can I use tv4 with draft-7? Is there a draft-7-capable library to replace tv4?
I found that I can use the ajv library to replace tv4.
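For reference, a minimal sketch of what the switch could look like (the schema and data are placeholders based on the question; ajv v6 and later validate draft-7 schemas by default):

// Minimal sketch: replacing tv4 with ajv for draft-7 validation.
const Ajv = require("ajv");

const ajv = new Ajv({ allErrors: true }); // collect all errors, similar in spirit to tv4.validateMultiple

const schema = {
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "lastName": { "type": "string" }
  }
};

const validate = ajv.compile(schema);

const data = { lastName: "Smith" };
if (!validate(data)) {
  console.log(validate.errors); // roughly the role of tv4's result.errors
}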
Related question:
I'm editing an OpenAPI JSON spec in IntelliJ. The automatic validation and code completion work very nicely.
The OpenAPI version used is 3.0.3, which IntelliJ detects correctly. It seems that it uses "openapi30.json" internally for validation, and all is good.
However, the file is getting very large and it's time to move some commonly-used models out of it using $ref.
This is where things break. The main spec looks like this (snippet):
{
  "openapi": "3.0.3",
  "info": {
    "title": "Cars REST API",
    "description": "Calls, Responses and DTOs for REST",
    "version": "1.0.0"
  },
  "components": {
    "schemas": {
      "car": {
        "$ref": "car.json"
      },
      "car-group": {
        "$ref": "car-group.json"
      }
And when editing it, IntelliJ recognizes it as "openapi30".
However, the referenced documents are not recognized. For example, the car.json file looks like this:
{
  "car": {
    "required": [
      "id",
      "name"
    ],
    "properties": {
      "id": {
        "type": "integer",
        "format": "int64"
      },
      "name": {
        "type": "string"
      },
      "tag": {
        "type": "string"
      }
    }
  }
}
And it's recognized simply as a JSON document, not an OpenAPI one, so there is no proper validation and no code completion, etc.
How does one tell IntelliJ that the file is part of an OpenAPI specification, to be validated as such? One would think this could be inferred from being $ref'ed from the main spec, but this doesn't work.
Trying to add a $schema value in the referenced file had no effect (and probably isn't in line with the OpenAPI spec anyway).
Manually selecting the OpenAPI 3.0 file type for car.json is not helpful, because then validation (rightly) fails - as it doesn't have the top-level structure required (info, openapi, paths).
Perhaps some specific JSON Schema mapping needs to be added in IntelliJ preferences? If that's the case, since it would actually be a sub-schema or some part of the main OpenAPI spec, how can that be done?
IntelliJ version is: IntelliJ IDEA 2021.3.2 (Ultimate Edition)
Any help would be greatly appreciated.
Ron!
Such functionality is not yet supported. Please vote for https://youtrack.jetbrains.com/issue/IDEA-284305
I am serializing C# objects to the Avro file format using the Microsoft-Avro-Core NuGet package. The issue I am having is that the Avro schema contains a namespace property in the JSON schema definition, which is not included in the schema of the serialized Avro file.
For example:
{
  "name": "typeName",
  "type": [
    {
      "type": "record",
      "name": "recordName",
      "namespace": "topLevelRecord.record_data",
      "fields": [
      ]
    },
    "null"
  ]
}
After serializing to Avro, the embedded schema definition is written without the namespace, like so:
{
  "name": "typeName",
  "type": [
    {
      "type": "record",
      "name": "recordName",
      "fields": [
      ]
    },
    "null"
  ]
}
This is creating an issue for us, since the consuming code no longer sees the namespace. How can I tell the serializer to include the namespace property in the serialized Avro file?
Setting the Name and Namespace properties of the DataContract attribute does not work; it simply prefixes the serialized name property with the namespace, which is not what we need.
We are using the SequentialWriter to serialize a collection of records. The AvroSerializationSettings object does not contain a property to enforce the serialization of the namespace.
Any help would be much appreciated.
Thanks
So after some research, it turns out the RecordSchema serializer doesn't include the namespace. Reviewing the code on GitHub shows that it uses the FullName property as the name and leaves out the namespace.
Microsoft.Avro-Core
We did not define the schema, but we have to use it, and to be compliant the serialized schema must include the namespace.
I am trying to create a schema for a piece of JSON and have slimmed down an example of what I am trying to achieve.
I have the following JSON schema:
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "title": "Set name",
  "description": "The example schema",
  "type": "object",
  "properties": {
    "name": {
      "type": "string"
    }
  },
  "additionalProperties": false
}
The following JSON is classed as valid when compared to the schema:
{
  "name": "W",
  "name": "W"
}
I know that there should be a warning about the two fields having the same name, but is there a way to force the validation to fail if the above is submitted? I want it to validate only when there is a single occurrence of the field 'name'.
This is outside of the responsibility of JSON Schema. JSON Schema is built on top of JSON, and in JSON the behavior of duplicate properties in an object is undefined. If you want a warning about this, you should run the document through a separate validation step that checks for problems like duplicate keys before passing it to a JSON Schema validator.
There is a maxProperties constraint that can limit the total number of properties in an object.
Having data with duplicated properties is a tricky case, though, as many JSON decoding implementations ignore duplicates, so your JSON Schema validation library would not even know a duplicate existed.
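A quick JavaScript illustration of that last point (the schema and input mirror the question; nothing here is specific to any particular validator):

// JSON.parse keeps only the last occurrence of a duplicate key,
// so the validator never sees the duplicate at all.
const Ajv = require("ajv");

const raw = '{ "name": "W", "name": "W" }';
const data = JSON.parse(raw);          // => { name: "W" }; the duplicate is silently collapsed
console.log(Object.keys(data).length); // 1

const schema = {
  "type": "object",
  "properties": { "name": { "type": "string" } },
  "additionalProperties": false,
  "maxProperties": 1
};

const validate = new Ajv().compile(schema);
console.log(validate(data)); // true; even maxProperties cannot catch the duplicate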
I'm using Pentaho Data Integration (Kettle) for an ETL process, extracting from a MongoDB source.
My source has an ISODate field, so the JSON returned from the extraction looks like:
{ "_id" : { "$oid" : "533a0180e4b026f66594a13b"} , "fac_fecha" : { "$date" : "2014-04-01T00:00:00.760Z"} , "fac_fedlogin" : "KAYAK"}
Now I have to deserialize this JSON with an Avro Input step, so I've defined the Avro schema like this:
{
  "type": "record",
  "name": "xml_feeds",
  "fields": [
    {"name": "fac_fedlogin", "type": "string"},
    {"name": "fac_empcod", "type": "string"},
    {"name": "fac_fecha", "type": "string"}
  ]
}
Ideally fac_fecha would be a date type, but Avro doesn't support this.
At execution time, the Avro Input step rejects all rows as erroneous. This only occurs when I use the date field.
Any suggestions on how I can do this?
Kettle version: 4.4.0
Pentaho-big-data-plugin: 1.3.0
You can convert this date string to a long (milliseconds since the epoch).
This can be done in both Java and JavaScript.
You can then convert the long back to a Date if required.
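For example, a rough JavaScript sketch of that conversion (the field name fac_fecha and the sample value are taken from the question; where exactly you run this depends on your transformation, e.g. a JavaScript scripting step):

// Convert the ISO date string extracted from MongoDB's {"$date": ...} into epoch milliseconds.
// Note: ISO-8601 parsing via the Date constructor assumes an ES5-capable engine;
// older script engines may need manual parsing.
var fac_fecha_str = "2014-04-01T00:00:00.760Z";
var fac_fecha_ms = new Date(fac_fecha_str).getTime(); // a long: 1396310400760

// Declare fac_fecha as "long" in the Avro schema and store fac_fecha_ms there.

// Later, convert the long back to a Date where needed:
var fac_fecha_date = new Date(fac_fecha_ms);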
The easiest solution I found for this problem was upgrading the Pentaho Big Data Plugin to a newer version, 1.3.3.
With this new version, explicitly defining the schema for the MongoDB Input JSON is no longer needed, so the final setup is as follows:
(Screenshots omitted: a global view of the transformation and the MongoDB Input step settings.)
The schema is determined automatically and can be modified if needed.
I'm writing my first Avro schema, which uses JSON as the schema language. I know you cannot put comments into plain JSON, but I'm wondering if the Avro tool allows comments, e.g. perhaps it strips them (like a preprocessor) before parsing the JSON.
Edit: I'm using the C++ Avro toolchain
Yes, but it is limited. In the schema, Avro data types 'record', 'enum', and 'fixed' allow for a 'doc' field that contains an arbitrary documentation string. For example:
{"type": "record", "name": "test.Weather",
"doc": "A weather reading.",
"fields": [
{"name": "station", "type": "string", "order": "ignore"},
{"name": "time", "type": "long"},
{"name": "temp", "type": "int"}
]
}
From the official Avro spec:
doc: a JSON string providing documentation to the user of this schema (optional).
https://avro.apache.org/docs/current/spec.html#schema_record
An example:
https://github.com/apache/avro/blob/33d495840c896b693b7f37b5ec786ac1acacd3b4/share/test/schemas/weather.avsc#L2
Yes, you can use C-style comments in an Avro JSON schema: /* something */ or // something. The Avro tools ignore these expressions during parsing.
EDIT: This only works with the Java API.
According to the current (1.9.2) Avro specification, extra attributes that are not defined by the spec are allowed and treated as metadata.
This allows you to add comments like this:
{
  "type": "record",
  "name": "test",
  "comment": "This is a comment",
  "//": "This is also a comment",
  "TODO": "As per this comment we should remember to fix this schema",
  "fields": [
    {
      "name": "a", "type": "long"
    },
    {
      "name": "b", "type": "string"
    }
  ]
}
No, it can't in either the C++ or the C# version (as of 1.7.5). If you look at the code, they just shove the JSON into the JSON parser without any comment preprocessing - a bizarre programming style. Documentation and language support appear to be pretty sloppy...