How to get all errors from Lua JSON schema validation - json

I am able to work with Lua JSON schema validators like ljsonschema and rapidjson, but I noticed that none of them report all the errors; they abort on the first one.
Is it possible to get the complete list of errors if the input JSON has more than one validation issue?
For example, for a schema like
{
  "type" : "object",
  "properties" : {
    "foo" : { "type" : "string" },
    "bar" : { "type" : "number" }
  }
}
the sample JSON { "foo": 12, "bar": "42" } should give 2 errors. However, I get only one error:
property "foo" validation failed: wrong type: expected string, got number
How can I get both of the errors below in the same run?
property "foo" validation failed: wrong type: expected string, got number
property "bar" validation failed: wrong type: expected number, got string
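Neither validator exposes a collect-all-errors mode, so one workaround is to compile a validator per property and accumulate the failures yourself. A minimal sketch against ljsonschema's generate_validator API, using the schema and error format from the example above (treat it as a simplification: validating each property value in isolation ignores cross-property keywords like required):
local jsonschema = require 'jsonschema'  -- ljsonschema

-- One sub-schema per property, taken from the example schema above.
local properties = {
  foo = { type = 'string' },
  bar = { type = 'number' },
}

-- Pre-compile a validator for each property (compilation is expensive).
local validators = {}
for name, subschema in pairs(properties) do
  validators[name] = jsonschema.generate_validator(subschema)
end

-- Validate every property and collect all failures instead of
-- aborting on the first one.
local function validate_all(data)
  local errors = {}
  for name, validate in pairs(validators) do
    local ok, err = validate(data[name])
    if not ok then
      errors[#errors + 1] =
        ('property "%s" validation failed: %s'):format(name, err)
    end
  end
  return #errors == 0, errors
end

local ok, errs = validate_all({ foo = 12, bar = "42" })
if not ok then
  for _, e in ipairs(errs) do print(e) end  -- prints both errors
end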

Related

Cloudformation Lists & Strings :: Value of property SecurityGroupIds must be of type List of String

I am getting the following error:
"errorMessage": "Stack ID: stack-qenlamel5rn7p1icu Failure Reason: [Instance creation failed with reason: Value of property SecurityGroupIds must be of type List of String, stack-qenlamel5rn7p1icu creation failed with reason: The following resource(s) failed to create: [Instance].
The code I am using is:
"SecurityGroupIds": [{
"Fn::ImportValue": {
"Fn::Sub": "${EnvIdentifier}-EC2SecurityGroup"
}
}, {
"Ref": "SecurityGroups"
}],
So as you can see, I'm using both Ref and Fn::ImportValue together to create a single 'List of String'. I've tried both separately and they work. Namely:
1. Works:
"SecurityGroupIds" : { "Ref": "SecurityGroups" },
2. Works:
"SecurityGroupIds" : [ { "Fn::ImportValue" : {"Fn::Sub": "${EnvIdentifier}-EC2SecurityGroup" } } ]
Together, I get the above error. I've tried various things like join, etc.
Since this works:
"SecurityGroupIds" : { "Ref": "SecurityGroups" },
I suspect that SecurityGroups is a parameter that takes a list of security groups. Using SecurityGroups and ImportValue at once will therefore fail, as one yields a list and the other a string.
Thus you have to construct a joined list of strings. One way is shown here in YAML, with a rough JSON translation below it.
In YAML syntax we do the following:
Value: !Join
  - ','
  - !Ref SubnetId
This produces a comma-separated list of subnet IDs.
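In JSON syntax, the same join looks like this (SubnetId as in the YAML snippet):
"Value": { "Fn::Join": [ ",", { "Ref": "SubnetId" } ] }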

How can I match fields with wildcards using jq?

I have a JSON object of the following form:
{
  "Task11c-0-20181209-12:59:30-65611" : {
    "attributes" : {
      "configname" : "Task11c",
      "datetime" : "20181209-12:59:30",
      "experiment" : "Task11c",
      "inifile" : "lab1.ini",
      "iterationvars" : "",
      "iterationvarsf" : "",
      "measurement" : "",
      "network" : "Manhattan1_1C",
      "processid" : "65611",
      "repetition" : "0",
      "replication" : "#0",
      "resultdir" : "results",
      "runnumber" : "0",
      "seedset" : "0"
    },
    ......
  },
  ......
  "Task11b-12-20181209-13:03:17-65612" : {
    ....
    ....
  },
  .......
}
I reported only the first part, but in general I have many other sub-objects whose keys look like Task11c-0-20181209-12:59:30-65611. They all have the initial word Task in common. I want to extract the processid from each sub-object. I'm trying to use a wildcard as in bash, but that doesn't seem to be possible.
I also read about the match() function, but it works on strings, not JSON objects.
Thanks for the support.
Filter the keys that start with Task and get only the attribute of your choice using the select() expression:
jq 'to_entries[] | select(.key|startswith("Task")).value.attributes.processid' json
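If you want an actual wildcard/regex rather than a fixed prefix, note that match() and test() do apply here: once you go through to_entries, the key is a plain string. A sketch equivalent to the filter above:
jq 'to_entries[] | select(.key | test("^Task")) | .value.attributes.processid' json
Either variant prints "65611" for the first sample object (and the corresponding processid for each other Task* key).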

read.json only reading the first object in Spark

I have a multi-line JSON file, and I am using Spark's read.json to read it. The problem is that it only reads the first object from the file.
val dataFrame = spark.read.option("multiLine", true).option("mode", "PERMISSIVE").json(path)
dataFrame.rdd.saveAsTextFile("DataFrame")
Sample JSON:
{
  "_id" : "589895e123c572923e69f5e7",
  "thing" : "54eb45beb5f1e061454c5bf4",
  "timeline" : [
    {
      "reason" : "TRIP_START",
      "timestamp" : "2017-02-06T17:20:18.007+02:00",
      "type" : "TRIP_EVENT",
      "location" : [
        11.1174091,
        69.1174091
      ],
      "endLocation" : [],
      "startLocation" : []
    },
    {
      "reason" : "TRIP_END",
      "timestamp" : "2017-02-06T17:25:26.026+02:00",
      "type" : "TRIP_EVENT",
      "location" : [
        11.5691428,
        48.1122443
      ],
      "endLocation" : [],
      "startLocation" : []
    }
  ],
  "__v" : 0
}
{
  "_id" : "589895e123c572923e69f5e8",
  "thing" : "54eb45beb5f1e032241c5bf4",
  "timeline" : [
    {
      "reason" : "TRIP_START",
      "timestamp" : "2017-02-06T17:20:18.007+02:00",
      "type" : "TRIP_EVENT",
      "location" : [
        11.1174091,
        50.1174091
      ],
      "endLocation" : [],
      "startLocation" : []
    },
    {
      "reason" : "TRIP_END",
      "timestamp" : "2017-02-06T17:25:26.026+02:00",
      "type" : "TRIP_EVENT",
      "location" : [
        51.1174091,
        69.1174091
      ],
      "endLocation" : [],
      "startLocation" : []
    }
  ],
  "__v" : 0
}
I only get the first entry, with _id = 589895e123c572923e69f5e7.
Is there something that I am doing wrong?
Are you sure multiple multi-line JSON objects are supported? The documentation says:
Each line must contain a separate, self-contained valid JSON object... For a regular multi-line JSON file, set the multiLine option to true
http://spark.apache.org/docs/latest/sql-programming-guide.html#json-datasets
Where a "regular JSON file" means the entire file is a singular JSON object / array, however, simply putting {} around your data won't work because you need a key for every object, and so you'd need a top level key, maybe say "objects". Similarly, you can try an array, but wrapping with []. Either way, these will only work if every object in that array or object is separated by commas.
tl;dr - the whole file needs to be one valid JSON document when multiLine=true
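For instance, the array form would look like this (object bodies elided, as in the question):
[
  { "_id" : "589895e123c572923e69f5e7", ... },
  { "_id" : "589895e123c572923e69f5e8", ... }
]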
You're only getting one object because it parses the first set of brackets, and that's it.
If you have full control over the JSON file, the indented layout is purely for human consumption. Just flatten the objects and let Spark parse it as the API is intended to be used.
Keep one JSON value per line in the file and remove .option("multiLine", true),
like this:
{"name":"Michael"}
{"name":"Andy", "age":30}
{"name":"Justin", "age":19}

Mapping definition for [suggest] has unsupported parameters: [payloads : true]

I am using an example straight from the Elasticsearch documentation here for the Completion Suggester, but I am getting an error saying payloads: true is an unsupported parameter, which obviously should be supported unless the docs are wrong. I have the latest Elasticsearch install (5.3.0).
Here is my cURL:
curl -X PUT localhost:9200/search/pages/_mapping -d '{
  "pages" : {
    "properties": {
      "title": {
        "type" : "string"
      },
      "suggest" : {
        "type" : "completion",
        "analyzer" : "simple",
        "search_analyzer" : "simple",
        "payloads" : true
      }
    }
  }
}';
And the error:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "mapper_parsing_exception",
        "reason" : "Mapping definition for [suggest] has unsupported parameters: [payloads : true]"
      }
    ],
    "type" : "mapper_parsing_exception",
    "reason" : "Mapping definition for [suggest] has unsupported parameters: [payloads : true]"
  },
  "status" : 400
}
The payloads parameter has been removed in Elasticsearch 5.3.0 by the following commit: Remove payload option from completion suggester. Here is the commit message:
The payload option was introduced with the new completion
suggester implementation in v5, as a stop gap solution
to return additional metadata with suggestions.
Now we can return associated documents with suggestions
(#19536) through fetch phase using stored field (_source).
The additional fetch phase ensures that we only fetch
the _source for the global top-N suggestions instead of
fetching _source of top results for each shard.
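So on 5.3.0 the mapping should be accepted once that one parameter is dropped; the same cURL minus the payloads line:
curl -X PUT localhost:9200/search/pages/_mapping -d '{
  "pages" : {
    "properties": {
      "title": {
        "type" : "string"
      },
      "suggest" : {
        "type" : "completion",
        "analyzer" : "simple",
        "search_analyzer" : "simple"
      }
    }
  }
}'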

Invalid request error in AWS::Route53::RecordSet when creating stack with AWS CloudFormation json

I get an invalid request error in AWS::Route53::RecordSet when creating a stack with an AWS CloudFormation JSON template. Here is the error:
CREATE_FAILED AWS::Route53::RecordSet ApiRecordSet Invalid request
Here is the ApiRecordSet:
"ApiRecordSet" : {
"Type" : "AWS::Route53::RecordSet",
"Properties" : {
"AliasTarget" :{
"DNSName": {"Fn::GetAtt" : ["RestELB", "CanonicalHostedZoneName"]},
"HostedZoneId": {"Fn::GetAtt": ["RestELB", "CanonicalHostedZoneNameID"]}
},
"HostedZoneName" : "some.net.",
"Comment" : "A records for my frontends.",
"Name" : {"Fn::Join": ["", ["api",{"Ref": "Env"},".some.net."]]},
"Type" : "A",
"TTL" : "300"
}
}
What is wrong/invalid in this request?
The only thing I see immediately wrong is that you are using both an AliasTarget and a TTL at the same time. You can't do that, since the record uses the TTL defined in the AliasTarget; see the corrected snippet below. For more info, check out the documentation on RecordSet here.
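The record set from the question with the TTL removed (everything else unchanged):
"ApiRecordSet" : {
  "Type" : "AWS::Route53::RecordSet",
  "Properties" : {
    "AliasTarget" : {
      "DNSName" : { "Fn::GetAtt" : [ "RestELB", "CanonicalHostedZoneName" ] },
      "HostedZoneId" : { "Fn::GetAtt" : [ "RestELB", "CanonicalHostedZoneNameID" ] }
    },
    "HostedZoneName" : "some.net.",
    "Comment" : "A records for my frontends.",
    "Name" : { "Fn::Join" : [ "", [ "api", { "Ref" : "Env" }, ".some.net." ] ] },
    "Type" : "A"
  }
}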
I also got this error and fixed it by removing the "SetIdentifier" field on record sets where it was not needed.
It is only needed when the "Name" and "Type" fields of multiple records are the same.
Documentation on AWS::Route53::RecordSet