I am pretty new to Node, so it might be that I am not using JSON Schema correctly; please correct me if I am wrong.
I have been using the npm module named jsonschema, and I am using it for validation like this:
var Validator = require('jsonschema').Validator;
var v = new Validator();
var instance = {
    "user_id" : "jesus",
    "password" : "password"
};
var schema = {
    "id" : "Login_login",
    "type" : "object",
    "additionalProperties" : false,
    "properties" : {
        "user_id" : {
            "type" : "string",
            "required" : true,
            "minLenth" : 8,
            "maxLength" : 10,
            "description" : "User id to login."
        },
        "password" : {
            "type" : "string",
            "required" : true,
            "minLength" : 8,
            "maxLength" : 10,
            "description" : "password to login."
        }
    }
};
var result = v.validate(instance, schema);
console.log('>>>>>> ' + result);
But the result contains no error, even though the minLength of user_id is set to 8 and I have passed only 5 characters. If I am not wrong, the result should report an error for this. Why is it so? :(
The schema itself needs validation: in the "user_id" property, "minLength" is spelled without a "g" ("minLenth"), and the validator silently ignores keywords it does not recognize.
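As a quick check (a minimal sketch reusing the instance from the question), correcting the spelling makes the violation show up:
var Validator = require('jsonschema').Validator;
var v = new Validator();

var instance = { "user_id" : "jesus", "password" : "password" };
var schema = {
    "id" : "Login_login",
    "type" : "object",
    "additionalProperties" : false,
    "properties" : {
        "user_id" : { "type" : "string", "required" : true, "minLength" : 8, "maxLength" : 10 },
        "password" : { "type" : "string", "required" : true, "minLength" : 8, "maxLength" : 10 }
    }
};

var result = v.validate(instance, schema);
console.log(result.valid);  // false -- "jesus" is only 5 characters
console.log(result.errors); // includes the minLength violation for user_id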
I have a JSON file like this:
{
    "type" : "record",
    "name" : "test",
    "fields" : [ {
        "name" : "y",
        "type" : {
            "type" : "array",
            "items" : "double"
        },
        "doc" : "Type inferred from '[1.0,0.9766205557100867,0.907575419670957,0.7960930657056438,0.6473862847818277,0.46840844069979015,0.26752833852922075,0.05413890858541761,-0.16178199655276473,-0.37013815533991445,-0.5611870653623823,-0.7259954919231308,-0.8568571761675893,-0.9476531711828025,-0.9941379571543596,-0.9941379571543596,-0.9476531711828025,-0.8568571761675892,-0.7259954919231307,-0.5611870653623825,-0.37013815533991445,-0.16178199655276476,0.05413890858541758,0.267528338529221,0.4684084406997903,0.6473862847818279,0.796093065705644,0.9075754196709569,0.9766205557100867,1.0]'"
    }, {
        "name" : "x",
        "type" : {
            "type" : "array",
            "items" : "double"
        },
        "doc" : "Type inferred from '[0.0,0.21666156231653746,0.43332312463307493,0.6499846869496124,0.8666462492661499,1.0833078115826873,1.2999693738992248,1.5166309362157622,1.7332924985322997,1.9499540608488373,2.1666156231653746,2.383277185481912,2.5999387477984497,2.8166003101149872,3.0332618724315243,3.249923434748062,3.4665849970645994,3.683246559381137,3.8999081216976745,4.116569684014212,4.333231246330749,4.549892808647287,4.766554370963824,4.983215933280362,5.199877495596899,5.416539057913437,5.6332006202299745,5.849862182546511,6.066523744863049,6.283185307179586]'"
    } ]
}
and I want to create an SQL query from this file, so I wrote this code:
import groovy.json.JsonSlurper

def schema = new JsonSlurper().parseText(myAttr)

//build create table statement
def createTable = "create table if not exists ${schema.name} (" +
    schema.fields.collectMany{ "\n    ${it.name.padRight(39)} ${typeMap[it.type.collectMany{it.items}]}" }.join(',') +
    "\n)"
but I think that I am not accessing the items value correctly. Can somebody help me, please?
Ok, guessing that typeMap is similar to:
def typeMap = [
    double: 'DOUBLE'
]
You can change your code to:
String createTable = """create table if not exists ${schema.name} (
|${schema.fields.collect { " ${it.name.padRight(39)} ${typeMap[it.type.items]}" }.join(',\n')}
|)""".stripMargin()
I am trying to transform an Avro file into an SQL request. My file is like this:
{
    "type" : "record",
    "name" : "warranty",
    "doc" : "Schema generated by Kite",
    "fields" : [ {
        "name" : "id",
        "type" : "long",
        "doc" : "Type inferred from '1'"
    }, {
        "name" : "train_id",
        "type" : "long",
        "doc" : "Type inferred from '21691'"
    }, {
        "name" : "siemens_nr",
        "type" : "string",
        "doc" : "Type inferred from 'Loco-001'"
    }, {
        "name" : "uic_nr",
        "type" : "long",
        "doc" : "Type inferred from '193901'"
    }, {
        "name" : "Configuration",
        "type" : "string",
        "doc" : "Type inferred from 'ZP28'"
    }, {
        "name" : "Warranty_Status",
        "type" : "string",
        "doc" : "Type inferred from 'Out_of_Warranty'"
    }, {
        "name" : "Warranty_Data_Type",
        "type" : "string",
        "doc" : "Type inferred from 'Real_based_on_preliminary_acceptance_date'"
    } ]
}
and my code is:
import groovy.json.JsonSlurper

def ff = session.get()
if(!ff) return

//parse Avro schema from flow file content
def schema = ff.read().withReader("UTF-8"){ new JsonSlurper().parse(it) }

//define type mapping
def typeMap = [
    "string"             : "varchar(255)",
    "long"               : "numeric(10)",
    [ "null", "string" ] : "varchar(255)",
    [ "null", "long" ]   : "numeric(10)"
]

//build create table statement
def createTable = "create table ${schema.name} (" +
    schema.fields.collect{ "\n    ${it.name.padRight(39)} ${typeMap[it.type]}" }.join(',') +
    "\n)"

//execute statement through the custom defined property
//SQL.mydb references http://docs.groovy-lang.org/2.4.10/html/api/groovy/sql/Sql.html object
SQL.mydb.execute(createTable)

//transfer flow file to success
REL_SUCCESS << ff
And I got this error:
ERROR nifi.processors.script.ExecuteScript ExecuteScript[id=e65b733e-0161-1000-45f0-3264d6fb51dd] ExecuteSc$ Possible solutions: getId(), find(), grep(), each(groovy.lang.Closure), find(groovy.lang.Closure), grep(java.lang.Object); rolling back session: {} org.apache.nifi.processor.exception.ProcessException: javax.script.ScriptException: javax.script.ScriptException: groovy.lang.MissingMethodException: No signature of m$ Possible solutions: getId(), find(), grep(), each(groovy.lang.Closure), find(groovy.lang.Closure), grep(java.lang.Object)
Can someone help me, please?
This references a script from another SO post; I commented there and provided an answer on a different forum, which I will copy here for completeness:
The variable createTable is a GString, not a Java String. This causes invocation of Sql.execute(GString), which converts the embedded expressions into parameters, and you can't use a parameter for a table name. Use the following instead:
SQL.mydb.execute(createTable.toString())
This will cause the invocation of Sql.execute(String), which does not try to parameterize the statement.
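As a small standalone illustration (a sketch, not from the original answer), assigning the GString to a String-typed variable performs the same coercion, so either form avoids the GString overload:
def name = 'warranty'
def stmt = "create table ${name} (id numeric(10))"
assert stmt instanceof GString   // embedded ${} expressions make this a GString

String plain = stmt              // assigning to a String variable coerces it via toString()
assert plain instanceof String   // Sql.execute(String) is now selected; nothing is parameterized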
The title of the question is self-explanatory. I want to know what differences there are between JSON Document A, which comes from an API request, and JSON Document B, which is already in MongoDB. How do I also get the changed column names and data? I am creating a log; that's why I want this.
Below is the code of what I'm trying (Node.js API code):
function Updatejob(req, res) {
    return function (jobSchedule) {
        var obj = new Date();
        CompareJSON(req, mongodbjson);
        return Job.create(req.body).then(.....)
    }
}
Data already in MongoDB, before the update record:
{
    "_id" : ObjectId("586d1032aef194155028e9c7"),
    "history" : [
        {
            "_id" : ObjectId("586d1032aef194155028e9c4"),
            "updated_by" : "",
            "details" : "Job Created",
            "changetype" : "Created",
            "datetime" : ISODate("2017-01-04T15:09:38.465Z")
        }
    ],
    "current_status" : "Pending",
    "time" : 0
}
//REQUEST FOR UPDATE DATA
{
    "_id" : ObjectId("586d1032aef194155028e9c7"),
    "history" : [
        {
            "_id" : ObjectId("586d1032aef194155028e9c4"),
            "updated_by" : "",
            "details" : "Job Completed",
            "changetype" : "Completed",
            "datetime" : ISODate("2017-01-04T15:09:38.465Z")
        }
    ],
    "current_status" : "Completed",
    "time" : 0
}
You can use jsondiffpatch:
var delta = jsondiffpatch.diff(object1, object2);
See:
https://www.npmjs.com/package/jsondiffpatch
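For example, a minimal sketch against trimmed-down versions of the two documents above (the full documents work the same way): changed scalar values show up in the delta as [oldValue, newValue] pairs.
var jsondiffpatch = require('jsondiffpatch');

// simplified versions of the stored document and the update request
var before = { "current_status" : "Pending",   "details" : "Job Created",   "changetype" : "Created" };
var after  = { "current_status" : "Completed", "details" : "Job Completed", "changetype" : "Completed" };

var delta = jsondiffpatch.create().diff(before, after);
console.log(JSON.stringify(delta, null, 2));
// e.g. { "current_status": ["Pending", "Completed"], ... }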
As the title describes, I am trying to find out whether I can instantiate a new mongoose model and schema from the JSON of an existing MongoDB document. It seems like I could do it as long as I have retrieved the document before Mongoose runs.
Is this possible? Ideally I would like to do it on the fly, where I do NOT need to restart Node.
**** Edit **** Here is what I have tried.
Note: the setTimeout is in there in case the schema creation is async. This is a down-and-dirty mockup just to see if this concept works. :)
THIS Does NOT Work!!!
function initModels(models) {
    for (var i = 0, l = models.length; i < l; i++) {
        console.log(models[i])
        exports[models[i].name + "Schema"] = mongoose.Schema(models[i].model, {collection: "ModelData"});
        (function(itrModel){
            var model = itrModel;
            setTimeout(function(){
                exports[model.name] = mongoose.model(model.name.toUpperCase(), model.name + "Schema");
            }, 2000)
        })(models[i])
    }
}
exports.getModels = function () {
    DataConnection.Connect(function (db) {
        var collectionName = "Models";
        var collection = db.collection(collectionName);
        collection.find().toArray(function (err, models) {
            initModels(models);
        })
    })
};
Here is the model that is stored in the DB:
{
    "name" : {
        "type" : "String",
        "required" : true
    },
    "email" : {
        "type" : "String",
        "required" : false,
        "index" : true
    },
    "password" : {
        "type" : "String",
        "required" : true
    },
    "role" : {
        "type" : "String",
        "required" : true
    },
    "createDate" : {
        "type" : "Number",
        "required" : true
    }
}
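For what it's worth, a minimal sketch of how this can work, assuming the stored documents wrap the definition as { name: ..., model: ... } the way initModels expects. Schema and model creation are synchronous, so the setTimeout is unnecessary; the likely bug above is that mongoose.model() is handed the string model.name + "Schema" where it needs the Schema object itself:
var mongoose = require('mongoose');

function initModels(models) {
    models.forEach(function (m) {
        // mongoose resolves string type names such as "String" and "Number",
        // so the JSON pulled from the DB can be passed straight to Schema
        var schema = new mongoose.Schema(m.model, { collection: "ModelData" });
        exports[m.name + "Schema"] = schema;
        // pass the Schema object itself, not its exported name as a string
        exports[m.name] = mongoose.model(m.name.toUpperCase(), schema);
    });
}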
I have a Mongo JSON object as follows:
{
    "_id" : new BinData(3, "RDHABb22XESWvP83FplqJw=="),
    "name" : "NEW NODE",
    "host" : null,
    "aet" : null,
    "studies" : ["1.3.12.2.1107.5.99.3.30000008061114424970500000589"],
    "testcases" : [new BinData(3, "Zhl+zIXomkqAd8NIkRiTjQ==")],
    "sendentries" : [{
        "_id" : "1.3.12.2.1107.5.99.3.30000008061114424970500000589",
        "Index" : 0,
        "Type" : "Study"
    }, {
        "_id" : "cc7e1966-e885-4a9a-8077-c3489118938d",
        "Index" : 1,
        "Type" : "TestCase"
    }]
}
The fields "Studies" and "TestCases" are now obsolete and I am now storing that information in a new field called SendEntries. I would like to get rid of the Studies and TestCases from the old entries and unmap those fields going forward. I want to know how I can update my current collections to get rid of the Studies and TestCases fields.
I'm just few weeks into Mongo.
You can use the $unset operator with update.
db.collection.update({},
    { "$unset": {
        "studies": "",
        "testcases": ""
    }},
    { "upsert": false, "multi": true }
)
And that will remove those fields from all of the documents in your collection.
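On MongoDB 3.2 and later, the same cleanup can be written with updateMany, which applies to every matching document without needing a multi flag:
db.collection.updateMany(
    { },
    { "$unset": { "studies": "", "testcases": "" } }
)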
Use $unset; there's a manual page, e.g.:
db.yourCollection.update( { },
    { $unset: {
        studies: "",
        testcases: ""
    } },
    { multi: true }
)