I want to formally define a schema for a JSON-based protocol.
I have two criteria for the schema:
1. I want to be able to use tools to generate a parser/serializer (PHP and .NET).
2. The resulting JSON should be easy for humans to read.
Here is the context: the schema will describe a game character. As an example I will take one aspect of the profile - professions.
A character can have up to 2 professions (from a list of 10), each profession is described by a name and a level, e.g.:
Skinning - level 200
Blacksmith - level 300
To satisfy criterion #1 it really helps to have an XSD schema (or JSON Schema) to drive a code generator or a parser library. But that means my JSON must look something like:
character : {
professions : [
{ profession : "Skinning", level : 525 },
{ profession : "Blacksmith", level : 745 }
]
}
but it feels too chatty, I would rather have JSON look like (notice that profession is used as a key):
character : {
professions : {
"Skinning" : 525,
"Blacksmith" : 745
}
}
but the latter JSON cannot be described with XSD without defining an element for each profession.
So I am looking for a solution for my situation, here are options I have identified:
shut up and make JSON XSD-friendly (first snippet above)
shut up and make JSON human-friendly and hand-code the parser/serializer.
but I would really like to find a solution that would satisfy both criteria.
Note: I am aware that James Newton-King's Json.NET library would allow me to parse professions as a Dictionary - but it would require me to hand-code the type this JSON maps to. So far, therefore, I am leaning towards option #2, but I am open to suggestions.
Rename profession to name so it'd be like:
character : {
professions : [
{ name : "Skinning", level : 525 },
{ name : "Blacksmith", level : 745 }
]
}
Then, after it's deserialized on the client, the model would look like this:
profession = character.professions[0]
profession.name
=> "Skinning"
Your options are, as you said...
1. shut up and use XML
2. shut up and build your own
OR maybe 3... http://davidwalsh.name/json-validation
I would do #1:
- because XML seems to be a fairly common way to transform stuff from X => Y formats
- I prefer to work in C#, not JS
- many people use XML; it's an accepted standard, and there are many resources out there to help you along the way
Related
I have my output structure in COBOL, from which I try to generate a JSON structure through DFHJS2LS (IBM tools). All the fields come out as required - this causes trouble when generating classes in .NET, because not all the fields are present.
Question: how and where (in COBOL or DFHJS2LS) do I define fields as optional so that they are generated properly, avoiding null pointer exceptions?
According to the documentation you can define your COBOL data items with...
data description OCCURS n TIMES
...and use mapping level 4.1 or higher and specify TRUNCATE-NULL-ARRAYS = ENABLED. There is a reference to "structured arrays" which I take to mean you would need to do something like...
05 Something Occurs 1 Times.
10 Something-Real PIC X(8).
...so you get...
"type":"array"
"maxItems":1
"minItems":0
"items":{ ... }
You could also specify mapping level 4.0 or higher and use...
data description OCCURS n TO m TIMES DEPENDING ON t
...to obtain...
"field-name":{
"type":"array",
"maxItems":m
"minItems":n
"items":{ ... }
}`
Mapping level is specified by...
//INPUT.SYSUT1 DD *
[...other control statements...]
MAPPING-LEVEL=4.3
[...other control statements...]
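Either way, minItems of 0 means the array may be empty or absent, so the consuming code still has to guard for that rather than assume the element exists. A hedged Python sketch of that guard, with hypothetical field names:

```python
import json

# Hypothetical message where the optional array (generated from OCCURS with
# TRUNCATE-NULL-ARRAYS = ENABLED) may be empty or missing entirely.
msg = json.loads('{"record": {"something": []}}')

def first_or_none(record, field):
    """Return the first element of an optional array field, or None if the
    field is missing, null, or empty."""
    items = record.get(field) or []
    return items[0] if items else None

value = first_or_none(msg["record"], "something")  # no exception, just None
```

The same pattern applies in .NET: treat the generated array property as possibly null or empty before dereferencing it.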
We have an incoming JSON message and would like to add some additional JSON data (JSON object with some fields) to the original message. How can I add the JSON Object "GlossDef" to the position outlined below?
{
"glossary":{
"title":"example glossary",
"GlossDiv":{
"title":"S",
"GlossList":{
"GlossEntry":{
"ID":"SGML",
"SortAs":"SGML",
"GlossTerm":"Standard Generalized Markup Language",
"Acronym":"SGML",
"Abbrev":"ISO 8879:1986",
*** "GlossDef":{
*** "para":"A meta-markup language, used to create markup languages such as DocBook.",
*** "GlossSeeAlso":[
*** "GML",
*** "XML"
*** ]
*** },
"GlossSee":"markup"
}
}
}
}
}
Take a look at the 'addProperty' method in the expressions tab. Here is a question on the powerusers platform regarding this.
https://powerusers.microsoft.com/t5/Building-Flows/How-to-add-a-new-property-to-an-object-type-variable-in-Apply-to/td-p/155685
I validated it in a test example with the following steps:
Step 1 - This is the initial object where ever you are getting it from.
Step 2 - This is just initializing a variable with the object to add, you may have to do this in some dynamic fashion but the concept is still the same.
Step 3 - Parse the object from step one, so we can extract the sub object we want to append to.
Step 4 - Extract the sub object in this case we will choose 'GlossEntry' from the Dynamic Content list coming from the parse json.
Step 5 - Using a Compose action, open the expression tab and use 'addProperty' to add 'ObjectToAdd' into 'ChildObject'. It looks like this: addProperty(variables('ChildObject'), 'GlossDef', variables('ObjectToAdd'))
That should get you on the right path.
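Outside of Power Automate, the same steps boil down to: parse the message, walk down to the sub-object, and attach the new property. A minimal Python sketch of that sequence (the message is trimmed to the relevant path):

```python
import json

# Step 1/3: the incoming message, parsed (trimmed to the path we care about).
message = json.loads("""
{"glossary": {"GlossDiv": {"GlossList": {"GlossEntry": {
  "ID": "SGML", "GlossSee": "markup"}}}}}
""")

# Step 2: the object to add.
gloss_def = {
    "para": "A meta-markup language, used to create markup languages such as DocBook.",
    "GlossSeeAlso": ["GML", "XML"],
}

# Steps 4-5: extract the sub-object and add the property in place.
entry = message["glossary"]["GlossDiv"]["GlossList"]["GlossEntry"]
entry["GlossDef"] = gloss_def
```

Because `entry` is a reference into `message`, the assignment mutates the original document, which is effectively what the addProperty/Compose combination does for you in a flow.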
Struggling to understand MongoDBs handling of ids. Right now, I have a JSON file which I would like to put into a MongoDB Database. The file looks like this, roughly:
{
id: 'HARRYPOTTER-1',
title: 'Harry Potter and the Philosophers Stone',
price: 10
}
I would now like to put this file into MongoDB. Will my id attribute get lost? Will MongoDB want to overwrite it with its own unique id?
I have made sure that my id attributes are unique and I am making use of them elsewhere, so I am a little worried now. But maybe I understood things incorrectly.
Thanks a lot in advance!
1. MongoDB creates an _id field for any document that doesn't have one.
2. If _id is already there, MongoDB won't overwrite it (it throws a duplicate key error instead if the value collides).
3. If the document has an id field, MongoDB doesn't care: it won't modify it, and rules 1 and 2 still apply.
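To make those rules concrete, here is an illustration-only Python sketch that mimics them with plain data structures. A real insert goes through a driver such as PyMongo; nothing below is MongoDB API:

```python
import uuid

class DuplicateIdError(Exception):
    """Stands in for MongoDB's duplicate key error on _id."""

def simulate_insert(collection, doc):
    """Mimic the _id rules: generate _id if absent, reject collisions,
    and leave any plain 'id' field completely alone."""
    doc = dict(doc)  # don't mutate the caller's document
    if "_id" not in doc:
        doc["_id"] = uuid.uuid4().hex        # rule 1: server generates one
    elif any(d["_id"] == doc["_id"] for d in collection):
        raise DuplicateIdError(doc["_id"])   # rule 2: never overwritten
    collection.append(doc)                   # rule 3: 'id' passes through as-is
    return doc

books = []
inserted = simulate_insert(books, {"id": "HARRYPOTTER-1", "price": 10})
```

The inserted document ends up with both fields: the generated `_id` and the untouched application-level `id`.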
Let's run an example in the mongo shell:
> db.random.insert({
... id: 'HARRYPOTTER-1',
... title: 'Harry Potter and the Philosophers Stone',
... price: 10
... })
WriteResult({ "nInserted" : 1 })
And now inspect the inserted document
> db.random.findOne()
{
"_id" : ObjectId("5f954cc93b09d63a06f7a4a9"),
"id" : "HARRYPOTTER-1",
"title" : "Harry Potter and the Philosophers Stone",
"price" : 10
}
You can see the _id has been created. Your own id field doesn't matter to MongoDB and is not overwritten.
PS: The right tool to load that JSON into a MongoDB database is mongoimport (not mongorestore).
For more details refer to the docs.
Background
I'm converting old 4D database code to use the new ORDA concepts introduced in v17. However, I've noticed an oddity: when I have an entity selection that I created using ds[$vtTableName].query(), and I convert that entity selection to a collection (using .toCollection()), the order of the fields that I specify isn't honored.
Example Code:
C_OBJECT($voSelection)
$voSelection:=ds.Users.query("Active = 'True'")
C_COLLECTION($vcUsers)
$vcUsers:=$voSelection.toCollection("FirstName, LastName, DTLastSignin")
Expected Output
I would expect $vcUsers to be a collection of objects, and that each object would look like:
{ "FirstName" : "John", "LastName" : "Smith", "DTLastSignin" : "2019-10-12T32:23:00" }
Actual Output
Instead, I'm getting a different order:
{ "DTLastSignin" : "2019-10-12T32:23:00", "FirstName" : "John", "LastName" : "Smith" }
This has broken some of my API consumers, because they expect to be able to specify field order, which the old way (Selection to JSON) respects; toCollection() doesn't appear to.
I can't find any documentation about field order or whether it is even supposed to be honored. The official documentation shows the fields respecting the order, but maybe that's just a coincidence.
The answer is that you can't, because the toCollection() field list is just a filter. Under the hood, the entity's properties are looped through in their natural order, and each one is kept or dropped depending on whether it is among the specified filter fields.
That is the reason .toCollection() is faster than the classic Selection to JSON method.
One way around this would be to use a .map() function, though that would probably degrade performance; I haven't done any profiling, so I'm not sure.
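The keep/drop-versus-reorder distinction is easy to demonstrate outside 4D. A Python sketch (insertion-ordered dicts stand in for the entity's natural property order; none of this is 4D API):

```python
# The entity's properties in their "natural" storage order.
entity = {"DTLastSignin": "2019-10-12T08:23:00", "FirstName": "John",
          "LastName": "Smith", "Active": True}

def to_collection_style(entity, fields):
    """Filter semantics: iterate the ENTITY's order, keep or drop each key.
    The order of `fields` has no effect on the output order."""
    wanted = set(fields)
    return {k: v for k, v in entity.items() if k in wanted}

def map_reorder(entity, fields):
    """The .map()-style workaround: build the object in the order you want."""
    return {f: entity[f] for f in fields}

fields = ["FirstName", "LastName", "DTLastSignin"]
filtered = to_collection_style(entity, fields)   # entity order wins
reordered = map_reorder(entity, fields)          # requested order wins
```

Note that strictly speaking JSON object member order carries no meaning, so consumers that depend on it are on shaky ground anyway; the reorder step only helps clients that parse position-sensitively.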
I have an attributes file that looks like this:
default['ftp_provision']['vsftpd']['pasv_ip'] = "192.168.0.10"
where the first attribute is the cookbook name, the second is the program, and the third is the option I want to change, implemented in a template .erb file as:
pasv_ip=<%= node['ftp_provision']['vsftpd']['pasv_ip'] %>
This is working correctly as expected.
However, I would like to add a role to change these attributes as required for several nodes. I'm using knife role create ftp_node1 to do that, with something like:
"default_attributes": {
"ftp_provision" => {"ftp_provision" => "vsftpd" => "pasv_ip" => "192.168.0.10"}
},
I keep getting syntax errors. All the examples I've been able to see have referenced making JSON files from Ruby DSL with only one level deep of attributes (e.g. default['key']['value']) so I'd like to know how to do this correctly per role.
You'll need to use actual JSON for this, and I'm not sure what you mean about one level deep: this creates a hash three or four levels deep, depending on how you count it. I haven't seen issues with going deeper with attributes, and there are many cookbooks in the wild with default['really']['freakin']['long']['strings']['of'] = attributes.
I took a look at Chef's examples and they're using Ruby's hash format there rather than JSON, and that method of creating hashes makes RuboCop squawk and say it's been deprecated. I can certainly see how that example would mislead you.
Use a linter when building JSON; here's one: https://jsonlint.com/
also I think this may work for you:
{
"ftp_provision": {
"vsftpd": {
"pasv_ip": "192.168.0.10"
}
}
}
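Since the usual failure mode here is hand-built JSON that doesn't parse, a quick sanity check is to run the role file through a real JSON parser before handing it to knife. A small Python sketch of that lint step:

```python
import json

# The nested role attributes from above; json.loads raises ValueError
# on the kind of syntax errors that the Ruby-hash-style attempt produced.
role_attributes = """
{
  "default_attributes": {
    "ftp_provision": {
      "vsftpd": {
        "pasv_ip": "192.168.0.10"
      }
    }
  }
}
"""
parsed = json.loads(role_attributes)
pasv_ip = parsed["default_attributes"]["ftp_provision"]["vsftpd"]["pasv_ip"]
```

If the parse succeeds and the attribute path resolves, the same nesting will work as the role's default_attributes.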