JSON data, properties are sometimes arrays, sometimes objects - json

I'm reading a JSON response from a third party, and I'm finding that some properties come back in the notation for a single object when there is only one object to return, but as an array of objects when there are multiple objects for that property.
Example of a single object in the response
{
    "data": {
        "property1":"value",
        "property2":"value",
        "property3":"value"
    }
}
Example of an array of objects in the response
{
    "data": [
        {
            "property1":"value",
            "property2":"value",
            "property3":"value"
        },
        {
            "property1":"value",
            "property2":"value",
            "property3":"value"
        },
        {
            "property1":"value",
            "property2":"value",
            "property3":"value"
        },
        {
            "property1":"value",
            "property2":"value",
            "property3":"value"
        }
    ]
}
Why would the two different response formats be acceptable from the same endpoint?

This question bothered me as well whenever I saw it happening. I never really liked having to check the value in order to know how to access it.
One could argue that doing this saves some space in the payload: you save two bytes by omitting the [] when there's only a single value. But that's a weak argument IMHO, and it makes manipulating the data harder, as we already know.
But looking at it in a different way, this seems to make some sense: it's optimizing for the more common result, a single value. I've seen my fair share of data formats where the structure was very strict. For example, a recursive dictionary-like structure where any property that contains an object must be an array of that object. So in a deeply nested object, accessing a value may look like this:
root.data[0].aparent[0].thechild[0].myvalue
vs:
root.data.aparent.thechild.myvalue
If there were actually multiple values, then using an array would be appropriate.
I don't necessarily buy this, since you still have to do a check; you'd have to run some tests before consuming the data (like whether a response came back at all). This type of response might make more sense in languages that have some form of pattern matching.
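For what it's worth, here is a minimal sketch of how a consumer could absorb that check once, in Go as just one example (the type and field names are hypothetical): a custom unmarshaller accepts either form and always exposes a slice.

package main

import (
    "encoding/json"
    "fmt"
)

// Item is a hypothetical stand-in for the objects inside "data".
type Item struct {
    Property1 string `json:"property1"`
    Property2 string `json:"property2"`
    Property3 string `json:"property3"`
}

// ItemList accepts either a single object or an array of objects.
type ItemList []Item

func (l *ItemList) UnmarshalJSON(b []byte) error {
    // Try the array form first.
    var many []Item
    if err := json.Unmarshal(b, &many); err == nil {
        *l = many
        return nil
    }
    // Fall back to the single-object form.
    var one Item
    if err := json.Unmarshal(b, &one); err != nil {
        return err
    }
    *l = ItemList{one}
    return nil
}

type Response struct {
    Data ItemList `json:"data"`
}

func main() {
    single := []byte(`{"data": {"property1":"value","property2":"value","property3":"value"}}`)
    var r Response
    if err := json.Unmarshal(single, &r); err != nil {
        panic(err)
    }
    fmt.Println(len(r.Data)) // prints 1; the array form would decode the same way
}

The check still happens, but only in one place, so the rest of the code can rely on a single shape.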

Related

Validating against dynamic data - JSON Schema - Ajv

I'm trying to create a JSON Schema for something very dynamic. Say I have two pieces of data, and I want one (the source) to determine the validity of the other (the target). Both can change over time, but both will always be an array of objects with known properties. For example:
source.json
[
    { "id": 23, "active": true },
    { "id": 9, "active": false },
    { "id": 6, "active": true }
]
target.json
[
    { "identifier": 6 }
]
The schema I'm trying to create is this: For each active object in the source array, there should be an equivalent object in the target array. A little more formally, given an object in the source array where "active" equals true and "id" equals x, there should be an object in the target array where "identifier" equals x.
In the example above, the target would be invalid because it's missing an object like { "identifier": 23 }.
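For illustration only, a schema expressing that rule for the example source above (active ids 23 and 6) might look roughly like the following, using the standard contains and const keywords that Ajv supports. This is the kind of output a generator would have to produce, not something already working:
{
    "type": "array",
    "allOf": [
        {
            "contains": {
                "type": "object",
                "required": ["identifier"],
                "properties": { "identifier": { "const": 23 } }
            }
        },
        {
            "contains": {
                "type": "object",
                "required": ["identifier"],
                "properties": { "identifier": { "const": 6 } }
            }
        }
    ]
}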
However, I want to statically define this schema (or something capable of generating it) in a JSON file ahead of time, and this feels pretty tough since the source array can change. I'm using Ajv, and I'm aware that it supports the $data reference, but I'm not sure that's enough to help me here. The other option I could see is creating some kind of schema-generator definition? In concept, it too would be a JSON object I define ahead of time, but at runtime it would be used to safely generate arbitrary schemas based on runtime data such as the source array. However, if a mechanism like this doesn't already exist, trying to implement it myself sounds like a great way to give myself a code-injection vulnerability.
Thanks for your time!

Assign proper types to data read from JSON

I have a struct such as this one:
type Data struct {
    Id        string
    Value     string
    Custom    customtype1
    Special   customtype2
    TimeStamp Time
}
var model Data
I am reading data from a JSON object. Because the JSON is structured very differently, I can't just directly unmarshal the JSON into the struct. So I am trying to "match" the fields from the JSON object to those of the struct one by one. I don't actually need to properly unmarshal the JSON data into the struct; all I really need is to be able to assign, for each field, the proper type to its value.
So I unmarshal the JSON to a generic interface, then convert it to a map[string]interface{} and iterate over that. For each field, I try to find a match among the field names in the model variable, which I get using reflect.
Now this all works fine, but the problem arises when I try to get the right type for the values.
I can get the Type of a certain field from the model using reflect, but then I can't use that to cast the value I get from the JSON, because it is not a type. I can't use a switch statement either, because this is a simplified version of the situation and in reality I'm dealing with 1000+ different possible types. How can I convert the values I have for each field into their proper types?
The only way I can think of to solve this would be to recreate a JSON string that matches the format of the struct and then unmarshal that into the proper struct, but that seems way too convoluted. Surely there must be a simpler way?
Here's a sample JSON (I can not change this structure, unless I rework it within my Go program):
{
    "requestId": 101901,
    "userName": "test",
    "options": [1, 4],
    "request": {
        "timeStamp": {
            "Value1": "11/02/2018",
            "Value2": "11/03/2018"
        },
        "id": {
            "Value1": "123abcd",
            "Value2": "0987acd",
            "Value3": "a9c003"
        },
        "custom": {
            "Value1": "customtype1_value",
            "Value2": "customtype1_value"
        }
    }
}
I'd advise against your current approach. You haven't provided enough context to tell us why you're choosing to unmarshal things one by one, but Go's support for JSON is good enough that I'd guess it is capable of doing what you want.
Are you aware of encoding/json's support for struct tags? Those might serve the purpose you're looking for. Your struct would then look something more like:
type Data struct {
    Id        string      `json:"id"`
    Value     string      `json:"value"`
    Custom    customtype1 `json:"custom_type"`
    Special   customtype2 `json:"special_type"`
    TimeStamp Time        `json:"timestamp"`
}
If your problem is that the custom types don't know how to be unmarshalled, you can define custom unmarshalling functions for them.
This would then enable you to unmarshall an object like the following:
{
    "id": "foo",
    "value": "bar",
    "custom_type": "2342-5234-4b24-b23a",
    "special_type": "af23-af2f-rb32-ba23",
    "timestamp": "2018-05-01 12:03:41"
}
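If, for example, customtype1 is essentially a wrapper around a string, a minimal sketch of such a custom unmarshalling function might look like this (the field layout of customtype1 is an assumption here):

package main

import (
    "encoding/json"
    "fmt"
)

// customtype1 is assumed here to wrap a single string value.
type customtype1 struct {
    Value string
}

// UnmarshalJSON implements json.Unmarshaler, so encoding/json knows how to
// fill a customtype1 from a plain JSON string such as "2342-5234-4b24-b23a".
func (c *customtype1) UnmarshalJSON(b []byte) error {
    var s string
    if err := json.Unmarshal(b, &s); err != nil {
        return err
    }
    c.Value = s
    return nil
}

func main() {
    var data struct {
        Custom customtype1 `json:"custom_type"`
    }
    if err := json.Unmarshal([]byte(`{"custom_type":"2342-5234-4b24-b23a"}`), &data); err != nil {
        panic(err)
    }
    fmt.Println(data.Custom.Value) // prints the decoded value
}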

How to get the key name in json?

My previous problem was that I was unable to arrange the JSON structure the way I wanted. I found some answers that look like they almost satisfy my needs, but unfortunately I don't know whether they work, because another problem has occurred.
Below, I arranged my own JSON data based on the structure suggested by someone named Programmer.
{
"dialog_type": {"human": {"inner": "He is so scary"}}
}
Here I have a key called "human". There are two possible keys in my data: the first is "human" and the second is "non_human". Now, if I have two entries in my JSON file, it will look like this:
{
"dialog_type": {"human": {"inner": "He is so scary"}}
},
{
"dialog_type": {"non_human": "Once upon a time..."}
}
This case is maybe similar to what someone asked here. But unfortunately I have no idea whether it's possible to do that in Unity. I want to make a method like the one in this answer, so I can determine what action to take by comparing those keys.
Now the question is: how do I get the key name as a string from my JSON data using C#?
To access the property names of a Unity javascript object, you can use:
for(var property in obj) {}
For instance, this will log all keys (i.e. property names) of all the property key-value pairs in a Unity javascript object (e.g. "key1" and "key2"):
function Start () {
    var testObject = {
        "key1": "value 1",
        "key2": "value 2"
    };
    for (var property in testObject) {
        Debug.Log(property.Key);
    }
}
That should give you a way to check objects for any matching property names you are interested in.

TJSONUnMarshal: how to track what is actually unmarshalled

Is there another way to track what is actually unmarshalled than writing my own reverter for each field?
I'm updating my local data based on a JSON message, and my problem is (simplified):
I'm expecting json like
{ "items": [ { "id":1, "name":"foobar", "price":"12.34" } ] }
which is then unmarshaled to class TItems by
UnMarshaller.TryCreateObject( TItems, TJsonObject( OneJsonElement ), TargetItem )
My problem is that I can't tell the difference between
{ "items": [ { "id":1, "name":"", "price":"12.34" } ] }
and
{ "items": [ { "id":1, "price":"12.34" } ] }
In both cases name is blank, and I'd like to update only those fields that are passed in the JSON message. Of course I could create a reverter for each field, but there are plenty of fields and messages, so that would be quite a lot of work.
I tried to look at the REST.JsonReflect.pas source, but couldn't make sense of it.
I'm using Delphi 10.
In the Rest.Json unit there is a TJson class that offers several convenience methods, like converting objects to JSON and vice versa. Specifically, it has a class function JsonToObject where you can specify options, for example to ignore empty strings or ignore empty arrays. I think the TJson class can serve you. For unmarshalling complex business objects you have to write custom converters, though.
Actually, my problem turned out to be simple to solve.
Instead of using TJSONUnMarshal.TryCreateObject, I now use TJSONUnMarshal.CreateObject. The first one has its object parameter declared with the out modifier, but CreateObject has its object parameter declared with the var modifier, so I was able to
create the object, initialize it from the database, and pass it to CreateObject, which only modifies the fields present in the JSON message.

Point of JSON-RPC vs simpler JSON

This is a JSON-RPC object I am implementing:
{
    "method": "create",
    "params": [
        {
            "nid": "69",
            "body": {
                "und": [
                    {
                        "value": "blah"
                    }
                ]
            }
        }
    ]
}
Here is how I would do it with "normal" JSON:
{
    "method": "create",
    "id": "69",
    "value": "blah"
}
Since JSON is parsed as a map or dictionary, this should be adequate regardless of the presence of nested JSON arrays and JSON objects within those arrays. Can someone explain why JSON-RPC is better, or desired at all?
Thanks!
Your JSON-RPC is invalid; id has to be at the top level, as it is in your "normal" JSON.
After correcting for the above, your JSON-RPC is still needlessly complex; params could just be [{"value":"blah"}]. That would leave your "normal" JSON only very slightly less complex, but harder to parse (since you couldn't rely on a "params" key always being there).
Your "normal" JSON would not allow for unnamed parameters (ones identified solely by position). Thus, the minimal added complexity buys you something which you might not need in your application, but others might