TJSONUnMarshal: how to track what is actually unmarshalled

Is there another way to track what is unmarshalled than writing my own reverter for each field?
I'm updating my local data based on a JSON message, and my problem is (simplified):
I'm expecting JSON like
{ "items": [ { "id":1, "name":"foobar", "price":"12.34" } ] }
which is then unmarshalled into the class TItems by
UnMarshaller.TryCreateObject( TItems, TJsonObject( OneJsonElement ), TargetItem )
My problem is that I can't tell the difference between
{ "items": [ { "id":1, "name":"", "price":"12.34" } ] }
and
{ "items": [ { "id":1, "price":"12.34" } ] }
In both cases name is blank, and I'd like to update only those fields that are passed in the JSON message. Of course I could create a reverter for each field, but there are plenty of fields and messages, so that would be quite a huge job.
I tried to look at the REST.JsonReflect.pas source, but couldn't make sense of it.
I'm using Delphi 10.

In the Rest.Json unit there is a TJson class that offers several convenience methods for converting objects to JSON and vice versa. Specifically, it has a class function JsonToObject where you can specify options such as ignoring empty strings or ignoring empty arrays. I think the TJson class can serve you. For unmarshalling complex business objects you have to write custom converters, though.

Actually, my problem was simple to solve in the end.
Instead of using TJSONUnMarshal.TryCreateObject I now use TJSONUnMarshal.CreateObject. The first one declares its object parameter with the out modifier, but CreateObject declares it with var, so I was able to create the object, initialize it from the database and pass it to CreateObject, which only modifies the fields present in the JSON message.
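
For comparison, Jackson for Java supports the same merge-into-an-existing-object pattern through ObjectMapper.readerForUpdating, which only overwrites the fields actually present in the JSON. A minimal sketch, assuming a hypothetical Item POJO standing in for TItems:

import com.fasterxml.jackson.databind.ObjectMapper;

public class MergeDemo {
    // Hypothetical POJO standing in for TItems.
    static class Item {
        public int id;
        public String name;
        public String price;
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        Item item = new Item();
        item.id = 1;
        item.name = "from database"; // pre-initialized local state
        item.price = "0.00";
        // Merge into the existing instance: "name" is absent from the
        // JSON, so item.name keeps its database value.
        mapper.readerForUpdating(item).readValue("{\"id\":1,\"price\":\"12.34\"}");
        System.out.println(item.name + " " + item.price); // from database 12.34
    }
}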

Related

JSON data, properties are sometimes arrays sometimes objects

I'm reading a JSON response from a third party, and I'm finding that some of the properties are returned using the notation for a single object when there is only one object to be returned, and as an array of objects when there are multiple objects for the property.
Example of a single object in the response
{
  "data": {
    "property1": "value",
    "property2": "value",
    "property3": "value"
  }
}
Example of an array of objects in the response
{
  "data": [
    {
      "property1": "value",
      "property2": "value",
      "property3": "value"
    },
    {
      "property1": "value",
      "property2": "value",
      "property3": "value"
    },
    {
      "property1": "value",
      "property2": "value",
      "property3": "value"
    },
    {
      "property1": "value",
      "property2": "value",
      "property3": "value"
    }
  ]
}
Why would the two different response formats be acceptable from the same endpoint?
This question bothered me as well whenever I saw it happening. I never really liked having to check the value in order to know how to access it.
One could argue that doing this saves some space in the payload: you save two bytes by omitting the [] when there's only a single value. But that's weak IMHO, and manipulating the data is harder, as we already know.
But looking at it a different way, this seems to make some sense: it's optimizing for the more common result, a single value. I've seen my fair share of data formats where the structure was very strict. For example, a recursive dictionary-like structure where any property that contains an object must be an array of that object. So in a deeply nested object, accessing a value may look like this:
root.data[0].aparent[0].thechild[0].myvalue
vs:
root.data.aparent.thechild.myvalue
If there were actually multiple values, then using an array would be appropriate.
I don't necessarily buy this, since you still have to do a check; you'd have to do some tests before consuming the data (like whether a response came back at all). This type of response might make more sense in languages that have some form of pattern matching.
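
On the consumer side, many JSON libraries can paper over the object-or-array ambiguity for you. In Java with Jackson, for instance, DeserializationFeature.ACCEPT_SINGLE_VALUE_AS_ARRAY wraps a lone object into a one-element list, so both response shapes deserialize the same way. A minimal sketch:

import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.List;
import java.util.Map;

public class SingleOrArrayDemo {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper()
                .configure(DeserializationFeature.ACCEPT_SINGLE_VALUE_AS_ARRAY, true);
        TypeReference<List<Map<String, String>>> listType =
                new TypeReference<List<Map<String, String>>>() {};
        // A bare object is wrapped into a one-element list...
        List<Map<String, String>> one =
                mapper.readValue("{\"property1\":\"value\"}", listType);
        // ...and a real array deserializes as usual.
        List<Map<String, String>> many =
                mapper.readValue("[{\"property1\":\"a\"},{\"property1\":\"b\"}]", listType);
        System.out.println(one.size() + " " + many.size()); // prints: 1 2
    }
}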

Unable to get the value of a key called "properties" ("properties": "Value") in Groovy

I'm making API requests to a service which returns a JSON object within the body.
I can't seem to get the value of a key called "properties" within Groovy.
Every time I call obj.properties I get the following back:
{
  "class": "org.json.JSONObject"
}
but if I call just obj I get the expected JSON object:
{
  "dummy1": ,
  "dummy2": false,
  "dummy3": etsad,
  "dummy4": asdfw,
  "dummy5": qweqwe,
  "dummy6": 123123,
  "properties": {
    "country": UK,
  }
}
Likewise, if I call obj.dummy2 I get false; it's only when I call obj.properties that I get the above-mentioned response.
Note that Groovy has special handling for an object's properties; for example, for a number:
def y = 25
print y.properties
It will print [class:class java.lang.Integer]
So it's part of the basic Groovy object.
See also this answer about getting non-synthetic properties from a Groovy object.
As @daggett commented, you can use
obj.get('properties')
Check out this answer here on how to access the properties of objects.
The reason obj.properties isn't working is most likely that every object has properties; in your case obj.properties is getting the properties of the JSONObject itself and not the value associated with the key.
Instead of obj.properties, consider obj['properties']
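
Since the object in question is an org.json.JSONObject (a Java class), looking the key up explicitly sidesteps the language-level properties accessor entirely. A minimal sketch in plain Java using the org.json API:

import org.json.JSONObject;

public class PropertiesLookupDemo {
    public static void main(String[] args) {
        JSONObject obj = new JSONObject(
                "{\"dummy2\":false,\"properties\":{\"country\":\"UK\"}}");
        // getJSONObject resolves the JSON key directly, so there is no
        // collision with a 'properties' accessor on the object itself.
        JSONObject props = obj.getJSONObject("properties");
        System.out.println(props.getString("country")); // UK
    }
}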

Return nested JSON in AWS AppSync query

I'm quite new to AppSync (and GraphQL) in general, but I'm running into a strange issue when hooking up resolvers to our DynamoDB tables. Specifically, we have a nested Map structure for one of our item's attributes that is arbitrarily constructed (its complexity and form depend on the type of the parent item), a little something like this:
"item" : {
"name": "something",
"country": "somewhere",
"data" : {
"nest-level-1a": {
"attr1a" : "foo",
"attr1b" : "bar",
"nest-level-2" : {
"attr2a": "something else",
"attr2b": [
"some list element",
"and another, for good measure"
]
}
}
},
"cardType": "someType"
}
Our accompanying GraphQL type is the following:
type Item {
  name: String!
  country: String!
  cardType: String!
  data: AWSJSON! ## note: it was originally String!
}
When we query the item we get the following response:
{
  "data": {
    "genericItemQuery": {
      "name": "info/en/usa/bra/visa",
      "country": "USA:BRA",
      "cardType": "visa",
      "data": "{\"tourist\":{\"reqs\":{\"sourceURL\":\"https://travel.state.gov/content/passports/en/country/brazil.html\",\"visaFree\":false,\"type\":\"eVisa required\",\"stayLimit\":\"30 days from date of entry\"},\"pages\":\"One page per stamp required\"}}"
    }
  }
}
The problem is we can't seem to get the Item.data field resolver to return a JSON object (even when we attach a separate field-level resolver to it on top of the general Query resolver). It always returns a String and, weirdly, if we change the expected field type to String!, the response will replace all : in data with =. We've tried everything with our response resolvers, including suggestions like How return JSON object from DynamoDB with appsync?, but we're completely stuck at this point.
Our current response resolver for our query has been reverted back to the standard response after none of the suggestions in the aforementioned post worked:
## 'Before' response mapping template on genericItemQuery query; same result as the 'After' listed below
#set($result = $ctx.result)
#set($result.data = $util.parseJson($ctx.result.data))
$util.toJson($result)
## 'After' response mapping template
$util.toJson($ctx.result)
We're trying to avoid a situation where we need to include supporting types for each nest level in data (since it changes based on parent Item type and in cases like the example I gave it can have three or four tiers), and we thought changing the schema type to AWSJSON! would do the trick. I'm beginning to worry there's no way to get around rebuilding our base schema, though. Any suggestions to the contrary would be helpful!
P.S. I've noticed in the CloudWatch logs that the appropriate JSON response exists under the context.result.data response field, but somehow there's the following transformedTemplate (which, again, I find very unusual considering we're not applying any mapping template except to transform the result into valid JSON):
"arn": ...
"transformedTemplate": "{data={tourist={reqs={sourceURL=https://travel.state.gov/content/passports/en/country/brazil.html, visaFree=false, type=eVisa required, stayLimit=30 days from date of entry}, pages=One page per stamp required}}, resIds=USA:BRA, cardType=visa, id=info/en/usa/bra/visa}",
"context": ...
Apologies for the lengthy question, but I'm stumped.
AWSJSON is a JSON string type, so you will always get back a string value (this is what your type definition must adhere to).
You could try to define a type for the data field that contains all possible fields and then resolve each field according to the parent type, or alternatively you could try to implement GraphQL interfaces.
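
Since an AWSJSON field arrives as an escaped JSON string, a client can always parse it back into a tree after the query returns. A minimal sketch using Jackson in Java, with a shortened version of the data string from the question:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class AwsJsonClientDemo {
    public static void main(String[] args) throws Exception {
        // The escaped string as delivered in the "data" field (shortened).
        String data = "{\"tourist\":{\"reqs\":{\"visaFree\":false,\"type\":\"eVisa required\"}}}";
        ObjectMapper mapper = new ObjectMapper();
        JsonNode node = mapper.readTree(data); // string -> JSON tree
        System.out.println(node.at("/tourist/reqs/type").asText()); // eVisa required
    }
}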

How can you subset a SwiftyJSON JSON object

I am building an iOS app in which one of my API calls returns a large JSON blob that I load into a JSON object using SwiftyJSON. For example, it looks something like this
{
  "data": {
    "name": "object name",
    "id": 1,
    "description": "short description of object",
    "type": "type",
    "runs": []
  }
}
As part of the app the user can modify things like the name, but the API endpoint for the PATCH call needs to have the runs key removed. Does anyone know how to take a SwiftyJSON JSON object and create a new one that has a subset of keys? For example, I want the JSON blob to look like
{
  "data": {
    "name": "object name",
    "id": 1,
    "description": "short description of object",
    "type": "type"
  }
}
I have spent many hours trying various things with no luck. Any help would be greatly appreciated.
You can grab the associated dictionaryValue from your SwiftyJSON object and use the removeValueForKey method.
Of course it means you have to assign the dictionaryValue to a variable, you can't remove a key in the SwiftyJSON object itself.
Example:
var dataDict = json["data"].dictionaryValue
dataDict.removeValueForKey("runs")
Note: the removeValueForKey method name is a bit misleading; it will remove the key, not just the value.
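
For comparison, the same subset-before-PATCH step in Java with Jackson would use ObjectNode.remove, which (unlike SwiftyJSON) mutates the parsed tree in place. A minimal sketch:

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class SubsetDemo {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        ObjectNode root = (ObjectNode) mapper.readTree(
                "{\"data\":{\"name\":\"object name\",\"id\":1,\"runs\":[]}}");
        // Jackson's tree model is mutable, so the key can be
        // removed directly on the nested node before serializing.
        ((ObjectNode) root.get("data")).remove("runs");
        System.out.println(mapper.writeValueAsString(root));
        // {"data":{"name":"object name","id":1}}
    }
}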

Jackson JsonParser: restart parsing in broken JSON

I am using Jackson to process JSON that comes in chunks in Hadoop. That means big files that are cut up into blocks (128 MB in my case, but it doesn't really matter).
For efficiency reasons, I need it to be streaming (not possible to build the whole tree in memory).
I am using a mixture of JsonParser and ObjectMapper to read from my input.
At the moment, I am using a custom InputFormat that is not splittable, so I can read my whole JSON.
The structure of the (valid) JSON is something like:
[ { "Rep":
{
"date":"2013-07-26 00:00:00",
"TBook":
[
{
"TBookC":"ABCD",
"Records":
[
{"TSSName":"AAA",
...
},
{"TSSName":"AAB",
...
},
{"TSSName":"ZZZ",
...
}
] } ] } } ]
The records I want to read in my RecordReader are the elements inside the "Records" element. The "..." means that there is more info there, which makes up my record.
If there is only one split, there is no problem at all.
I use a JsonParser for fine-grained work (reading the headers and moving to the "Records" token), and then I use ObjectMapper and JsonParser together to read records as objects. For details:
// keep the stream open across reads
MappingJsonFactory factory = new MappingJsonFactory();
factory.configure(JsonParser.Feature.AUTO_CLOSE_SOURCE, false);
mapper = new ObjectMapper(factory);
// tolerate fields we don't map and beans with no properties
mapper.configure(DeserializationConfig.Feature.FAIL_ON_UNKNOWN_PROPERTIES, false);
mapper.configure(SerializationConfig.Feature.FAIL_ON_EMPTY_BEANS, false);
parser = factory.createJsonParser(iStream);
mapper.readValue(parser, JsonNode.class);
Now, let's imagine I have a file with two input splits (i.e. there are a lot of elements in "Records").
The valid JSON starts on the first split, and I read and keep the headers (which I need for each record, in this case the "date" field).
The split would cut anywhere in the Records array. So let's assume I get a second split like this:
...
},
{"TSSName":"ZZZ",
...
},
{"TSSName":"ZZZ2",
...
}
] } ] } } ]
Before I start parsing, I can move the InputStream (FSDataInputStream) to the beginning ("{") of the record with the next "TSSName" in it (and this is done OK). It's fine to discard the leading "garbage". So we get this:
{"TSSName":"ZZZ",
...
},
{"TSSName":"ZZZ2",
...
},
...
] } ] } } ]
Then I hand it to the JsonParser/ObjectMapper pair seen above.
The first object, "ZZZ", is read OK.
But for the next one, "ZZZ2", it breaks: the JsonParser complains about malformed JSON, encountering a "," that is not inside an array. So it fails, and then I cannot keep reading my records.
How could this problem be solved, so that I can still read my records from the second (and nth) split? How could I make the parser ignore these errors on the commas, or let the parser know in advance that it is reading the contents of an array?
It seems it's OK to just catch the exception: the parser goes on and is able to keep reading objects via the ObjectMapper.
I don't really like it - I would prefer an option where the parser doesn't throw exceptions on nonstandard or even bad JSON. So I don't know if this fully answers the question, but I hope it helps.
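
A minimal sketch of that catch-and-continue approach, written with Jackson 2.x names (the question's snippet uses the older 1.x API). Whether the parser advances cleanly past each bad token is the observed behavior reported above rather than a documented guarantee, hence the bail-out counter:

import com.fasterxml.jackson.core.JsonParseException;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.MappingJsonFactory;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ResumeParseDemo {
    public static void main(String[] args) throws Exception {
        MappingJsonFactory factory = new MappingJsonFactory();
        ObjectMapper mapper = new ObjectMapper(factory);
        // Input shaped like the tail of a split: records separated by
        // commas, followed by the outer structure's closing brackets.
        JsonParser parser = factory.createParser(
                "{\"TSSName\":\"ZZZ\"}, {\"TSSName\":\"ZZZ2\"} ] } ] } } ]");
        int failures = 0;
        while (failures < 100) { // bail out if the parser stops advancing
            try {
                JsonToken t = parser.nextToken();
                if (t == null) break;                      // real end of input
                if (t != JsonToken.START_OBJECT) continue; // skip stray tokens
                JsonNode record = mapper.readValue(parser, JsonNode.class);
                System.out.println(record.get("TSSName").asText());
                failures = 0;
            } catch (JsonParseException e) {
                failures++; // malformed context (e.g. a stray ","): retry
            }
        }
    }
}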