Point of JSON-RPC vs simpler JSON

This is a JSON-RPC object I am implementing:
{
  "method": "create",
  "params": [
    {
      "nid": "69",
      "body": {
        "und": [
          {
            "value": "blah"
          }
        ]
      }
    }
  ]
}
Here is how I would do it with "normal" JSON:
{
  "method": "create",
  "id": "69",
  "value": "blah"
}
Since JSON is parsed as a map or dictionary, this should be adequate regardless of the presence of nested JSON arrays and objects within those arrays. Can someone explain why JSON-RPC is better, or desired at all?
Thanks!

Your JSON-RPC is invalid; id has to be at the top level, as it is in your "normal" JSON.
After correcting for the above, your JSON-RPC is still needlessly complex; params could just be [{"value":"blah"}]. That would leave your "normal" JSON only very slightly less complex, but harder to parse (since you couldn't rely on "params" being there no matter what).
Your "normal" JSON also would not allow for unnamed parameters (ones identified solely by position). Thus, the minimal added complexity buys you something which you might not need in your application, but others might.

Related

Evaluate value of JSON Key using NiFi

I have a scenario where I can have multiple different types of JSON objects coming into my system. I do not know the object type ahead of time and, based upon the object type, will route to a different processor in my flow:
{
  "book": {
    "id": "1234",
    "name": "book1"
  }
}
or
{
  "video": {
    "id": "3214",
    "name": "video1"
  }
}
or
{
  "magazine": {
    "id": "3233",
    "name": "magazine1"
  }
}
How can I evaluate whether the object is a book, a video, or a magazine, so I can route to the correct processor?
I've tried using EvaluateJsonPath with the ~ but it just outputs the entire JSON object.
Current flow:
One way is to extract all top-level fields using the EvaluateJsonPath processor, set the extracted field values to dynamic properties, and use those properties in a RouteOnAttribute processor to route the flow to the correct downstream processor.
EvaluateJsonPath:
Please don't forget to set
'Destination' to 'flowfile-attribute' and
'Return Type' to 'json'
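For example, the dynamic properties might look like this (one JsonPath expression per expected top-level key; the property names here are illustrative):
book = $.book
video = $.video
magazine = $.magazine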
If the EvaluateJsonPath processor could not find the field or element, then the value of the dynamic property will be set to an empty string.
All we need to do then is use the dynamic properties in the RouteOnAttribute processor.
RouteOnAttribute:
Using equals() and not()
or using isEmpty() and not()
Please don't forget to set
'Routing Strategy' to 'Route to Property Name'.
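For example (a sketch; the attribute names match the dynamic properties suggested above):
books = ${book:isEmpty():not()}
videos = ${video:isEmpty():not()}
magazines = ${magazine:isEmpty():not()}
or, equivalently, expressions like ${book:equals(''):not()}.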
Apache NiFi expression language guide
Example Flow:
I am using the PutFile processor as the downstream processor, as an example. It could be any processor.

Generic Codable types

I've got an idea that I'm trying to test out: I want to be able to have an array of different objects that are all Codable.
Here is the JSON:
{
  "cells": [
    {
      "header": "dummy header"
    },
    {
      "title": "dummy title"
    }
  ]
}
Also, a picture from Firestore, because I'm not sure if I wrote that JSON out correctly.
Here's what I have so far, testing with generics:
struct Submission<Cell: Codable>: Codable {
    let cells: [Cell]
}

struct ChecklistCell: Codable {
    let header: String
}

struct SegmentedCell: Codable {
    let title: String
}
The overarching goal is to decode a document that has an array (of cells) that can contain different types, all of which are Codable. I'm not sure if this is possible, or if there is an even better approach. Thanks.
Update:
I implemented @Fogmeister's solution and got it working, but it's not the most desirable outcome. It adds a weird layer to the JSON that ideally wouldn't be there. Any ideas?
I have done something similar to this in the past. Not with Firestore (although, more recently, I did) but with the CMS that we use.
As @vadian pointed out, heterogeneous arrays are not supported by Swift.
Also... something else to point out.
When you have a generic type defined like...
struct Submission<Cell> {
    let cells: [Cell]
}
Then, by definition, cells is a homogeneous array of a single type. If you try to put different types into it, it will not compile.
You can get around this though by using an enum to bundle all your different Cells into a single type.
enum CellTypes {
    case checkList(ChecklistCell)
    case segmented(SegmentedCell)
}
Now your array would be a homogeneous array of [CellTypes] where each element would be a case of the enum which would then contain the model of the cell inside it.
struct Submission: Codable {
    let cells: [CellTypes]
}
This takes some custom decoding to get straight from JSON; see the sketch at the end of this answer for some guidance.
Encoding and Decoding
Something to note from a JSON point of view. Your app will need to know which type of cell is being encoded/decoded. So your original JSON schema will need some updating to add this.
The automatic update from Firestore that you have shown is a fairly common way of doing this...
The JSON looks a bit like this...
{
  "cells": [
    {
      "checkListCell": {
        "header": "dummy header"
      }
    },
    {
      "segmentedCell": {
        "title": "dummy title"
      }
    }
  ]
}
Essentially, each item in the array is now an object that has a single key, either checkListCell or segmentedCell, i.e. one of the cases of your enum. This key tells your app which type of cell the object is.
The object stored against that key is then the underlying cell itself.
This is probably the cleanest way of modelling this data.
So, you might have two checklist cells and then a segmented cell and finally another checklist cell.
This will look like...
{
  "cells": [
    {
      "checkListCell": {
        "header": "First checklist"
      }
    },
    {
      "checkListCell": {
        "header": "Second checklist"
      }
    },
    {
      "segmentedCell": {
        "title": "Some segmented stuff"
      }
    },
    {
      "checkListCell": {
        "header": "Another checklist"
      }
    }
  ]
}
The important thing when analysing this JSON is not that it's harder for you (as a human being) to read, but that it's required, and actually fairly easy, for your app to decode and encode.
Hope that makes sense.
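For reference, here is a minimal sketch of what that custom decoding could look like, by making CellTypes itself Codable. It assumes the ChecklistCell and SegmentedCell structs from the question and the keyed JSON shown above; treat it as guidance rather than a drop-in implementation:

enum CellTypes: Codable {
    case checkList(ChecklistCell)
    case segmented(SegmentedCell)

    // One coding key per wrapper key in the JSON.
    private enum CodingKeys: String, CodingKey {
        case checkListCell
        case segmentedCell
    }

    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        // Whichever key is present tells us which cell type to decode.
        if let cell = try container.decodeIfPresent(ChecklistCell.self, forKey: .checkListCell) {
            self = .checkList(cell)
        } else if let cell = try container.decodeIfPresent(SegmentedCell.self, forKey: .segmentedCell) {
            self = .segmented(cell)
        } else {
            throw DecodingError.dataCorrupted(.init(
                codingPath: decoder.codingPath,
                debugDescription: "Unknown cell type"))
        }
    }

    func encode(to encoder: Encoder) throws {
        var container = encoder.container(keyedBy: CodingKeys.self)
        switch self {
        case .checkList(let cell):
            try container.encode(cell, forKey: .checkListCell)
        case .segmented(let cell):
            try container.encode(cell, forKey: .segmentedCell)
        }
    }
}

With that in place, Submission stays a plain Codable struct and the whole document can be decoded with a standard JSONDecoder.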

JSON data, properties are sometimes arrays sometimes objects

I'm reading a JSON response from a third party, and I'm finding that some properties are returned in the notation for a single object when there is only one object to return, but when there are multiple objects for the property, the value is returned as an array of objects.
Example of a single object in the response
{
  "data": {
    "property1": "value",
    "property2": "value",
    "property3": "value"
  }
}
Example of an array of objects in the response
{
  "data": [
    {
      "property1": "value",
      "property2": "value",
      "property3": "value"
    },
    {
      "property1": "value",
      "property2": "value",
      "property3": "value"
    },
    {
      "property1": "value",
      "property2": "value",
      "property3": "value"
    },
    {
      "property1": "value",
      "property2": "value",
      "property3": "value"
    }
  ]
}
Why would the two different response formats be acceptable from the same endpoint?
This question bothered me as well whenever I saw it happening. I never really liked having to check the value in order to know how to access it.
One could argue that doing this saves some space in the payload: you save two bytes by omitting the [] when there's only a single value. But that's weak IMHO, and manipulating the data is harder, as we already know.
But looking at it in a different way, this seems to make some sense: it's optimizing for the more common result, a single value. I've seen my fair share of data formats where the structure was very strict. For example, a recursive dictionary-like structure where any property that would contain an object must be an array of that object. So in a deeply nested object, accessing a value may look like this:
root.data[0].aparent[0].thechild[0].myvalue
vs:
root.data.aparent.thechild.myvalue
If there were actually multiple values, then using an array would be appropriate.
I don't necessarily buy this, since you still have to do a check; you'd have to do some tests before consuming the data (like whether a response came back at all). This type of response might make more sense in languages that have some form of pattern matching.
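If you do have to consume such an endpoint, one common way to hide the check is a small wrapper type that accepts either shape. Here is a minimal sketch in Swift (the OneOrMany name and the surrounding models are illustrative, not part of the original API):

struct OneOrMany<T: Codable>: Codable {
    let values: [T]

    init(from decoder: Decoder) throws {
        let container = try decoder.singleValueContainer()
        if let many = try? container.decode([T].self) {
            // "data" came back as an array of objects
            values = many
        } else {
            // "data" came back as a single object; wrap it
            values = [try container.decode(T.self)]
        }
    }

    func encode(to encoder: Encoder) throws {
        var container = encoder.singleValueContainer()
        try container.encode(values)
    }
}

A response model like struct Response: Codable { let data: OneOrMany<Item> } (where Item is a hypothetical struct with property1 and so on) can then always read data.values as an array, whichever form the endpoint returned.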

Is there any way to write a null JSON transformation (one that passes through the original document) using Jolt?

You know how XSLT and other XML processing languages support the "null transformation", which passes a document through unmodified?
I would like to do the same thing for Jolt (a very nice JSON transformation library used in Apache Camel and other places).
I could use Jolt's "insert default" feature and stick some harmless JSON tag and value at the top level of the document, which is almost what I want. But I couldn't figure out how to pass the document through Jolt and leave it untouched.
Why do I want to do this, you ask? We are developing a streaming data pipeline and I have to validate incoming strings as valid JSON... Jolt does that for me for free, but in some cases I don't want to monkey with the document. So, I want to use Jolt as a step in the pipeline, but (in some cases) have it do nothing to the input JSON doc.
Another option is to create a custom transformer.
package com.example;

import com.bazaarvoice.jolt.Transform;

// Pass-through transform: hands the hydrated JSON back unchanged.
public class NullTransform implements Transform {

    @Override
    public Object transform(Object input) {
        return input;
    }
}
Then reference it from the chainr spec as below:
[
  {
    "operation": "com.example.NullTransform"
  }
]
You'll still incur the deserialization/serialization overhead, but no other code is run.
Out of the box, Jolt contains 5 "operations" that can be applied to the input hydrated JSON. 4 of those (default, remove, sort, cardinality) are mutation operations, i.e. they modify the supplied hydrated JSON. If you gave those 4 an empty "spec", they would do nothing, and your data would "pass thru".
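For instance, a chainr spec along these lines (a sketch based on that point, using the default operation with an empty spec) should act as a null transformation:
[
  {
    "operation": "default",
    "spec": {}
  }
]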
The "shift" operation does not mutate the input it is given. Instead it "copies" data from the "input" to a new "output" map/list. If you don't give "shift" a spec, then it copies nothing across.
Thus, from your question, it sounds like you are talking about "shift". With shift you have to explicitly pass thru all the things you want to keep.
Depending on your data this may be terrible or easy, as you can have shift copy very large chunks of data across.
For example, take the "inception" example on the Jolt demo site: http://jolt-demo.appspot.com/#inception
This spec basically passes thru the input, copying the whole nested map that is "rating" thru to the output.
[
  {
    "operation": "shift",
    "spec": {
      "rating": "rating"
    }
  }
]
It can be generalized with wildcards: "*" matches every top-level key, and "&" writes each matched key through to the same spot in the output, so the whole document passes thru unmodified.
Spec
[
  {
    "operation": "shift",
    "spec": {
      "*": "&"
    }
  }
]

How to deserialize JSON with nested Dictionaries?

For some endpoints SimpleGeo.com returns something like this:
{
  "geometry": {
    "type": "Point",
    "coordinates": [
      -122.421583,
      37.795027
    ]
  },
  "type": "Feature",
  "id": "SG_5JkVsYK82eLj26eomFrI7S_37.795027_-122.421583#1291796505",
  "properties": {
    "province": "CA",
    "city": "San Francisco",
    "name": "Bell Tower",
    "tags": [],
    "country": "US",
    "phone": "+1 415 567 9596",
    "href": "http://api.simplegeo.com/1.0/features/SG_5JkVsYK82eLj26eomFrI7S_37.795027_-122.421583#1291796505.json",
    "address": "1900 Polk St",
    "owner": "simplegeo",
    "postcode": "94109",
    "classifiers": [
      {
        "category": "Restaurant",
        "type": "Food & Drink",
        "subcategory": ""
      }
    ]
  }
}
(see http://simplegeo.com/docs/api-endpoints/simplegeo-features#get-detailed-information).
Now I have a small problem deserializing the 'properties' part. If I use e.g. a Dictionary type, it converts it to a nice dictionary, but the 'classifiers' value is just one {} string.
Is there any way to tell Json.NET to deserialize sub-arrays into yet another Dictionary, and so on? Basically there are a number of plain key/values in that response, but I do know that there might be more than just that 'classifiers' sub-array (see the 'tags'), and maybe the depth goes even further in the values...
So basically what I was wondering is, how do I properly deserialize the properties part? Any suggestions? I don't mind writing my own JsonConverter, but maybe there is already a way that works without it.
I've found a solution for a similar question over here:
Json.NET: Deserializing nested dictionaries.
It uses a custom JsonConverter and I don't see a way to do without it.