Mapping JSON to Apama Events

I am trying to use the Mapper codec in my connectivity chain to convert a JSON object that looks like this:
{"test2":[
["column1","column2","column3"],
["16091", "449", "05302018"],
["16092", "705", "05302018"]
]}
to an EPL type. To me it looks like a sequence of sequences, so I used
event test1 {
sequence<string> values;
}
event test2 {
sequence<test1> tests;
}
But this gives me the error
Unable to parse event test.1: Incorrect type in get (you asked for map but it's actually list)
Any ideas how I should be using the Mapper codec to this end?

Unless explicitly remapped, that won't quite work. You have to consider the entire structure of the document from top to bottom. It's not a sequence of strings - it's a JSON object/dictionary at top-level, with a value that is a sequence of sequences of string.
A JSON object/dictionary can map to an event type based on field names. So as Matt's answer said, a JSON document like yours would need an event type like
event SomeEventType {
sequence<sequence<string > > test2;
}
If it's not appropriate to create an event type that exactly corresponds to the JSON document's structure, then you'll need to use the mapping codec to rearrange the fields in the JSON document to match the fields and sub-fields in an event type. Or possibly a custom codec; I think Matt's right that the mapper can't do exactly what you want.
Further, because JSON documents are type-less at the top-level, you'll need to make sure that the event type is defined somehow. There are multiple ways of doing that.
(1) If this particular connectivity chain will only send you events of one type, you can use the 'defaultEventType' configuration option of the apama.eventMap host plug-in at the top of your chain, e.g.
apama.eventMap:
    defaultEventType: SomeEventType
(2) If the event type depends on the structure of the document, you'll need to use the classifier codec. That can take a message going towards the correlator and assign it an event type based on the content of fields (or simply their presence). You can learn about it in the documentation.
(3) The transport will sometimes define it on messages being sent towards the correlator. For example, in the case of the Universal Messaging transport, then the 'tag' of the UM event will be used as the type. This may or may not be appropriate.
If you do end up doing anything non-trivial with the classifier or mapper, I'd strongly recommend use of the 'diagnostic codec' to help in developing the classifier or mapper rules. This is a codec you can put anywhere in the chain of codecs that will log every event going through it, so you can see how your rules are operating by seeing what happens before and after classification/mapping. You can read about it in the documentation, but it's usually as simple as putting '- diagnosticCodec' somewhere in your chain. I've found it absolutely invaluable when debugging connectivity chains.

You want your event type to look like:
event type1 {
sequence<sequence<string> > data;
}
It's not possible in the mapper directly to convert to your type2/type1 schema, but you could write your own codec to do that, or do post-filtering in EPL.
HTH,
Matt

Related

TypeScript types serialisation/deserialization in localstorage

I have a TypeScript app. I use localStorage during development to store my objects, and I have a problem with deserialization.
I have an object meeting of type MeetingModel:
export interface MeetingModel {
date: moment.Moment; // from the library momentjs
}
I store this object in localStorage using JSON.stringify(meeting).
I suppose that stringify calls moment.toJSON(), which returns an ISO string, hence the value stored is: {"date":"2016-12-26T15:03:54.586Z"}.
When I retrieve this object, I do:
const stored = window.localStorage.getItem("meeting");
const meeting: MeetingModel = JSON.parse(stored);
The problem is: meeting.date contains a string instead of a Moment!
So, first, I'm wondering why TypeScript lets this happen. Why can I assign a string value where a Moment is expected, and the compiler agrees?
Second, how can I restore my objects from plain JSON objects (aka strings) into TypeScript types?
I can create a factory of course, but when my object database will grow up it will be a pain in the *** to do all this work.
Maybe there is a solution for better storing in the local storage in the first place?
Thank you
1) TypeScript is optionally typed. That means there are ways around the strictness of the type system. The any type allows you to do dynamic typing. This can come in very handy if you know what you are doing, but of course you can also shoot yourself in the foot.
This code will compile:
var x: string = <any> 1;
What is happening here is that the number 1 is cast to any, which to TypeScript means it will just assume you as a developer know what it is and how to use it. Since the any type is then assigned to a string, TypeScript is absolutely fine with it, even though you are likely to get errors during run-time, just like when you make a mistake coding plain JavaScript.
Of course this is by design. TypeScript types only exist during compile time. What kind of string you put in JSON.parse is unknowable to TypeScript, because the input string only exists during run-time and can be anything. Hence the any type. TypeScript does offer so-called type guards. Type guards are bits of code that are understood during compile-time as well as run-time, but that is beyond the scope of your question (Google it if you're interested).
2) Serializing and deserializing data is usually not as simple as calling JSON.stringify and JSON.parse. Most type information is lost to JSON, and typically the way you want to store objects (in memory) during run-time is very different from the way you want to store them for transfer or storage (in memory, on disk, or any other medium). For instance, during run-time you might need lookup tables, user/session state, private fields, library-specific properties, while in storage you might want version numbers, timestamps, metadata, different types of normalization, etc. You can JSON.stringify anything you want in JavaScript land, but that does not necessarily mean it is a good idea. You might want to design how you actually store data. For example, an ISO string looks pretty, but takes a lot of bytes. If you have just a few that does not matter, but when you are transferring millions a second you might want to consider another format.
My advice to you would be to define interfaces for the objects you want to save and, like moment, create a .toJSON method on your model object, which will return the DTO (Data Transfer Object) that you can simply serialize with JSON.stringify. Then on the way back you cast the any output of JSON.parse to your DTO and then convert it back to your model with a factory function or constructor of your creation. That might seem like a lot of boilerplate, but in my experience it is totally worth it, because now you are in control of what gets stored, and that gives you a lot of flexibility to change your model without getting deserialization problems.
Good luck!
You could use the reviver feature of JSON.parse to convert the string back to a moment:
JSON.parse(input, (key, value) => {
    if (key == "date") {
        return parseStringAsMoment(value);
    } else {
        return value;
    }
});
Check browser support for reviver, though, as it's not the same as basic JSON.parse

Deserialize an anonymous JSON array?

I have an anonymous array which I want to deserialize; here is an example showing the first array objects:
[
{ "time":"08:55:54",
"date":"2016-05-27",
"timestamp":1464332154807,
"level":3,
"message":"registerResourcePath ('', '/sap/bc/ui5_ui5/ui2/ushell/resources/')",
"details":"","component":"sap.ui.ModuleSystem"},
{"time":"08:55:54","date":"2016-05-27","timestamp":1464332154808,"level":3,"message":"URL prefixes set to:","details":"","component":"sap.ui.ModuleSystem"},
{"time":"08:55:54","date":"2016-05-27","timestamp":1464332154808,"level":3,"message":" (default) : /sap/bc/ui5_ui5/ui2/ushell/resources/","details":"","component":"sap.ui.ModuleSystem"}
]
I tried deserializing using CL_TREX_JSON_SERIALIZER, but it is corrupt and does not work with my JSON, here is why
Then I tried /UI2/CL_JSON, but it needs a "structure" that perfectly fits the object given by the JSON object. "Structure" means in my case an internal table of objects with the attributes time, date, timestamp, level, message and details. And there was the problem: it does not properly handle references and uses the class description to describe the field assigned to the field-symbol. Since I cannot have a list of objects but only a list of references to objects, that solution also doesn't work.
As a third attempt I tried with the CALL TRANSFORMATION as described by Horst Keller, but with this method I was not able to read in an anonymous array, and here is why
My major points:
I do not want to change the JSON, since that is what I get from sap.ui.log
I prefer to use built-in functionality and not a third-party framework
Your problem comes not from the anonymity of the array, but from the awkwardness of the SAP JSON (de)serializer, which doesn't respect the double quotes that enclose JSON attributes. The issue is thoroughly described in this answer.
If you don't want to change your JSON on-the-fly, the only way you have is to change the CL_TREX_JSON_DESERIALIZER class like this.
/UI5/CL_JSON_PARSER parses JSON with an unknown format.
Note that it has "for internal use" written on it so many times that you probably should take that seriously and clone its code to keep your own frozen copy.

What is the best way to ignore unwanted fields in a JSON payload from a PUT/PATCH using Golang?

I have a situation where people consuming our API will need to do a partial update of my resource. I understand that HTTP clearly specifies that this is a PATCH operation, even though people on our side are used to sending a PUT request for this, and that's how the legacy code is built.
For example, imagine the following simple struct:
type Person struct {
Name string
Age int
Address string
}
On a POST request, I will provide a payload with all three values (Name, Age, Address) and validate them accordingly on my Golang backend. Simple.
On a PUT/PATCH request though, we know that, for instance, a name never changes. But say I would like to change the age, then I would simply send a JSON payload containing the new age:
PUT /person/1 {age:30}
Now to my real question:
What is the best practice to prevent name from being modified, intentionally or unintentionally, when a consumer of our API sends a JSON payload containing the name field?
Example:
PUT /person/1 {name:"New Name", age:35}
Possible solutions I've thought of, but don't really like, are:
In my validator method, I would either forcibly remove the unwanted name field OR respond with an error message saying that name is not allowed.
Create a DTO object/struct that would be pretty much a reduced version of my Person struct and then unmarshal my JSON payload into it, for instance:
type PersonPut struct {
Age int
Address string
}
In my opinion this would add needless extra code and logic to abstract the problem; however, I don't see any more elegant solution.
I honestly don't like those two approaches and I would like to know if you guys faced the same problem and how you solved it.
Thanks!
The first solution you brought up is a good one. Some well-known frameworks implement similar logic.
As an example, recent Rails versions come with a built-in solution to prevent users from adding extra data to the request that would cause the server to update the wrong fields in the database. It is a kind of whitelist, implemented by the ActionController::Parameters class.
Let's suppose we have a controller class as below. For the purposes of this explanation it contains two update actions, but you wouldn't have both in real code.
class PeopleController < ActionController::Base
# 1st version - Unsafe, it will raise an exception. Don't do it
def update
person = current_account.people.find(params[:id])
person.update!(params[:person])
redirect_to person
end
# 2nd version - Updates only permitted parameters
def update
person = current_account.people.find(params[:id])
person.update!(person_params) # call to person_params method
redirect_to person
end
private
def person_params
params.require(:person).permit(:name, :age)
end
end
Since the second version allows only permitted values, it'll block the user from changing the payload and sending JSON containing a new password value:
{ name: "acme", age: 25, password: 'account-hacked' }
For more details, see Rails docs: Action Controller Overview and ActionController::Parameters
If the name cannot be written it is not valid to provide it for any update request. I would reject the request if the name was present. If I wanted to be more lenient, I might consider only rejecting the request if name is different from the current name.
I would not silently ignore a name which was different from the current name.
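For reference, a minimal sketch of that rejection approach in Go (the handler and the personUpdate struct are illustrative assumptions, not code from the question). It relies on the standard library's json.Decoder.DisallowUnknownFields, so any property not declared on the update struct, such as name, makes the decode fail:
package main

import (
	"encoding/json"
	"net/http"
)

// personUpdate deliberately omits Name, so a payload containing "name"
// shows up as an unknown field and is rejected.
type personUpdate struct {
	Age     int    `json:"age"`
	Address string `json:"address"`
}

func updatePerson(w http.ResponseWriter, r *http.Request) {
	dec := json.NewDecoder(r.Body)
	dec.DisallowUnknownFields() // any field not in personUpdate, e.g. "name", makes Decode fail

	var upd personUpdate
	if err := dec.Decode(&upd); err != nil {
		http.Error(w, "invalid update: "+err.Error(), http.StatusBadRequest)
		return
	}
	// ... apply upd.Age / upd.Address to the stored person ...
	w.WriteHeader(http.StatusNoContent)
}

func main() {
	http.HandleFunc("/person/1", updatePerson)
	http.ListenAndServe(":8080", nil)
}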
This can be solved by decoding the JSON body into a map[string]json.RawMessage first. The json.RawMessage type is useful for delaying the actual decoding. Afterwards, a whitelist can be applied on the map[string]json.RawMessage map, ignoring unwanted properties and only decoding the json.RawMessages of the properties we want to keep.
The process of decoding the whitelisted JSON body into a struct can be automated using the reflect package; an example implementation can be found here.
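To make that concrete, here is a minimal sketch of the json.RawMessage whitelist described above (the Person type, the allowed set and applyUpdate are illustrative assumptions; this is not the linked reflect-based implementation):
package main

import (
	"encoding/json"
	"fmt"
)

type Person struct {
	Name    string `json:"name"`
	Age     int    `json:"age"`
	Address string `json:"address"`
}

// allowed is the whitelist of JSON properties a PUT/PATCH may change.
var allowed = map[string]bool{"age": true, "address": true}

// applyUpdate decodes the body into raw messages, drops keys that are not
// whitelisted, and only then decodes the remaining fields onto the person.
func applyUpdate(p *Person, body []byte) error {
	var raw map[string]json.RawMessage
	if err := json.Unmarshal(body, &raw); err != nil {
		return err
	}
	for k := range raw {
		if !allowed[k] {
			delete(raw, k) // e.g. silently drop "name"
		}
	}
	filtered, err := json.Marshal(raw)
	if err != nil {
		return err
	}
	return json.Unmarshal(filtered, p) // only whitelisted fields overwrite p
}

func main() {
	p := Person{Name: "Old Name", Age: 20, Address: "Somewhere"}
	if err := applyUpdate(&p, []byte(`{"name":"New Name","age":35}`)); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", p) // Name stays "Old Name", Age becomes 35
}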
I am not proficient in Golang, but I believe a good strategy would be to make your name field read-only.
For instance, in a strictly object-oriented language such as Java/.NET/C++ you can just provide a getter but not a setter.
Maybe there is some accessor configuration for Golang just like Ruby has....
If it is read-only then it shouldn't matter if a spare value is received; it should just be ignored. But again, not sure if Golang supports it.
I think the clean way is to put this logic inside the PATCH handler. There should be some logic that updates only the fields that you want. It is easier if you unpack into a map[string]string and only iterate over the fields that you want to update. Additionally, you could decode the JSON into a map, delete all the fields that you don't want to be updated, re-encode it as JSON and then decode it into your struct.

How do I leverage Scala's type system to maintain type information for a dynamic set of attributes

I've run into a use case where I need to parse a bunch of information from a txt file. The meat of the payload is a bunch of key:value pairs, none of which I know at development time.
Wombat
Area: "Northern Alberta"
Tank Level (ft): -3.395
Temperature (C): 19.3
Batt Voltage: 13.09
Last Maintained: 2012-01-01
Secured: "Yes"
As you can see, there is potential for Strings, Numbers, Dates and Booleans. There is also a use case where the user needs to create rules for certain attributes, things like:
When the Tank Level exceeds n, please notify some.user@someplace.com
When a Site that contains "Alberta" is Not Secured, please notify some.user@someplace.com
Depending on the type of attribute, the available rule types will differ. I may also need to do some kind of aggregation on the numeric types. Anyway, to make a long story short, I need the type information. So what kind of data structure is best?
Initially I was going to go with distinct tuples.
val stringAttributes: Array[(String, String)]
val doubleAttributes: Array[(String, Double)]
val dateAttributes: Array[(String, Date)]
Now that seems wrong, or at the very least ugly. Then I thought maybe something like:
val attributes: Array[(String, Any)]
Now I have a pattern match in many places. Also note I'm using a JSON protocol for the web application and the database (MongoDB). It'd be convenient to give the front end something like this:
{
site: "Wombat",
attributes: [
{ "Area": "Northern Alberta" },
{ "Tank Level (ft)": -3.395 },
{ "Temperature (C)": 19.3 }
]
}
But in the back end, do I encode the types? Do I parse raw JSON? In the end, I'm looking for the best way to maintain type information for a dynamic set of attributes while supporting JSON to both the web client and the database.
+1 on @david's comment.
Why not use JSON end-to-end and derive the type desired for each individual rule? Then use a single function to transform a JSON value to the correct type (using defaults or Option when it doesn't really match).
Then you can use something like this to run the ruleset over all of the parsed objects:
for (obj <- parseObjects(); rule <- rules) {
val value = coerce(obj, rule.desiredType)
rule(value)
}
Or something similar, such as having the rule itself call the coercion code. It seems like you don't really need the objects to have a type until you're doing rule processing, so why try to?
P.S.
Typesafe Config may be able to parse the file for you already, although it looks to be a simple format to parse.

Parsing JSON objects of unknown type with AutoBean on GWT

My server returns a list of objects in JSON. They might be Cats or Dogs, for example.
When I know that they'll all be Cats, I can set the AutoBeanCodex to work easily. When I don't know what types they are, though... what should I do?
I could give all of my entities a type field, but then I'd have to parse each entity before passing it to the AutoBeanCodex, which borders on defeating the point. What other options do I have?
Just got to play with this the other day, and fought it for a few hours, trying @Category methods and others, until I found this: You can create a property of type Splittable, which represents the underlying transport type that has some encoding for booleans/Strings/Lists/Maps. In my case, I know some enveloping type that goes over the wire at design time, and based on some other property, some other field can be any number of other autobeans.
You don't even need to know the type of the other bean at compile time, you could get values out using Splittable's methods, but if using autobeans anyway, it is nice to define the data that is wrapped.
interface Envelope {
String getStatus();
String getDataType();
Splittable getData();
}
(Setters might be desired if you are sending data as well as receiving - encoding a bean into a Splittable to send it in an envelope is even easier than decoding it)
The JSON sent over the wire is decoded (probably using AutoBeanCodex) into the Envelope type, and after you've decided what type must be coming out of the getData() method, call something like this to get the nested object out
SpecificNestedBean bean = AutoBeanCodex.decode(factory,
SpecificNestedBean.class,
env.getData()).as();
The Envelope type and the nested types (in factory above) don't even need to be the same AutoBeanFactory type. This could allow you to abstract out the reading/writing of envelopes from the generic transport instance, and use a specific factory for each dataType string property to decode the data's model (and nested models).