So I've read all the posts on String Based Enums in TypeScript, but I couldn't find a solution that meets my requirements. Those would be:
Enums that provide code completion
Enums that can be iterated over
Not having to specify an element twice
String based
The possibilities I've seen so far for enums in TypeScript are:
enum MyEnum {bla, blub}: This fails at being string based, so I can't simply read from JSON, which is string based...
type MyEnum = 'bla' | 'blub': Not iterable and no code completion
Do-it-yourself class MyEnum { static get bla(): string { return "bla"; } static get blub(): string { return "blub"; } }: Specifies elements twice
So here come the questions:
Is there no way to satisfy those requirements simultaneously? If not, will it be possible in the future?
Why didn't they make enums string based?
Did someone experience similar problems and how did you solve them?
I think that implementing Enum in a C-like style with numbers is fine, because an Enum (similar to a Symbol) is usually used to declare a value that is uniquely identifiable at development time. How the machine represents that value at run time doesn't really concern the developer.
But what we developers sometimes want (because we're all lazy and still want to have all the benefits!) is to use the Enum as an API, or with an API that does not share that Enum with us, even though the API is essentially an Enum because the only valid values of a property are foo and bar.
I guess this is the reason why some languages have string based Enums :)
How TypeScript handles Enums
If you look at the transpiled JavaScript you can see that TypeScript just uses a plain JavaScript Object to implement an Enum. For example:
enum Color {
    Red,
    Green,
    Blue
}
will be transpiled to:
{
    0: "Red",
    1: "Green",
    2: "Blue",
    Blue: 2,
    Green: 1,
    Red: 0
}
This means you can access the string value like Color[Color.Red]. You will still have code completion and you do not have to specify the values twice. But you cannot just do Object.keys(Color) to iterate over the Enum, because the values exist "twice" on the object.
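One workaround (my own sketch, not part of the original answer) is to filter out the reverse-map keys, which are exactly the keys that parse as numbers:

enum Color {
    Red,
    Green,
    Blue
}

// Keep only the name keys; the reverse-map keys ("0", "1", "2") parse as numbers.
const names = Object.keys(Color).filter(key => isNaN(Number(key)));

for (const name of names) {
    console.log(name, Color[name as keyof typeof Color]);
}
// Prints: Red 0, Green 1, Blue 2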
Why didn't they make enums string based
To be clear, Enums are both number and string based, in that direct access gives a number and the reverse map gives a string (more on this).
Meeting your requirement
Your key reason for ruling out raw enums is:
so I can't simply read from JSON, which is string based...
You will experience the same thing e.g. when reading Dates, because JSON has no date data type. You would use new Date("someServerDateTime") to convert these.
You would use the same strategy to go from a server-side enum (string) to a TS enum (number). Easily done thanks to the reverse lookup MyEnum["someServerString"].
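As a minimal sketch of that conversion, assuming a hypothetical server payload whose color field holds an enum member name:

enum Color {
    Red,
    Green,
    Blue
}

// Hypothetical JSON from the server; the enum value arrives as a string.
const payload = JSON.parse('{"color": "Green"}');

// Reverse lookup: member name (string) -> TS enum member (number).
const color: Color = Color[payload.color as keyof typeof Color];

console.log(color === Color.Green); // true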
Hydration
This process of converting server-side data to client-side active data is sometimes called Hydration. My favorite library for this at the moment is https://github.com/pleerock/class-transformer
I personally handle this stuff myself at the server-access level, i.e. I hand-write an API layer that makes the XHR call and does the serialization.
At my last job we automated this with code generation that did even more than that (supported common validation patterns between server and client code).
How would I use a variable in a Go struct tag?
This works:
type Shape struct {
    Type string `json:"type"`
}
This does not work:
const (
    TYPE = "type"
)

type Shape struct {
    Type string fmt.Sprintf("json:\"%s\"", TYPE)
}
In the first example, I am using a string literal, which works. In the second example, I am building a string using fmt.Sprintf, and I seem to be getting an error:
syntax error: unexpected name, expecting }
Here it is on the Go playground:
https://play.golang.org/p/lUmylztaFg
How would I use a variable in a Go struct tag? You wouldn't; it's not allowed by the language. You can't use an expression that is evaluated at run time in place of a compile-time string literal as an annotation on a struct field. As far as I know, nothing of the sort works in any compiled language.
With the introduction of go generate, it is possible to achieve this.
However, go generate essentially makes compilation a two-phase process: phase 1 generates the new code, phase 2 compiles and links, etc.
There are a few limitations with using go generate:
Your library will not be 'go get'-able unless you run go generate every time it is needed and check in the result, since go generate needs to be run explicitly before go build.
This is a compile-time process, so you will not be able to do it at run time using run-time information. If you really must do this at run time, and in your case you are just adding JSON serialization annotations, you could consider using a map, as sketched below.
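To illustrate the map idea, here is a minimal sketch reusing the TYPE constant from the question; the shape data itself is made up:

package main

import (
    "encoding/json"
    "fmt"
)

const TYPE = "type"

func main() {
    // The key comes from a constant, which struct tags cannot do.
    shape := map[string]string{TYPE: "circle"}

    out, err := json.Marshal(shape)
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println(string(out)) // {"type":"circle"}
}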
String constants/variables are not allowed in tag values in order to keep things simple, and I support that. With this limit, however, we either need to use reflection to retrieve a tag value, which is costly, or we type string literals everywhere in the project, which may lead to bugs because of typos.
Solution
We can generate the tag values as string constants and then use these constants throughout the project. This avoids reflection (saving the performance cost), is more maintainable, and removes the possibility of bugs caused by typos.
The go/ast package is an amazing tool for analysing and generating Go code. For example:
type user struct {
    Name string `json:"name"`
    Age  int    `json:"age"`
}
We can generate constants for the user struct as below:
const (
    UserNameJson = "name"
    UserAgeJson  = "age"
)
You may find tgcon helpful to generate the field tag value as const.
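As a rough, illustrative sketch of how such a generator could work with the go/ast package (not necessarily how tgcon does it), assuming the source is available as a string:

package main

import (
    "fmt"
    "go/ast"
    "go/parser"
    "go/token"
    "reflect"
    "strings"
)

// Source to analyse; a real generator would read it from a file.
const src = "package model\n\n" +
    "type User struct {\n" +
    "\tName string `json:\"name\"`\n" +
    "\tAge  int    `json:\"age\"`\n" +
    "}\n"

func main() {
    fset := token.NewFileSet()
    file, err := parser.ParseFile(fset, "model.go", src, 0)
    if err != nil {
        panic(err)
    }

    fmt.Println("const (")
    ast.Inspect(file, func(n ast.Node) bool {
        ts, ok := n.(*ast.TypeSpec)
        if !ok {
            return true
        }
        st, ok := ts.Type.(*ast.StructType)
        if !ok {
            return true
        }
        for _, field := range st.Fields.List {
            if field.Tag == nil || len(field.Names) == 0 {
                continue
            }
            // field.Tag.Value includes the surrounding backquotes.
            tag := reflect.StructTag(strings.Trim(field.Tag.Value, "`"))
            if v, found := tag.Lookup("json"); found {
                fmt.Printf("\t%s%sJson = %q\n", ts.Name.Name, field.Names[0].Name, v)
            }
        }
        return true
    })
    fmt.Println(")")
}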
I have a TypeScript app. I use localStorage for development purposes to store my objects, and I have a problem at deserialization.
I have an object meeting of type MeetingModel:
export interface MeetingModel {
    date: moment.Moment; // from the library momentjs
}
I store this object in the localStorage using JSON.stringify(meeting).
I suppose that stringify calls moment's toJSON(), which returns an ISO string, hence the value stored is: {"date":"2016-12-26T15:03:54.586Z"}.
When I retrieve this object, I do:
const stored = window.localStorage.getItem("meeting");
const meeting: MeetingModel = JSON.parse(stored);
The problem is: meeting.date contains a string instead of a moment!
So, first, I'm wondering why TypeScript lets this happen. Why can I assign a string value where a Moment is expected, with the compiler's agreement?
Second, how can I restore my objects from plain JSON objects (aka strings) into TypeScript types?
I can create a factory of course, but when my object database grows it will be a pain in the *** to do all this work.
Maybe there is a solution for better storing in the local storage in the first place?
Thank you
1) TypeScript is optionally typed. That means there are ways around the strictness of the type system. The any type allows you to do dynamic typing. This can come in very handy if you know what you are doing, but of course you can also shoot yourself in the foot.
This code will compile:
var x: string = <any> 1;
What is happening here is that the number 1 is cast to any, which to TypeScript means it will just assume you as a developer know what it is and how to use it. Since the any type is then assigned to a string, TypeScript is absolutely fine with it, even though you are likely to get errors during run time, just like when you make a mistake coding JavaScript.
Of course this is by design. TypeScript types only exist during compile time. What kind of string you put in JSON.parse is unknowable to TypeScript, because the input string only exists during run-time and can be anything. Hence the any type. TypeScript does offer so-called type guards. Type guards are bits of code that are understood during compile-time as well as run-time, but that is beyond the scope of your question (Google it if you're interested).
2) Serializing and deserializing data is usually not as simple as calling JSON.stringify and JSON.parse. Most type information is lost to JSON, and typically the way you want to store objects (in memory) during run time is very different from the way you want to store them for transfer or storage (in memory, on disk, or any other medium). For instance, during run time you might need lookup tables, user/session state, private fields, library-specific properties, while in storage you might want version numbers, timestamps, metadata, different types of normalization, etc. You can JSON.stringify anything you want in JavaScript land, but that does not necessarily mean it is a good idea. You might want to design how you actually store data. For example, an ISO string looks pretty, but takes a lot of bytes. If you have just a few, that does not matter, but when you are transferring millions a second you might want to consider another format.
My advice to you would be to define interfaces for the objects you want to save and, like moment, create a toJSON method on your model object, which will return the DTO (Data Transfer Object) that you can simply serialize with JSON.stringify. Then on the way back, you cast the any output of JSON.parse to your DTO and convert it back to your model with a factory function or constructor of your creation. That might seem like a lot of boilerplate, but in my experience it is totally worth it, because now you are in control of what gets stored, and that gives you a lot of flexibility to change your model without getting deserialization problems.
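Here is a minimal sketch of that pattern for the MeetingModel above; the DTO shape and the toDto/fromDto names are my own invention, assuming momentjs:

import * as moment from "moment";

interface MeetingDto {
    date: string; // ISO 8601 string, as produced by moment's toJSON()
}

interface MeetingModel {
    date: moment.Moment;
}

function toDto(meeting: MeetingModel): MeetingDto {
    return { date: meeting.date.toJSON() };
}

function fromDto(dto: MeetingDto): MeetingModel {
    return { date: moment(dto.date) };
}

// Round trip through localStorage.
const meeting: MeetingModel = { date: moment() };
window.localStorage.setItem("meeting", JSON.stringify(toDto(meeting)));

const stored = window.localStorage.getItem("meeting");
if (stored) {
    const restored: MeetingModel = fromDto(JSON.parse(stored) as MeetingDto);
    console.log(moment.isMoment(restored.date)); // true
}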
Good luck!
You could use the reviver feature of JSON.parse to convert the string back to a moment:
JSON.parse(input, (key, value) => {
    if (key === "date") {
        return moment(value); // parse the ISO string back into a moment
    }
    return value;
});
Check browser support for reviver, though, as it's not the same as basic JSON.parse
According to http://www.cocos2d-x.org/wiki/Value,
Value can handle strings as well as int, float, bool, etc.
I'm confused when I have to make a choice between using std::string or Value.
In what circumstances should I use Value over std::string, and vice versa?
I think you have misunderstood the Value object. As written in the documentation you linked to:
cocos2d::Value is a wrapper class for many primitives ([...] and std::string) plus [...]
So really Value is an object that wraps a bunch of other types of variables, which allows cocos2d-x to have loosely-typed structures like the ValueMap (a hash of strings to Values - where each Value can be a different type of object) and ValueVector (a list of Values).
For example, if you wanted to have a configuration hash with keys that are all strings, but with a bunch of different values - in vanilla C++, you would have to create a separate data structure for each type of value you want to save, but with Value you can just do:
std::unordered_map<std::string, cocos2d::Value> configuration;
configuration["numEnemies"] = Value(10);
configuration["gameTitle"] = Value("Super Mega Raiders");
It's just a mechanism to create some loose typing in C++ which is a strongly-typed language.
You can save a string in a Value with something like this:
std::string name = "Vidur";
Value nameVal = Value(name);
And then later retrieve it with:
std::string retrievedName = nameVal.asString();
If you attempt to parse a Value as the wrong type, it will throw an error at run time, since this isn't something that the compiler can figure out.
Do let me know if you have any questions.
I have a situation where people consuming our API will need to do a partial update on my resource. I understand that HTTP clearly specifies this as a PATCH operation, even though people on our side are used to sending a PUT request for it, and that's how the legacy code is built.
For exemplification, imagine the following simple struct:
type Person struct {
    Name    string
    Age     int
    Address string
}
On a POST request, I will provide a payload with all three values (Name, Age, Address) and validate them accordingly on my Golang backend. Simple.
On a PUT/PATCH request though, we know that, for instance, a name never changes. But say I would like to change the age, then I would simply send a JSON payload containing the new age:
PUT /person/1 {age:30}
Now to my real question:
What is the best practice to prevent name from being intentionally or unintentionally updated when a consumer of our API sends a JSON payload containing the name field?
Example:
PUT /person/1 {name:"New Name", age:35}
Possible solutions I thought of, but don't actually like, are:
In my validator method, I would either forcibly remove the unwanted name field OR respond with an error message saying that name is not allowed.
Create a DTO object/struct that would be pretty much an extension of my Person struct and then unmarshal my JSON payload into it, for instance:
type PersonPut struct {
    Age     int
    Address string
}
In my opinion this would add needless extra code and logic to abstract the problem; however, I don't see any other elegant solution.
I honestly don't like those two approaches and I would like to know if you guys faced the same problem and how you solved it.
Thanks!
The first solution you brought up is a good one. Some well-known frameworks implement similar logic.
As an example, the latest Rails versions come with a built-in solution to prevent users from adding extra data to the request, causing the server to update the wrong fields in the database. It is a kind of whitelist implemented by the ActionController::Parameters class.
Let's suppose we have a controller class as below. For the purpose of this explanation, it contains two update actions. But you won't see that in real code.
class PeopleController < ActionController::Base
  # 1st version - Unsafe, it will raise an exception. Don't do it
  def update
    person = current_account.people.find(params[:id])
    person.update!(params[:person])
    redirect_to person
  end

  # 2nd version - Updates only permitted parameters
  def update
    person = current_account.people.find(params[:id])
    person.update!(person_params) # call to person_params method
    redirect_to person
  end

  private

  def person_params
    params.require(:person).permit(:name, :age)
  end
end
Since the second version allows only permitted values, it blocks a user who tampers with the payload and sends JSON containing, say, a new password value:
{ name: "acme", age: 25, password: 'account-hacked' }
For more details, see Rails docs: Action Controller Overview and ActionController::Parameters
If the name cannot be written it is not valid to provide it for any update request. I would reject the request if the name was present. If I wanted to be more lenient, I might consider only rejecting the request if name is different from the current name.
I would not silently ignore a name which was different from the current name.
This can be solved by decoding the JSON body into a map[string]json.RawMessage first. The json.RawMessage type is useful for delaying the actual decoding. Afterwards, a whitelist can be applied on the map[string]json.RawMessage map, ignoring unwanted properties and only decoding the json.RawMessages of the properties we want to keep.
The process of decoding the whitelisted JSON body into a struct can be automated using the reflect package; an example implementation can be found here.
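Since the linked example is not reproduced here, below is a minimal hand-written sketch of the whitelist idea (without the reflect automation), reusing the Person struct from the question:

package main

import (
    "encoding/json"
    "fmt"
)

type Person struct {
    Name    string
    Age     int
    Address string
}

// updatePerson decodes the body into map[string]json.RawMessage, skips
// keys that are not whitelisted, and only then decodes the kept values.
func updatePerson(p *Person, body []byte) error {
    var raw map[string]json.RawMessage
    if err := json.Unmarshal(body, &raw); err != nil {
        return err
    }
    for key, value := range raw {
        switch key { // the whitelist: "name" is deliberately absent
        case "age":
            if err := json.Unmarshal(value, &p.Age); err != nil {
                return err
            }
        case "address":
            if err := json.Unmarshal(value, &p.Address); err != nil {
                return err
            }
        }
    }
    return nil
}

func main() {
    p := Person{Name: "Alice", Age: 29, Address: "Main St"}
    body := []byte(`{"name":"New Name","age":35}`)
    if err := updatePerson(&p, body); err != nil {
        fmt.Println(err)
        return
    }
    fmt.Printf("%+v\n", p) // Name stays "Alice"; Age becomes 35
}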
I am not proficient in Golang, but I believe a good strategy would be to make your name field read-only.
For instance, in a strictly object-oriented language such as Java/.NET/C++ you can provide just a getter but not a setter.
Maybe there is some accessor configuration for Golang just like Ruby has....
If it is read-only then it shouldn't bother with receiving a spare value; it should just ignore it. But again, I'm not sure if Golang supports that.
I think the clean way is to put this logic inside the PATCH handler. There should be some logic that updates only the fields that you want. It is easier if you unpack into a map[string]interface{} and only iterate over the fields that you want to update. Additionally, you could decode the JSON into a map, delete all the fields that you don't want updated, re-encode it as JSON, and then decode that into your struct, as sketched below.
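A hedged sketch of that decode-delete-re-encode variant; a generic map is used so mixed value types survive the round trip:

package main

import (
    "encoding/json"
    "fmt"
)

type Person struct {
    Name    string
    Age     int
    Address string
}

func main() {
    person := Person{Name: "Alice", Age: 29}
    body := []byte(`{"name":"New Name","age":35}`)

    // Decode into a generic map, delete the protected fields,
    // re-encode, and finally decode onto the existing struct.
    var m map[string]interface{}
    if err := json.Unmarshal(body, &m); err != nil {
        fmt.Println(err)
        return
    }
    delete(m, "name")

    cleaned, err := json.Marshal(m)
    if err != nil {
        fmt.Println(err)
        return
    }
    if err := json.Unmarshal(cleaned, &person); err != nil {
        fmt.Println(err)
        return
    }
    fmt.Printf("%+v\n", person) // Name stays "Alice"; Age becomes 35
}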
I have an interesting problem to solve that would be helped by successfully casting objects created by LINQ to SQL into a single master object that I could pass around. Here is the scenario at a high level.
I have a number of stored procedures that fetch data and all return the exact same columns. The params into the procs and the logic vary greatly, so a single proc will not work. LINQ then creates a strongly typed object for each proc, which is used throughout my application as parameter and return values.
I am using these strongly typed objects, as noted above, as parameters and return values in a series of filters used to analyze stocks. My client would like to change the order of the filters. The issue is that each succeeding filter will only work on what passed the last filter.
Currently I am hard-coding my parameters, and if I could create a master object that I could cast any of these LINQ objects to, I could then always pass and return the master object.
I have read the materials available on the internet about casting between different types such as static to anonymous types or a list of integers and an array list containing objects representing integers, but I need to actually cast one object into another.
What general direction would I take to solve this problem of converting strongly typed objects generated by LINQ that are exactly the same into a single master object?
Thank you for any of your thoughts.
If all your LINQ objects have the same fields, you could have them implement an interface defined with those common fields. Then the calls to your filter methods can depend on the interface rather than a specific implementation. In other words, the parameters in the filter methods will be of the interface type rather than a LINQ class type.
e.g., where ICommonFields is an interface you define with all the common fields in each LINQ-to-SQL class:
public class Filterer
{
    public ICommonFields filterStuff(ICommonFields x)
    {
        // do stuff, then return the filtered result
        return x;
    }
}
or -
public class Filterer
{
    public T filterStuff<T>(T x)
        where T : class, ICommonFields, new()
    {
        // do stuff, then return the filtered result
        return x;
    }
}
I'd prefer the generic version, as T becomes the actual type rather than a reference through an interface - linq-to-sql has issues when using a type through an interface with query expressions.
Edit: sorry, it was late when I first wrote this response (likely excuse! :). Fixed my obvious mistake with the example code :)
Although there might be a way to do this with casting, I'm going to offer you a quick and dirty solution - and I'm assuming that your resultant objects are collection-based:
Given that all of your child objects share the same columns, go ahead and pick one of them to act as your master object - then simply iterate through the rows of your other LINQ objects and add them to the collection of your master object. If your resultant object is a strongly typed data table, then all you'd do is Add to the .Rows collection.
Additionally, you might be able to add the elements retrieved from some subsequent LINQ queries directly to your master object, depending upon how you write your SELECT clauses in LINQ.