PtrToStructure() generates SafeArrayTypeMismatchException

Folks,
Please help me chase down a SafeArrayTypeMismatchException that I'm getting. I need to pass a struct to an unmanaged DLL. One of the struct members is a variable-length array. The unmanaged code will populate it with data, then my C# code will use the data.
My approach is this:
Get an IntPtr for my structure using StructureToPtr(), doing the memory allocation too, of course.
Call the unmanaged function, passing the IntPtr as a parameter.
Get the populated structure back using PtrToStructure().
If, purely as an exercise, I call StructureToPtr() and PtrToStructure() back-to-back, there are no exceptions.
PtrToStructure() generates a SafeArrayTypeMismatchException if I put a call to the unmanaged DLL between StructureToPtr() and PtrToStructure(). The description for SafeArrayTypeMismatchException is "Mismatch has occurred between the runtime type of the array and the sub type recorded in the metadata."
Any suggestion or insight is really appreciated!
I can post my code if needed.
- Nick

The .NET marshaller can't handle C/C++ arrays of unknown size. An array in .NET always has a fixed size associated with it, but a C/C++ array is just a pointer to a memory block. The marshaller has no way to know how large the array returned from the C/C++ code is, so it throws an exception.
What it is trying to do in your case is marshal the array as a SafeArray, which is a COM type - an array that carries its own size - but your array isn't a SafeArray.
There isn't a way to get the marshaller to handle this automatically. Declare the struct member as an IntPtr, then manually create a .NET array and copy the values in (e.g. with Marshal.Copy). See this for an example of how to do that.

Related

A better solution to validate JSON unmarshal to nested structs

There appear to be a few options for validating the source JSON used when unmarshalling to a struct. By validating I mean three main things:
a required field exists in the JSON
the field is the correct type (e.g. don't force a string into an integer)
the field contains a valid value (value range / enum)
For nested structs, I simply mean where an attribute in one struct has the type of another struct:
type Example struct {
    Attr1 int        `json:"attr1"`
    Attr2 ExampleToo `json:"attr2"`
}

type ExampleToo struct {
    Attr3 int `json:"attr3"`
}
And this JSON would be valid:
{"attr1": 5, "attr2": {"attr3": 0}}
To keep this simple, I'll focus on integers. The concept of "zero values" is the first issue. I could create an UnmarshalJSON method, which is detected by JSON packages, including the standard encoding/json package. The problem with this approach is that it does not support nested structs: if ExampleToo has an UnmarshalJSON method, ExampleToo.UnmarshalJSON() is never called when unmarshalling to an Example object. It would be possible to write a method Example.UnmarshalJSON() that recursively handled validation, but that seems extremely complex, especially if ExampleToo is reused in many places.
There also appear to be packages like go-playground/validator where validation can be specified both as functions and as tags. However, this works on the struct after it is created, not on the JSON itself. So if an integer field is tagged validate:"required" and its value is 0, validation returns an error, because 0 is both a valid value and the "zero value" for integers.
An example of the latter here: https://go.dev/play/p/zqSUksPzUiq
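To make the clash concrete, here is a minimal sketch (assuming the go-playground/validator/v10 import path, with Example pared down to one field): attr1 is explicitly present in the JSON with the legitimate value 0, yet the required rule rejects it, because 0 is the zero value for int.

package main

import (
    "encoding/json"
    "fmt"

    "github.com/go-playground/validator/v10"
)

type Example struct {
    Attr1 int `json:"attr1" validate:"required"`
}

func main() {
    var e Example
    // attr1 is present and explicitly 0 - a perfectly valid value.
    if err := json.Unmarshal([]byte(`{"attr1": 0}`), &e); err != nil {
        panic(err)
    }

    // "required" cannot tell "absent" from "zero value", so this fails:
    // Key: 'Example.Attr1' Error:Field validation for 'Attr1' failed on the 'required' tag
    fmt.Println(validator.New().Struct(e))
}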
I could also use pointers for everything, checking for nil as missing values. The main problem with that is that it requires dereferencing on each use and is a pretty uncommon practice for things like integers and strings.
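For illustration, the pointer approach in miniature (a sketch, not a recommendation):

package main

import (
    "encoding/json"
    "fmt"
)

type Example struct {
    // nil means "absent from the JSON"; a pointer to 0 means "present and zero".
    Attr1 *int `json:"attr1"`
}

func main() {
    var e Example
    if err := json.Unmarshal([]byte(`{"attr1": 0}`), &e); err != nil {
        panic(err)
    }
    if e.Attr1 == nil {
        fmt.Println("attr1 missing")
        return
    }
    fmt.Println(*e.Attr1) // every use pays the dereference tax; prints 0
}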
One thing I have also considered is a "sister struct" that uses pointers to do validation for required fields. The process would basically be: write a validation method for each struct, validate the sister struct, and, if that passes, deserialize into the main struct (without pointers). I haven't started on this - it's just a concept I've thought about - but I'm hoping there are better validation options.
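A rough sketch of that sister-struct concept (hypothetical code, names invented here): validate a pointer-based twin first, then decode into the real struct through a local type alias so the inner Unmarshal doesn't recurse back into the custom method.

package main

import (
    "encoding/json"
    "errors"
    "fmt"
)

type Example struct {
    Attr1 int `json:"attr1"`
}

// exampleSister mirrors Example with pointers, purely for presence checks.
type exampleSister struct {
    Attr1 *int `json:"attr1"`
}

func (e *Example) UnmarshalJSON(b []byte) error {
    var s exampleSister
    if err := json.Unmarshal(b, &s); err != nil {
        return err
    }
    if s.Attr1 == nil {
        return errors.New("attr1 is required")
    }
    // plain has Example's fields but not its methods, so this inner
    // call does not recurse back into Example.UnmarshalJSON.
    type plain Example
    return json.Unmarshal(b, (*plain)(e))
}

func main() {
    var e Example
    fmt.Println(json.Unmarshal([]byte(`{}`), &e))           // attr1 is required
    fmt.Println(json.Unmarshal([]byte(`{"attr1": 0}`), &e)) // <nil>
}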
So... is there a better way to do JSON/YAML input validation on nested structs? I'm happy to mix approaches where, say, UnmarshalJSON does some work like verifying fields exist, but I'd like to hand control back to the library so it continues to call UnmarshalJSON on nested structs. I'd also rather defer to the JSON library for casting values into the struct, etc.

Excessive use of map[string]interface{} in Go development?

The majority of my development experience has been with dynamically typed languages like PHP and JavaScript. I've been practicing with Golang for about a month now by re-creating some of my old PHP/JavaScript REST APIs in Golang. I feel like I'm not doing things the Golang way most of the time; or, more generally, I'm not used to working with strongly typed languages. I feel like I'm making excessive use of map[string]interface{} (and slices of them) to box up data as it comes in from HTTP requests or when it gets shipped out as JSON output. So what I'd like to know is: does what I'm about to describe go against the philosophy of Golang development? Am I breaking the principles of developing with strongly typed languages?
Right now, about 90% of the program flow for the REST APIs I've rewritten in Golang can be described by these 5 steps.
STEP 1 - Receive Data
I receive HTTP form data via http.Request.ParseForm(), which gives me formvals as a map[string][]string. Sometimes the form holds serialized JSON objects that need to be unmarshaled, e.g. json.Unmarshal([]byte(formvals["user_information"][0]), &jsonUserInfo) /* some complex JSON object */.
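Spelled out (json.Unmarshal takes a destination pointer and returns an error, so the inline call above is shorthand), Step 1 might look roughly like this sketch:

package main

import (
    "encoding/json"
    "net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
    if err := r.ParseForm(); err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }
    formvals := r.Form // url.Values, i.e. map[string][]string

    // "user_information" arrives as a serialized JSON string inside the form.
    var jsonUserInfo map[string]interface{}
    if err := json.Unmarshal([]byte(formvals.Get("user_information")), &jsonUserInfo); err != nil {
        http.Error(w, "invalid user_information", http.StatusBadRequest)
        return
    }
    _ = jsonUserInfo // handed to the later steps
}

func main() {
    http.HandleFunc("/", handler)
    http.ListenAndServe(":8080", nil)
}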
STEP 2 - Validate Data
I do validation on formvals to make sure all the data values are what I expect before using them in SQL queries. I treat everything as a string, then use regexes to determine whether the string format and business logic are valid (e.g. IsEmail, IsNumeric, IsFloat, IsCASLCompliant, IsEligibleForVoting, IsLibraryCardExpired, etc.). I've written my own regexes and custom functions for these types of validations.
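For illustration, a couple of those validators might be sketched like this (the regexes are deliberately simplistic, not RFC-grade; business-logic checks like IsCASLCompliant would be ordinary functions in the same style):

package main

import (
    "fmt"
    "regexp"
)

var (
    emailRe   = regexp.MustCompile(`^[^@\s]+@[^@\s]+\.[^@\s]+$`)
    numericRe = regexp.MustCompile(`^[0-9]+$`)
)

func IsEmail(s string) bool   { return emailRe.MatchString(s) }
func IsNumeric(s string) bool { return numericRe.MatchString(s) }

func main() {
    fmt.Println(IsEmail("bob@example.com"), IsNumeric("45")) // true true
}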
STEP 3 - Bind Data to SQL Queries
I use Golang's database/sql.DB to take my formvals and bind them in my Query and Exec calls, like this: Query("SELECT * FROM tblUser WHERE user_id = ? AND user_birthday > ?", formvals["user_id"][0], jsonUserInfo["birthday"]). I never care about the data types I'm supplying as arguments to be bound, so they're probably all strings. I trust that the validation in the step immediately above has determined they are acceptable for SQL use.
STEP 4 - Bind SQL results to []map[string]interface{}
I Scan() the results of my queries into sqlResult := []map[string]interface{}{}, because I don't care whether the value types are null, strings, floats, ints, or whatever. So the schema of an sqlResult might look like:
sqlResult =>
    [0] {
        "user_id": "1"
        "user_name": "Bob Smith"
        "age": "45"
        "weight": "34.22"
    },
    [1] {
        "user_id": "2"
        "user_name": "Jane Do"
        "age": nil
        "weight": "22.22"
    }
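For context, the generic Scan loop behind a result like that usually looks something like this sketch: column names become map keys, and every value stays an interface{}.

package store

import "database/sql"

// ScanToMaps turns every row into a map[string]interface{}, whatever
// the column types happen to be.
func ScanToMaps(rows *sql.Rows) ([]map[string]interface{}, error) {
    cols, err := rows.Columns()
    if err != nil {
        return nil, err
    }
    var result []map[string]interface{}
    for rows.Next() {
        vals := make([]interface{}, len(cols))
        ptrs := make([]interface{}, len(cols))
        for i := range vals {
            ptrs[i] = &vals[i] // Scan fills the values through these pointers
        }
        if err := rows.Scan(ptrs...); err != nil {
            return nil, err
        }
        row := make(map[string]interface{}, len(cols))
        for i, c := range cols {
            row[c] = vals[i]
        }
        result = append(result, row)
    }
    return result, rows.Err()
}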
I wrote my own eager-load function so that I can bind more information, like so: EagerLoad("tblAddress", "JOIN ON tblAddress.user_id", &sqlResult), which then populates sqlResult with more information of type []map[string]interface{}, so that it looks like this:
sqlResult =>
    [0] {
        "user_id": "1"
        "user_name": "Bob Smith"
        "age": "45"
        "weight": "34.22"
        "addresses" =>
            [0] {
                "type": "home"
                "address1": "56 Front Street West"
                "postal": "L3L3L3"
                "lat": "34.3422242"
                "lng": "34.5523422"
            }
            [1] {
                "type": "work"
                "address1": "5 Kennedy Avenue"
                "postal": "L3L3L3"
                "lat": "34.3422242"
                "lng": "34.5523422"
            }
    },
    [1] {
        "user_id": "2"
        "user_name": "Jane Do"
        "age": nil
        "weight": "22.22"
        "addresses" =>
            [0] {
                "type": "home"
                "address1": "56 Front Street West"
                "postal": "L3L3L3"
                "lat": "34.3422242"
                "lng": "34.5523422"
            }
    }
STEP 5 - JSON Marshal and send HTTP Response
Then I json.Marshal(sqlResult) and send the bytes with http.ResponseWriter.Write() as the output of my REST API.
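Written out, json.Marshal returns ([]byte, error), so it cannot be nested directly inside Write; a small helper (sketch) makes Step 5 explicit:

package web

import (
    "encoding/json"
    "net/http"
)

// WriteJSON marshals v and sends it as the JSON body of the response.
func WriteJSON(w http.ResponseWriter, v interface{}) {
    data, err := json.Marshal(v)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    w.Header().Set("Content-Type", "application/json")
    w.Write(data)
}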
Recently, I've been revisiting articles with code samples that use structs in places where I would have used map[string]interface{}. For example, I wanted to refactor Step 2 with a more standard approach that other Golang developers would use, so I found https://godoc.org/gopkg.in/go-playground/validator.v9 - except all its examples use structs. I've also noticed that most blog posts about database/sql scan their SQL results into typed variables or structs with typed properties, as opposed to my Step 4, which just puts everything into map[string]interface{}.
Hence, I started writing this question. I find map[string]interface{} so useful because, the majority of the time, I don't really care what the data is, and it gives me the freedom in Step 4 to construct any data schema on the fly before I dump it as a JSON HTTP response. I do all this with as little code verbosity as possible. But this means my code is less able to leverage Go's validation tools, and it doesn't seem to comply with the Golang community's way of doing things.
So my question is: what do other Golang developers do for Steps 2 and 4? Especially in Step 4 - do Golang developers really encourage specifying the schema of the data through structs and strongly typed properties? Do they also define structs with strongly typed properties for every eager-loading call they make? Doesn't that seem like a lot more code verbosity?
It really depends on the requirements. As you have said, if you don't need to process the JSON as it comes in from the request or from the SQL results, then you can easily unmarshal it into interface{} and marshal the SQL results straight back out as JSON.
For STEP 2
Golang has a library that validates structs (the same ones you use to unmarshal JSON) via tags on the fields:
https://github.com/go-playground/validator
type Test struct {
    Field string `validate:"max=10,min=1"` // string, so max/min constrain the length
}
// max will be checked, then min
You can also check the godoc for the validator library; it is a very good implementation of validation for JSON values using struct tags.
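A minimal runnable sketch of that (assuming the current github.com/go-playground/validator/v10 import path):

package main

import (
    "fmt"

    "github.com/go-playground/validator/v10"
)

type Test struct {
    Field string `validate:"max=10,min=1"`
}

func main() {
    validate := validator.New()

    // 24 characters violates max=10, so Struct returns a descriptive error.
    err := validate.Struct(Test{Field: "well over ten characters"})
    fmt.Println(err)
}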
For STEP 4
Most of the time, we use structs when we know the format and data of our JSON, because they provide more control over the data types and other functionality. For example, if you want to drop a field from your JSON output entirely, you can use a struct with the "-" json tag.
Now, you have said that you don't care whether the result coming from SQL is empty or not. But if you ever do, it again comes down to using a struct: you can scan the result into a struct using the database/sql Null types (sql.NullString, sql.NullInt64, and so on), and you can combine that with the omitempty json tag if you want to omit the field when marshaling the response.
Struct values encode as JSON objects. Each exported struct field
becomes a member of the object, using the field name as the object
key, unless the field is omitted for one of the reasons given below.
The encoding of each struct field can be customized by the format
string stored under the "json" key in the struct field's tag. The
format string gives the name of the field, possibly followed by a
comma-separated list of options. The name may be empty in order to
specify options without overriding the default field name.
The "omitempty" option specifies that the field should be omitted from
the encoding if the field has an empty value, defined as false, 0, a
nil pointer, a nil interface value, and any empty array, slice, map,
or string.
As a special case, if the field tag is "-", the field is always
omitted. Note that a field with name "-" can still be generated using
the tag "-,".
Examples of json tags:
// Field appears in JSON as key "myName".
Field int `json:"myName"`
// Field appears in JSON as key "myName" and
// the field is omitted from the object if its value is empty,
// as defined above.
Field int `json:"myName,omitempty"`
// Field appears in JSON as key "Field" (the default), but
// the field is skipped if empty.
// Note the leading comma.
Field int `json:",omitempty"`
// Field is ignored by this package.
Field int `json:"-"`
// Field appears in JSON as key "-".
Field int `json:"-,"`
As you can see from the encoding/json documentation quoted above, structs provide a great deal of control over JSON, which is why Golang developers most often use them.
As for map[string]interface{}: use it when you don't know the structure of the JSON coming from the server or the types of its fields. Most Golang developers stick to structs wherever they can.
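To make the struct-based Step 4 concrete, here is a minimal sketch (hypothetical User type; the sql.NullInt64 scanned from the driver is copied into a pointer field so omitempty can drop the key when the column was NULL):

package main

import (
    "database/sql"
    "encoding/json"
    "fmt"
)

// User gives the row a fixed schema instead of map[string]interface{}.
type User struct {
    UserID   int64  `json:"user_id"`
    UserName string `json:"user_name"`
    Age      *int64 `json:"age,omitempty"`
}

func main() {
    // Pretend this came from rows.Scan(&id, &name, &age):
    var age sql.NullInt64 // Jane's age was NULL in the database

    jane := User{UserID: 2, UserName: "Jane Do"}
    if age.Valid {
        jane.Age = &age.Int64
    }

    out, _ := json.Marshal(jane)
    fmt.Println(string(out)) // {"user_id":2,"user_name":"Jane Do"}
}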

TypeScript type serialization/deserialization in localStorage

I have a TypeScript app. I use localStorage for development purposes to store my objects, and I have a problem with deserialization.
I have an object meeting of type MeetingModel:
export interface MeetingModel {
    date: moment.Moment; // from the momentjs library
}
I store this object in localStorage using JSON.stringify(meeting).
I suppose that stringify calls moment.toJSON(), which returns an ISO string, hence the value stored is: {"date":"2016-12-26T15:03:54.586Z"}.
When I retrieve this object, I do:
const stored = window.localStorage.getItem("meeting");
const meeting: MeetingModel = JSON.parse(stored);
The problem is: meeting.date contains a string instead of a moment!
So first, I'm wondering why TypeScript lets this happen. Why can I assign a string value where a Moment is expected, and the compiler agrees?
Second, how can I restore my objects from plain JSON objects (i.e. strings) into TypeScript types?
I can create a factory, of course, but as my object model grows it will be a pain in the *** to do all this work.
Maybe there is a solution for better storing in localStorage in the first place?
Thank you
1) TypeScript is optionally typed. That means there are ways around the strictness of the type system. The any type allows you to do dynamic typing. This can come in very handy if you know what you are doing, but of course you can also shoot yourself in the foot.
This code will compile:
var x: string = <any> 1;
What is happening here is that the number 1 is cast to any, which tells TypeScript to just assume you as a developer know what it is and how to use it. Since the any is then assigned to a string, TypeScript is absolutely fine with it, even though you are likely to get errors at run-time, just as when you make a mistake in plain JavaScript.
Of course this is by design. TypeScript types only exist during compile time. What kind of string you put in JSON.parse is unknowable to TypeScript, because the input string only exists during run-time and can be anything. Hence the any type. TypeScript does offer so-called type guards. Type guards are bits of code that are understood during compile-time as well as run-time, but that is beyond the scope of your question (Google it if you're interested).
2) Serializing and deserializing data is usually not as simple as calling JSON.stringify and JSON.parse. Most type information is lost to JSON, and typically the way you want to store objects (in memory) during run-time is very different from the way you want to store them for transfer or storage (in memory, on disk, or any other medium). For instance, during run-time you might need lookup tables, user/session state, private fields, library-specific properties, while in storage you might want version numbers, timestamps, metadata, different types of normalization, etc. You can JSON.stringify anything you want in JavaScript land, but that does not necessarily mean it is a good idea. You might want to design how you actually store data. For example, an ISO string looks pretty, but takes a lot of bytes. If you have just a few, that does not matter, but when you are transferring millions a second you might want to consider another format.
My advice would be to define interfaces for the objects you want to save and, like moment, create a .toJSON method on your model object which returns the DTO (Data Transfer Object) that you can simply serialize with JSON.stringify. On the way back, you cast the any output of JSON.parse to your DTO and then convert it back to your model with a factory function or constructor of your own creation. That might seem like a lot of boilerplate, but in my experience it is totally worth it, because now you are in control of what gets stored, and that gives you a lot of flexibility to change your model without running into deserialization problems.
Good luck!
You could use the reviver feature of JSON.parse to convert the string back to a moment:
JSON.parse(input, (key, value) => {
    if (key === "date") {
        return moment(value); // revive the ISO string back into a moment
    }
    return value;
});
Check browser support for the reviver parameter, though, as it's not the same as basic JSON.parse.

Which one to use Value vs std::string in cocos2d-x V3 C++?

According to http://www.cocos2d-x.org/wiki/Value,
Value can handle strings as well as int, float, bool, etc.
I'm confused when I have to make a choice between using
std::string
or
Value
In what circumstances should I use Value over std::string, and vice versa?
I think you have misunderstood the Value object. As written in the documentation you linked to:
cocos2d::Value is a wrapper class for many primitives ([...] and std::string) plus [...]
So really Value is an object that wraps a bunch of other types of variables, which allows cocos2d-x to have loosely-typed structures like the ValueMap (a hash of strings to Values - where each Value can be a different type of object) and ValueVector (a list of Values).
For example, if you wanted to have a configuration hash with keys that are all strings, but with a bunch of different values - in vanilla C++, you would have to create a separate data structure for each type of value you want to save, but with Value you can just do:
std::unordered_map<std::string, cocos2d::Value> configuration;
configuration["numEnemies"] = Value(10);
configuration["gameTitle"] = Value("Super Mega Raiders");
It's just a mechanism to create some loose typing in C++ which is a strongly-typed language.
You can save a string in a Value with something like this:
std::string name = "Vidur";
Value nameVal = Value(name);
And then later retrieve it with:
std::string retrievedName = nameVal.asString();
If you attempt to parse a Value as the wrong type, it will throw an error at runtime, since this isn't something the compiler can figure out.
Do let me know if you have any questions.

Parsing JSON objects of unknown type with AutoBean on GWT

My server returns a list of objects in JSON. They might be Cats or Dogs, for example.
When I know that they'll all be Cats, I can set the AutoBeanCodex to work easily. When I don't know what types they are, though... what should I do?
I could give all of my entities a type field, but then I'd have to parse each entity before passing it to the AutoBeanCodex, which borders on defeating the point. What other options do I have?
Just got to play with this the other day, and fought it for a few hours, trying @Category methods and others, until I found this: you can create a property of type Splittable, which represents the underlying transport type that has some encoding for booleans/Strings/Lists/Maps. In my case, I know the enveloping type that goes over the wire at design time, and, based on some other property, some other field can be any number of other autobeans.
You don't even need to know the type of the other bean at compile time - you could get values out using Splittable's methods - but if you are using autobeans anyway, it is nice to define the data that is wrapped.
interface Envelope {
    String getStatus();
    String getDataType();
    Splittable getData();
}
(Setters might be desired if you are sending data as well as receiving it - encoding a bean into a Splittable to send it in an envelope is even easier than decoding one.)
The JSON sent over the wire is decoded (probably using AutoBeanCodex) into the Envelope type, and after you've decided what type must be coming out of the getData() method, call something like this to get the nested object out:
SpecificNestedBean bean = AutoBeanCodex.decode(factory,
        SpecificNestedBean.class,
        env.getData()).as();
The Envelope type and the nested types (in factory above) don't even need to be the same AutoBeanFactory type. This could allow you to abstract out the reading/writing of envelopes from the generic transport instance, and use a specific factory for each dataType string property to decode the data's model (and nested models).
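Incidentally, the same envelope idea carries over to other stacks. In Go, for instance, json.RawMessage plays the role Splittable plays here, deferring the payload's decoding until the dataType is known - a minimal sketch, not GWT-specific:

package main

import (
    "encoding/json"
    "fmt"
)

// Envelope mirrors the interface above: DataType says what Data holds,
// and json.RawMessage keeps the payload undecoded until we check it.
type Envelope struct {
    Status   string          `json:"status"`
    DataType string          `json:"dataType"`
    Data     json.RawMessage `json:"data"`
}

type Cat struct {
    Name string `json:"name"`
}

func main() {
    raw := []byte(`{"status":"ok","dataType":"Cat","data":{"name":"Whiskers"}}`)

    var env Envelope
    if err := json.Unmarshal(raw, &env); err != nil {
        panic(err)
    }

    // Only now, knowing DataType, decode the deferred payload.
    if env.DataType == "Cat" {
        var c Cat
        if err := json.Unmarshal(env.Data, &c); err != nil {
            panic(err)
        }
        fmt.Println(c.Name) // Whiskers
    }
}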