Deserialization problem: Error when deserializing from a different program version - exception

I finally decided to post my problem after a couple of hours spent searching the Internet for solutions and trying some of them.
[Problem Context]
I am developing an application which will be deployed in two parts:
an XML Importer tool: its role is to load/read an XML file in order to fill some data structures, which are afterwards serialized into a binary file.
the end user application: it will load the binary file generated by the XML Importer and do some stuff with the recovered data structures.
For now, I only use the XML Importer for both purposes (meaning I first load the xml and save it to a binary file, then I reopen the XML Importer and load my binary file).
[Actual Problem]
This works just fine and I am able to recover all the data I had after XML loading, as long as I do that with the same build of my XML Importer. This is not viable, as I will need at the very least two different builds, one for the XML Importer and one for the end user application. Please note that the two versions of the XML Importer I use for my testing are exactly the same as far as the source code and thus the data structures are concerned; the only difference lies in the build number (to force a different build I just add a space somewhere and build again).
So what I'm trying to do is:
Build a version of my XML Importer
Open the XML Importer, load an XML file and save the resulting datastructures to a binary file
Rebuild the XML Importer
Open the XML Importer newly built, load the previously created binary file and recover my datastructures.
At this time, I get an Exception:
SerializationException: Could not find type 'System.Collections.Generic.List`1[[Grid, 74b7fa2fcc11e47f8bc966e9110610a6, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null]]'.
System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadType (System.IO.BinaryReader reader, TypeTag code)
System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadTypeMetadata (System.IO.BinaryReader reader, Boolean isRuntimeObject, Boolean hasTypeInfo)
System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadObjectInstance (System.IO.BinaryReader reader, Boolean isRuntimeObject, Boolean hasTypeInfo, System.Int64& objectId, System.Object& value, System.Runtime.Serialization.SerializationInfo& info)
System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadObject (BinaryElement element, System.IO.BinaryReader reader, System.Int64& objectId, System.Object& value, System.Runtime.Serialization.SerializationInfo& info)
For your information (I don't know if it is useful or not), the actual type it is struggling to deserialize is a List<Grid>, Grid being a custom class (which is correctly serializable, as I am able to deserialize it when using the same build of the XML Importer).
[Potential Solution]
I do believe it comes from somewhere around the assembly name, as I have read many posts and articles about this. However, I already have a custom Binder taking care of the differences in assembly names, looking like this:
public sealed class VersionDeserializationBinder : SerializationBinder
{
    public override Type BindToType( string assemblyName, string typeName )
    {
        if ( !string.IsNullOrEmpty( assemblyName ) && !string.IsNullOrEmpty( typeName ) )
        {
            Type typeToDeserialize = null;
            assemblyName = Assembly.GetExecutingAssembly().FullName;
            // The following line of code returns the type.
            typeToDeserialize = Type.GetType( String.Format( "{0}, {1}", typeName, assemblyName ) );
            return typeToDeserialize;
        }
        return null;
    }
}
which I assign to the BinaryFormatter before deserializing here:
public static SaveData Load (string filePath)
{
    SaveData data = null; //new SaveData ();
    Stream stream;
    stream = File.Open(filePath, FileMode.Open);
    BinaryFormatter bformatter = new BinaryFormatter();
    bformatter.Binder = new VersionDeserializationBinder();
    data = (SaveData)bformatter.Deserialize(stream);
    stream.Close();
    Debug.Log("Binary version loaded from " + filePath);
    return data;
}
Do any of you have an idea of how I could fix this? That would be awesome, pretty please :)

Move the working bits to a separate assembly and use that assembly in both the "server" and the "client". Based on your explanation of the problem, this should get around the "wrong version" problem, if that is the core issue. I would also move any "models" (i.e. bits of state like Grid) to a domain model project, and use that in both places.
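A minimal sketch of that layout, with assumed names (a "MyGame.Domain" class library and placeholder fields on Grid): the serializable state lives in one DLL that both the XML Importer and the end user application reference, so the assembly name BinaryFormatter records for Grid and SaveData never changes between the two programs.

// MyGame.Domain.dll - referenced by both the XML Importer and the end user app.
// Names and fields here are illustrative placeholders, not the poster's real types.
using System;
using System.Collections.Generic;

namespace MyGame.Domain
{
    [Serializable]
    public class Grid
    {
        public int Width;
        public int Height;
        public float[] Cells;
    }

    [Serializable]
    public class SaveData
    {
        public List<Grid> Grids = new List<Grid>();
    }
}

With the models in a shared assembly, the plain BinaryFormatter round trip works across rebuilds of the executables, because only MyGame.Domain's identity is baked into the binary file.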

I just bumped into your thread while having the same problem. Your code sample with the SerializationBinder in particular helped me a lot. I just had to modify it slightly to tell the difference between my own assemblies and those of Microsoft. Hopefully it still helps you, too:
sealed class VersionDeserializationBinder : SerializationBinder
{
    public override Type BindToType(string assemblyName, string typeName)
    {
        Type typeToDeserialize = null;
        string currentAssemblyInfo = Assembly.GetExecutingAssembly().FullName;

        // my modification
        string currentAssemblyName = currentAssemblyInfo.Split(',')[0];
        if (assemblyName.StartsWith(currentAssemblyName))
            assemblyName = currentAssemblyInfo;

        typeToDeserialize = Type.GetType(string.Format("{0}, {1}", typeName, assemblyName));
        return typeToDeserialize;
    }
}

I believe the problem is that you are telling it to look for List<> in the executing assembly, whereas in fact it lives in mscorlib, not one of your assemblies. You should only re-assign the assembly name in your binder if the original assembly is one of yours.
Also, you might have to handle the generic parameter types specifically in the binder, by parsing the type name and making sure the parameter types do not point at the foreign assembly when you return the constructed generic type.
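As a rough illustration of both points, here is a hedged sketch (not the poster's code) of a binder that only redirects assemblies it can no longer resolve, such as a rebuilt game assembly, while leaving mscorlib/System types and the generic arguments they wrap to the normal lookup. It relies on the Type.GetType overload that accepts resolver delegates (available in .NET 4 and recent Mono); check that your Unity/Mono version supports it before relying on this.

// Sketch only: redirects unknown/old assemblies to the executing assembly and lets the
// resolver handle generic type arguments (e.g. Grid inside List`1) without string surgery.
using System;
using System.Reflection;
using System.Runtime.Serialization;

public sealed class OwnTypesDeserializationBinder : SerializationBinder
{
    public override Type BindToType(string assemblyName, string typeName)
    {
        Assembly current = Assembly.GetExecutingAssembly();

        return Type.GetType(
            string.Format("{0}, {1}", typeName, assemblyName),
            // Called for every assembly reference in the type name, including the
            // assemblies of generic type arguments.
            name =>
            {
                try { return Assembly.Load(name); }    // mscorlib, System, ... resolve fine
                catch (Exception) { return current; }   // assume a rebuilt assembly of ours
            },
            // Default type lookup inside whichever assembly was resolved above.
            (asm, simpleName, ignoreCase) =>
                asm == null ? Type.GetType(simpleName, false, ignoreCase)
                            : asm.GetType(simpleName, false, ignoreCase),
            false); // don't throw; return null and let the formatter fall back
    }
}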


Collect JSON object in a file when a JUnit test fails

I have ~50 JSON files, each an array of models, being plugged into unit tests to compare resultant configs. Each file looks like this:
0.json
1.json... and so on
[{model1},{model2},{model3}]
I am trying to run unit tests to compare the resultant configs, and I want to run the tests in such a way that the test itself keeps running, collects the models whose assertions fail, and outputs them to a JSON file somewhere.
Say model2 fails; I want to collect model2 into a file output.json as an array.
So far the code looks like this. Even if the test works file by file, that's fine, it will still save me days of effort:
@Test
public void compareAWithB() throws Exception {
    File lbJsonFile1 = new File("src/test/resources/iad_ad3/6.json");
    compareAWithBHelper(lbJsonFile1);
}

public void compareAWithBHelper(File lbJsonFile) throws Exception {
    Model[] dtos = new ObjectMapper().readValue(lbJsonFile, Model[].class);
    for (Model dto : dtos) {
        Model model = ModelConverter.apiToDao(dto);
        String A = doSomeThing();
        String B = doSomething2();
        Assert.assertEquals(A, B);
        // Required: if assert fails, collect the json object and continue
    }
}
I tried using SoftAssertions in AssertJ but, weirdly, it was not printing out all the JSON objects, or maybe I don't really understand the checkThat() method properly.
I also tried collector.checkThat but couldn't get it to work reliably. This is a production area, so there isn't much room for errors, and I want to reduce the manual effort.
I made another attempt to use collectors following one of the posts on Stack Overflow, but couldn't get it to work reliably:
/*try {
    collector.checkThat(A, CoreMatchers.equalTo(B));
} catch (AssertionError error) {
    System.out.println(dto.toString());
    throw new AssertionError(error.getMessage());
}*/
Can someone please help?
If you want to gather all assertion errors and not stop at the first error then soft assertions is a good candidate to use. To get started with soft assertions you can follow the guide available here: https://assertj.github.io/doc/#assertj-core-soft-assertions.
collector.checkThat does not come from AssertJ (nor does anything else in your code samples), which is a bit confusing; I would suggest writing a reproducible test so that people can help more easily.
Alternatively, since you are dealing with JSON, you can give JsonUnit (https://github.com/lukas-krecan/JsonUnit) a try, which provides first-class JSON assertions.
Hope it helps.

Dart objects with strong typing from JSON

I'm learning Dart and was reading the article Using Dart with JSON Web Services, which told me that I could get help with type checking when converting my objects to and from JSON. I used their code snippet but ended up with compiler warnings. I found another Stack Overflow question which discussed the same problem, and the answer was to use the @proxy annotation and implement noSuchMethod. Here's my attempt:
abstract class Language {
  String language;
  List targets;
  Map website;
}

@proxy
class LanguageImpl extends JsonObject implements Language {
  LanguageImpl();

  factory LanguageImpl.fromJsonString(string) {
    return new JsonObject.fromJsonString(string, new LanguageImpl());
  }

  noSuchMethod(i) => super.noSuchMethod(i);
}
I don't know if the noSuchMethod implementation is correct, and @proxy seems redundant now. Regardless, the code doesn't do what I want. If I run
var lang1 = new LanguageImpl.fromJsonString('{"language":"Dart"}');
print(JSON.encode(lang1));
print(lang1.language);
print(lang1.language + "!");
var lang2 = new LanguageImpl.fromJsonString('{"language":13.37000}');
print(JSON.encode(lang2));
print(lang2.language);
print(lang2.language + "!");
I get the output
{"language":"Dart"}
Dart
Dart!
{"language":13.37}
13.37
type 'String' is not a subtype of type 'num' of 'other'.
and then a stacktrace. Hence, although the readability is a little bit better (one of the goals of the article), the strong typing promised by the article doesn't work and the code might or might not crash, depending on the input.
What am I doing wrong?
The article mentions static types in one paragraph but JsonObject has nothing to do with static types.
What you get from JsonObject is that you don't need Map access syntax.
Instead of someMap['language'] = value; you can write someObj.language = value; and you get the fields in the autocomplete list, but Dart is not able to do any type checking, either when you assign a value to a field of the object (someObj.language = value;) or when you use fromJsonString() (as mentioned, because of noSuchMethod/@proxy).
I assume that you want an exception to be thrown on this line:
var lang2 = new LanguageImpl.fromJsonString('{"language":13.37000}');
because 13.37 is not a String. In order for JsonObject to do this it would need to use mirrors to determine the type of the field and manually do a type check. This is possible, but it would add to the dart2js output size.
So barring that, I think that throwing a type error when reading the field is reasonable, and you might have just found a bug-worthy issue here. Since noSuchMethod is being used to implement an abstract method, the runtime can actually do a type check on the arguments and return values. It appears from your example that it's not. Care to file a bug?
If this was addressed, then JsonObject could immediately read a field after setting it to trigger a type check when decoding without mirrors, and it could do that check in an assert() so that it's only done in checked mode. I think that would be a nice solution.

Typescript with TypeLite - Run time type checking

Let's say I have some C# DTOs and I want to convert them to TypeScript interfaces using T4 templates and a neat little library called TypeLite.
On the client side, I have some concrete TypeScript classes (that inherit from Backbone.Model but that's not important) that represent the same DTO defined on the server side.
The intended goal of the interfaces is to act as data contracts and ensure client and server DTOs are kept in sync.
However, this is problematic since TypeScript supports no run-time type checking facilities other than instanceof. The problem with instanceof is that when I fetch my DTOs from the server they are plain JSON objects and not instances of my model. I need to perform run-time type checking on these DTOs that come in from the server as JSON objects.
I know I can do something like this:
collection.fetch({...}).done((baseModels) => {
    baseModels.forEach((baseModel) => {
        if (baseModel && baseModel.SomeProperty && baseModel.SomeOtherProperty) {
            //JSON model has been "type-checked"
        }
    });
});
However, there are obvious problems with this, because now I need to update three places if I change or add a property.
Currently the only thing I have found is this, but it's undocumented, not maintained, and uses Node, which I have zero experience with, so I'll save myself the frustration. Does anybody know of anything similar for performing run-time type checking in TypeScript, or some other way to accomplish what I'm after?
It would be great if this was built into TypeLite, to generate the interfaces as well as a JSON schema for type checking at run-time. Being that this project is open source, somebody should really go ahead and extend it. I'd need some pointers at the least if I were to do it myself (thus the question).
More details about my particular problem are here (not necessary, but there if you need extra context).
At runtime you are using plain JavaScript, so you could use this answer as it relates to plain JavaScript:
How do I get the name of an object's type in JavaScript?
Here is a TypeScript get-class-name implementation that can supply the name of the enclosing TypeScript class (the link also has a static separate version of this example).
class ShoutMyName {
    getName() {
        var funcNameRegex = /function (.{1,})\(/;
        var anyThis = <any> this;
        var results = (funcNameRegex).exec(anyThis.constructor.toString());
        return (results && results.length > 1) ? results[1] : "";
    }
}

class Example extends ShoutMyName {
}

class AnotherClass extends ShoutMyName {
}

var x = new Example();
var y = new AnotherClass();
alert(x.getName());
alert(y.getName());
This doesn't give you data about the inheritance chain, just the class you are inspecting.

What is the best way to handle versioning using JSON protocol?

I normally write all parts of the code in C#, and when writing protocols that are serialized I use FastSerializer, which serializes/deserializes the classes fast and efficiently. It is also very easy to use, and fairly straightforward to do "versioning", i.e. to handle different versions of the serialization. The thing I normally use looks like this:
public override void DeserializeOwnedData(SerializationReader reader, object context)
{
    base.DeserializeOwnedData(reader, context);

    byte serializeVersion = reader.ReadByte(); // used to keep what version we are using

    this.CustomerNumber = reader.ReadString();
    this.HomeAddress = reader.ReadString();
    this.ZipCode = reader.ReadString();
    this.HomeCity = reader.ReadString();
    if (serializeVersion > 0)
        this.HomeAddressObj = reader.ReadUInt32();
    if (serializeVersion > 1)
        this.County = reader.ReadString();
    if (serializeVersion > 2)
        this.Muni = reader.ReadString();
    if (serializeVersion > 3)
        this._AvailableCustomers = reader.ReadList<uint>();
}
and
public override void SerializeOwnedData(SerializationWriter writer, object context)
{
    base.SerializeOwnedData(writer, context);

    byte serializeVersion = 4;
    writer.Write(serializeVersion);

    writer.Write(CustomerNumber);
    writer.Write(PopulationRegistryNumber);
    writer.Write(HomeAddress);
    writer.Write(ZipCode);
    writer.Write(HomeCity);
    if (CustomerCards == null)
        CustomerCards = new List<uint>();
    writer.Write(CustomerCards);

    writer.Write(HomeAddressObj);
    writer.Write(County);

    // v 2
    writer.Write(Muni);

    // v 4
    if (_AvailableCustomers == null)
        _AvailableCustomers = new List<uint>();
    writer.Write(_AvailableCustomers);
}
So it's easy to add new things, or to change the serialization completely if one chooses to.
However, I now want to use JSON for reasons not relevant right here =) I am currently using DataContractJsonSerializer and I am now looking for a way to have the same flexibility I have using the FastSerializer above.
So the question is; what is the best way to create a JSON protocol/serialization and to be able to detail the serialization as above, so that I do not break the serialization just because another machine hasn't yet updated their version?
The key to versioning JSON is to always add new properties, and never remove or rename existing properties. This is similar to how protocol buffers handle versioning.
For example, if you started with the following JSON:
{
  "version": "1.0",
  "foo": true
}
And you want to rename the "foo" property to "bar", don't just rename it. Instead, add a new property:
{
  "version": "1.1",
  "foo": true,
  "bar": true
}
Since you are never removing properties, clients based on older versions will continue to work. The downside of this method is that the JSON can get bloated over time, and you have to continue maintaining old properties.
It is also important to clearly define your "edge" cases to your clients. Suppose you have an array property called "fooList". The "fooList" property could take on the following possible values: does not exist/undefined (the property is not physically present in the JSON object, or it exists and is set to "undefined"), null, empty list or a list with one or more values. It is important that clients understand how to behave, especially in the undefined/null/empty cases.
I would also recommend reading up on how semantic versioning works. If you introduce a semantic versioning scheme to your version numbers, then backwards compatible changes can be made on a minor version boundary, while breaking changes can be made on a major version boundary (both clients and servers would have to agree on the same major version). While this isn't a property of the JSON itself, this is useful for communicating the types of changes a client should expect when the version changes.
Google's Java-based Gson library has excellent versioning support for JSON. It could prove very handy if you are thinking of going the Java way.
There is a nice and easy tutorial here.
It doesn't matter what serialization protocol you use; the techniques to version APIs are generally the same.
Generally you need:
a way for the consumer to communicate to the producer the API version it accepts (though this is not always possible)
a way for the producer to embed versioning information into the serialized data
a backward compatible strategy to handle unknown fields
In a web API, generally the API version that the consumer accepts is embedded in the Accept header (e.g. Accept: application/vnd.myapp-v1+json application/vnd.myapp-v2+json means the consumer can handle either version 1 or version 2 of your API) or, less commonly, in the URL (e.g. https://api.twitter.com/1/statuses/user_timeline.json). This is generally used for major versions (i.e. backward incompatible changes). If the server and the client do not have a matching Accept header, then the communication fails (or proceeds on a best-effort basis or falls back to a default baseline protocol, depending on the nature of the application).
The producer then generates serialized data in one of the requested versions, then embeds this version info into the serialized data (e.g. as a field named version). The consumer should use the version information embedded in the data to determine how to parse the serialized data. The version information in the data should also contain the minor version (i.e. for backward compatible changes); generally consumers should be able to ignore the minor version information and still process the data correctly, although understanding the minor version may allow the client to make additional assumptions about how the data should be processed.
A common strategy for handling unknown fields is like how HTML and CSS are parsed: when the consumer sees an unknown field it should ignore it, and when the data is missing a field the client is expecting, the client should use a default value; depending on the nature of the communication, you may also want to specify some fields as mandatory (i.e. a missing field is a fatal error). Fields added within minor versions should always be optional; a minor version can add optional fields or change field semantics as long as it stays backward compatible, while a major version can delete fields, add mandatory fields, or change field semantics in a backward incompatible manner.
In an extensible serialization format (like JSON or XML), the data should be self-descriptive; in other words, the field names should always be stored together with the data, and you should not rely on specific data being available at specific positions.
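To make the embedded-version / unknown-field strategy concrete, here is a hedged consumer-side sketch. It assumes Json.NET (Newtonsoft.Json) purely for illustration and reuses the version/foo/bar property names from the earlier example; any JSON library with a DOM works the same way.

using System;
using Newtonsoft.Json.Linq;

public static class EnvelopeReader
{
    public static void Consume(string json)
    {
        JObject data = JObject.Parse(json);

        // Version embedded by the producer, e.g. "1.1"; fall back to the baseline if absent.
        string version = (string)data["version"] ?? "1.0";
        int major = int.Parse(version.Split('.')[0]);

        // Major versions are backward incompatible: refuse anything we don't know.
        if (major != 1)
            throw new NotSupportedException("Unsupported major version: " + version);

        // Minor-version additions are optional: missing fields fall back to defaults,
        // and fields we don't know about are simply never read (ignored).
        bool foo = (bool?)data["foo"] ?? false;
        string bar = (string)data["bar"] ?? string.Empty;

        Console.WriteLine("foo={0}, bar={1}", foo, bar);
    }
}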
Don't use DataContractJsonSerializer; as the name says, the objects that are processed through this class will have to:
a) Be marked with [DataContract] and [DataMember] attributes.
b) Be strictly compliant with the defined "contract", that is, nothing less and nothing more than what is defined. Any extra or missing [DataMember] will make the deserialization throw an exception.
If you want to be flexible, use the JavaScriptSerializer if you want to go for the cheap option... or use this library:
http://json.codeplex.com/
This will give you enough control over your JSON serialization.
Imagine you have an object in its early days.
public class Customer
{
    public string Name;
    public string LastName;
}
Once serialized it will look like this:
{ Name: "John", LastName: "Doe" }
If you change your object definition to add or remove fields, the deserialization will still occur smoothly if you use, for example, JavaScriptSerializer.
public class Customer
{
    public string Name;
    public string LastName;
    public int Age;
}
If you try to de-serialize the last JSON into this new class, no error will be thrown. The thing is that your new fields will be set to their defaults. In this example, "Age" will be set to zero.
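For reference, a small sketch of that round trip; it uses Json.NET (the json.codeplex.com library mentioned above) rather than JavaScriptSerializer simply as one concrete option, with the v2 Customer class defined above:

using System;
using Newtonsoft.Json;

public class Customer
{
    public string Name;
    public string LastName;
    public int Age; // added in v2
}

public static class Demo
{
    public static void Main()
    {
        // JSON produced by the old (v1) Customer, which had no Age field.
        string v1Json = "{ \"Name\": \"John\", \"LastName\": \"Doe\" }";

        // No exception: the missing field just keeps its default value.
        Customer customer = JsonConvert.DeserializeObject<Customer>(v1Json);

        Console.WriteLine(customer.Age); // 0
    }
}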
You can include, as part of your own conventions, a field present in all your objects that contains the version number. In this case you can tell the difference between an empty field and a version inconsistency.
So let's say you have your class Customer v1 serialized:
{ Version: 1, LastName: "Doe", Name: "John" }
If you deserialize it into a Customer v2 instance, you will have:
{ Version: 1, LastName: "Doe", Name: "John", Age: 0}
You can then detect which fields in your object are reliable and which are not. In this case you know that your v2 object instance is coming from a v1 object instance, so the field Age should not be trusted.
I also have in mind that you could use a custom attribute, e.g. "MinVersion", and mark each field with the minimum supported version number, so you get something like this:
public class Customer
{
    [MinVersion(1)]
    public int Version;

    [MinVersion(1)]
    public string Name;

    [MinVersion(1)]
    public string LastName;

    [MinVersion(2)]
    public int Age;
}
Then later you can access this metadata and do whatever you might need with it.
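There is no built-in MinVersion attribute; as a hedged sketch of the idea, it could be declared and read back with reflection along these lines:

using System;
using System.Reflection;

[AttributeUsage(AttributeTargets.Field)]
public sealed class MinVersionAttribute : Attribute
{
    public int Version { get; private set; }
    public MinVersionAttribute(int version) { Version = version; }
}

public static class VersionGuard
{
    // True when 'fieldName' was present in data serialized at 'dataVersion',
    // i.e. the deserialized value is meaningful rather than a bare default.
    public static bool IsTrustworthy<T>(string fieldName, int dataVersion)
    {
        FieldInfo field = typeof(T).GetField(fieldName);
        if (field == null) return false;

        var attr = (MinVersionAttribute)Attribute.GetCustomAttribute(
            field, typeof(MinVersionAttribute));
        return attr == null || dataVersion >= attr.Version;
    }
}

// e.g. VersionGuard.IsTrustworthy<Customer>("Age", customer.Version) is false for v1 data.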

How do you implement Pipes and Filters pattern with LinqToSQL/Entity Framework/NHibernate?

While building my DAL Repository, I stumbled upon a concept called Pipes and Filters. I read about it here, here and saw a screencast from here. I am still not sure how to go about implementing this pattern. Theoretically it all sounds good, but how do we really implement this in an enterprise scenario?
I would appreciate any resources, tips, examples or explanations of this pattern in the context of the data mappers/ORMs mentioned in the question.
Thanks in advance!!
Ultimately, LINQ on IEnumerable<T> is a pipes and filters implementation. IEnumerable<T> is a streaming API, meaning that data is lazily returned as you ask for it (via iterator blocks), rather than loading everything at once and returning a big buffer of records.
This means that your query:
var qry = from row in source // IEnumerable<T>
          where row.Foo == "abc"
          select new { row.ID, row.Name };
is:
var qry = source.Where(row => row.Foo == "abc")
                .Select(row => new { row.ID, row.Name });
As you enumerate over this, it will consume the data lazily. You can see this graphically with Jon Skeet's Visual LINQ. The only things that break the pipe are things that force buffering: OrderBy, GroupBy, etc. For high-volume work, Jon and I worked on Push LINQ for doing aggregates without buffering in such scenarios.
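To see the streaming behaviour without any ORM involved, here is a tiny hand-rolled sketch (illustrative only, not from Push LINQ) of a filter stage built with an iterator block; each stage pulls one item at a time from the previous one, so nothing is buffered until an operator like OrderBy forces it:

using System;
using System.Collections.Generic;
using System.Linq;

static class Pipeline
{
    // A hand-rolled Where: one "filter" in the pipe, implemented with an iterator block.
    public static IEnumerable<T> WhereLazy<T>(this IEnumerable<T> source, Func<T, bool> predicate)
    {
        foreach (T item in source)
        {
            if (predicate(item))
                yield return item; // hand the item straight to the next stage
        }
    }

    static IEnumerable<int> Numbers()
    {
        for (int i = 0; i < 1000000; i++)
        {
            Console.WriteLine("produced " + i);
            yield return i; // produced on demand, never all at once
        }
    }

    static void Main()
    {
        // Take(3) stops pulling after three matches, so only a handful of
        // numbers are ever produced despite the million-item source.
        foreach (int n in Numbers().WhereLazy(x => x % 2 == 0).Take(3))
            Console.WriteLine("consumed " + n);
    }
}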
IQueryable<T> (exposed by most ORM tools - LINQ-to-SQL, Entity Framework, LINQ-to-NHibernate) is a slightly different beast; because the database engine is going to do most of the heavy lifting, the chances are that most of the steps are already done - all that is left is to consume an IDataReader and project this to objects/values - but that is still typically a pipe (IQueryable<T> implements IEnumerable<T>) unless you call .ToArray(), .ToList() etc.
With regard to use in enterprise... my view is that it is fine to use IQueryable<T> to write composable queries inside the repository, but they shouldn't leave the repository - as that would make the internal operation of the repository subject to the caller, so you would be unable to properly unit test / profile / optimize / etc. I've taken to doing clever things in the repository, but return lists/arrays. This also means my repository stays unaware of the implementation.
This is a shame - as the temptation to "return" IQueryable<T> from a repository method is quite large; for example, this would allow the caller to add paging/filters/etc - but remember that they haven't actually consumed the data yet. This makes resource management a pain. Also, in MVC etc you'd need to ensure that the controller calls .ToList() or similar, so that it isn't the view that is controlling data access (otherwise, again, you can't unit test the controller properly).
A safe (IMO) use of filters in the DAL would be things like:
public Customer[] List(string name, string countryCode) {
    using(var ctx = new CustomerDataContext()) {
        IQueryable<Customer> qry = ctx.Customers.Where(x => x.IsOpen);
        if(!string.IsNullOrEmpty(name)) {
            qry = qry.Where(cust => cust.Name.Contains(name));
        }
        if(!string.IsNullOrEmpty(countryCode)) {
            qry = qry.Where(cust => cust.CountryCode == countryCode);
        }
        return qry.ToArray();
    }
}
Here we've added filters on-the-fly, but nothing happens until we call ToArray. At this point, the data is obtained and returned (disposing the data-context in the process). This can be fully unit tested. If we did something similar but just returned IQueryable<T>, the caller might do something like:
var custs = customerRepository.GetCustomers()
                              .Where(x => SomeUnmappedFunction(x));
And all of a sudden our DAL starts failing (cannot translate SomeUnmappedFunction to TSQL, etc). You can still do a lot of interesting things in the repository, though.
The only pain point here is that it might push you to have a few overloads to support different calling patterns (with/without paging, etc.). Until optional/named parameters arrive, I find the best answer here is to use extension methods on the interface; that way, I only need one concrete repository implementation:
class CustomerRepository {
    public Customer[] List(
        string name, string countryCode,
        int? pageSize, int? pageNumber) {...}
}

interface ICustomerRepository {
    Customer[] List(
        string name, string countryCode,
        int? pageSize, int? pageNumber);
}

static class CustomerRepositoryExtensions {
    public static Customer[] List(
        this ICustomerRepository repo,
        string name, string countryCode) {
        return repo.List(name, countryCode, null, null);
    }
}
Now we have virtual overloads (as extension methods) on ICustomerRepository - so our caller can use repo.List("abc","def") without having to specify the paging.
Finally - without LINQ, using pipes and filters becomes a lot more painful. You'll be writing some kind of text based query (TSQL, ESQL, HQL). You can obviously append strings, but it isn't very "pipe/filter"-ish. The "Criteria API" is a bit better - but not as elegant as LINQ.