Is it safe to combine RIO fields and properties? - mvvmcross

I've been looking at the N=36 tutorial which introduces the new RIO support in MvvmCross 3.09. Is it safe to combine INC fields and old-school properties in the same class? I ask because some of my property setters and getters are complex, so it may be easier to leave them as-is. However, the vast majority of my existing properties are simple and as such seem excellent candidates for fields.
Thanks
Mark

'safe' is an interesting word to use here - I'm not entirely sure what it means in this context.
I personally believe it is safe to mix and match INotifyChanged and INotifyPropertyChanged in the same project and in the same view model - there's nothing that should go bang as a result, and the memory and processing-speed performance of INotifyChanged should be as good as or better than that of INotifyPropertyChanged. (There's a small sketch of the mixed style after the list below.)
The only potential areas of unsafe risk I can think of are:
team development and later code maintenance - using the two different approaches together might confuse you or other coders, either now or during later maintenance - it would be fair for them to ask "where do I use one approach or the other?" and "why?"
lack of 'change all' support - INotifyPropertyChanged allows ViewModels to send an 'everything has changed' notification - they can do this using a null or empty property name. INotifyChanged does not currently join in with this notification. In my experience, this 'change all' mechanism is used very infrequently and is not well known by Mvvm developers - so the risk here is small. However, if anyone did try to use it, then they might be surprised that the INotifyChanged bound fields didn't update.
portability to other Mvvm libraries - Rio is a binding mechanism MvvmCross has introduced - so it isn't yet available in other Mvvm platforms. If you were ever to port back to something like Prism then this might be a risk for you (you might have to rewrite those fields as properties)
confusing to Windows developers - experienced Xaml developers have been used to using INotifyPropertyChanged all the way back to 2005 - so it might confuse them to have to use the MvvmCross Xaml Binding Extensions in order to get the fields bound inside Xaml. (Whether or not this confusion is good or bad for them depends on your world view!)
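To make that mixing concrete, here is a minimal sketch of one ViewModel using both styles. It assumes the INC<T>/NC<T> field-binding types from the MvvmCross FieldBinding plugin shown in N=36 (namespaces and all member names here are my own invention, not from the tutorial):

```csharp
using Cirrious.MvvmCross.ViewModels;            // MvxViewModel (assumed v3 namespace)
using Cirrious.MvvmCross.Plugins.FieldBinding;  // INC<T>/NC<T> (assumed v3 namespace)

public class CustomerViewModel : MvxViewModel
{
    // Simple value: a Rio field - no setter boilerplate at all.
    public readonly INC<string> FirstName = new NC<string>();

    // Complex setter logic: keep the old-school INotifyPropertyChanged property.
    private string _searchText;
    public string SearchText
    {
        get { return _searchText; }
        set
        {
            _searchText = (value ?? string.Empty).Trim(); // non-trivial setter work
            RaisePropertyChanged(() => SearchText);
        }
    }
}
```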

Related

First write code using the API, then the actual API - does this approach have a name, and is it valid for the API design process?

The standard way of working on a new API (library, class, whatever) usually looks like this:
you think about what methods the API user would need
you implement the API that you suspect the user will need
So basically you're trying to guess what your API should look like. This very often leads to over-engineered, huge APIs that you think the user will need, and it is very possible that a great part of your code won't be used at all.
Some time ago, maybe even a few years, I read an article that promoted writing the client code first. I don't remember where I found it, but the author pointed out several advantages, like a better understanding of how the API will be used, what it should provide, and what is basically unnecessary. I think the idea was that it goes along with the Scrum methodology and user stories, but at the implementation level.
Just out of curiosity, for my latest private project I started not with the actual API (some kind of toolkit library) but with the client code that would use this API. Of course my code is all in red because the classes, methods and properties do not exist, and I can forget about help from IntelliSense, but what I noticed is that after a few days of coding my application "has" all the basic functionality and my library API "is" a lot smaller than I imagined when starting the project.
I'm not saying that if somebody took my library and started using it, it wouldn't lack some features, but I think it helped me realize that my idea of this API was somewhat flawed, because I usually try to cover all the bases and provide methods "just in case". And sometimes it bites me badly, because I make some stupid mistake in basic functions while being more focused on code that somebody might possibly need.
So what I would like to ask is: have you ever tried this approach when you needed to create a new API, and did it help you? Is it a recognized technique that has a name?
So basically you're trying to guess what your API should look like.
And that's the biggest problem with designing anything this way: there should be no (well, minimal) guesswork in software design. Designing an API based on assumptions rather than actual information is dangerous, for several reasons:
It's directly counter to the principle of YAGNI: in order to get anything done, you have to assume what the user is going to need, with no information to back up those assumptions.
When you're done, and you finally get around to using your API, you'll invariably find that it sucks to use (poor user experience), because you weren't thinking about how the library is used (UX), you were thinking about what the library must do (features).
An API, by definition, is an interface for users (i.e., developers). Designing as anything else just makes for a bad design, without fail.
Writing sample code is like designing a GUI before writing the backend: a Good Thing. It forces you to think about user experience and practical effects of design decisions without getting bogged down in useless theorising and assumption.
And contrary to Gabriel's answer, this is not bottom-up design: it's top-down. Rather than design the concrete backend of your library and then force an abstract interface on top of it, you first design the interface and then worry about the implementation.
Generally speaking, the idea of designing the concrete first and abstracting from it afterwards is called bottom-up design. Test Driven Development uses a similar principle to what you describe to support better design: first you write a test, which is a use of the code you are going to write afterwards. It is important to proceed stepwise, because you have to prove the API is implementable. An important part of each step is refactoring - this allows you to design a more concise API and reuse parts of your code.
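As a sketch of what this looks like in practice (all names invented, NUnit used purely for illustration), the client-code-first / TDD flow is: write the calling code as a test, then implement only what it demands:

```csharp
using NUnit.Framework;

// Step 1: write the client code (a test) first. Every name here is
// hypothetical - ImageToolkit does not exist until the test demands it.
[TestFixture]
public class ImageToolkitTests
{
    [Test]
    public void FitToWidth_PreservesAspectRatio()
    {
        var toolkit = new ImageToolkit();
        int height = toolkit.FitToWidth(targetWidth: 200, sourceWidth: 400, sourceHeight: 300);
        Assert.AreEqual(150, height);
    }
}

// Step 2: implement only as much API as the test forces into existence.
public class ImageToolkit
{
    public int FitToWidth(int targetWidth, int sourceWidth, int sourceHeight)
    {
        return (int)(sourceHeight * (targetWidth / (double)sourceWidth));
    }
}
```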

Linq2SQL vs EF in .net Framework 4.0

So what's the verdict on these two products now? I can't seem to find anything regarding this issue SPECIFICALLY for VS2010/.net 4.0
Back in the .net 3.5 days, most people believed Linq2SQL would be dead when .net 4.0 came around, but it seems alive and well.
On the other hand, EF 4.0 seems to have gotten significant improvement.
For me, most of my work so far has been small to medium sized projects, and my company is migrating from VS08 to VS10 soonish. What should I be looking at? Or really, should I spend the time studying EF 4.0, or would my time be better spent looking at nHibernate? (But back on topic, I'm really more interested in Linq2Sql vs EF.)
Lastly, I am currently using entlib / unity; which framework is more friendly for dependency/policy injection?
Thanks in advance.
Here are some reasons why Entity Framework (v4) is better:
1 - L2SQL is essentially obsolete
2 - L2SQL does not support POCO mapping, EF does.
3 - EF has more flexibility (code first, model first, database first); L2SQL has only one (database first).
4 - EF has support for SPROC -> POCO mapping
5 - EF has Entity-SQL, allowing you to go back to classic ADO.NET when required
6 - EF supports inheritance (TPT, TPH)
7 - EF goes hand-in-hand with Repository pattern, and deferred execution via IQueryable
8 - EF components (ObjectSet, ObjectContext) are easily mockable and allow for DI (see the sketch below)
I cannot think of any reason why new projects should use L2SQL.
Some might say "Well, L2SQL is good for small projects, I can drag and drop and be done with it".
Well you can do that with EF4 as well, and you'll have more flexibility/support in the long run if you decide to modify/grow your project. So that is not an excuse.
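As a rough illustration of points 2, 7 and 8 (names invented; this assumes EF4's POCO support and ObjectContext.CreateObjectSet):

```csharp
using System.Linq;
using System.Data.Objects; // EF4's ObjectContext/ObjectSet

// A POCO entity - no EF base class required (EF4 POCO support).
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// A thin repository abstraction: returning IQueryable keeps execution
// deferred, and the interface is trivially mockable for unit tests / DI.
public interface ICustomerRepository
{
    IQueryable<Customer> Customers { get; }
}

public class EfCustomerRepository : ICustomerRepository
{
    private readonly ObjectContext _context;

    public EfCustomerRepository(ObjectContext context)
    {
        _context = context;
    }

    public IQueryable<Customer> Customers
    {
        get { return _context.CreateObjectSet<Customer>(); }
    }
}
```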
HTH
Just to add to the previous answers and comments (all three got a +1 vote from me):
a) Performance: the L2S runtime is more lightweight than EF's (L2S has only a single model layer, while EF has to deal with two model layers and the mappings between them).
EF often generates a bit more verbose TSQL than L2S but most of the time that only affects readability if you're profiling and looking at the generated queries; the SQL optimizer will end up with the same execution plan most of the time. There are however some cases when the queries can grow so large and complex that it can have a performance impact.
L2S is also slightly better at doing client-side optimization of queries; it eliminates where-clause predicates that can be evaluated client-side, so the database doesn't have to worry about them. This means less work for SQL Server's optimizer, and less risk that you'll end up with a 'bad' execution plan.
b) L2S vs L2E: L2S is still slightly better than L2E at translating LINQ queries that use normal CLR methods into TSQL, especially when it comes to DateTime and related methods. L2E supports more or less the same things, but through its own EntityFunctions class: http://msdn.microsoft.com/en-us/library/system.data.objects.entityfunctions.aspx.
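As a small, hedged illustration of the difference (entity and method names invented): where L2S will translate many plain CLR DateTime expressions directly, the L2E-friendly route for date arithmetic is the canonical EntityFunctions methods:

```csharp
using System;
using System.Linq;
using System.Data.Objects; // EntityFunctions lives here in EF4

public class Order
{
    public int Id { get; set; }
    public DateTime PlacedOn { get; set; }
}

public static class OrderQueries
{
    public static IQueryable<Order> PlacedSince(IQueryable<Order> orders, DateTime since)
    {
        // EntityFunctions.DiffDays maps to a canonical store function,
        // so L2E can translate it to TSQL reliably.
        return orders.Where(o => EntityFunctions.DiffDays(since, o.PlacedOn) >= 0);
    }
}
```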
Both L2S and EF are great choices in my opinion, pick the one you feel comfortable with and covers the things you need now and during the reasonable lifespan of the code you're writing. Before you know it, Microsoft will probably announce yet another data access technology. They seem to do that every 3-5 years... :) DAO, RDO, ODBC, ADO, OLEDB, ADO.NET, typed datasets, ObjectSpaces, WinFS, L2S, EF, ... etc etc. Code I wrote 15 years ago against DAO is still out there, in apps that are still on the market, and it still works despite DAO being "dead" for years.
Sometimes names are reused for completely new data access technologies, but that doesn't change the fact that whatever constitutes Microsoft's latest database access technology is a constantly moving target...
L2S isn't going anywhere. The VS team has made that clear. It won't get significant improvement, but it will still be around and work just fine.
L2S is great, and easy to use for small-scale projects with fairly simple data models. The trigger for me to choose EF over L2S is when I have many-to-many tables, or I need to map more complex entities over more than just a single table.
I know this is probably too late for the original query, but for the sake of future people with a similar question...
To my mind, the crucial aspect is whether you're doing an entirely new project or working with a legacy DB. I'm working with a legacy DB, with some rather idiosyncratic design decisions. I'd like to use EF, but it simply failed to map over these idiosyncrasies, while L2S managed perfectly well.
For example, some of the FKs contained values other than keys to related rows - effectively doubling as an FK/flag column.
Further, the FK inheritance mapping totally failed against our DB, so I opted for a flat L2S model, to get the benefits of type checking and name checking at query time, but ended up building my own mapping layer.
All of this is a horrible pain. If it's any consolation to MS, I also found NHibernate incapable of the task. In my experience, in real-world usage too many DBs have these kinds of issues, hence my recommendation that EF is not really suitable for brown-field development.
For new projects, you have the luxury of evolving your DB design to match the assumptions of the framework. I can't do that as existing applications rely on the data design. We're hoping to improve the design incrementally, but attempting to correct all the problems up front would result in an infeasibly large migration event (for our resources).
Note: In the UK (at least), brown-field development is building houses on land that has previously been developed. Green-field development is building on our dwindling resources of countryside. I've reused the terms here.
We've just moved from VS 2008 to VS 2010 and from L2S to EF.
Even though we are using EF in a very similar fashion to L2S, it comforts me knowing that I have flexibility to do more advanced ORM should the need arise.
That said - for a small project - I would probably still use L2S. Medium to large projects I would use EF.
Also - EF seemed like a big learning curve, because the EF documentation prompted us to start investigating design patterns like Unit of Work, Repository, and Dependency Injection. However, I realised that these patterns applied to L2S as well as EF.
In terms of NHibernate - my research (i.e. browsing SO) indicates that the latest version of EF 4.0 is sufficiently advanced (POCO support etc.) to be considered a competitive product.
If third-party products are appropriate for you, try using LinqConnect. This product allows you to use Linq to SQL (with some modifications) with different DBMS (Oracle, MySQL, PostgreSQL, etc.).
LinqConnect offers you the following features that were previously unavailable in L2S:
Model-First approach
TPT and TPH support
POCO support
Automatic synchronization of model and database without data loss.
As for performance, in the latest comparative test on OrmBattle our provider is among the leaders.
All EF functions are supported in our DBMS-specific providers as well.

Presentation patterns to use with Ext

Which presentation patterns do you think Ext favors or have you successfully used to achieve high testability and also maintainability?
Since Ext component instances usually come tightly coupled with state and some sort of presentation logic (e.g. format validation for text fields), Passive View is not a natural fit. Supervising Presenter seems like it can work (and I've painlessly used it in one occasion). How about the suitability of Presentation Model? Any others?
While this question is specifically for Ext, it can apply to similar frameworks like SmartClient and even RIA technologies like Flex. So, if you have any first-hand pattern experiences with any other web UI technologies, your input would still be appreciated.
When thinking of presentation patterns, this is a great quote:
Separating user interface code from everything else is a key principle in well-engineered software. But it's not always easy to follow and it leads to more abstraction in an application that is hard to understand. Quite a lot of design patterns try to target this scenario: MVC, MVP, Supervising Controller, Passive View, PresentationModel, Model-View-ViewModel, etc. The reason for this variety of patterns is that this problem domain is too big to be solved by one generic solution. However, each UI Framework has its own unique characteristics and so they work better with some patterns than with others.
As far as Ext is concerned, in my opinion the closest pattern would be Model-View-ViewModel; however, this pattern is inherently difficult to code for whilst maintaining the separation of its key tenets (state, view, model).
That said, as per the quote above, each pattern tries to solve/compartmentalise/simplify a problem/situation often too complex for the individual application at hand, or which often fails when you try and take it to its absolute. As such, think about getting a 'best fit' as opposed to an absolute when pattern matching application development.
And remember:
The reason for this variety of patterns is that this problem domain is too big to be solved by one generic solution.
I hope this helps!
Two years have passed since this question was asked, and now Ext-JS 4 has a built-in implementation of the MVC pattern. However, instead of MVP (which I prefer), it favors a straight controller, because the views attach themselves to the models through stores.
Here's the docs on the controller:
http://docs.sencha.com/ext-js/4-1/#!/api/Ext.app.Controller
Nonetheless, it can be made to act more like a supervising controller. One nice aspect of Ext-JS is the global application object's ability to act like an event bus for handling controller-to-controller communication. See this post on how to do that:
http://www.sencha.com/forum/showthread.php?176495-How-to-listen-for-custom-events-fired-in-application
Of course the definitive explanation of all these patterns can be found here:
http://martinfowler.com/eaaDev/uiArchs.html

How often do you need to create a real class hierarchy in your day to day programming?

I create business applications with heavy database use. Most of the programming work is just connecting components to the database and modifying components to adapt to the general interface behaviour. I mostly use Delphi with its rich VCL library, and generally buy the components I need. I keep most of the business logic in the database. I rarely get the chance to build a nice class hierarchy from the bottom up, as there really is no need. Anyone else have this experience?
For me, occasionally a problem is clearer or easier with subclassing, but not often.
This also changes quite a bit in a given design as it's refactored.
My biggest problem is that programming courses and texts give so much weight to inheritance, hierarchies, and polymorphism through base classes (vs. interfaces or dynamic typing). This helps create legions of programmers that subclass everything and their mother.
The answer to this question is not totally language-agnostic.
Some languages, like Java, have a fairly limited set of language features available, meaning that subclassing is fairly often used because it's a convenient method for re-use - inheritance for technical reasons.
The closures and lambdas of C# make inheritance for technical reasons much less relevant, so inheritance is normally used for semantic reasons (like cat extends animal).
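A small sketch of that difference (invented names): the variation point that Java-style code would express as a subclass overriding a hook can simply be a delegate in C#:

```csharp
using System;
using System.Collections.Generic;

public class ReportPrinter
{
    // The "hook" is a function passed in, not an abstract method to override.
    public void Print(IEnumerable<decimal> amounts, Func<decimal, string> format)
    {
        foreach (var amount in amounts)
            Console.WriteLine(format(amount));
    }
}

public static class Demo
{
    public static void Main()
    {
        var printer = new ReportPrinter();
        var amounts = new[] { 12.5m, 99m };

        // Two "subclasses" worth of behaviour, expressed as lambdas.
        printer.Print(amounts, a => a.ToString("C"));
        printer.Print(amounts, a => a.ToString("0.00"));
    }
}
```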
On the last C# project I worked on, we more or less made all of the class hierarchies within a few weeks; after that it was more or less over.
On my current Java project we create new class hierarchies all the time.
Other languages will have other features that similarly affect this balance (mixins come to mind).
I put on my architecting/class design hat probably once or twice a month. It's probably the best hat I have and is the most fun to wear.
Depends what stage of the lifecycle your project is in though.
When you're tackling problem domains you are well familiar with and already have a common code base to work from, you often have no need to create a new class hierarchy. It's when you stumble upon problems you have no ready solutions for that you start building your own.
It's also very dependent on the type of applications you develop. If your domain already has well-accepted conventions and libraries to work from, there probably isn't any need to reinvent the wheel (other than for personal / academic interest). Some areas have inherently fewer available resources to work with, and in those you'll find yourself building everything from scratch most of the time.
A majority of applications, especially business applications, contain at least some kind of business logic. I would contend that business logic should not be in the database, but rather in the application. You can put referential integrity in the database, as I think that is a good choice, but business logic should be only in the application.
By class hierarchy, I suppose you mean: do you always have to end up with some inheritance in your object model? Then the answer is no. But chances are you can often find some common code, factor it out, and create a base class to contain the common code.
If you agree with me on the point that business logic should not be in the database but in the application, then I recommend you look into the MVC design pattern to guide your design. Your design will contain classes or objects: your VCLs will represent the View, and you can have your Model classes map directly to the database tables, i.e. each member in a model class corresponds to a field in a database table (again, this is the norm, but there will be exceptions where this simplicity fails to apply). Then you'll need a layer to handle the CRUD (Create, Read, Update, Delete) operations between the Model classes and the database tables. You will end up with a "layered" application that is easier to maintain and enhance.
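A bare-bones sketch of that layering (names invented, and in C# rather than Delphi purely for illustration):

```csharp
// A Model class mirroring a table, one member per column.
public class Invoice
{
    public int Id { get; set; }          // maps to Invoices.Id
    public decimal Amount { get; set; }  // maps to Invoices.Amount
}

// The CRUD layer: the only place that talks to the database. The View
// (your forms) works with Invoice objects, never with tables directly.
public interface IInvoiceStore
{
    void Create(Invoice invoice);
    Invoice Read(int id);
    void Update(Invoice invoice);
    void Delete(int id);
}
```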
It depends on what you mean by hierarchy - inheritance or layering?
When object oriented languages first came out, inheritance was overused. Complicated hierarchies were common. Now, interfaces (as in Java and C#) provide a simpler way to get the benefit of polymorphism without the complications of inheritance. I rarely use inheritance anymore.
Layering, however, is vital when creating a large application. Layering prevents general low-level classes (like lists) from directly referencing specific high-level classes (like web browser windows). As far as I know, there isn't a formal way to describe layering, but there are general guidelines (model-view-controller (MVC), separate GUI logic from business logic, separate data from presentation, etc.).
It really depends on the types/phases of the projects you're working on. I happen to do that everyday because I'm working on database internals for a new database, creating related libraries/frameworks. I'd imagine doing that a lot less if I'm working within a mature framework using other people's libraries.
I'm doing Infrastructure for our company's product, so I'm writing a lot of code that will be used later by guys in other teams. So I end up writing lots of abstract classes, interfaces, hierarchies and so on. Mostly it's just a pattern of "default behaviour in an abstract/virtual class, which other programmers may override".
Very challenging, I must say.
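That pattern looks roughly like this (a generic sketch with invented names, not anyone's actual code):

```csharp
// Default behaviour lives in the base class; other teams override only
// the pieces they need.
public abstract class MessageHandler
{
    // The fixed skeleton (template method).
    public void Handle(string message)
    {
        Process(Parse(message));
    }

    // A sensible default most teams keep...
    protected virtual string Parse(string message)
    {
        return message.Trim();
    }

    // ...and the one step every team must supply.
    protected abstract void Process(string message);
}

public class LoggingHandler : MessageHandler
{
    protected override void Process(string message)
    {
        System.Console.WriteLine("handled: " + message);
    }
}
```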
The time that I find class hierarchies most beneficial is when the relationship between objects actually does match a true "is-a" relationship in the domain.
However, if I can avoid large hierarchies I will, because they are often a little more tricky to map to relational databases and can really complicate your database designs. Since you say most of your applications make heavy use of databases, this would be something to take into consideration.

Switching to ORMs

I'm toying with the idea of phasing an ORM into an application I support. The app is not very structured and has no unit tests, so any change will be risky. I'm obviously concerned about whether I've got a good enough reason to change. The idea is that there will be less boilerplate code for data access and therefore greater productivity.
Does this ring true with your experiences?
Is it possible or even a good idea to phase it in?
What are the downsides of an ORM?
I would strongly recommend getting a copy of Michael Feathers' book Working Effectively with Legacy Code (by "legacy code" Feathers means any system that isn't adequately covered by unit tests). It is full of good ideas which should help you with your refactoring and phasing in of best practices.
Sure, you could phase in the introduction of an ORM, initially using it for accessing some subset of your domain model. And yes, I have found that use of an ORM speeds up development time - this is one of the key benefits and I certainly don't miss the days when I used to laboriously hand-craft data access layers.
Downsides of ORM - from experience, there is inevitably a bit of a learning curve in getting to grips with the concepts, configuration and idiosyncrasies of the chosen ORM solution.
Edit: corrected author's name
The "Robert C Martin" book, which was actually written by Michael Feathers ("Uncle Bob" is, it seems, a brand name these days!) is a must.
It's near-impossible - not to mention insanely time-consuming - to put unit tests into an application not developed with them. The code just won't be amenable.
But that's not a problem. Refactoring is about changing design without changing function (I hope I haven't corrupted the meaning too badly there) so you can work in a much broader fashion.
Start out with big chunks. Set up a repeatable execution, and capture what happens as the expected result for subsequent executions. Now you have your app, or part of it at least, under test. Not a very good or comprehensive test, sure, but it's a start and things can only get better from there.
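In code, such a "capture the current behaviour" test might look like this (a characterization-test sketch with invented names; the legacy class below stands in for whatever your app really does):

```csharp
using NUnit.Framework;

[TestFixture]
public class LegacyReportCharacterizationTests
{
    [Test]
    public void GenerateReport_MatchesRecordedOutput()
    {
        // Captured from the first run and committed alongside the tests.
        string expected = System.IO.File.ReadAllText("approved-report.txt");

        string actual = LegacyReportGenerator.Generate(customerId: 42);

        // Any refactoring - e.g. swapping in an ORM - must keep this green.
        Assert.AreEqual(expected, actual);
    }
}

// Stand-in for the real legacy code under test.
public static class LegacyReportGenerator
{
    public static string Generate(int customerId)
    {
        return "report for customer " + customerId;
    }
}
```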
Now you can start to refactor. You want to start extracting your data access code so that it can be replaced with ORM functionality without disturbing too much. Test often: with legacy apps you'll be surprised what breaks; cohesion and coupling are seldom what they might be.
I'd also consider looking at Martin Fowler's Refactoring, which is, obviously enough, the definitive work on the process.
I work on a large ASP.net application where we recently started to use NHibernate. We moved a large number of domain objects that we had been persisting manually to Sql Server over to NHibernate instead. It simplified things quite a bit and made it much easier to change things over time. We're glad we made the changes and are using NHibernate where appropriate for a lot of our new work.
I heard that TypeMock is often used to refactor legacy code.
I seriously think introducing ORM into a legacy application is calling for trouble (and might be the same amount of trouble as a complete rewrite).
Other than that, ORM is a great way to go, and should definitely be considered.
The rule for refactoring is: do unit tests.
So maybe first you should put some unit tests in place, at least for the core/major things.
The ORM should be designed to decrease boilerplate code. The time/trouble vs. ROI of being enterprisey is up to you to estimate :)
Unless your code is already architected to allow for "hot swapping" of your model-layer backend, changing it in any way will always be extremely risky.
Trying to build a safety net of unit tests on poorly architected code isn't going to guarantee success, only make you feel safer about changing it.
So, unless you have a strong business case for taking on the risks involved, it's probably best to leave well enough alone.