Continuing to work on my port of a CakePHP 1.3 app to 3.0, and have run into another issue. I have a number of areas where functionality varies based on certain settings, and I have previously used a modular component approach. For example, Leagues can have round-robin, ladder or tournament scheduling. This impacts on the scheduling algorithm itself, such that there are different settings required to configure each type, but also dictates the way standings are rendered, ties are broken, etc. (This is just one of 10 areas where I have something similar, though not all of these suffer from the problem below.)
My solution to this in the past was to create a LeagueComponent with a base implementation, and then extend that class as LeagueRoundRobinComponent, LeagueLadderComponent and LeagueTournamentComponent. When controllers need to do anything algorithm-specific, they check the schedule_type field in the leagues table, create the appropriate component, and call functions in it. This still works just fine.
I mentioned that this also affects views. The old solution for this was to pass the league component object from the controller to the view via $this->set. The view can then query it for various functionality. This is admittedly a bit kludgy, but the obvious alternative seems to be extracting all the info the view might require and setting it all individually, which doesn't seem to me to be a lot better. If there's a better option, I'm open to it, but I'm not overly concerned about this at the moment.
The problem I've encountered is when tables need to get some of that component info. The issue at hand is when I am saving my add/edit form and need to deal with the custom settings. In order to be as flexible as possible for the future, I don't have all of these possible setting fields represented in the database, but rather serialize them into a single "custom" column. (Reading this all works quite nicely with a custom constructor and getters.) I had previously done this by loading the component from the beforeSave function in the League model, calling the function that returns the list of schedule-specific settings, extracting those values and serializing them. But with the changes to component access in 3.0, it seems I can no longer create the component in my new beforeMarshal function.
I suppose the controller could "pass" the component to the table by setting it as a property, but that feels like a major kludge, and there must be a better way. It doesn't seem like extending the table class is a good solution, because that would horribly complicate associations. I don't think that custom types are the solution either, as I don't see how they'd access a component. I'm leaning towards passing just the list of fields from the controller to the model, since that's more of a "configuration" approach. Speaking of configuration, I suppose it could all just go into the central Configure data store, but that has always felt to me like somewhere you only put "small" data. I'm wondering if there's a better design pattern I could follow that would let the table continue to take care of these implementation details on its own, without the controller needing to get involved; if at some point I decide to change from the serialized method to adding all of the possible columns, it would be nice to have those changes restricted to the table class.
Oh, and keep in mind that this list of custom settings is needed in both a view and the table, so whatever solution is proposed will ideally provide a way for both of them to access it, rather than requiring duplication of code.
Let's say I have a bunch of kittens. Perhaps I have a KittenViewModel. I want to show it as a kitten card in a card view, but also as broken down into columns in a list view. Does MvvmCross support binding the KittenViewModel to multiple views? Should I have multiple ViewModels that refer back to a single model?
Disclaimer: I know that I am replying to an old question which you may well have forgotten; this is for posterity. Also, I have limited understanding of the MVVM design pattern. I remember reading somewhere that Views and ViewModels are typically in 1-to-1 correspondence, so the conventional answer is probably "You shouldn't do that. Reconsider your design."
With that being said, I recently struggled with this for a while before coming up with a very simple solution that operates under the following assumptions: (1) you wish to use the exact same instance of a ViewModel in two separate Views; (2) for whatever reason, you cannot use a DataTemplateSelector to determine which View to use; and (3) you do not mind creating multiple Views for the same ViewModel.
The solution is to define separate data templates for the KittenViewModel as resources for whatever controls you are going to use to display the data. For example, if you have created a KittenCardView user control and intend to display it in a ContentControl, you can set the DataTemplate in a ContentControl resource, something like:
<ContentControl>
    <Control.Resources>
        <DataTemplate DataType="{x:Type viewmodel:KittenViewModel}">
            <view:KittenCardView/>
        </DataTemplate>
    </Control.Resources>
</ContentControl>
The KittenColumnView (or whatever you call it) would be handled similarly. You may find it helpful to define one of the Views as a Window or App resource if you typically use one and only need the other in special circumstances.
What is the meaning of these interfaces? Even if we implement an interface on a class, we have to declare its functionality again and again each time we implement it on a different class, so what is the reason interfaces exist in AS3, or in any other language which has interfaces?
Thank you
I basically agree with the answers posted so far, just had a bit to add.
First to answer the easy part, yes other languages have interfaces. Java comes to mind immediately but I'm pretty sure all OOP languages (C++, C#, etc.) include some mechanism for creating interfaces.
As stated by Jake, you can write interfaces as "contracts" for what will be fulfilled, in order to separate work. To take a hypothetical: say I'm working on A, you're working on C, and Bob is working on B. If we define B' as an interface for B, we can quickly and relatively easily define B' (relative to defining B, the implementation), and all go on our way. I can assume that from A I can code to B', you can assume that from C you can code to B', and when Bob gets done with B we can just plug it in.
This leads to Jugg1es' point. The ability to swap out a whole functional piece is made easier by "dependency injection" (if you don't know this phrase, please google it). This is exactly that: you create an interface that defines, generally, what something will do, say a database connector. For all database connectors, you want them to be able to connect to the database and run queries, so you might define an interface that says the classes must have a "connect()" method and a "doQuery(stringQuery)" method. Now let's say Bob writes the implementation for MySQL databases, and then your client says, "Well, we just paid 200,000 for new servers and they'll run Microsoft SQL." To take advantage of that with your software, all you'd need to do is swap out the database connector.
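To make that concrete, here is a minimal sketch in C# (one of the languages mentioned earlier; IDatabaseConnector, MySqlConnector and SqlServerConnector are just invented names for illustration):

using System.Collections.Generic;

// The interface is the "contract": it says what a connector does, not how.
public interface IDatabaseConnector
{
    void Connect();
    IList<string> DoQuery(string query);
}

// Bob's MySQL implementation.
public class MySqlConnector : IDatabaseConnector
{
    public void Connect() { /* open a MySQL connection */ }
    public IList<string> DoQuery(string query) { /* run it against MySQL */ return new List<string>(); }
}

// The replacement the client asked for; callers never notice the difference.
public class SqlServerConnector : IDatabaseConnector
{
    public void Connect() { /* open a SQL Server connection */ }
    public IList<string> DoQuery(string query) { /* run it against SQL Server */ return new List<string>(); }
}

// Code written against the interface never changes; dependency injection
// just hands it a different connector.
public class ReportService
{
    private readonly IDatabaseConnector _db;
    public ReportService(IDatabaseConnector db) { _db = db; }

    public IList<string> MonthlyReport()
    {
        _db.Connect();
        return _db.DoQuery("SELECT * FROM sales");
    }
}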
In real life, I have a friend who runs a meat packing/distribution company in Chicago. The company that makes their software/hardware setup for scanning packages and weighing things as they come in and out (inventory) is telling them they have to upgrade to a newer OS/server and newer hardware to keep up with the software. The software is not written in a modular way that allows them to maintain backwards compatibility. I've been in this boat plenty of times, telling someone xyz needs to be upgraded to get abc functionality that will make doing my job 90% easier. Anyhow, I guess the point is that in the real world people don't always make use of these things, and it can bite you in the ass.
Interfaces are vital to OOP, particularly when developing large applications. One example is if you need a data layer that returns data on, say, Users. What if you eventually change how the data is obtained, say you started with XML web service data but then switched to a flat file or something? If you created an interface for your data layer, you could create another class that implements it and make all the changes to the data layer without ever having to change the code in your application layer. I don't know if you're using Flex or Flash, but when using Flex, interfaces are very useful.
Interfaces are a way of defining the functionality of a class. It might not make a whole lot of sense when you are working alone (especially starting out), but when you start working in a team it helps people understand how your code works and how to use the classes you wrote (while keeping your code encapsulated). That's the best way to think of them at an intermediate level, in my opinion.
While the existing answers are pretty good, I think they miss the chief advantage of using Interfaces in ActionScript, which is that you can avoid compiling the implementation of that Interface into the Main Document Class.
For example, if you have an ISpaceShip Interface, you now have a choice of several things you can do to populate a variable typed to that Interface. You could load an external swf whose main Document Class implements ISpaceShip. Once the Loader's contentLoaderInfo's COMPLETE event fires, you cast the content to ISpaceShip, and the implementation of that (whatever it is) is never compiled into your loading swf. This allows you to put real content in front of your users while the load process happens.
By the same token, you could have a timeline instance declared in the parent AS class of type ISpaceShip, with "Export for ActionScript in Frame N" unchecked. This will compile on the frame where it is first used, so you no longer need to account for it in your preloading time. Do this with enough things and suddenly you don't even need a preloader.
Another advantage of coding to Interfaces is if you're doing unit tests on your code, which you should unless your code is completely trivial. This enables you to make sure that the code is succeeding or failing on its own merits, not based on the merits of the collaborator, or where the collaborator isn't appropriate for a test. For example, if you have a controller that is designed to control a specific type of View, you're not going to want to instantiate the full view for the test, but only the functionality that makes a difference for the test.
If you don't have support in your work situation for writing tests, coding to interfaces helps make sure that your code will be testable once you get to the point where you can write tests.
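A small hedged sketch of the idea (written in C# with invented names, but the same shape applies in AS3): the controller depends only on an interface, so a unit test can hand it a trivial fake instead of instantiating the full view.

// The controller only knows about the interface.
public interface IScoreView
{
    void ShowScore(int score);
}

public class ScoreController
{
    private readonly IScoreView _view;
    private int _total;

    public ScoreController(IScoreView view) { _view = view; }

    public void AddPoints(int points)
    {
        _total += points;
        _view.ShowScore(_total);
    }
}

// In a unit test, a lightweight fake stands in for the real view
// and records what the controller told it to do.
public class FakeScoreView : IScoreView
{
    public int LastScore;
    public void ShowScore(int score) { LastScore = score; }
}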
The above answers are all very good; the only thing I'd add - and it might not be immediately clear in a language like AS3, where there are several untyped collection classes (Array, Object and Dictionary) and Object/dynamic classes - is that interfaces are a means of grouping otherwise disparate objects by type.
A quick example:
Imagine you have a space shooter where the player has missiles which lock on to various targets. Suppose, for this purpose, you want any type of object which can be locked onto to have internal functions for registering this (aka an interface):
function lockOn():void;           // Tells the object something's locked onto it
function getLockData():Object;    // Returns information: position, heat, whatever, etc.
These targets could be anything, a series of totally unrelated classes - enemy, friend, powerup, health.
One solution would be to have them all inherit from a base class which contains these methods - but Enemies and Health Pickups wouldn't logically share a common ancestor (and if you find yourself making bizarre inheritance chains to accommodate your needs then you should rethink your design!), and your missile will also need a reference to the object it's locked onto:
var myTarget:Enemy;//This isn't going to work for the Powerup class!
or
var myTarget:Powerup;//This isn't going to work for the Enemy class!
...but if all lockable classes implement the ILockable interface, you can set this as the type reference:
var myTarget:ILockable;//This can be set as Enemy, Powerup, any class which implements ILockable!
..and have the functions above as the interface itself.
They're also handy when using the Vector class (the name may mislead you, it's just a typed array) - they run much faster than arrays, but only allow a single type of element - and again, an interface can be specified as type:
var lockTargets:Vector.<Enemy> = new Vector.<Enemy>(); // New vector that only accepts Enemy instances
lockTargets[0] = new HealthPickup(); // Compiler won't like this!
but this...
var lockTargets:Vector.<ILockable> = new Vector.<ILockable>();
lockTargets[0] = new HealthPickup();
lockTargets[1] = new Enemy();
Will, provided Enemy and HealthPickup implement ILockable, work just fine!
I have a linq-to-sql object from a table. I want to store similar data in the profile collection. Technically, I don't need all of the data in the linq2sql table, and I don't really care for the legacy naming convention. My first thought was to create a collection of CRUD objects.
But if I choose this solution I will have double the classes, with much overlapping functionality. If I use the linq2sql objects as-is, then I'll be dealing with abstractions that contain more data than necessary.
To make this clearer, here is a similar example:
This goes into the database, and a linq-2-sql abstraction is created for it:

NoteSaved
    Date
    Id
    UserId
    Text
    ....
    Custom Methods

This goes into the user profile:

[Serializable]
SavedSearchText
    Text
    ...
    Custom Methods
SavedSearchText doesn't need junk like Id, UserId, and Date, and that data doesn't even make sense for it. Yet the custom functionality will overlap for both classes.
I see 2 trivial approaches:
Create a whole new set of objects that are CRUD, just for this purpose
Use the linq2sql objects as a proxy for the CRUD, i.e. using my "wisdom" that these objects are not really "those objects"
I was going to go with route 1, but I see a lot of duplication. It is not very DRY. What are some solutions that keep things as DRY as possible while also maintaining a clear architecture? In other words, I want to avoid having to duplicate the same methods for every object, AND I want to avoid storing unneeded data in the profile, such as Ids or DateStored, which are not required.
I thought this would be obvious, but SavedSearchText and SearchText share a common data field, Text, and common functionality, e.g. SomeFunction1 and SomeFunction2, such as FindText.
Edit/Update:
Typically this would be handled with inheritance: we'd have a base business class Text and then two derived types, SavedText and UserText. But with linq2sql I do not see a way to do this while keeping with the DRY principle.
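Setting the linq2sql mapping question aside for a moment, the inheritance I have in mind would look roughly like this sketch (TextBase instead of Text just to avoid clashing with the Text property; the members mirror the example above):

using System;

// Shared data and behaviour live in the base class, so FindText is written once.
public abstract class TextBase
{
    public string Text { get; set; }

    public bool FindText(string term)
    {
        return Text != null && Text.Contains(term);
    }
}

// The persisted flavour: carries the database-only fields.
public class SavedText : TextBase
{
    public int Id { get; set; }
    public Guid UserId { get; set; }
    public DateTime Date { get; set; }
}

// The profile flavour: nothing but the shared members.
[Serializable]
public class UserText : TextBase
{
}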
One might also choose to solve this via containment (a has-a relationship), but that is not really "right" for this context.
Obviously one could create one's own business layer, but that also doesn't keep with DRY, as linq2sql already has much of the functionality required.
Perhaps the cleanest solution would be to create CRUD objects that only read from / dump back into the linq2sql objects. Unfortunately, these linq2sql objects will not actually be "tables", and some fields won't make sense. Still, that appears to be the DRYest solution.
It may even be possible in this case to use advanced methods such as extension methods, which would extend both classes, but I prefer not to use extension methods unless required.
During coding I frequently encounter this situation:
I have several objects (ConcreteType1, ConcreteType2, ...) with the same base type AbstractType, which has abstract methods save and load. Each object can (and has to) save some specific kind of data by overriding the save method.
I have a list of AbstractType objects which contains various ConcreteTypeX objects.
I walk the list and call the save method for each object.
At this point I think it's a good OO design. (Or am I wrong?) The problems start when I want to reload the data:
Each object can load its own data, but I have to know the concrete type in advance so that I can instantiate the right ConcreteTypeX and call the load method. So the loading method has to know a great deal about the concrete types. I usually "solved" this problem by writing some kind of marker before calling save, which is then used by the loader to determine the right ConcreteTypeX.
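A rough sketch of that marker approach (C#-flavoured, with invented names; every additional ConcreteTypeX needs its own case in the loader):

using System.Collections.Generic;
using System.IO;

public abstract class AbstractType
{
    public abstract void Save(BinaryWriter writer);
    public abstract void Load(BinaryReader reader);
}

public class ConcreteType1 : AbstractType
{
    public override void Save(BinaryWriter writer) { /* write this type's data */ }
    public override void Load(BinaryReader reader) { /* read it back */ }
}

public static class Storage
{
    public static void SaveAll(IEnumerable<AbstractType> items, BinaryWriter writer)
    {
        foreach (var item in items)
        {
            writer.Write(item.GetType().Name);  // the "marker"
            item.Save(writer);
        }
    }

    public static AbstractType LoadOne(BinaryReader reader)
    {
        string marker = reader.ReadString();
        AbstractType item;
        switch (marker)   // this is the part that has to know every concrete type
        {
            case "ConcreteType1": item = new ConcreteType1(); break;
            // case "ConcreteType2": ... and so on for every new type
            default: throw new InvalidDataException("Unknown marker: " + marker);
        }
        item.Load(reader);
        return item;
    }
}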
I always had/have a bad feeling about this. It feels like some kind of anti-pattern...
Are there better ways?
EDIT:
I'm sorry for the confusion, I re-wrote some of the text.
I'm aware of serialization and perhaps there is some next-to-perfect solution in Java/.NET/yourFavoriteLanguage, but I'm searching for a general solution, which might be better and more "OOP-ish" compared to my concept.
Is this either .NET or Java? If so, why aren't you using serialisation?
If you can't simply use serialization, then I would still definitely pull the object-loading logic out of the base class. Your instinct is right: you've correctly identified a code smell. The base class shouldn't need to change when you change or add derived classes.
The problem is, something has to load the data and instantiate those objects. This sounds like a job for the Abstract Factory pattern.
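As a hedged sketch of that idea (reusing the AbstractType and marker shapes from the question; the names are invented): have each concrete type register a small factory delegate under its marker, so the loading code itself never needs to be edited when a new type is added.

using System;
using System.Collections.Generic;

public static class TypeFactory
{
    private static readonly Dictionary<string, Func<AbstractType>> Creators =
        new Dictionary<string, Func<AbstractType>>();

    public static void Register(string marker, Func<AbstractType> creator)
    {
        Creators[marker] = creator;
    }

    public static AbstractType Create(string marker)
    {
        return Creators[marker]();   // throws if the marker was never registered
    }
}

// At startup, each concrete type (or a composition root) registers itself:
//   TypeFactory.Register("ConcreteType1", () => new ConcreteType1());
//   TypeFactory.Register("ConcreteType2", () => new ConcreteType2());
//
// The loader then stays generic:
//   AbstractType item = TypeFactory.Create(reader.ReadString());
//   item.Load(reader);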
There are better ways, but let's take a step back and look at it conceptually. What are all the objects doing? Loading and saving. When you get the object from memory, you really shouldn't have to care whether it gets its information from a file, a database, or the Windows registry. You just want the object loaded. That's important to remember, because later on your maintenance programmer will look at the LoadFromFile() method and wonder, "Why is it called that, since it really doesn't load anything from a file?"
Secondly, you're running into an issue that we all run into, and it comes down to dividing up the work. You want a level that handles getting data from a physical source, a level that manipulates this data, and a level that displays this data. This is the crux of N-Tier development: articles on creating a Data Access Layer discuss your problem in great detail, and there are plenty of sample code projects around that show how to build one.
If it's Java you seek, simply substitute 'Java' for '.NET' and search for 'Java N-Tier development'. However, besides syntactical differences, the design structure is the same.
I'm trying to find the most efficient way to send my Linq2Sql objects to my jQuery plugins via JSON, preferably without additional code for each class.
The EntitySets are the main issue as they cause not only recursion, but when recursion is ignored (using JSON.NET's ReferenceLoopHandling feature) a silly amount of data can be retrieved, when I only really need 1 or 2 levels. This gets really bad when you're talking about Users, Roles and Permissions as you get the User's Role, the User's Permissions, the Role's Permissions, and the Role's Users all up in your JSON before it hits recursion and stops. Compare this to what I actually want, which is just the RoleId.
My initial approach was to send a "simplified" version of the object, where I reflect the entity and set any EntitySets to null, but of course in the above example Roles gets set to null and so RoleId is null. Setting only the 2nd-level properties to null kind of works, but there's still too much data: the EntitySets that weren't killed (the first-level ones) repopulate their associated tables when the JsonSerializer does its reflection, and I still get all those Permission objects that I just don't need.
I definitely don't want to get into the situation of creating a lightweight version of every class and implementing "From" and "To" style methods on them, as this is a lot of work and seems wasteful.
Another option is to put a JsonIgnoreAttribute on the relevant properties, but this is going to cause a nightmare scenario whenever classes need to be re-generated.
My current favourite solution, which I like and hate at the same time, is to put the classes into opt-in serialization mode, but because I can't add attributes to the real properties I'd have to create JSON-only properties in a partial class. Again, this seems wasteful, but I think it's the best option so far.
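For what it's worth, the opt-in version I have in mind looks roughly like this (the JSON.NET attributes are real; User, Id and RoleId stand in for the real generated members, and the partial class has to live in the same namespace as the generated one):

using Newtonsoft.Json;

// Attributes on a partial declaration apply to the whole (generated) class.
[JsonObject(MemberSerialization.OptIn)]
public partial class User
{
    // Id and RoleId come from the LINQ to SQL generated half of the partial class.
    // Nothing else is opted in, so the EntitySets simply never get serialized.
    [JsonProperty("id")]
    public int JsonId { get { return Id; } }

    [JsonProperty("roleId")]
    public int JsonRoleId { get { return RoleId; } }
}

// Usage stays the usual one-liner:
//   string json = JsonConvert.SerializeObject(user);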
Any suggestions gratefully received!
Have you tried to set the Serialization Mode in the dbml file?
It's a standard property under Code Generation, and when you set it to Unidirectional it won't generate all the additional levels of your table structure. I've used this with Silverlight and WCF to send data, because the data contracts don't allow additional levels to be sent (Silverlight is very limited in what you can and can't do).
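If it helps, the setting ends up as an attribute on the Database element in the .dbml file itself, roughly like this (sketch only; the rest of the file is untouched):

<Database Name="MyDatabase" Serialization="Unidirectional"
          xmlns="http://schemas.microsoft.com/linqtosql/dbml/2007">
  <!-- tables, associations, etc. as generated by the designer -->
</Database>

As far as I remember, the generated classes then carry [DataContract]/[DataMember] attributes and only one direction of each association is marked for serialization, which is why the extra levels stop appearing.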
Hope this helps!