Reusing queries in two DataContexts using dependency injection - LINQ to SQL

I have a web application that uses LINQ-to-SQL queries (soon to be upgraded to LINQ-to-EF compiled queries) and for which there's a data context and a database already in place. I want to create a demo version of the application, and for the demo I want to use an entirely different database file that will have the same tables. So in essence, I'll have the same data structure for two different databases: one database for logged-in users and one database for demo users. I want to reuse many of the queries I've already written; they look like this:
public class FruitQueries
{
    public List<SomeObjectModel> MyQuery(list of parameters)
    {
        using (MyDataContext TheDC = new MyDataContext())
        {
            var TheQueryResult = (from f in TheDC.Fruits
                                  ......).ToList();
            return TheQueryResult;
        }
    }

    public List<SomeObject> AnotherQuery(some other parameters) {...}
}
Now I think I know that this calls for dependency injection, where the data context is passed in as a parameter, but I'm not sure about the syntax. How do you reuse queries using dependency injection so they work against two different databases? Right now I'm using a using statement, and I want to keep that pattern; is that possible if I inject the DC as a parameter?
Thanks.

Since you already have a lot of code in place, probably the simplest thing to do is to inject a factory:
public interface IMyDataContextFactory
{
    MyDataContext CreateNewContext();
}
All the code will roughly stay the same:
public List<SomeObjectModel> MyQuery(params)
{
    using (var TheDC = this.factory.CreateNewContext())
    {
        var TheQueryResult = (from f in TheDC.Fruits
                              ......).ToList();
        return TheQueryResult;
    }
}
You can let the injected IMyDataContextFactory decide how to construct a MyDataContext instance (based on the user). This would be trivial.
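For example, a minimal sketch of such a factory (IsDemoUser() and the two connection strings are assumptions; substitute whatever identifies demo users in your application). Note that the generated LINQ-to-SQL DataContext already has a constructor overload that takes a connection string, so MyDataContext itself needs no changes:

public class MyDataContextFactory : IMyDataContextFactory
{
    public MyDataContext CreateNewContext()
    {
        // IsDemoUser() is a hypothetical helper (session flag, role check, ...).
        string connectionString = IsDemoUser()
            ? demoConnectionString
            : liveConnectionString;

        return new MyDataContext(connectionString);
    }
}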
In the end it will probably be better to inject a MyDataContext (or an abstraction such as IUnitOfWork) into consumers, but that changes everything completely. Since the context is then passed in from the outside, the consumer is no longer responsible for disposing it; someone else is. Disposing such an instance isn't that hard with most DI containers, but it gets harder when you want to share the same MyDataContext instance across multiple consumers (within the same web request, for instance), and then: where do you call SubmitChanges?
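For illustration, this is roughly what that alternative would look like (a sketch only; the context's lifetime and disposal are assumed to be handled by the container):

public class FruitQueries
{
    private readonly MyDataContext context;

    // The context is supplied from the outside; no using block here,
    // because whoever created the context (e.g. the DI container) owns it.
    public FruitQueries(MyDataContext context)
    {
        this.context = context;
    }

    public List<SomeObjectModel> MyQuery(/* parameters */)
    {
        var result = (from f in this.context.Fruits
                      /* ...same query as before... */
                      select new SomeObjectModel()).ToList();
        return result;
    }
}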

Elaborating on the previous answer:
What you can do is provide the connection string to the DC (would this qualify as constructor injection?):
using (MyDataContext TheDC = new MyDataContext(this.factory.CreateConString()))
This way, disposal is still handled by the consumer and you can continue your using() approach. Your factory can read the two different connection strings from your web.config and determine the right one to use based on the user (not as trivial as it may seem).
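A sketch of what that factory method might look like (the connection-string names and the UserIsDemo() helper are assumptions):

public string CreateConString()
{
    // "LiveDb" and "DemoDb" are hypothetical entries under
    // <connectionStrings> in web.config.
    string name = UserIsDemo() ? "DemoDb" : "LiveDb";
    return ConfigurationManager.ConnectionStrings[name].ConnectionString;
}

ConfigurationManager lives in System.Configuration, so the web project needs a reference to that assembly.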
PS: I think the quickest way is to deploy the demo application to a different URL so it can have a separate web.config and you don't need to code anything, but that does not answer your question.

Related

How to call a RemoteObject method that is outside of my TitleWindow component in Flex?

I have a TitleWindow component. It allows me to save some data provided through three TextInput controls.
That data "fills" a DropDownList which is in another TitleWindow component, not inside the original one.
How can I call the RemoteObject method that fills (or refreshes) my DropDownList?
Any ideas will be appreciated!
You can simply use a Singleton as a model if you'd like; this will allow you to share data. But beware: keep only the data that needs to be shared in there, or it will just become a global nightmare.
Using a singleton means you'll have a class that you can only ever have one instance of. If you put properties in that class, then any time you reference them you get the same memory throughout the application's execution.
http://blog.pixelbreaker.com/actionscript-3-0/as30-better-singletons
Marking the singleton class or individual properties as Bindable will make it so you can watch for the changes and call a function.
http://livedocs.adobe.com/flex/3/html/help.html?content=databinding_8.html
Putting this together you have something like this:
[Singleton.as]
package
{
    [Bindable]
    public class Singleton
    {
        public var myListData:Array;

        public static var instance:Singleton;

        public static function getInstance():Singleton
        {
            if( instance == null ) instance = new Singleton( new SingletonEnforcer() );
            return instance;
        }

        public function Singleton( pvt:SingletonEnforcer )
        {
            // init class
        }
    }
}

internal class SingletonEnforcer{}
Somewhere else you want to get a handle on this
[MyTitleWindow.as]
var instance:Singleton = Singleton.getInstance();
instance.myListData = [1,2,3];

[MyTitleWindowWithAList]
var instance:Singleton = Singleton.getInstance();
BindingUtils.bindSetter(funcUpdateList, instance, "myListData");

private function funcUpdateList(data:Object):void
{
    myList.dataProvider = data as Array;
}
Another option is to create an event that carries your data payload, dispatch that event from the first TitleWindow, and capture it. The problem with this is that you have to register the listeners on the PopUpManager or SystemManager, I believe, because the TitleWindows aren't direct children of the Application.
Singletons are a bad idea and you should not get in the habit of using them. Instead, just dispatch an event from the View and catch it from something else that has access to your Service object.
Note that your Service should not be part and parcel of any View--the responsibility of a View is displaying data and capturing requests from the user to change the data, not communicating with a server.
For examples of an application written with this pattern in mind, check out
Refactoring with Mate (http://www.developria.com/2010/05/refactoring-with-mate.html) - the example has View Source enabled
The same application done with RobotLegs - again, View Source is enabled.
Note that these are written against some popular frameworks, but they are written in such a way that you can easily replace that framework code with something else, even your own code.
For reference, here is the naive implementation, where the service layer is called directly in the Views. You couldn't call a different service without changing the Views, though the use of the static service means you could use it from elsewhere.
That static usage survived into the later examples, though today I would never write something that depends on a globally accessible object. In part this is because I discovered Test-Driven Development, and it is impossible to replace the "real" static object with one that lets you isolate what you are testing. However, the fact that most of the code in the two "better" examples is insulated from that static object means it is trivial to replace it with one that is provided some other way.
The lesson here is if you're going to use static, global objects, lock them away behind as much abstraction as you can. But avoid them if you're at all interested in best practice. Note that a Singleton is a static global object of the worst kind.

Can I use nested DataContexts in LINQ to SQL?

Would creating another DataContext in the GetData3() method lead to problems?
public void SetOfDataMethods()
{
    using (DataContext DC = new DataContext())
    {
        var d1 = DC.GetData1();
        var d2 = DC.GetData2();
        var d3 = DC.GetData3();
        display(d3);
    }
}

public result GetData3()
{
    if (conditionA)
    {
        using (DataContext newContext = new DataContext())
        {
            var result = newContext.GetData4();
            return result;
        }
    }
}
No, I don't think it would necessarily create problems, but it could create confusion. Each context will be independent and they won't share transactions unless you explicitly code that in. I do something similar, using multiple data contexts: one for auditing and the other for the actual data, but they are separate and don't even map the same tables.
My suggestion is to push the data context up one level -- from the method to the class -- so all of the methods in the class can share the same data context. The reason that I didn't do this is that I explicitly wanted to separate the transactions so that I could log both failures and successes via the auditing utility. In places where I am sharing the same data context, I do use it at the instance level rather than the method level.
One advantage of pushing the context up is that you can then easily inject a mock context if you need to for unit testing. Creating a data context inside your method is going to make it difficult to unit test, since you don't have an easy way to isolate it from your actual database. I posted a blog entry on mocking/faking the LINQ data context using a wrapper a while back that may be helpful if you go down this road.
Do be aware that objects retrieved from one context are tied to that context unless you do some explicit magic to the contrary. As it stands, attempting to, say, delete an object from a context other than the one it was retrieved from will throw an exception.
I would suggest that you send the DataContext as a parameter and use the same DataContext.
Not easily, by default. You would have to detach entities bound to one context in order to use them with another data context. A simple solution would be implementing a Unit of Work pattern.
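To illustrate the parameter-passing suggestion above, GetData3 could accept the caller's context instead of creating its own (a sketch that keeps the asker's placeholder names):

public result GetData3(DataContext sharedContext)
{
    if (conditionA)
    {
        // Reusing the caller's context keeps every retrieved entity
        // attached to the same unit of work, so no detaching is needed.
        return sharedContext.GetData4();
    }

    return default(result); // placeholder for the missing else-branch
}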

AS3 loading architecture

I have a large application that needs to ensure that various items are loaded (at different times, not just at startup) before calling other routines that depend on said loaded items. What I find problematic is how my architecture ends up looking to support this: it is either littered with callbacks (and nested callbacks!), or pre-populated with dozens of neat little
private function SaveUser_complete(params:ReturnType):void
{
    continueOnWithTheRoutineIWasIn();
}
handlers, and so forth. Right now the codebase is only perhaps 2500 lines, but it is going to grow to probably around 10k. I just can't see any other way around this, but it seems so wrong (and laborious). Also, I've looked into PureMVC and Cairngorm, and these methods seem equally tedious, except with another layer of abstraction. Any suggestions?
Well, asynchronous operations always have this effect on code bases; unfortunately there's not really a lot you can do. If your loading operations form some sort of 'Service', then it would be best to make an IService interface, along with the appropriate MVC-style architecture, and use data tokens. Briefly:
//In your command or whatever
var service:IService = model.getService();
var asyncToken:Token = service.someAsyncOperation(commandParams);

//some messaging is used here, 'sendMessage' would be 'sendNotification' in PureMVC
var autoCallBack:Function = function(event:TokenEvent):void
{
    sendMessage(workOutMessageNameHere(commandParams), event.token.getResult());
    //tidy up listeners and dispose token here
}

asyncToken.addEventListener(TokenEvent.RESULT, autoCallBack, false, 0, true);
The 'workOutMessageNameHere()' part is, I assume, what you want to automate. You could either have some sort of huge switch, or a map of commandParams (URLs or whatever) to message names; either way, it's best to get this info from a model (in the same command):
private function workOutMessageNameHere(commandParams):String
{
    var model:CallbackModel = frameworkMethodOfRetrievingModels();
    return model.getMessageNameForAsyncCommand(commandParams);
}
This should hopefully just leave you with calling the command 'callService', or however you are triggering it; you can configure the callback map / switch in code or possibly via parsed XML.
Hope this gets you started (and, as I've just realized, is actually relevant).
EDIT:
Hi, just had another read-through of the problem you are trying to solve, and I think you are describing a series of finite states, i.e. a state machine.
It seems as if your sequences are roughly FunctionState -> LoadingState -> ResultState. This might be a better general approach to managing lots of little async 'chains'.
Agreeing with enzuguri. You'll need lots of callbacks no matter what, but if you can define a single interface for all of them and shove the code into controller classes or a service manager and have it all in one place, it won't become overwhelming.
I know what you are going through. Unfortunately I have never seen a good solution. Basically asynchronous code just kind of ends up this way.
One solution algorithm:
static var resourcesNeededAreLoaded:Boolean = false;
static var shouldDoItOnLoad:Boolean = false;
function doSomething()
{
if(resourcesNeededAreLoaded)
{
actuallyDoIt();
}
else
{
shouldDoItOnLoad = true;
loadNeededResource();
}
}
function loadNeededResource()
{
startLoadOfResource(callBackWhenResourceLoaded);
}
function callBackWhenResourceLoaded()
{
resourcesNeededAreLoaded = true;
if(shouldDoItOnLoad)
{
doSomething();
}
}
This kind of pattern allows you to do lazy loading, but you can also force a load when necessary. This general pattern can be abstracted and it tends to work alright. Note: an important part is calling doSomething() from the load callback rather than actuallyDoIt() directly, for reasons that will be obvious if you don't want your code to become out-of-sync.
How you abstract the above pattern depends on your specific use case. You could have a single class that manages all resource loading and acquisition and uses a map to manage what is loaded and what isn't and allows the caller to set a callback if the resource isn't available. e.g.
public class ResourceManager
{
    private var isResourceLoaded:Object = {};
    private var callbackOnLoad:Object = {};
    private var resources:Object = {};

    public function getResource(resourceId:String, callBack:Function):void
    {
        if(isResourceLoaded[resourceId])
        {
            callBack(resources[resourceId]);
        }
        else
        {
            callbackOnLoad[resourceId] = callBack;
            loadResource(resourceId);
        }
    }

    // ... snip the rest since you can work it out ...
}
I would probably use events and not callbacks, but that is up to you. Sometimes a central class managing all resources isn't possible, in which case you might want to pass a loading proxy to an object that is capable of managing the algorithm:
public class NeedsToLoad
{
    public var asyncLoader:AsyncLoaderClass;

    public function doSomething():void
    {
        asyncLoader.execute(resourceId, actuallyDoIt);
    }

    public function actuallyDoIt():void { }
}

public class AsyncLoaderClass
{
    /* vars like original algorithm */

    public function execute(resourceId:String, callback:Function):void
    {
        if(isResourceLoaded)
        {
            callback();
        }
        else
        {
            loadResource(resourceId);
        }
    }

    /* implements the rest of the original algorithm */
}
Again, it isn't hard to change the above from working with callbacks to events (which I would prefer in practice, but it is harder to write short example code for that).
It is important to see how the above two abstract approaches merely encapsulate the original algorithm. That way you can tailor an approach that suits your needs.
Your final abstraction will mainly depend on:
Who knows the state of resources - the calling context or the service abstraction?
Do you need a central place to acquire resources from - and can you accept the hassle of making this central place available all throughout your program (ugh ... Singletons)?
How complicated the loading necessities of your program really are (e.g. it is possible to write this abstraction in such a way that a function will not be executed until a list of resources is available).
In one of my projects, I built a custom loader, which was basically a wrapper class. I would send it an Array of elements to load and wait for either a complete or a failed event (later I modified it and added priorities as well). So I didn't have to add so many handlers for all the resources.
You just need to monitor which resources have been downloaded, and when they all complete, dispatch a custom resourcesDownloaded event, or else resourcesFailed.
You can also attach a flag to every resource saying whether it is compulsory or not. If a resource is not compulsory, don't throw the failed event when it fails; just continue monitoring the other resources!
With priorities, you can have a bunch of files you want to display first, display them, and continue loading the other resources in the background.
You can do the same, and believe me, you'll enjoy using it!
You can check the Masapi framework to see if it fulfills your needs.
You can also investigate the source code to learn how they approached the problem.
http://code.google.com/p/masapi/
It's well written and maintained. I used it successfully in a desktop RSS client I developed with AIR.
It worked very well, provided you pay attention to the overhead of loading too many resources in parallel.

How to separate data validation from my simple domain objects (POCOs)?

This question is language-agnostic, but I am a C# guy, so I use the term POCO to mean an object that only performs data storage, usually using getter and setter fields.
I just reworked my Domain Model to be super-duper POCO and am left with a couple of concerns regarding how to ensure that the property values make sense within the domain.
For example, the EndDate of a Service should not exceed the EndDate of the Contract that Service is under. However, it seems like a violation of SOLID to put the check into the Service.EndDate setter, not to mention that as the number of validations that need to be done grows my POCO classes will become cluttered.
I have some solutions (which I'll post as answers), but they have their disadvantages, and I am wondering what some favorite approaches are to solving this dilemma.
I think you're starting off with a bad assumption, i.e., that you should have objects that do nothing but store data and have no methods but accessors. The whole point of having objects is to encapsulate data and behaviors. If you have a thing that's just, basically, a struct, what behaviors are you encapsulating?
I always hear people arguing for a "Validate" or "IsValid" method.
Personally I think this may work, but with most DDD projects you usually end up
with multiple validations that are allowable depending on the specific state of the object.
So I prefer "IsValidForNewContract", "IsValidForTermination" or similar, because I believe most projects end up with multiple such validators/states per class. That also means I get no interface, but I can write aggregated validators that read very well and reflect the business conditions I am asserting.
I really do believe that generic solutions in this case very often take focus away from what's important - what the code is doing - for a very minor gain in technical elegance (the interface, delegate or whatever). Just vote me down for it ;)
A colleague of mine came up with an idea that worked out pretty well. We never came up with a great name for it but we called it Inspector/Judge.
The Inspector would look at an object and tell you all of the rules it violated. The Judge would decide what to do about it. This separation let us do a couple of things. It let us put all the rules in one place (Inspector) but we could have multiple Judges and choose the Judge by the context.
One example of the use of multiple Judges revolves around the rule that said a Customer must have an Address. This was a standard three tier app. In the UI tier the Judge would produce something that the UI could use to indicate the fields that had to be filled in. The UI Judge did not throw exceptions. In the service layer there was another Judge. If it found a Customer without an Address during Save it would throw an exception. At that point you really have to stop things from proceeding.
We also had Judges that were more strict as the state of the objects changed. It was an insurance application and during the Quoting process a Policy was allowed to be saved in an incomplete state. But once that Policy was ready to be made Active a lot of things had to be set. So the Quoting Judge on the service side was not as strict as the Activation Judge. Yet the rules used in the Inspector were still the same so you could still tell what wasn't complete even if you decided not to do anything about it.
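A rough sketch of what that split might look like (all names are hypothetical; the original answer doesn't show code, and System.Linq is assumed for Any()/ToArray()):

public interface IInspector<T>
{
    // Reports every rule the object currently violates; never throws.
    IEnumerable<string> FindViolations(T subject);
}

public interface IJudge<T>
{
    // Decides what to do about the violations in the current context.
    void Decide(T subject, IEnumerable<string> violations);
}

// A strict judge for the service layer: any violation aborts the save.
public class SaveJudge<T> : IJudge<T>
{
    public void Decide(T subject, IEnumerable<string> violations)
    {
        if (violations.Any())
            throw new InvalidOperationException(string.Join("; ", violations.ToArray()));
    }
}

A lenient UI-tier judge would implement the same interface but translate the violations into field highlights instead of throwing.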
One solution is to have each object's DataAccessObject take a list of Validators. When Save is called, it performs a check against each validator:
public class ServiceEndDateValidator : IValidator<Service>
{
    public void Check(Service s)
    {
        if (s.EndDate > s.Contract.EndDate)
            throw new InvalidOperationException();
    }
}

public class ServiceDao : IDao<Service>
{
    private IEnumerable<IValidator<Service>> _validators;

    public ServiceDao(IEnumerable<IValidator<Service>> validators) { _validators = validators; }

    public void Save(Service s)
    {
        foreach (var v in _validators)
            v.Check(s);
        // Go on to save
    }
}
The benefit is a very clear SoC; the disadvantage is that we don't get the check until Save() is called.
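Wiring it up (by hand here, though an IoC container could do it) might look like this - a sketch, assuming the types above:

var dao = new ServiceDao(new IValidator<Service>[]
{
    new ServiceEndDateValidator(),
    // ...add further validators here...
});
dao.Save(service); // throws if any validator rejects the object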
In the past I have usually delegated validation to a service of its own, such as a ValidationService. In principle this still adheres to the philosophy of DDD.
Internally this would contain a collection of Validators and a very simple set of public methods, such as Validate(), which could return a collection of error objects.
Very simply, something like this in C#:
public class ValidationService<T>
{
    private IList<IValidator> _validators;

    // Must return IEnumerable<Error> (not IList<Error>) for yield return to compile.
    public IEnumerable<Error> Validate(T objectToValidate)
    {
        foreach (IValidator validator in _validators)
        {
            yield return validator.Validate(objectToValidate);
        }
    }
}
Validators could either be added within a default constructor or injected via some other class such as a ValidationServiceFactory.
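Calling it might look like this (a sketch; Error, the populated validator list, and the object being validated are assumed to exist as above):

var validation = new ValidationService<Service>();
foreach (Error error in validation.Validate(service))
{
    // surface the error to the caller / UI
    Console.WriteLine(error);
}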
I think that would probably be the best place for the logic, actually, but that's just me. You could have some kind of IsValid method that checks all of the conditions and returns true/false, and maybe some kind of ErrorMessages collection - but that's an iffy topic, since the error messages aren't really part of the Domain Model. I'm a little biased, as I've done some work with RoR, and that's essentially what its models do.
Another possibility is to have each of my classes implement
public interface Validatable<T>
{
    event Action<T> RequiresValidation;
}
And have each setter in each class raise the event before setting (maybe I could achieve this via attributes).
The advantage is real-time validation checking. But the code is messier, and it is unclear who should be doing the attaching.
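One possible reading of that idea as code (a sketch; the asker's interface only declares the event, so the setter behavior here is an assumption):

public class Service : Validatable<Service>
{
    public event Action<Service> RequiresValidation;

    private DateTime endDate;

    public DateTime EndDate
    {
        get { return endDate; }
        set
        {
            // Give any attached validators a chance to inspect (and throw)
            // before the new value is committed.
            if (RequiresValidation != null)
                RequiresValidation(this);
            endDate = value;
        }
    }
}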
Here's another possibility. Validation is done through a proxy or decorator on the Domain object:
public class ServiceValidationProxy : Service
{
    public override DateTime EndDate
    {
        get { return base.EndDate; }
        set
        {
            if (value > Contract.EndDate)
                throw new InvalidOperationException();
            base.EndDate = value;
        }
    }
}
Advantage: instant validation. Can easily be configured via an IoC container.
Disadvantage: with a proxy, validated properties must be virtual; with a decorator, all domain models must be interface-based. The validation classes end up a bit heavyweight - proxies have to inherit from the class and decorators have to implement all the methods. Naming and organization might get confusing.

Registering derived classes with reflection, good or evil?

As we all know, when we derive a class and use polymorphism, someone, somewhere needs to know what class to instantiate. We can use factories, a big switch statement, if-else-if, etc. I just learnt from Bill K that this is called Dependency Injection.
My Question: Is it good practice to use reflection and attributes as the dependency injection mechanism? That way, the list gets populated dynamically as we add new types.
Here is an example. Please, no comments about how loading images can be done in other ways; we know.
Suppose we have the following IImageFileFormat interface:
public interface IImageFileFormat
{
    string[] SupportedExtensions { get; }
    Image Load(string fileName);
    void Save(Image image, string fileName);
}
Different classes will implement this interface:
[FileFormat]
public class BmpFileFormat : IImageFileFormat { ... }
[FileFormat]
public class JpegFileFormat : IImageFileFormat { ... }
When a file needs to be loaded or saved, a manager needs to iterate through all known loaders and call Load()/Save() on the appropriate instance, depending on their SupportedExtensions.
class ImageLoader
{
    private List<IImageFileFormat> formats;

    public Image Load(string fileName)
    {
        return FindFormat(fileName).Load(fileName);
    }

    public void Save(Image image, string fileName)
    {
        FindFormat(fileName).Save(image, fileName);
    }

    IImageFileFormat FindFormat(string fileName)
    {
        string extension = Path.GetExtension(fileName);
        return formats.First(f => f.SupportedExtensions.Contains(extension));
    }
}
I guess the important point here is whether the list of available loaders (formats) should be populated by hand or via reflection.
By hand:
public ImageLoader()
{
    formats = new List<IImageFileFormat>();
    formats.Add(new BmpFileFormat());
    formats.Add(new JpegFileFormat());
}
By reflection:
public ImageLoader()
{
    formats = new List<IImageFileFormat>();
    foreach (Type type in Assembly.GetExecutingAssembly().GetTypes())
    {
        if (type.GetCustomAttributes(typeof(FileFormatAttribute), false).Length > 0)
        {
            formats.Add((IImageFileFormat)Activator.CreateInstance(type));
        }
    }
}
I sometimes use the latter, and it never occurred to me that it could be a very bad idea. Yes, adding new classes is easy, but the mechanism registering those classes is harder to grasp, and therefore to maintain, than a simple coded-by-hand list.
Please discuss.
My personal preference is neither - when there is a mapping of classes to some arbitrary string, a configuration file is the place to do it IMHO. This way, you never need to modify the code - especially if you use a dynamic loading mechanism to add new dynamic libraries.
In general, I always prefer some method that allows me to write code once as much as possible - both your methods require altering already-written/built/deployed code (since your reflection route makes no provision for adding file format loaders in new DLLs).
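A minimal sketch of that configuration-driven approach (the appSettings key and the semicolon-separated format are assumptions, not an established convention):

public ImageLoader()
{
    formats = new List<IImageFileFormat>();

    // e.g. <add key="imageFormats"
    //          value="MyApp.BmpFileFormat, MyApp; MyApp.JpegFileFormat, MyApp"/>
    string configured = ConfigurationManager.AppSettings["imageFormats"];
    foreach (string typeName in configured.Split(';'))
    {
        // Type.GetType resolves assembly-qualified names, so new formats
        // can ship in separate DLLs without touching this code.
        Type type = Type.GetType(typeName.Trim(), true);
        formats.Add((IImageFileFormat)Activator.CreateInstance(type));
    }
}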
Edit by Coincoin:
The reflection approach can be effectively combined with configuration files to locate the implementations to be injected.
The type could be declared explicitly in the config file using canonical names, similar to MSBuild <UsingTask>.
The config could locate the assemblies, but then we would have to inject all matching types, a la Microsoft Visual Studio packages.
Any other mechanism could match a value or set of conditions to the needed type.
My vote is that the reflection method is nicer. With that method, adding a new file format only modifies one part of the code - the place where you define the class to handle the file format. Without reflection, you'll have to remember to modify the other class, ImageLoader, as well.
Isn't this pretty much what the Dependency Injection pattern is all about?
If you can isolate the dependencies, then the mechanics will almost certainly be reflection-based, but they will be configuration-file-driven, so the messiness of the reflection can be pretty well encapsulated and isolated.
I believe that with DI you simply say "I need an object of type <interface> with some other parameters", and the DI system returns an object that satisfies your conditions.
This goes together with IoC (Inversion of Control), where the object being supplied may need something else; that other thing is automatically created and installed into your object (created by DI) before it's returned to the user.
I know this borders on the "no comment about loading images other ways", but why not just flip your dependencies - rather than have ImageLoader depend on the image file formats, have each IImageFileFormat depend on an ImageLoader (see the sketch after this list)? You'll gain a few things out of this:
Each time you add a new IImageFileFormat, you won't need to make any changes anywhere else (and you won't have to use reflection, either)
If you take it one step further and abstract ImageLoader, you can mock it in Unit Tests, making testing the concrete implementations of each IImageFileFormat that much easier
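One way that flip could look (a sketch; the self-registration scheme is an assumption, not from the original answer):

public class ImageLoader
{
    private List<IImageFileFormat> formats = new List<IImageFileFormat>();

    // Formats register themselves; ImageLoader no longer knows any concrete type.
    public void Register(IImageFileFormat format)
    {
        formats.Add(format);
    }
}

public class BmpFileFormat : IImageFileFormat
{
    public BmpFileFormat(ImageLoader loader)
    {
        loader.Register(this); // the format depends on the loader, not vice versa
    }

    // ... IImageFileFormat members ...
}

Adding a new format then means writing the new class and constructing it once at startup; nothing in ImageLoader changes.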
In VB.NET, if all the image loaders are in the same assembly, one could use partial classes and events to achieve the desired effect: have a class whose purpose is to fire an event when the image loaders should register themselves; each file containing image loaders can then use a partial class to add another event handler to that class. C# doesn't have a direct equivalent to VB.NET's WithEvents syntax, but I suspect partial classes are a limited mechanism for achieving the same thing.