Should a BLL be stateless? - business-objects

I'm toying with building a BLL for my application. From what I've seen and read, it seems the BLL should be stateless. Doesn't this mean all BLL methods could be static? Or that I'd at least only ever need one instance of each BLL class? Which seems odd to me for some reason, so I thought I'd better check I wasn't getting the wrong end of the stick before I delved too far into my experimentation.
I'm also thinking this means the BLL objects should never contain data, because the data represents state - so for each BLL operation invoked, any data it needs must be requeried (or fetched from a cache) and then discarded. Does that sound about right?
Thanks.

In theory, yes, a stateless BLL can mean that all methods can be static.
However, there are some considerations which may swing you toward using instances of BLL objects instead of static.
Static methods introduce tight coupling between classes and prevent you from using, for example, interfaces to loosely couple dependencies. Use of interfaces improves the testability of the BLL, since it can then be mocked when writing unit tests for the Service Layer. If your BLL methods are static, you generally won't be able to do this without workarounds (e.g. you would need TypeMock or Microsoft Fakes in a .NET environment).
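As a rough sketch of that point (the Order, IOrderRules and OrderRules names below are invented for illustration, not taken from the question), an instance-based BLL class behind an interface stays stateless but remains replaceable in a way a static class is not:

```csharp
using System.Collections.Generic;

// Hypothetical entity used only for this illustration.
public class Order
{
    public bool IsOnHold { get; set; }
    public List<string> Items { get; } = new List<string>();
}

// The service layer depends on this interface, so the BLL can be mocked in unit tests.
public interface IOrderRules
{
    bool CanShip(Order order);
}

// Stateless instance class: no fields, so one shared instance is safe to reuse,
// yet it can still be swapped for a mock behind the interface (unlike a static class).
public class OrderRules : IOrderRules
{
    public bool CanShip(Order order)
    {
        return order != null && !order.IsOnHold && order.Items.Count > 0;
    }
}
```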
On a complex BLL method (e.g. large transaction logic), you may want to refactor each of your business rules into several discrete methods and, on the output side, accumulate all of the validation and rule violations into a single aggregated result containing every violation. In this case, it can be cumbersome to keep shunting the entity or entities being validated around and passing the accumulated rule violations along with them, so you may instead elect to store these on a class instance. A base class or a generic for your instance BLL can assist here.
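To make that last idea concrete, here is a minimal sketch (all names invented for illustration, not an implementation from the question) of a generic base class that stores the accumulated violations on the instance rather than threading a result object through every rule method:

```csharp
using System.Collections.Generic;

// Hypothetical base class: concrete validators implement Validate(entity) as a
// series of discrete rule methods, each calling AddViolation(...) as needed.
public abstract class ValidatorBase<TEntity>
{
    private readonly List<string> _violations = new List<string>();

    public IReadOnlyList<string> Violations
    {
        get { return _violations; }
    }

    public bool IsValid
    {
        get { return _violations.Count == 0; }
    }

    protected void AddViolation(string message)
    {
        _violations.Add(message);
    }

    // Runs every rule and leaves the aggregated result on the instance.
    public abstract void Validate(TEntity entity);
}
```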

Related

AFHTTPRequestOperationManager property or not in shared client?

I'm using AFNetworking for all my connections in my app. I created a singleton 'client' class that takes care of all the AFNetworking code and uses AFHTTPRequestOperationManager. What I am confused about is whether the AFHTTPRequestOperationManager object should be a property, or whether I should recreate one every time my client is asked for a connection. If it is a property, can my client be called many times asynchronously, or will that cause problems, since the same instance of AFHTTPRequestOperationManager will be used possibly at the same time?
Typically, your singleton 'client' class would be a subclass of AFHTTPRequestOperationManager. It could also be a property, but then you won't be able to override methods. Some commonly overridden methods are:
- HTTPRequestOperationWithRequest:success:failure:, to modify how all request operations are constructed (for example, if you need an identical header in every request)
- initWithBaseURL:, to apply additional customization to the operation manager
That said, a property could work fine depending on your needs. (See Prefer composition over inheritance? for some delightful weekend reading.)
And finally:
If it is a property, can my client be called many times asynchronously, or will that cause problems, since the same instance of AFHTTPRequestOperationManager will be used possibly at the same time?
Yes, AFHTTPRequestOperationManager is designed to be thread-safe. You can tell it to do stuff from different threads. (Note that its completion blocks are always called on the main thread, since UI work is typically done there.)

Why use the Singleton pattern?

So... I can't understand why I should even use the Singleton pattern in ActionScript 3. Can anyone explain this to me? Maybe I just don't understand the purpose of it. I mean, how does it differ from other patterns? How does it work?
I checked the PureMVC source and it's full of Singletons. Why are they using them in the View, Module, Controller?
I have next to no practical experience with PureMVC so I can't argue for or against their use of Singletons. Hence, I'll try to keep my answer more generic.
A singleton is a type of object that can only be instantiated once and is globally accessible.
Typically, this kind of pattern is used in order to have easy access to services of some kind, perhaps a service facade used to retrieve data from a server or an application model that holds information about settings or such.
The singleton pattern is by many considered to be an anti-pattern for a number of reasons, a few of which are mentioned below:
They carry state, making certain tasks such as unit testing virtually impossible.
They inherently violate the Single Responsibility Principle.
They promote tight coupling between classes due to them being globally accessible.
I won't list all of the reasons why a singleton may be an anti-pattern; there are plenty of resources on the subject.
The singleton pattern restricts the instantiation of an object to only one instance. Sometimes in systems this pattern is used so an object that controls parts of the system can't be just created at-will. If you have some object that manages settings, for example, you would want something that changes settings to only modify that one object, and not create a new one.
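For reference, the mechanics of the pattern are simple. Here is a minimal sketch (in C# rather than ActionScript 3, with invented names) of the settings-manager example above: the private constructor prevents new instances from being created at will, and Instance is the single access point.

```csharp
public sealed class SettingsManager
{
    // The one and only instance, created when the class is first used.
    private static readonly SettingsManager _instance = new SettingsManager();

    public static SettingsManager Instance
    {
        get { return _instance; }
    }

    // Private constructor: outside code cannot create additional instances.
    private SettingsManager() { }

    // Anything that changes settings modifies this single shared object.
    public string Theme { get; set; }
}
```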

How Long to keep a LINQ-to-SQL DataContext Open?

I'm new to LINQ to SQL (and SQL, for that matter), and I've started to gather evidence that maybe I'm not doing things the right way, so I wanted to see what you all have to say.
In my staff allocation application I allow the user to create assignments between employees and projects. At the beginning of the application, I open up a linq-to-sql data context to my management database. Throughout the program, I never let that data context go. As a matter of fact, most of the form constructors take this data context as one of their arguments.
I kinda thought that this was the way to do things until I read through another SO question where the asker was discussing repeatedly re-creating the data context throughout his program and then "attaching" the entities to the new data contexts as needed. This would help me get around the problem I've been having wherein things are "sneaking" into my database.
So where would you use the first style (and don't be ashamed to say never), and where would you use the second style?
If you are writing a web application in, say, ASP.NET MVC, or a web service, you will be recreating the DataContext each time, as the application is "stateless" between page GETs and POSTs.
If you are writing a Winforms or WPF application, you can do it the same way, although holding a DataContext open can be easier to do, since Winforms applications are stateful (i.e. you have a container for the DataContext to live in).
In general, it is probably sensible to open a DataContext each time you need to complete a "unit of work." The DataContext itself is fairly lightweight, so opening one for each "transaction" is not that big of a deal. This practice is also consistent with software layers in enterprise applications (i.e. Database, DAL, Service Layer, Repository, etc.), and helps to enforce separation of concerns between the requisite layers.
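As a rough sketch of that unit-of-work style (the ManagementDataContext, Assignment and AssignmentService types below are placeholders for whatever the generated context in your project actually exposes), each operation opens a context, does its work, and disposes of it:

```csharp
public class AssignmentService
{
    public void AssignEmployeeToProject(int employeeId, int projectId)
    {
        // One DataContext per unit of work: opened, used, and disposed here.
        using (var db = new ManagementDataContext())
        {
            db.Assignments.InsertOnSubmit(new Assignment
            {
                EmployeeId = employeeId,
                ProjectId = projectId
            });
            db.SubmitChanges();
        } // Disposed here, so nothing can "sneak" into the database later.
    }
}
```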
The generally recommended way of doing things is to create a new DataContext for each atomic operation. DataContexts are actually quite cheap to instantiate, and are very well suited to rapid turnover.
As a general rule of thumb, I tend to instantiate a DataContext, perform a CRUD operation, then dispose of it again. This could be the updating of a single entity, or inserting a load of objects. Do whatever makes the most sense for your scenario.
Just be careful if you're passing entities from your context around, as exceptions will be thrown if you try to enumerate or retrieve related data - it's best practice to transform the LINQ entities into independent objects (for example, a Person LINQ entity could be transformed into a PersonResult, which is consumed by the logic layer of your solution).
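A small sketch of that projection idea (the PersonResult name matches the example above; the ManagementDataContext, Person and PersonService types are assumptions): query inside the context, but hand the caller a plain object so no lazy loading can fire after the context is disposed.

```csharp
using System.Linq;

public class PersonResult
{
    public int Id { get; set; }
    public string FullName { get; set; }
}

public class PersonService
{
    public PersonResult GetPerson(int personId)
    {
        using (var db = new ManagementDataContext())
        {
            // Project the LINQ entity into an independent object before the
            // context goes away, so callers never touch live entities.
            return db.Persons
                     .Where(p => p.Id == personId)
                     .Select(p => new PersonResult
                     {
                         Id = p.Id,
                         FullName = p.FirstName + " " + p.LastName
                     })
                     .Single();
        }
    }
}
```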
Hope that helps!

Prefetching data in with Linq-to-SQL, IOC and Repository pattern

Using LINQ to SQL, I'd like to prefetch some data.
1) The common solution is to deal with DataLoadOptions, but in my architecture it won't work because:
the options have to be set before the first query
I'm using IOC, so I don't directly instantiate the DataContext (I cannot execute code at instantiation)
my DataContext is persistent for the duration of a web request
2) I have seen another possibility based on loading the data and its children in a method, then returning only the data (so the children are already loaded); see an example here
Nonetheless, in my architecture, it cannot work:
My queries are cascaded out of my repository and can be consumed by many services that will add clauses
I work with interfaces, the concrete instances of the linq-to-sql objects do not leave the repositories (yes, you can work with interfaces AND add clauses)
My repositories are generic
Yes, this architecture is quite complicated, but it's very cool as I can play with the code like Lego ;)
My question is: what other possibilities are there to prefetch data?
In my app I use perhaps a variation on your potential solution #2. It's somewhat difficult to explain, but simply: I chain and defer lazy loading in my model with custom lazy classes so as to abstract away from the LinqToSql-specific deferred execution that I take advantage of with IQueryable (a rough sketch follows the list below). Benefits:
My Domain Model and Service layer upwards does not necessarily have to depend on the LinqToSql provider (I can swap out my DAL with interfaces if I want to)
My Service methods can and do return complete object graphs with multiple 'anchor points' for lazy loading, using classes that abstract away a particular lazy loading implementation - so I can use LinqToSql-specific deferred execution or something else (e.g. anonymous delegates; again, refer to this answer)
I can maintain IQueryable results throughout my app (even to the UI if I want to), thus allowing infinite LINQ query chaining without having to worry about performance.
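Here is a very small sketch of that "custom lazy class" idea (the ILazy&lt;T&gt; and DeferredValue&lt;T&gt; names are invented; the real implementation described above is more elaborate): the domain model exposes lazy anchor points, and the repository decides how they are filled, whether from an IQueryable, a delegate or a cache.

```csharp
using System;

public interface ILazy<T>
{
    T Value { get; }
}

public class DeferredValue<T> : ILazy<T>
{
    private readonly Func<T> _factory;
    private bool _loaded;
    private T _value;

    public DeferredValue(Func<T> factory)
    {
        _factory = factory;
    }

    public T Value
    {
        get
        {
            if (!_loaded)
            {
                // Typically this executes an IQueryable built by the repository.
                _value = _factory();
                _loaded = true;
            }
            return _value;
        }
    }
}
```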
I'm not aware of other possibilities, it seems like you've pushed LinqToSql to its limits (I may be wrong, however).
I think your best options at this point are:
Add some "non-generic" methods to your application to handle just the
specific scenarios where you want/need eager loading and don't
use your "normal", "generic" infrastructure for those methods.
Use an ORM that has more sophisticated support for eager and lazy loading.
I found a solution.
My answer is 'Dependency injection'.
It generally ships with IOC, and means you can have your IOC container manage injection of classes at instantiation.
All I need is to inject a CustomDCParameter class when I instantiate a DC.
That class will contain the rules, and the constructor will apply all of them.
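A hedged sketch of how that might look (only the CustomDCParameter name comes from the answer; the ConfigureLoadOptions member and the PrefetchingDataContext type are invented for illustration): the container injects a parameter object carrying the prefetch rules, and the context constructor applies them as DataLoadOptions before any query runs.

```csharp
using System;
using System.Data.Linq;

// The parameter class described above; an Action over DataLoadOptions is one
// possible way to carry the prefetch rules.
public class CustomDCParameter
{
    public Action<DataLoadOptions> ConfigureLoadOptions { get; set; }
}

// Hypothetical context that applies the injected rules in its constructor,
// satisfying the "options must be set before the first query" constraint.
public class PrefetchingDataContext : DataContext
{
    public PrefetchingDataContext(string connectionString, CustomDCParameter parameter)
        : base(connectionString)
    {
        var options = new DataLoadOptions();
        if (parameter.ConfigureLoadOptions != null)
        {
            // e.g. opts => opts.LoadWith<Order>(o => o.Lines)
            parameter.ConfigureLoadOptions(options);
        }
        LoadOptions = options;
    }
}
```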

Singletons: good design or a crutch? [closed]

Singletons are a hotly debated design pattern, so I am interested in what the Stack Overflow community thinks about them.
Please provide reasons for your opinions, not just "Singletons are for lazy programmers!"
Here is a fairly good article on the issue, although it is against the use of Singletons:
scientificninja.com: performant-singletons.
Does anyone have any other good articles on them? Maybe in support of Singletons?
In defense of singletons:
They are not as bad as globals because globals have no standard-enforced initialization order, and you could easily see nondeterministic bugs due to naive or unexpected dependency orders. Singletons (assuming they're allocated on the heap) are created after all globals, and in a very predictable place in the code.
They're very useful for resource-lazy / -caching systems such as an interface to a slow I/O device. If you intelligently build a singleton interface to a slow device, and no one ever calls it, you won't waste any time. If another piece of code calls it from multiple places, your singleton can optimize caching for both simultaneously, and avoid any double look-ups. You can also easily avoid any deadlock condition on the singleton-controlled resource.
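To illustrate that lazy-resource point in the .NET terms used elsewhere on this page (the SlowDevice name is a stand-in for whatever slow I/O resource you wrap), Lazy&lt;T&gt; gives thread-safe, on-first-use initialization, so an unused singleton costs nothing:

```csharp
using System;

public sealed class SlowDevice
{
    // Created only when Instance is first touched; Lazy<T> handles thread safety.
    private static readonly Lazy<SlowDevice> _instance =
        new Lazy<SlowDevice>(() => new SlowDevice());

    public static SlowDevice Instance
    {
        get { return _instance.Value; }
    }

    private SlowDevice()
    {
        // The expensive open/handshake with the device would happen here,
        // and only if some caller actually asks for it.
    }

    public string Read(string key)
    {
        // A real implementation could cache results here so concurrent callers
        // share one lookup instead of hitting the device twice.
        return "value-for-" + key;
    }
}
```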
Against singletons:
In C++, there's no nice way to auto-clean-up after singletons. There are work-arounds, and slightly hacky ways to do it, but there's just no simple, universal way to make sure your singleton's destructor is always called. This isn't so terrible memory-wise -- just think of it as more global variables, for this purpose. But it can be bad if your singleton allocates other resources (e.g. locks some files) and doesn't release them.
My own opinion:
I use singletons, but avoid them if there's a reasonable alternative. This has worked well for me so far, and I have found them to be testable, although they are slightly more work to test.
Google has a Singleton Detector for Java that I believe started out as a tool that must be run on all code produced at Google. The nutshell reason to remove Singletons:
because they can make testing difficult and hide problems with your design
For a more explicit explanation see 'Why Singletons Are Controversial' from Google.
A singleton is just a bunch of global variables in a fancy dress.
Global variables have their uses, as do singletons, but if you think you're doing something cool and useful with a singleton instead of using a yucky global variable (everyone knows globals are bad mmkay), you're unfortunately misled.
The purpose of a Singleton is to ensure a class has only one instance, and provide a global point of access to it. Most of the time the focus is on the single instance point. Imagine if it were called a Globalton. It would sound less appealing as this emphasizes the (usually) negative connotations of a global variable.
Most of the good arguments against singletons have to do with the difficulty they present in testing as creating test doubles for them is not easy.
There are three pretty good blog posts about Singletons by Miško Hevery on the Google Testing blog.
Singletons are Pathological Liars
Where Have All the Singletons Gone?
Root Cause of Singletons
Singleton is not a horrible pattern, although it is misused a lot. I think this misuse is because it is one of the easier patterns, and most people new to it are attracted by the global side effect.
Erich Gamma has said the singleton is a pattern he wishes hadn't been included in the GoF book, and that it's a bad design. I tend to disagree.
If the pattern is used in order to create a single instance of an object at any given time then the pattern is being used correctly. If the singleton is used in order to give a global effect, it is being used incorrectly.
Disadvantages:
You are coupling to one class throughout the code that calls the singleton
Creates a hassle with unit testing because it is difficult to replace the instance with a mock object
If the code needs to be refactored later on because of the need for more than one instance, it is more painful than if the singleton class were passed into the object that uses it (using an interface) - see the sketch after these lists
Advantages:
One instance of a class is represented at any given point in time.
By design you are enforcing this
Instance is created when it is needed
Global access is a side effect
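A minimal sketch of the remedy mentioned in the third disadvantage (IClock, SystemClock and ReportGenerator are invented names): the consumer takes its dependency through an interface, so whether the implementation happens to be a singleton is a detail you can change later without touching the consumer.

```csharp
using System;

public interface IClock
{
    DateTime Now { get; }
}

// Today this happens to be a singleton...
public sealed class SystemClock : IClock
{
    public static readonly SystemClock Instance = new SystemClock();

    private SystemClock() { }

    public DateTime Now
    {
        get { return DateTime.UtcNow; }
    }
}

// ...but the consumer neither knows nor cares: it only sees IClock.
public class ReportGenerator
{
    private readonly IClock _clock;

    public ReportGenerator(IClock clock)
    {
        _clock = clock;
    }

    public string Header()
    {
        return "Report generated at " + _clock.Now.ToString("u");
    }
}
```

In tests you would pass a fake IClock; in production you might pass SystemClock.Instance, and if a second instance is ever needed, only the wiring changes.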
Chicks dig me because I rarely use singleton and when I do it's typically something unusual. No, seriously, I love the singleton pattern. You know why? Because:
I'm lazy.
Nothing can go wrong.
Sure, the "experts" will throw around a bunch of talk about "unit testing" and "dependency injection" but that's all a load of dingo's kidneys. You say the singleton is hard to unit test? No problem! Just declare everything public and turn your class into a fun house of global goodness. You remember the show Highlander from the 1990's? The singleton is kind of like that because: A. It can never die; and B. There can be only one. So stop listening to all those DI weenies and implement your singleton with abandon. Here are some more good reasons...
Everybody is doing it.
The singleton pattern makes you invincible.
Singleton rhymes with "win" (or "fun" depending on your accent).
I think there is a great misunderstanding about the use of the Singleton pattern. Most of the comments here refer to it as a place to access global data. We need to be careful here - Singleton as a pattern is not for accessing globals.
Singleton should be used to have only one instance of the given class. Pattern Repository has great information on Singleton.
One of the colleagues I have worked with was very Singleton-minded. Whenever there was something that was kind of a manager or boss-like object, he would make it into a singleton, because he figured that there should be only one boss. And each time the system took on some new requirements, it turned out there were perfectly valid reasons to allow multiple instances.
I would say that a singleton should be used if the domain model dictates (not 'suggests') that there is one. All other cases are just accidentally single instances of a class.
I've been trying to think of a way to come to the poor singleton's rescue here, but I must admit it's hard. I've seen very few legitimate uses of them, and with the current drive toward dependency injection and unit testing they are just hard to use. They definitely are the "cargo cult" manifestation of programming with design patterns: I have worked with many programmers that have never cracked the "GoF" book, but they know 'Singleton' and thus they know 'Patterns'.
I do have to disagree with Orion though; most of the time I've seen singletons overused, it's not global variables in a dress, but more like global services (methods) in a dress. It's interesting to note that if you try to use singletons in SQL Server 2005 in safe mode through the CLR interface, the system will flag the code. The problem is that you have persistent data beyond any given transaction that may run; of course, if you make the instance variable read-only you can get around the issue.
That issue led to a lot of rework for me one year.
Holy wars! Ok let me see.. Last time I checked the design police said..
Singletons are bad because they hinder auto testing - instances cannot be created afresh for each test case.
Instead the logic should be in a class (A) that can be easily instantiated and tested. Another class (B) should be responsible for constraining creation. Single Responsibility Principle to the fore! It should be team-knowledge that you're supposed to go via B to access A - sort of a team convention.
I concur mostly..
Many applications require that there is only one instance of some class, so the pattern of having only one instance of a class is useful. But there are variations to how the pattern is implemented.
There is the static singleton, in which the class forces that there can only be one instance of the class per process (in Java actually one per ClassLoader). Another option is to create only one instance.
Static singletons are evil - a sort of global variable. They make testing harder, because it's not possible to execute the tests in full isolation. You need complicated setup and tear-down code for cleaning the system between every test, and it's very easy to forget to clean some global state properly, which in turn may result in unspecified behaviour in tests.
Creating only one instance is good. You just create one instance when the program starts, and then pass a reference to that instance to all the other objects which need it. Dependency injection frameworks make this easy - you just configure the scope of the object, and the DI framework will take care of creating the instance and passing it to everything that needs it. For example, in Guice you would annotate the class with @Singleton, and the DI framework will create only one instance of the class (per application - you can have multiple applications running in the same JVM). This makes testing easy, because you can create a new instance of the class for each test, and let the garbage collector destroy the instance when it is no longer used. No global state will leak from one test to another.
For more information:
The Clean Code Talks - "Global State and Singletons"
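The Guice example above is Java; for what it's worth, the same "let the container own the single instance" idea looks roughly like this in .NET (assuming Microsoft.Extensions.DependencyInjection; the IConfigurationReader and FileConfigurationReader types are invented for the example):

```csharp
using Microsoft.Extensions.DependencyInjection;

public interface IConfigurationReader
{
    string Get(string key);
}

public class FileConfigurationReader : IConfigurationReader
{
    public string Get(string key)
    {
        return "value of " + key;
    }
}

public static class Program
{
    public static void Main()
    {
        var services = new ServiceCollection();

        // The container enforces "one instance per container"; the class itself
        // needs no static Instance member and is trivial to replace in tests.
        services.AddSingleton<IConfigurationReader, FileConfigurationReader>();

        using (var provider = services.BuildServiceProvider())
        {
            var reader = provider.GetRequiredService<IConfigurationReader>();
            System.Console.WriteLine(reader.Get("theme"));
        }
    }
}
```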
Singleton as an implementation detail is fine. Singleton as an interface or as an access mechanism is a giant PITA.
A static method that takes no parameters returning an instance of an object is only slightly different from just using a global variable. If instead an object has a reference to the singleton object passed in, either via constructor or other method, then it doesn't matter how the singleton is actually created and the whole pattern turns out not to matter.
It was not just a bunch of variables in a fancy dress, because it had dozens of responsibilities, like communicating with the persistence layer to save/retrieve data about the company, dealing with the employees and prices collections, etc.
I must say you're not really describing something that should be a single object, and it's debatable whether any of them, other than Data Serialization, should have been a singleton.
I can see at least 3 sets of classes that I would normally design in, but I tend to favor smaller, simpler objects that do a narrow set of tasks very well. I know that this is not the nature of most programmers. (Yes, I work on 5000-line class monstrosities every day, and I have a special love for the 1200-line methods some people write.)
I think the point is that in most cases you don't need a singleton, and often you're just making your life harder.
The biggest problem with singletons is that they make unit testing hard, particularly when you want to run your tests in parallel but independently.
The second is that people often believe that lazy initialisation with double-checked locking is a good way to implement them.
Finally, unless your singletons are immutable, then they can easily become a performance problem when you try and scale your application up to run in multiple threads on multiple processors. Contended synchronization is expensive in most environments.
Singletons have their uses, but one must be careful in using and exposing them, because they are far too easy to abuse, difficult to truly unit test, and it is easy to create circular dependencies based on two singletons that access each other.
They are helpful, however, when you want to be sure that all your data is synchronized across multiple instances; for example, configurations for a distributed application may rely on singletons to make sure that all connections use the same up-to-date set of data.
I find you have to be very careful about why you're deciding to use a singleton. As others have mentioned, it's essentially the same issue as using global variables. You must be very cautious and consider what you could be doing by using one.
It's very rare that you need to use them, and usually there is a better way to do things. I've run into situations where I've done something with a singleton and then had to sift through my code to take it out after I discovered how much worse it made things (or after I came up with a much better, more sane solution).
I've used singletons a bunch of times in conjunction with Spring and didn't consider it a crutch or lazy.
What this pattern allowed me to do was create a single class for a bunch of configuration-type values and then share the single (non-mutable) instance of that specific configuration instance between several users of my web application.
In my case, the singleton contained client configuration criteria - CSS file location, DB connection criteria, feature sets, etc. - specific to that client. These classes were instantiated and accessed through Spring and shared by users with the same configuration (i.e. 2 users from the same company). (I know there's a name for this type of application, but it's escaping me.)
I feel it would've been wasteful to create (then garbage collect) new instances of these "constant" objects for each user of the app.
I'm reading a lot about "Singleton", its problems, when to use it, etc., and these are my conclusions until now:
Confusion between the classic implementation of Singleton and the real requirement: TO HAVE JUST ONE INSTANCE OF A CLASS!
It's generally badly implemented. If you want a unique instance, don't use the (anti)pattern of a static GetInstance() method returning a static object. This makes a class responsible for instantiating a single instance of itself as well as performing its own logic, which breaks the Single Responsibility Principle. Instead, this should be implemented by a factory class whose responsibility is to ensure that only one instance exists (see the sketch after this list).
It's used inside constructors because it's easy to use and doesn't have to be passed as a parameter. This should be resolved using dependency injection, which is a great pattern for achieving a good and testable object model.
Not doing TDD. If you do TDD, dependencies are extracted from the implementation because you want your tests to be easy to write. This makes your object model better. If you use TDD, you won't write a static GetInstance =). BTW, if you think in terms of objects with clear responsibilities instead of classes, you'll get the same effect =).
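As a rough illustration of the factory point above (the ReportService and ReportServiceFactory names are invented), the class being instantiated knows nothing about how many instances exist; a separate factory owns that rule:

```csharp
public class ReportService
{
    // Ordinary class: no static Instance member, no knowledge of its own lifetime.
    public string Run()
    {
        return "report";
    }
}

public class ReportServiceFactory
{
    private readonly object _gate = new object();
    private ReportService _instance;

    // The factory, not ReportService itself, enforces "only one instance exists".
    public ReportService GetService()
    {
        lock (_gate)
        {
            if (_instance == null)
            {
                _instance = new ReportService();
            }
            return _instance;
        }
    }
}
```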
I really disagree with the 'bunch of global variables in a fancy dress' idea. Singletons are really useful when used to solve the right problem. Let me give you a real example.
I once developed a small piece of software for a place I worked, and some forms had to use some info about the company, its employees, services and prices. In its first version, the system kept loading that data from the database every time a form was opened. Of course, I soon realized this approach was not the best one.
Then I created a singleton class, named Company, which encapsulated everything about the place, and it was completely filled with data by the time the system was opened.
It was not just a bunch of variables in a fancy dress, because it had dozens of responsibilities, like communicating with the persistence layer to save/retrieve data about the company, dealing with the employees and prices collections, etc.
Plus, it was a fixed, system-wide, easily accessible point to have the company data.
Singletons are very useful, and using them is not in and of itself an anti-pattern. However, they've gotten a bad reputation largely because they force any consuming code to acknowledge that they are a singleton in order to interact with them. That means if you ever need to "un-Singletonize" them, the impact on your codebase can be very significant.
Instead, I'd suggest hiding the Singleton behind a factory. That way, if you need to alter the service's instantiation behavior in the future, you can just change the factory rather than all the types that consume the Singleton.
Even better, use an inversion of control container! Most of them allow you to separate instantiation behavior from the implementation of your classes.
One scary thing about singletons in, for instance, Java is that you can end up with multiple instances of the same singleton in some cases. The JVM uniquely identifies a class based on two elements: the class's fully qualified name and the classloader responsible for loading it.
That means the same class can be loaded by two classloaders unaware of each other, and different parts of your application would have different instances of this singleton that they interact with.
Write normal, testable, injectable objects and let Guice/Spring/whatever handle the instantiation. Seriously.
This applies even in the case of caches or whatever the natural use cases for singletons are. There's no need to repeat the horror of writing code to try to enforce one instance. Let your dependency injection framework handle it. (I recommend Guice for a lightweight DI container if you're not already using one).