Single Responsibility Principle Core Understanding - single-responsibility-principle

In brushing up on the SRP I read this document which I located via Uncle Bob's page on principles of OOD. I find the following passage puzzling and somewhat at odds with the rest of the document:
"If, on the other hand, the application is not changing in ways that cause the the two responsibilities to change at different times, then there is no need to separate them. Indeed, separating them would smell of Needless Complexity. There is a corollary here. An axis of change is only an axis of change if the changes actually occur. It is not wise to apply the SRP, or any other principle for that matter, if there is no symptom."
While I understand that the answer to many software development questions is "it depends", principles like the SRP appear to be almost universally beneficial and worth applying as a matter of course. The SRP itself gives code high adaptability to future changes in requirements. Isn't the point to separate out responsibilities from the get-go, to avoid struggling with highly coupled code and cascading changes later on?
I would really appreciate some clarification on this to make sure my understanding of this core principle is correct. Thanks in advance!

From my humble understanding, in the Modem example presented there, it is possible that the modem's two responsibilities (Connection and Data Exchange) will change as one.
You have two possibilities here:
When the protocol changes, it is possible that only the connection part changes, or only the data exchange part changes. In this case you should have two interfaces, because a change of protocol in the data exchange does not imply a change of protocol in the connection (a possible split is sketched below).
When the protocol changes, it will always change both the connection part and the data exchange part. In that case you don't need two interfaces, because every time you have to rewrite the connection part, you are sure that the data exchange will change as well. You then have two responsibilities sitting on the same axis of change (the protocol handled by the modem), so you can leave them inside a single interface.
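To make the first case concrete, here is a minimal Java sketch along the lines of the modem example; treat the interface and class names as illustrative rather than as the paper's exact code.

// Connection management and data exchange are separate axes of change,
// so each responsibility gets its own interface.
interface Connection {
    void dial(String phoneNumber);
    void hangup();
}

interface DataChannel {
    void send(char c);
    char receive();
}

// A concrete modem can still implement both; callers depend only on the
// interface that matches the responsibility they actually use.
class StandardModem implements Connection, DataChannel {
    public void dial(String phoneNumber) { /* open the line */ }
    public void hangup()                 { /* close the line */ }
    public void send(char c)             { /* transmit one character */ }
    public char receive()                { return 0; /* read one character */ }
}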

The key to this statement is "not changing in ways that cause the two responsibilities to change at different times". Let's say, for the sake of argument, you have a PaymentLogger and a Payment class. Every time you create a new PaymentType (CreditCard, Cash, PayPal, etc.) you need to update the PaymentLogger to log actions specific to those Payments. Instead of splitting out a PaymentLogger class, you could give the Payment class a method called Log which does whatever is specific to itself.
In this case the act of recording actions could be built into the class itself, since creating a new Payment also requires creating a new PaymentLogger. It's a responsibility that should have been part of Payment all along.
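A rough Java sketch of that option, with hypothetical CreditCardPayment and CashPayment types: each payment records its own actions, so adding a payment type never means touching a separate logger class.

abstract class Payment {
    abstract void process();
    // Logging changes for exactly the same reason the payment type changes,
    // so it lives on the same class.
    abstract void log();
}

class CreditCardPayment extends Payment {
    void process() { /* charge the card */ }
    void log()     { System.out.println("logged credit card payment"); }
}

class CashPayment extends Payment {
    void process() { /* record the cash transaction */ }
    void log()     { System.out.println("logged cash payment"); }
}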

Related

open closed principle - refactoring to create base class based on new features

So when the original code was written there was only a need for, say, a LabTest class. But now say we have new requirements to add RadiologyTest, EKGTest, etc.
These classes have a lot in common, hence it makes sense to have a base class.
But that will mean the LabTest class will have to be modified. Let's say its interface will remain the same as before; in other words, consumers of the LabTest class will not need to change.
Is this a violation of the open-closed principle? (LabTest is being modified.)
I think you can look at it from two perspectives: existing requirements and new requirements.
If the existing requirements didn't cover the need for these kinds of changes then I'd say, based on those requirements, LabTest did not violate OCP.
With the new requirements, you need to add functionality that does not fit with the LabTest implementation. Adding it to LabTest would violate SRP. The requirements now create a new change vector that will force you to refactor LabTest to keep it OCP-compliant. If you fail to refactor LabTest, it will violate both SRP and OCP. When you refactor, keep the new change vector in mind in any classes you create or modify.
These classes have a lot in common hence it makes sense to have a base class.
I think you may be violating SRP. After all, if each class does one task, how can two or more be so similar? If there's a task they both do identically, then that is a separate task and should be done by another class.
So I would say, first refactor LabTest into its constituent parts (hopefully you've got unit tests!). Then when you come to write RadiologyTest and EKGTest, they can reuse the parts that make sense for them. This is also known as composition over inheritance.
But whatever you do, do use interfaces to these classes in the client. Don't force those who follow to use your base classes to add extensions.
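A hypothetical Java sketch of that refactoring (the MedicalTest and ReportFormatter names are mine, not from the question): the behaviour the tests share is pulled into its own class and reused by composition, while clients only ever see the interface.

interface MedicalTest {
    String runAndReport();
}

// The task both tests did identically, now a responsibility of its own.
class ReportFormatter {
    String format(String testName, String findings) {
        return testName + ": " + findings;
    }
}

class LabTest implements MedicalTest {
    private final ReportFormatter formatter = new ReportFormatter();
    public String runAndReport() { return formatter.format("LabTest", "sample analysed"); }
}

class RadiologyTest implements MedicalTest {
    private final ReportFormatter formatter = new ReportFormatter();
    public String runAndReport() { return formatter.format("RadiologyTest", "scan reviewed"); }
}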
I may get burnt for this answer, but I'm going out on a limb anyway.
In my opinion (IMO), OCP cannot be followed in the purist sense like other principles such as SRP, DIP, or ISP.
If requirements change in such a way that you have to change the responsibility of a class to be true to their representation of the domain model, then we have to change that class.
IMO, OCP stops us from re-factoring code to follow the evolution of the system.
Please correct me if I am wrong.
Update:
After further research, this is what I am thinking:
Let's say I have automated tests at both the unit level and the integration level; then IMO we should redesign the complete system to fit the new model, and OCP is out the door here.
IMO, the goal of a system's evolution is always to avoid hacks and to make the system represent its model as accurately as possible. (Not changing the LabTest class and the corresponding DB table so as not to break old code [i.e. not violating OCP], while using LabTest to store EKGTest's common data and using LabTest inside EKGTest, or having EKGTest inherit from LabTest, would be a hack, IMO.)
I think the Open-Closed Principle (as outlined by Uncle Bob, anyway, vs. Bertrand Meyer's) isn't about never modifying classes (if software was never going to change it might as well be hardware).
And in your own case, I don't think you're violating OCP, as you've mentioned that all uses of your class depend on the abstraction of LabTest rather than the implementation of RadiologyTest.
From Uncle Bob's introductory paper, he gives an example of a DrawAllShapes class that, if designed to OCP, shouldn't need to change each time a new subclass of Shape is added to the system. Regarding the level at which you apply it, Uncle Bob says:
It should be clear that no significant program can be 100% closed. For example, consider what would happen to the DrawAllShapes function from Listing 2 if we decided that all Circles should be drawn before any Squares. The DrawAllShapes function is not closed against a change like this. In general, no matter how “closed” a module is, there will always be some kind of change against which it is not closed.
Since closure cannot be complete, it must be strategic. That is, the designer must choose the kinds of changes against which to close his design.
I wouldn't read "closed for modification" as "don't refactor"; it's more that you should design your classes in such a way that other classes can't make modifications which will affect you, e.g. by applying the basic OO stuff: encapsulation via getters/setters and private member variables.
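For illustration, here is a minimal Java rendering of the DrawAllShapes idea; the original paper uses C++, so this sketch only shows the shape of the abstraction, not the paper's code.

import java.util.List;

interface Shape {
    void draw();
}

class Circle implements Shape {
    public void draw() { /* draw a circle */ }
}

class Square implements Shape {
    public void draw() { /* draw a square */ }
}

class Renderer {
    // Closed against new kinds of Shape: adding Triangle later needs no change here.
    // (It is not closed against an ordering rule like "all Circles before Squares".)
    void drawAllShapes(List<Shape> shapes) {
        for (Shape s : shapes) {
            s.draw();
        }
    }
}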

SOLID principles, and hard-coded configuration inside a class

I have noticed in a lot of code lately that people put hard-coded configuration values (like port numbers, etc.) deep inside classes/methods, making them difficult to find and also not configurable.
Is this a violation of the SOLID principles? If not, is there another "principle" that I can cite to my team members about why it's not a good idea? I don't want to just say "it's bad because I don't like it" but I am having trouble thinking of a good argument.
A good argument against hardcoding a TCP port number in a class would be a 'Context Independence' violation. From GOOS, with my emphasis:
Context Independence
... the "context independence" rule helps us decide whether an object hides too much or hides the wrong information. A system is easier to change if its objects are context-independent; that is, if each object has no built-in knowledge about the system in which it executes. This allows us to take units of behavior (objects) and apply them in new situations. To be context-independent, whatever an object needs to know about the larger environment it’s running in must be passed in.
In this specific case of Context Independence I would call it 'Environment Independence'. In other words, a class with a hardcoded port number has an inappropriate dependency on the runtime OS environment, essentially stating 'I know that port 7778 will always be available', which is clearly wrong.
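As a hedged sketch of what that looks like in code (the class names here are made up), the port is passed in from the outside instead of being buried in the class:

// The listener has no built-in knowledge of its environment; whatever it
// needs to know (the port) is passed in.
class MessageListener {
    private final int port;

    MessageListener(int port) {
        this.port = port;
    }

    void start() { /* bind to this.port and begin accepting connections */ }
}

class Main {
    public static void main(String[] args) {
        // The composition root decides the port: a config file, an environment
        // variable, a test harness picking a free port, and so on.
        int port = Integer.parseInt(System.getenv().getOrDefault("LISTEN_PORT", "7778"));
        new MessageListener(port).start();
    }
}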
The SOLID principles cover class design.
I suspect the idea that you should store configuration in configuration files isn't normally regarded as controversial enough to warrant inventing a special principle to persuade people! :)
Most people just figure it out from experience the first time they try to get the software running anywhere other than their own development workstation.
While not strictly SOLID, another principle of OOD is the Common Closure Principle, which states that classes that change together are packaged together. While configuration information is not exactly a class, you could stretch the idea to cover it: since, e.g., port numbers change based on different criteria than the surrounding code, hardcoding them there seems to violate this.
The Single Responsibility Principle (the S in SOLID) states that a class should only have one reason to change. This article gives an example of a Modem interface, and discusses how the details of how to connect and hang up are a separate responsibility from the communication of data, and will probably change for different reasons. You could use this to make a similar case for why port numbers are an extra "reason for change", separate from the class's main responsibility.

Write programs that do one thing and do it well

I can grasp the part "do one thing" via encapsulation, Dependency Injection, Principle of Least Knowledge, and You Ain't Gonna Need It; but how do I understand the second part "do it well?"
An example given was the notion of completeness, given in the same YAGNI article:
for example, among features which allow adding items, deleting items, or modifying items, completeness could be used to also recommend "renaming items".
However, I found reasoning like that could easily be abused into feature creep, thus violating the "do one thing" part.
So, what is a litmus test for seeing whether a feature belongs to the "do it well" category (hence, include it in the function/class/program) or to the other "do one thing" category (hence, exclude it)?
The first part, "do one thing," is best understood via UNIX's ls command as a counterexample, for its inclusion of an excessive number of flags for formatting its output, which should have been delegated entirely to an external program. But I don't have a good example for the second part, "do it well."
What is a good example where removing any further feature would make it not "do it well?"
I see "Do It Well" as being as much about quality of implementation of a function than about the completeness of a set functions (in your example having rename, as well as create and delete).
Do It Well manifests in many ways, some ways of thinking:
Behaviour in response to "special" inputs. Example, calculating the mean of some integers:
int mean(int[] values) { ... }
what does this do if the array has zero elements? What if the items total more than MAX_INT? (See the sketch just after this list.)
Performance Characteristics. Has sufficient attention been given to behaviour as the data volumes increase?
Dependency Failures. If our implementation depends upon other modules or infrastructure, what happens when these fail? Example: file system full, database down?
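As a sketch of the "special inputs" point for the mean example above, assuming we treat an empty array as a caller error and guard against int overflow with a long accumulator:

static int mean(int[] values) {
    if (values == null || values.length == 0) {
        // "Do it well" means deciding this case explicitly rather than dividing by zero.
        throw new IllegalArgumentException("mean of zero elements is undefined");
    }
    long sum = 0;   // long accumulator: the sum of many ints can exceed MAX_INT
    for (int v : values) {
        sum += v;
    }
    return (int) (sum / values.length);
}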
Concerning feature creep itself, I think you're correct to identify a tension here. One thing you might consider: you don't need to implement every feature, provided it's pretty obvious that a feature can be added easily without a complete rewrite.
The whole purpose of this advice is to make you favor quality over quantity.
The concept of one thing is subjective and depends on granularity. Would you say that a spreadsheet application does more than one thing if it can also print, or is that part of that one thing?
The point is that you should make sure that any feature, and the application itself, is done and will delight customers before you scramble to add new features.
I think your question points out the fundamentally organic nature of feature creep, and in understanding that nature, you will be empowered to meditate on the larger question.
Think of it like a garden: If you plant one thing and plant it well, say, a chrysanthemum, you aren't done at simply planting the seed. In fact you'll need to ensure that the soil is well tended, that the area is sufficiently protected, that the season is right, etc.
As your chrysanthemum (your one thing) grows, so too will other competing plants: some that need to be weeded out and others that may actually complement the original one thing. In fact, these other organisms may in some cases prove vital for the survival of your one thing.
Like those features that you ain't gonna need, a bit of vigilance is required to determine which weeds represent feature creep and which represent vital and complementary functions.
Regardless, having done it well means simply that your chrysanthemum is hearty, healthy, and on-time. :-)
I would say an email program without the ability to add attachments would be a good example.
This may sound like an odd example, but I'd say Dropbox is a good, albeit complex, example.
It's managed to beat off a swathe of similar competing apps through a dedication to simplification and a lack of the feature creep that, as you mentioned, would violate the 'do one thing' principle. The app lets you store documents in a folder that you can access anywhere, and that's about the limit of it. They drilled down to the core problem and solved it in a way that works perfectly well in 90+% of cases.
It's hard to put a hard and fast rule to it, but I'd say that catering to around 90% of use cases and ignoring 'fringe requirements' is the best way to stick to this rule.
I'd guess 90+% of ls use is with no arguments or maybe two or three of the most popular. The 'do it well' principle should focus on what the majority of users need, instead of catering for power users or fringe cases, as ls does with its plethora of options.
This is what Dropbox does successfully, and why it is pretty well agreed upon as an example of good application design.

Business Objects - Containers or functional? [closed]

Where I work, we've gone back and forth on this subject a number of times and are looking for a sanity check. Here's the question: should Business Objects be data containers (more like DTOs), or should they also contain logic that can perform some functionality on that object?
Example: take a Customer object. It probably contains some common properties (Name, Id, etc.). Should that customer object also include functions (Save, Calc, etc.)?
One line of reasoning says to separate the object from the functionality (single responsibility principle) and put the functionality in a business logic layer or object.
The other line of reasoning says, no, if I have a customer object I just want to call Customer.Save and be done with it. Why do I need to know about how to save a customer if I'm consuming the object?
Our last two projects have had the objects separated from the functionality, but the debate has been raised again on a new project. Which makes more sense?
EDIT
These results are very similar to our debates. One vote to one side or another completely changes the direction. Does anyone else want to add their 2 cents?
EDIT
Even though the answer sampling is small, it appears that the majority believe that functionality in a business object is acceptable as long as it is simple, but persistence is best placed in a separate class/layer. We'll give this a try. Thanks for everyone's input...
Objects are state and behavior together. If an object has sensible behavior (e.g., calculating age for a Person from their birth date, or a total tax for an Invoice), by all means add it. Business objects that are nothing more than DTOs are termed an "anemic domain model." I don't think it's a design requirement.
Persistence is a special kind of behavior. What I'm calling "sensible" is business behavior. A business object need not know that it's persistent. I'd say that a DAO can keep persistence separate from business behavior. I don't put "save" in the "sensible" category.
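A hedged Java sketch of that split (Customer and CustomerDao are illustrative names): business behaviour stays on the object, persistence goes to a DAO.

import java.time.LocalDate;
import java.time.Period;

class Customer {
    private final String name;
    private final LocalDate birthDate;

    Customer(String name, LocalDate birthDate) {
        this.name = name;
        this.birthDate = birthDate;
    }

    // "Sensible" business behaviour lives on the business object...
    int age() {
        return Period.between(birthDate, LocalDate.now()).getYears();
    }

    String name() {
        return name;
    }
}

// ...while knowledge of persistence stays out of it.
interface CustomerDao {
    void save(Customer customer);
    Customer findByName(String name);
}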
Business objects CAN have business functionality.
Persistence is not business functionality, but a technical implementation detail.
Long story short:
Save/Update/Delete/Find, etc.: keep these out of the business objects and in a persistence layer.
CalculateSalary, ApplyDiscount, etc. are business-related methods and can be either:
methods of the business objects (so the BO is a self-contained representation of the entity); or
separate services implementing particular functionality (so the BOs act more like DTOs).
As for point 2:
I should mention that approach 2.1 tends to make the BOs too bloated and violate SRP, while 2.2 introduces more maintenance complexity.
I usually balance between 2.1 and 2.2, putting trivial things related to the data into the Business Objects and creating services for slightly more complex scenarios (if there are more than 4 lines of code, make it a service); a sketch of this balance follows below.
This shifts the paradigm of Business Objects to be more Data Transfer Objects instead.
But all this makes the project easier to develop, test, and maintain.
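A small illustrative sketch of that balance (the class names are hypothetical): a trivial calculation that only touches the object's own data stays on the BO, while a rule involving more steps or collaborators becomes a service.

class Employee {
    double baseSalary;
    double bonus;

    // Trivial and data-local: keep it on the business object.
    double totalCompensation() {
        return baseSalary + bonus;
    }
}

class SalaryService {
    // Anything longer or policy-driven goes into a service so the BO stays small.
    double netSalary(Employee e, double taxRate) {
        double gross = e.totalCompensation();
        return gross - gross * taxRate;
    }
}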
The answer is the same regardless of platform or language. The key to this question is whether an object should be able to be autonomous or whether it is better for any given behavior to be spread out among objects with more focused responsibility.
For each class the answer might be different. We end up with a spectrum along which we can place classes based upon the Density of Responsibility.
(Level of responsibility for behavior)

                Autonomy  - - - - - - - - - - - -  Dependence

Class size
    High        <<GOD object>>                     <<Spaghetti code>>
    Low         <<Template>>                       <<Framework>>
Let's say you favor letting the class perform all the behaviours itself, or as many as you can. Starting on the left side of this graph, when you make your class more autonomous, the size of the class will grow unless you continuously refactor it to make it more generic, which leads to a template. If no refactoring is done, the tendency is for the class to become more "god-like", because if there is some behavior it needs, it has a method for that. The number of fields and methods grows and soon becomes both unmanageable and unavoidable. Since the class already does so much, coders would rather add to the monstrosity than try to piece it apart and cut the Gordian knot.
The right side of the graph has classes that depend on other classes to a large degree. If the dependency level is high but the individual class is small, that is a sign of a framework; each class doesn't do much and requires lots of dependent classes to accomplish some function. On the other hand, a highly-dependent class that also has a large amount of code is a sign that the class is full of Spaghetti.
The key to this question is to determine where you feel more comfortable on the graph. In any event, individual classes will end up spread out on the graph unless some organizational principle is applied, which is how you can achieve the results of Template or Framework.
Having just written that, I would say that there is a correlation between class size and degree of organization. Robert C. Martin (or "Uncle Bob") covers similar ground with package dependencies in his very thorough paper on Design Principles and Design Patterns. JDepend is an implementation of the ideas behind the graph on page 26 and complements static analysis tools such as Checkstyle and PMD.
I think it makes more sense for business objects to know how to "handle" themselves than to have to put that burden elsewhere in the system. In your example, the most logical place to deal with how to "save" customer data would be, to me, in the Customer object.
This may be because I consider the database to be the "data container", so I'm in favor of "business objects" being the higher level that protects the data container from direct access AND enforces standard "business rules" about how that data is accessed/manipulated.
We've used Rocky Lhotka's CSLA framework for years and love the way it is designed. In that framework all of the functionality is contained in the objects. While I can see the value of separating the logic out, I don't think we'll switch away from this philosophy anytime soon.
Business objects should be about encapsulating data and associated behaviors of the business entity modeled by that object. Think of it like this: one of the major tenets of object-oriented programming is encapsulating data and associated behaviors on that data.
Persistence is not a behavior of the modeled object. I find development progresses more smoothly if business objects are persistence-ignorant. Developing and unit testing new code happen more quickly and more smoothly if the business objects are not specifically tied to the underlying plumbing, because I can mock those aspects and forget about having to jump through hoops to get to the database, etc. My unit tests will execute more quickly (a huge plus if you have thousands of automated tests that run with each build) and I will have less stress because I won't have tests failing over database connection issues (great if you often work offline or remotely and can't always access your database; and, by the way, those aspects (database connectivity, etc.) should be tested elsewhere!).
The other line of reasoning says, no, if I have a customer object I just want to call Customer.Save and be done with it. Why do I need to know about how to save a customer if I'm consuming the object?
Knowing that Customer has a Save method is already knowing how to save a customer object. You haven't avoided the problem by embedding that logic in your business object. Instead, you've made your code base more tightly coupled and therefore harder to maintain and test. Push off the responsibility of persisting the object to someone else.
The business objects, as they are named, should obviously contain their own business logic, with the dynamics of the business logic across the domain living in the service layer.
On the other side, could the BO be a composition of a data container (DTO?) and methods, meaning BOs are purely functional? That would avoid all the conversions between BOs and DTOs.
In an MVC architecture, can we say that the Model contains the business objects?

Most common examples of misuse of singleton class

When should you NOT use a singleton class although it might be very tempting to do so? It would be very nice if we had a list of most common instances of 'singletonitis' that we should take care to avoid.
Do not use a singleton for something that might evolve into a multipliable resource.
This probably sounds silly, but if you declare something a singleton you're making a very strong statement that it is absolutely unique. You're building code around it, more and more. And when you then find out after thousands of lines of code that it is not a singleton at all, you have a huge amount of work in front of you because all the other objects expect "the" sacred object of class WizBang to be a singleton.
Typical example: "There is only one database connection this application has, thus it is a singleton." - Bad idea. You may want to have several connections in the future. Better to create a pool of database connections and populate it with just one instance. It acts like a singleton, but all other code goes through the pool, which is free to grow.
EDIT: I understand that theoretically you can extend a singleton into several objects. Yet there is no real life cycle (like pooling/unpooling), which means there is no real ownership of objects that have been handed out; i.e. the now multi-singleton would have to be stateless to be used simultaneously by different methods and threads.
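A rough Java sketch of the pool idea (Connection here is a stand-in class, not java.sql.Connection): the pool is seeded with a single instance today, and callers will not change when it holds ten.

import java.util.ArrayDeque;
import java.util.Queue;

class Connection { /* whatever a connection needs */ }

class ConnectionPool {
    private final Queue<Connection> available = new ArrayDeque<>();

    ConnectionPool(int size) {
        for (int i = 0; i < size; i++) {
            available.add(new Connection());
        }
    }

    // Returns null when exhausted; a real pool would block, grow, or time out.
    Connection acquire() { return available.poll(); }

    void release(Connection c) { available.add(c); }
}

// Wiring: "acts like a singleton" only because we chose to seed it with one instance.
// ConnectionPool pool = new ConnectionPool(1);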
Well, singletons for the most part are just making things static anyway. So you're either in effect making data global (and we all know global variables are bad), or you're writing static methods, and that's not very OO now, is it?
Here is a more detailed rant on why singletons are bad, by Steve Yegge. Basically you shouldn't use singletons in almost all cases, you can't really know that it's never going to be needed in more than one place.
I know many have answered with "when you have more than one", etc.
Since the original poster wanted a list of cases when you shouldn't use Singletons (rather than the top reason), I'll chime in with:
Whenever you're using it because you're not allowed to use a global!
I can't count the number of times I've had a junior engineer use a Singleton because they knew that I didn't accept globals in code reviews. They often seem shocked when I point out that all they did was replace a global with the Singleton pattern, and they still just have a global!
Here is a rant by my friend Alex Miller... It does not exactly enumerate "when you should NOT use a singleton" but it is a comprehensive, excellent post and argues that one should only use a singleton in rare instances, if at all.
I'm guilty of a big one a few years back (thankfully I've learned my lesson since then).
What happened is that I came on board a desktop app project that had been converted to .Net from VB6 and was a real mess. Things like 40-page (printed) functions and no real class structure. I built a class to encapsulate access to the database. Not a real data tier (yet), just a base class that a real data tier could use. Somewhere along the way I got the bright idea to make this class a singleton. It worked okay for a year or so, and then we needed to build a web interface for the app as well. The singleton ended up being a huge bottleneck for the database, since all web users had to share the same connection. Again... lesson learned.
Looking back, it probably actually was the right choice for a short while, since it forced the other developers to be more disciplined about using it and made them aware of scoping issues not previously a problem in the VB6 world. But I should have changed it back after a few weeks before we had too much built up around it.
Singletons are virtually always a bad idea and generally useless/redundant since they are just a very limited simplification of a decent pattern.
Look up how Dependency Injection works. It solves the same problems, but in a much more useful way; in fact, you'll find it applies to many more parts of your design.
Although you can find DI libraries out there, you can also roll a basic one yourself; it's pretty easy.
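A minimal sketch of "rolling it yourself" via plain constructor injection (Clock and ReportGenerator are made-up examples): the caller decides which implementation exists and how many, which is exactly the flexibility a singleton takes away.

interface Clock {
    long now();
}

class SystemClock implements Clock {
    public long now() { return System.currentTimeMillis(); }
}

class ReportGenerator {
    private final Clock clock;

    // The dependency is handed in, not looked up through a singleton.
    ReportGenerator(Clock clock) {
        this.clock = clock;
    }

    String generate() { return "report generated at " + clock.now(); }
}

// Composition root: new ReportGenerator(new SystemClock()).generate();
// In a test, pass a fixed fake Clock instead.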
I try to have only one singleton - an inversion of control / service locator object.
IService service = IoC.GetImplementationOf<IService>();
One of the things that tends to make it a nightmare is if it contains modifiable global state. I worked on a project where Singletons were used all over the place for things that should have been solved in a completely different way (passing in strategies, etc.). The "de-singletonification" was in some cases a major rewrite of parts of the system. I would argue that in the majority of cases when people use a Singleton it's just wrong, because it looks nice at first but turns into a problem, especially in testing.
When you have multiple applications running in the same JVM.
A singleton is a singleton across the entire JVM, not just a single application. Even if multiple threads or applications seem to be creating a new singleton object, they're all using the same one if they run in the same JVM.
Sometimes, you assume there will only be one of a thing, then you turn out to be wrong.
Example, a database class. You assume you will only ever connect to your app's database.
// It's our database! We'll never need another
class Database
{
};
But wait! Your boss says, hook up to some other guy's database. Say you want to add phpBB to the website and would like to poke its database to integrate some of its functionality. Should we make a new singleton or another instance of Database? Most people agree that a new instance of the same class is preferred; there is no code duplication.
You'd rather have
Database ourDb;
Database otherDb;
than copy-paste Database and make:
// Copy-pasted from our home-grown database.
class OtherGuysDatabase
{
};
The slippery slope here is that you might stop thinking about making new instances of classes and instead begin thinking it's OK to have one type per instance.
In the case of a connection (for instance), it makes sense that you wouldn't want to make the connection itself a singleton; you might need four connections, or you may need to destroy and recreate the connection a number of times.
But why wouldn't you access all of your connections through a single interface (i.e. connection manager)?
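One possible shape for that, as a sketch (reusing the Database class from the example above, treated here as a plain Java class): connections stay ordinary multi-instance objects, and a single manager hands them out by name.

import java.util.HashMap;
import java.util.Map;

class Database { /* as sketched above */ }

class ConnectionManager {
    private final Map<String, Database> connections = new HashMap<>();

    // ourDb and otherDb become two entries of the same class; nothing is a singleton.
    Database get(String name) {
        return connections.computeIfAbsent(name, n -> new Database());
    }
}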