SOLID - SRP, One job or one reason for change [closed] - solid-principles

There is a lot of confusion around the internet about the SRP.
Does the SRP require that:
classes/functions should do one job? or that
classes/functions should have only one reason to change (and we do not care how many jobs our classes perform, at least when we think about the SRP)?
For example:
Let's assume that we have one class that performs a lot of work/jobs (I know this is bad; we should not put everything into one class).
Also, let's assume that this one class serves one feature, and this feature has only one reason to change, i.e. a change can be requested only by one actor (e.g. our CTO).
Does this code still comply with the SRP?
Additionally, quoting Clean Architecture by Robert C. Martin:
Of all the SOLID principles, the Single Responsibility Principle (SRP) might
be the least well understood. That’s likely because it has a
particularly inappropriate name. It is too easy for programmers to
hear the name and then assume that it means that every module should
do just one thing.
Make no mistake, there is a principle like that. A function should do
one, and only one, thing. We use that principle when we are refactoring
large functions into smaller functions; we use it at the lowest levels.
But it is not one of the SOLID principles — it is not the SRP.

As always, it depends. "Single Responsibility" means just that, to be responsible for one thing.
The "one thing" could be a narrow field or a rather wide one. A simple example:
Imagine a class that calculates a cryptographic signature of a string and another class for encrypting a string. Both classes respect the SRP because they each do just one thing.
If you tie them together in one class with two methods, one for encrypting a string and one for calculating the signature, you are clearly violating the SRP, because encrypting and signing are not related.
But now imagine you have a system which exchanges signed and encrypted strings that conform to some standard. Of course these two functions are now related, and one class has to handle both operations.
A client of this class is not even interested in how the signing and encryption are related. A client just provides a string to be prepared for transmission, and the class signs and encrypts the string. So this class respects the SRP even though it does two things, signing and encrypting.
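A minimal C++ sketch of that idea (all class and function names here are invented for illustration, not taken from any real system): the transmission class has a single responsibility, "prepare a string for transmission", even though it performs two operations internally.

#include <string>

// Invented helpers, each doing exactly one thing.
class Signer {
public:
    std::string sign(const std::string& text) const {
        return "sig(" + text + ")";   // placeholder for a real signature
    }
};

class Encryptor {
public:
    std::string encrypt(const std::string& text) const {
        return "enc(" + text + ")";   // placeholder for real encryption
    }
};

// One responsibility: prepare a string for transmission according to the
// (hypothetical) exchange standard. Clients do not care that this involves
// both signing and encrypting.
class TransmissionPreparer {
public:
    std::string prepare(const std::string& text) const {
        return encryptor_.encrypt(signer_.sign(text));
    }
private:
    Signer signer_;
    Encryptor encryptor_;
};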
Back to your (bad) example with the class that performs a lot of work/jobs. When the jobs the class performs are related, there is a chance that SRP is respected. But when the jobs are not related, the class clearly violates SRP.

Related

Acronyms, do you use them? And why? [closed]

Personally, I think it's a bad practice in software development (see Clean Code by Robert C. Martin), but I'm curious to know why people still use them.
I'm talking about acronyms in filenames, variable names, class names, function names, etc.
If you answer, please specify the context (Ex: language, big/small company).
EDIT: I'm not talking about technical acronyms that are common knowledge (e.g. SQL, HTML, CSS, etc.), but rather acronyms within a business.
Two examples:
1) putting two letters which represent the company before each class name
SuperCompany: SCNode, SCObject
2) a specific module name
Graphic: GRTexture, GRMaterial
There is no correct answer to this question, but it is my opinion that you should only use an acronym if another programmer immediately knows its expansion or meaning. Common examples would be names like dvdPlayer or cssClass, where a longer version would decrease the readability of your code.
If you are in doubt, don't use acronyms, but don't call your class HypertextTransferProtocolRequest instead of HttpRequest just because of a strict no-acronym policy.
Context: Medium Company
Field: Medical Engineering
Languages: Python, JavaScript, C/C++, Perl, etc. etc.
There are lots of reasons NOT to use acronyms in your source code, but in our situation we are gated/regulated by the FDA and several other government agencies that require us to put non-code relevant comments throughout our entire "system" (for auditing/documentation purposes) -- I can't see how we could get through this process without using acronyms.
On the flip-side: if I was given the choice, I'd not add 90% of what they require us to add to our source code, which would effectively eliminate all the esoteric ambiguity (acronyms and regulation tracking numbers) in our code.
So, YES, I use them, NO, I'd prefer not - but my industry requires it.
Are you sure that Clean Code says anything about acronyms? I think it talks about readability. Acronyms are not always unreadable and meaningless. There are at least two cases when acronyms are necessary.
One is technical language that is well understood by other programmers (CSS, HTML, DAO, DTO, regexp, SQL, etc.). You shouldn't avoid these; they are first-class citizens. Try to replace them and you will have a lot of misunderstandings with other developers.
The second rule is: use the same language that your clients use. They won't change the names they use; they have their own acronyms (just as we have SQL, CSS, etc.). If you start to change them in your code, you will quickly have a lot of misunderstandings with the business.

Do preconditions ALWAYS have to be checked? [closed]

These days I'm used to checking every single precondition for every function since I got the habit from an OS programming course back at uni.
On the other hand, at the software engineering course we were taught that a common precondition should only be checked once, so for example, if a function is delegating to another function, the first function should check them but checking them again in the second one is redundant.
I do see the redundancy point, but I certainly feel it's safer to always check them, plus you don't have to keep track of where they were checked previously.
What's the best practice here?
I have seen no "hard and fast" rule on how to check preconditions, but I generally treat it like method documentation. If a method is publicly scoped, I assert that the preconditions are met. The logic behind this is that a public scope means you expect consumption on a broader scale, with less influence over the callers.
Personally, the effort of putting assertions around private methods is something I reserve for "mission critical" methods, which would basically be ones that either perform a critical task, are subject to external compliance requirements, or are non-recoverable in the event of an exception. These are largely judgment calls.
The time saved can be reinvested in more thorough unit and integration tests to flush out these issues, and it puts tooling in place to help enforce the quality of the inputs as a client would supply them, whether that client is a class under your control or not.
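A minimal C++ sketch of that split (the class, method, and message names are all invented for illustration): the public entry point always validates its inputs, while the private helper only asserts them, since its callers are under our control.

#include <cassert>
#include <stdexcept>
#include <string>

class ReportPrinter {
public:
    void print(const std::string& reportName) {
        if (reportName.empty())                        // public scope: always check
            throw std::invalid_argument("reportName must not be empty");
        render(reportName);
    }

private:
    void render(const std::string& reportName) {
        assert(!reportName.empty());                   // private scope: assert only
        // ... formatting and output would go here ...
    }
};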
I think it depends on how the team is organized: check inputs which come from outside your team.
Check inputs from end-users
Check inputs from software components written by other teams
Trust inputs received from within your own component / within your own team.
The reason for this is for support and maintenance (i.e. bug-fixing): when there's a bug report, you want to be able as quickly as possible to know which component is at fault, i.e. which team to assign the bug to.
if a function is delegating to another function, the first function should check them but checking them again in the second one is redundant.
What if you change the way those functions call each other? Or you introduce new validation requirements in the second function, the one the first delegates to? I'd say it's safer to always check them.
I've made a habit of distinguishing between checking and asserting preconditions, depending (as people pointed out in the comments) on whether a call comes from the outside (unchecked exception, may happen) or the inside (assert, shouldn't happen).
Ideally, the production system won't have the penalty of the asserts, and you could even use a Design By Contract(TM)-like mechanism to discharge the assertions statically.
In my experience, it depends on your encapsulation.
If the inner function is private then you can be pretty sure that its preconditions are set.
Guess it's all about the Law of Demeter, talk only to your friends and so on.
As a basis for best practice, if the call is public, you should check your inputs.
I think the best practice is to do these checks only if they're going to fail some day. It will help with the following.
Debugging
There's no point in checking them when several private functions in one module, which has a single maintainer, exchange data. Of course, there are exceptions, especially if your language doesn't have a static type system, or your data are "stringly typed".
However, if you expose a public API, one day someone will fail your precondition. The further the person maintaining the calling module is from you (in organizational structure and in physical location), the more likely it is to happen. And when it happens, a clear statement of the precondition failure, with a specification of where it happened, may save hours of debugging. The Law of Leaky Abstractions is still true...
QA
Precondition failures help QA debug their tests. If a unit test for a module causes the module to report a precondition failure, it means that the test is incorrect, not your code. (Or that your precondition check is incorrect, but that's less likely.)
If one of the means of performing QA is static analysis, then precondition checks will also help, provided they have a specific notation (for example, only these checks use an assert_precondition macro). In static analysis it's very important to distinguish incorrect input from source code errors.
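A rough C++ sketch of such a dedicated notation (the macro name follows the one mentioned above; the function around it is invented): a separate macro makes precondition failures easy to grep for and easy for static-analysis tooling to treat differently from ordinary asserts.

#include <cstdio>
#include <cstdlib>

#define assert_precondition(cond, msg)                                  \
    do {                                                                \
        if (!(cond)) {                                                  \
            std::fprintf(stderr, "precondition failed: %s (%s:%d)\n",   \
                         msg, __FILE__, __LINE__);                      \
            std::abort();                                               \
        }                                                               \
    } while (0)

double average(const double* values, int count) {
    assert_precondition(values != nullptr, "values must not be null");
    assert_precondition(count > 0, "count must be positive");
    double sum = 0.0;
    for (int i = 0; i < count; ++i) sum += values[i];
    return sum / count;
}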
Documentation
If you don't have much time to create documentation, you can make your code aid the text that accompanies it. Clear and visible precondition checks, perceived as separate from the rest of the implementation, "document" possible inputs to some extent. (Another way to document your code like this is to write unit tests.)
As with everything, evaluate your requirements to find the best solution for each situation.
When preconditions are easier to check ("pointer isn't null"), you might as well do that often. Preconditions which are hard to check ("points to a valid null-terminated string") or are expensive in time, memory, or other resources may be handled a different way. Use the Pareto principle and gather the low-hanging fruit.
// C, C++:
#include <assert.h>

void example(char const* s) {
    // precondition: s points to a valid null-terminated string
    assert(s); // tests that s is non-null, which is necessary for it to point to
               // a valid null-terminated string; the full test is nearly
               // impossible from within this function
}
Guaranteeing preconditions is the responsibility of the caller. Because of this, several languages offer an "assert" construct that can optionally be skipped (e.g. defining NDEBUG for C/C++, command-line switch for Python) so that you can more extensively test preconditions in special builds without impacting final performance. However, how to use assert can be a heated debate—again, figure out your requirements and be consistent.
It is a somewhat old question, but no, preconditions do not have to be checked every time. It really depends.
For example, suppose you have a binary search over a vector. The precondition is a sorted vector. Now, if you check on every call whether the vector is sorted, this takes linear time, so it is not efficient. The client must be aware of the precondition and be sure to meet it.
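A small C++ illustration of that trade-off (an assumed example, not from the original answer): the sortedness check is expensive, so it is left in only as an assert that can be compiled out of release builds with NDEBUG, while the caller remains responsible for meeting the precondition.

#include <algorithm>
#include <cassert>
#include <vector>

// Precondition: v is sorted in ascending order. Verifying this costs O(n),
// so it is only asserted (removed when compiling with -DNDEBUG), not checked
// unconditionally at runtime.
bool contains(const std::vector<int>& v, int key) {
    assert(std::is_sorted(v.begin(), v.end()));
    return std::binary_search(v.begin(), v.end(), key);
}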
The best practice is to always check them.

What is the best Pro OOP argument? [closed]

I am trying to get a couple of team members, who currently think in terms of procedural programming, into the OOP mindset.
However, I am having a hard time articulating why all of this is good and why they should want to benefit from it.
They use a different language than I do, and I am lacking the communication skills to explain this to them in a way that makes them "want" to learn the OOP way of doing things.
What are some good language independent books, articles, or arguments anyone can give or point to?
OOP is good for a multi-developer team because it easily allows abstraction, encapsulation, inheritance and polymorphism. These are the big buzz words of OOP and they are the big buzz words for good reasons.
Abstraction: Allows other members of your team to use code that you write without having to understand the implementation details. This reduces the amount of necessary communication. Think of The Mythical Man Month wherein it is detailed that communication is one of the highest costs facing a development team.
Encapsulation: Allows you to change your implementation details without impacting users of your code. As such, it reduces code maintenance costs.
Inheritance: Allows your team to reuse and extend your implementations with reduced costs.
Polymorphism: Allows your team to use different implementations of a given abstraction. If your team is writing code to read and parse data from a Stream, because of polymorphism it can now work with FileStreams, MemoryStreams and PigeonStreams seamlessly and with significantly reduced costs.
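A minimal C++ sketch of that last point (the stream names mirror the ones above; the interface itself is invented for illustration): the parsing code depends only on the abstraction and works unchanged with any implementation.

#include <iostream>
#include <string>

// Invented abstraction: the parsing code depends only on this interface.
struct Stream {
    virtual ~Stream() = default;
    virtual std::string readAll() = 0;
};

struct FileStream : Stream {
    std::string readAll() override { return "data from a file"; }
};

struct MemoryStream : Stream {
    std::string readAll() override { return "data from memory"; }
};

// Works unchanged with FileStream, MemoryStream, or a future PigeonStream.
void parse(Stream& source) {
    std::cout << "parsing: " << source.readAll() << '\n';
}

int main() {
    FileStream f;
    MemoryStream m;
    parse(f);
    parse(m);
}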
OOP is not a holy grail. It is inappropriate for some teams because the costs of using it could be higher than the costs of not using it. For example, if you try to design for polymorphism but never have multiple implementations of a given abstraction then you have probably increased your costs.
Always give examples.
Take a bit of their code you think is bad. Re-write it to be better. Explain why it is better. Your colleagues will either agree or disagree.
Nobody uses (or should use) techniques because they're good techniques, they (should) use them because they produce good results. The advantages of very simple use of classes and objects are usually fairly easy to see, for instance when you have an array of objects with n properties instead of n arrays, one for each field you care about, and so on.
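A tiny C++ illustration of that last point (field names invented):

#include <string>
#include <vector>

// Procedural style: n parallel arrays, one per field, kept in sync by hand.
std::vector<std::string> names;
std::vector<int>         ages;
std::vector<double>      salaries;

// Object style: one array of objects, each carrying its own fields.
struct Employee {
    std::string name;
    int         age;
    double      salary;
};
std::vector<Employee> employees;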
Comparing procedural to OOP, the biggest winner by far is encapsulation. OOP doesn't mean that you get encapsulation automatically, but the process of doing it is free compared with procedural code.
Abstraction helps manage the complexity of an application: only the information that's required is exposed.
There are many ways to go about this: OOP is not the only discipline to promote this strategy.
Of course, merely claiming to do OOP does not prevent one from building an application with abundant "abstraction leaks", thereby defeating the strategy...
Here is a slightly strange thought: there probably exist some areas where OOP is unnecessary or even bad (very much IMHO: JavaScript programming).
You and your team probably work in one of these areas; otherwise you would have failed many years ago, because teams using OOP and all its benefits (such as various frameworks, UML and so on) would simply do the job more efficiently.
What I mean is: if you still work well without OOP, then maybe just leave it.
The killer phrase: With OOP you can model the world "as it is" *cough*.
OOP didn't make sense to me until I was working on an application that connected to two different databases. I needed a function called getEmployeeName() for both databases. I decided to create two objects, one for each database, to encapsulate the functions that ran against each one (there were no functions that ran against both simultaneously). Not the epitome of OOP, but a good start for me.
Most of my code is still procedural, but I'm more aware of situations where objects would make sense in my code. I'm not of the mindset that everything needs to be one way or the other.
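A rough C++ sketch of that first step (the class, method, and database names are invented): one object per database connection, each encapsulating the queries that run against that particular database.

#include <string>

class HrDatabase {
public:
    std::string getEmployeeName(int employeeId) {
        // would query the HR database here
        return "name from HR db for #" + std::to_string(employeeId);
    }
};

class PayrollDatabase {
public:
    std::string getEmployeeName(int employeeId) {
        // would query the payroll database here
        return "name from payroll db for #" + std::to_string(employeeId);
    }
};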
Re-use of existing code through hierarchies.
The killer argument is IMHO that it takes much less time to re-design your code. Here is a similar question explaining why.
Having the ability to pass an entire object around that has a bunch of methods/functions you can call on it. For example, let's say you want to pass a message around: you only need to pass one object, and everyone who gets that object has access to all of its functions.
Also, you can declare some of an object's functions as public and some as private. There is also the concept of a friend function, where only the classes or functions explicitly granted friendship have access to another class's private members.
Objects help keep functions near the data they use and encapsulates it all into one entity that can be easily passed around.
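A short C++ sketch of those ideas (the Message and function names are invented): the object carries its own behaviour, hides part of it, and explicitly grants one friend access to its internals.

#include <iostream>
#include <string>

class Message {
public:
    explicit Message(std::string body) : body_(std::move(body)) {}
    void send() const { std::cout << "sending: " << body_ << '\n'; }  // public

    // Friendship is granted explicitly; only this function may touch body_.
    friend void debugDump(const Message& m);

private:
    std::string body_;   // hidden implementation detail
};

void debugDump(const Message& m) {
    std::cout << "raw body: " << m.body_ << '\n';
}

// Anyone receiving a Message can call all of its public functions.
void deliver(const Message& m) { m.send(); }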

Namespace Rule of Thumb [closed]

Is there a general rule of thumb as to how many classes, interfaces, etc. should go into a given namespace before the items should be further classified into a new namespace? Is there a best practice or a community preference, or is this all personal preference?
namespace: MyExample.Namespace
interface1
interface2
interface3
interface4
interface5
interface6
interface7
interface8
interface9
Or
namespace: MyExample.Namespace.Group1
interface1
interface2
interface3
namespace: MyExample.Namespace.Group2
interface4
interface5
interface6
namespace: MyExample.Namespace.Group3
interface7
interface8
interface9
I have not seen a rule of thumb from any reliable source, but there are a few common preferences that I have seen while working with most developers. A few things help you shape the namespaces:
The domain of the class.
Whether it is a class or an interface (I have seen some developers prefer namespaces like ShopApp.Model.Interfaces). This works really well if your interfaces are service or data contracts.
Don't make namespaces too deep; three dots is enough, and more than that may get annoying.
Be open to reorganizing namespaces if at any time you feel they have become illogical or hard to manage.
Do not create namespaces just for the sake of it.
If building a library or a module, it is generally better to use only one namespace, since the primary function of a namespace is to avoid name collisions and you have the control over what names get assigned to classes and interfaces.
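A small C++-flavored sketch of the two options (library and type names invented): a single flat namespace for a library, versus nested namespaces used only when a sub-grouping reflects a real boundary in the design.

// Option 1: one flat namespace for the whole library; the namespace exists
// mainly to avoid collisions with client code.
namespace mylib {
    class Parser {};
    class Formatter {};
    class Validator {};
}

// Option 2: nested namespaces for genuinely separate sub-areas.
namespace mylib {
    namespace io {
        class Reader {};
        class Writer {};
    }
    namespace text {
        class Tokenizer {};
    }
}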
I don't know of any rule of thumb for the number of items, but those kinds of rules tend to be over-generalized garbage anyway. Make sure there is a logical connection between items in the same namespace. If a namespace is getting too crowded (unlikely, I hope), or the things in the namespace are only loosely related at best, consider breaking it up into multiple namespaces.
I would argue that the namespace hierarchy should be governed only by considerations of design and the hierarchy of the model/API.
If one namespace sports a huge number of unrelated classes, rethink your design.
Contrary to what Andrew said, I would not worry about namespaces containing few classes – although it's of course true that the hierarchy should only be as fine-grained as needed to express the design.
On the other hand, I find it completely reasonable for a namespace to contain only one highly special class, or perhaps just a very small set of types, of which one encodes the task and the others provide an API (exceptions, enums for arguments …).
As an example, take System.Text.RegularExpressions (in .NET). Granted, slightly more than one class, but only just.
It is generally considered bad form to have a small number of classes in a namespace. I have always attributed this to the fact that many namespaces lead to confusion.
I would suggest that you break the classes into logical namespaces, being as reasonable and practical as possible. However, if you end up with only one or two classes per namespace, you might be fragmenting too much and should think about consolidating.

When evaluating a design, how do you evaluate complexity? [closed]

We all know to keep it simple, right?
I've seen complexity being measured as the number of interactions between systems, and I guess that's a very good place to start. Aside from gut feel though, what other (preferably more objective) methods can be used to determine the level of complexity of a particular design or piece of software?
What are YOUR favorite rules or heuristics?
Here are mine:
1) How hard is it to explain to someone who understands the problem but hasn't thought about the solution? If I explain the problem to someone in the hall (who probably already understands the problem if they're in the hall) and can explain the solution, then it's not too complicated. If it takes over an hour, chances are good the solution's overengineered.
2) How deep in the nested functions do you have to go? If I have an object which requires a property held by an object held by another object, then chances are good that what I'm trying to do is too far removed from the object itself. Those situations become problematic when trying to make objects thread-safe, because there'd be many objects of varying depths from your current position to lock.
3) Are you trying to solve problems that have already been solved before? Not every problem is new (and some would argue that none really are). Is there an existing pattern or group of patterns you can use? If you can't, why not? It's all good to make your own new solutions, and I'm all for it, but sometimes people have already answered the problem. I'm not going to rewrite STL (though I tried, at one point), because the solution already exists and it's a good one.
Complexity can be estimated by looking at coupling and at how cohesive your objects are. If something has too much coupling or is not cohesive enough, the design will start to become more complex.
When I attended the Complex Systems Modeling workshop at the New England Complex Systems Institute (http://necsi.org/), one of the measures that they used was the number of system states.
For example, if you have two interacting nodes, A and B, and each of these can be 0 or 1, your possible states are:
A B
0 0
1 0
0 1
1 1
Thus a system of only 1 interaction between binary components can actually result in 4 different states. The point being that the complexity of the system does not necessarily increase linearly as the number of interactions increases.
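As a small back-of-the-envelope addition (not from the original answer): with n interacting binary components the number of reachable states is 2^n, while the number of pairwise interactions grows only as n(n-1)/2, so the state space outpaces the interaction count very quickly (e.g. 10 components give at most 45 pairwise interactions but 1024 possible states).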
Good measures can also be the number of files, the number of places where configuration is stored, and the order of compilation in some languages.
Examples:
- properties files, database configuration, and XML files holding related information
- tens of thousands of classes with interfaces and database mappings
- an extremely long and complicated build file (build.xml, Makefile, others...)
If your app is built, you can measure it in terms of time (how long a particular task would take to execute) or computations (how much code is executed each time the task is run).
If you just have designs, then you can look at how many components of your design are needed to run a given task, or to run an average task. For example, if you use MVC as your design pattern, then you have at least 3 components touched for the majority of tasks, but depending on your implementation of the design, you may end up with dozens of components (a cache in addition to the 3 layers, for example).
Finally something LOC can actually help measure? :)
I think complexity is best seen as the number of things that need to interact.
A complex design would have n tiers whereas a simple design would have only two.
Complexity is needed to work around issues that simplicity cannot overcome, so it is not always going to be a problem.
There is a problem in defining complexity in general as complexity usually has a task associated with it.
Something may be complex to understand, but simple to look at (very terse code for example)
The number of interactions getting this web page from the server to your computer is very complex, but the abstraction of the http protocol is very simple.
So having a task in mind (e.g. maintenance) before selecting a measure may make it more useful (i.e. adding a config file and logging to an app increases its objective complexity, if only a little bit, but simplifies maintenance).
There are formal metrics. Read up on Cyclomatic Complexity, for example.
Edit.
Also, look at Function Points. They give you a non-gut-feel quantitative measurement of system complexity.
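To make the cyclomatic complexity metric concrete, here is a hedged C++ sketch (an invented function, using the common simplification that cyclomatic complexity is roughly the number of decision points plus one):

#include <string>

std::string classify(int score, bool verified) {
    if (score < 0)                  // decision 1
        return "invalid";
    if (!verified)                  // decision 2
        return "unverified";
    for (int i = 0; i < 3; ++i) {   // decision 3 (loop condition)
        if (score > 90 + i)         // decision 4
            return "excellent";
    }
    return score > 50 ? "pass"      // decision 5 (ternary)
                      : "fail";
}
// 5 decision points  ->  cyclomatic complexity = 5 + 1 = 6.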