Reusability, testability, code complexity reduction and showing-off-ability: how important are they in programming? (language-agnostic)

There are lots of programming and architecture patterns. Patterns allow you to make code cleaner, more reusable, more maintainable, more testable and, last (but not least), to make their follower feel like a really cool developer.
How do you rank these considerations? What appeals to you most when you decide to apply a pattern?
I wonder how often code reusability (especially for the MVP and MVC patterns) has actually mattered. For example, a DAL library is often shared between projects (it's reusable), but how often are controllers/views (abstracted via interfaces) reused?

I think you missed the single most important one from your list - more maintainable. Code that is well and consistently structured (as you get with easily reusable code) is much more easily maintained.
And as for reusability, then yes, on a number of occasions, usually something like: create a web page to save/update some record. Some months later we need to expose this as a service for a third party to consume; if your code is structured well, this should be easy and low risk, as you're only adding a new front end.

I hope most people use patterns to learn how to solve design problems in a certain context. All those non-functional requirements you mention can be really important depending on stakeholder needs for a project.
As for MVC etc., it is not meant only to be reused between projects; that is often not possible or even a good idea. The benefits you get from MVC should matter within the project where you use that architecture. You can change details in views and models independently, you can reuse views with controllers for different models, and you should be able to change persistence details without affecting your controllers and views. All this is, imho, very important during the development of a single project.

"Code reusability" as defined in many books is more or less a myth. Try to focus more on easy to read - easy to maintain. Don't start with "reusability" in mind, will be better if you will start to think first on testability and then to reuse something. Is important to deliver, to test, to have clean code, to refactor, to not repeat yourself and less important to build from the start components that can be reused between projects. Whatever is to be reused must be a natural process, more like a discovery: you see a repetition so you build something that can be reused in that specific situation.

Code complexity reduction ranks high, if I keep things simple, I can maintain the project better and work on it faster to add/change features.
Reusability is a tool, one that has its uses, but not in every place. I usually refactor for reusability those components that show a clear history of identical use in more than three places. Otherwise, I risk running into the need of specialized behavior in a place or two, and end up splitting a component in a couple of more specialized ones that share a similar structure, but would be hard to understand if kept together.
Testability is not something I personally put a lot of energy into. However, it derives in many cases from reduced code complexity: if there are not a lot of dependencies and intricate code paths, there is less danger of breaking tests or making them more difficult to perform.
As for showing-off-ability... well... the customer is interested in how well the app performs in terms of what he wants from it, not in terms of how "cool" my code is. 'nuff said


What programming practice that you once liked have you since changed your mind about? [closed]

As we program, we all develop practices and patterns that we use and rely on. However, over time, as our understanding, maturity, and even technology usage changes, we come to realize that some practices that we once thought were great are not (or no longer apply).
An example of a practice I once used quite often, but have in recent years changed, is the use of the Singleton object pattern.
Through my own experience and long debates with colleagues, I've come to realize that singletons are not always desirable: they can make testing more difficult (by inhibiting techniques like mocking) and can create undesirable coupling between parts of a system. Instead, I now use object factories (typically with an IoC container) that hide the nature and existence of singletons from the parts of the system that don't care or need to know. Those parts instead rely on a factory (or service locator) to acquire access to such objects.
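For illustration, here is a minimal sketch of that shift in Java (AuditLog, ServiceRegistry and OrderService are made-up names, not any particular container's API): clients depend on an interface and ask a registry for it, so whether the implementation happens to be a singleton becomes an invisible detail.

import java.util.HashMap;
import java.util.Map;

interface AuditLog {
    void record(String message);
}

// A very small service locator; a real IoC container would also manage lifetimes and wiring.
final class ServiceRegistry {
    private static final Map<Class<?>, Object> services = new HashMap<>();

    static <T> void register(Class<T> type, T instance) {
        services.put(type, instance);
    }

    static <T> T get(Class<T> type) {
        return type.cast(services.get(type));
    }
}

// Client code never calls AuditLog.getInstance(); a test can register a mock instead.
class OrderService {
    void placeOrder() {
        ServiceRegistry.get(AuditLog.class).record("order placed");
    }
}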
My questions to the community, in the spirit of self-improvement, are:
What programming patterns or practices have you reconsidered recently, and now try to avoid?
What did you decide to replace them with?
//Coming out of university, we were taught to ensure we always had an abundance
//of commenting around our code. But applying that to the real world, made it
//clear that over-commenting not only has the potential to confuse/complicate
//things but can make the code hard to follow. Now I spend more time on
//improving the simplicity and readability of the code and inserting fewer yet
//relevant comments, instead of spending that time writing overly-descriptive
//commentaries all throughout the code.
Single return points.
I once preferred a single return point for each method, because with that I could ensure that any cleanup needed by the routine was not overlooked.
Since then, I've moved to much smaller routines - so the likelihood of overlooking cleanup is reduced and in fact the need for cleanup is reduced - and find that early returns reduce the apparent complexity (the nesting level) of the code. Artifacts of the single return point - keeping "result" variables around, keeping flag variables, conditional clauses for not-already-done situations - make the code appear much more complex than it actually is, make it harder to read and maintain. Early exits, and smaller methods, are the way to go.
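A tiny before/after sketch of the same point (Order and its accessors are hypothetical; the two snippets are alternative versions of one method, shown side by side for comparison):

// Single exit point: a result variable and nesting just to reach one return.
boolean isValid(Order order) {
    boolean result = false;
    if (order != null) {
        if (!order.getItems().isEmpty()) {
            result = order.getTotal() > 0;
        }
    }
    return result;
}

// Early exits: the guard clauses read top to bottom, with no flag to track.
boolean isValid(Order order) {
    if (order == null) return false;
    if (order.getItems().isEmpty()) return false;
    return order.getTotal() > 0;
}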
Trying to code things perfectly on the first try.
Trying to create perfect OO model before coding.
Designing everything for flexibility and future improvements.
In one word: overengineering.
Hungarian notation (both Forms and Systems).
I used to prefix everything. strSomeString or txtFoo.
Now I use someString and textBoxFoo. It's far more readable and easier for someone new to come along and pick up. As an added bonus, it's trivial to keep it consistent: camelCase the control and append a useful/descriptive name. Forms Hungarian has the drawback of not always being consistent, and Systems Hungarian doesn't really gain you much. Chunking all your variables together isn't really that useful, especially with modern IDEs.
The "perfect" architecture
I came up with THE architecture a couple of years ago. Pushed myself technically as far as I could so there were 100% loosely coupled layers, extensive use of delegates, and lightweight objects. It was technical heaven.
And it was crap. The technical purity of the architecture just slowed my dev team down aiming for perfection over results and I almost achieved complete failure.
We now have much simpler less technically perfect architecture and our delivery rate has skyrocketed.
The use of caffeine. It once kept me awake and in a glorious programming mood, where the code flew from my fingers with feverish fluidity. Now it does nothing, and if I don't have it I get a headache.
Commenting out code. I used to think that code was precious and that you can't just delete those beautiful gems that you crafted. I now delete any commented-out code I come across unless there's a TODO or NOTE attached because it's too perilous to leave it in. To wit, I've come across old classes with huge commented-out portions and it really confused me why they were there: were they recently commented out? is this a dev environment change? why does it do this unrelated block?
Seriously consider not commenting out code and just deleting it instead. If you need it, it's still in source control. YAGNI though.
The overuse / abuse of #region directives. It's just a little thing, but in C#, I previously would use #region directives all over the place, to organize my classes. For example, I'd group all class properties together in a region.
Now I look back at old code and mostly just get annoyed by them. I don't think it really makes things clearer most of the time, and sometimes they just plain slow you down.
So I have now changed my mind and feel that well laid out classes are mostly cleaner without region directives.
Waterfall development in general, and in specific, the practice of writing complete and comprehensive functional and design specifications that are somehow expected to be canonical and then expecting an implementation of those to be correct and acceptable. I've seen it replaced with Scrum, and good riddance to it, I say. The simple fact is that the changing nature of customer needs and desires makes any fixed specification effectively useless; the only way to really properly approach the problem is with an iterative approach. Not that Scrum is a silver bullet, of course; I've seen it misused and abused many, many times. But it beats waterfall.
Never crashing.
It seems like such a good idea, doesn't it? Users don't like programs that crash, so let's write programs that don't crash, and users should like the program, right? That's how I started out.
Nowadays, I'm more inclined to think that if it doesn't work, it shouldn't pretend it's working. Fail as soon as you can, with a good error message. If you don't, your program is going to crash even harder just a few instructions later, but with some nondescript null-pointer error that'll take you an hour to debug.
My favorite "don't crash" pattern is this:
public User readUserFromDb(int id){
    User u = null;
    try {
        ResultSet rs = connection.execute("SELECT * FROM user WHERE id = " + id);
        if (rs.moveNext()){
            u = new User();
            u.setFirstName(rs.get("fname"));
            u.setSurname(rs.get("sname"));
            // etc
        }
    } catch (Exception e) {
        log.info(e); // the exception is swallowed here; the caller never sees it
    }
    if (u == null){
        // "never crash": hand back a placeholder User instead of failing
        u = new User();
        u.setFirstName("error communicating with database");
        u.setSurname("error communicating with database");
        // etc
    }
    u.setId(id);
    return u;
}
Now, instead of asking your users to copy/paste the error message and sending it to you, you'll have to dive into the logs trying to find the log entry. (And since they entered an invalid user ID, there'll be no log entry.)
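For contrast, here is a hedged sketch of the fail-fast version argued for above, using standard JDBC rather than the pseudo-API in the snippet (and assuming the same connection field, a java.sql.Connection, and the same User class):

public User readUserFromDb(int id) throws SQLException {
    // no catch block: a real SQLException propagates with its real message
    try (PreparedStatement stmt =
            connection.prepareStatement("SELECT * FROM user WHERE id = ?")) {
        stmt.setInt(1, id);
        try (ResultSet rs = stmt.executeQuery()) {
            if (!rs.next()) {
                // fail fast with a clear message instead of returning a fake User
                throw new IllegalArgumentException("No user with id " + id);
            }
            User u = new User();
            u.setId(id);
            u.setFirstName(rs.getString("fname"));
            u.setSurname(rs.getString("sname"));
            // etc
            return u;
        }
    }
}

Now a missing user or a dead connection surfaces immediately at the call site, instead of as a placeholder User that blows up somewhere downstream.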
I thought it made sense to apply design patterns whenever I recognised them.
Little did I know that I was actually copying styles from foreign programming languages, while the language I was working with allowed for far more elegant or easier solutions.
Using multiple (very) different languages opened my eyes and made me realise that I don't have to mis-apply other people's solutions to problems that aren't mine. Now I shudder when I see the factory pattern applied in a language like Ruby.
Obsessive testing. I used to be a rabid proponent of test-first development. For some projects it makes a lot of sense, but I've come to realize that it is not only unfeasible, but rather detrimental to many projects to slavishly adhere to a doctrine of writing unit tests for every single piece of functionality.
Really, slavishly adhering to anything can be detrimental.
This is a small thing, but: Caring about where the braces go (on the same line or next line?), suggested maximum line lengths of code, naming conventions for variables, and other elements of style. I've found that everyone seems to care more about this than I do, so I just go with the flow of whoever I'm working with nowadays.
Edit: The exception to this being, of course, when I'm the one who cares the most (or is the one in a position to set the style for a group). In that case, I do what I want!
(Note that this is not the same as having no consistent style. I think a consistent style in a codebase is very important for readability.)
Perhaps the most important "programming practice" I have since changed my mind about, is the idea that my code is better than everyone else's. This is common for programmers (especially newbies).
Utility libraries. I used to carry around an assembly with a variety of helper methods and classes with the theory that I could use them somewhere else someday.
In reality, I just created a huge namespace with a lot of poorly organized bits of functionality.
Now, I just leave them in the project I created them in. In all probability I'm not going to need it, and if I do, I can always refactor them into something reusable later. Sometimes I will flag them with a //TODO for possible extraction into a common assembly.
Designing more than I coded.
After a while, it turns into analysis paralysis.
The use of a DataSet to perform business logic. This binds the code too tightly to the database, also the DataSet is usually created from SQL which makes things even more fragile. If the SQL or the Database changes it tends to trickle to everything the DataSet touches.
Performing any business logic inside an object constructor. Combined with inheritance and the ability to create overloaded constructors, this tends to make maintenance difficult.
Abbreviating variable/method/table/... Names
I used to do this all of the time, even when working in languages with no enforced limits on the lengths of names (well, they were probably 255 or something). One of the side effects was a lot of comments littered throughout the code explaining the (non-standard) abbreviations. And of course, if the names were changed for any reason...
Now I much prefer to call things what they really are, with good descriptive names, using only standard abbreviations. There's no need for those explanatory comments, and the code is far more readable and understandable.
Wrapping existing Data Access components, like the Enterprise Library, with a custom layer of helper methods.
It doesn't make anybody's life easier.
It's more code that can have bugs in it.
A lot of people know how to use the EntLib data access components; no one but the local team knows how to use the in-house data access solution.
I first heard about object-oriented programming while reading about Smalltalk in 1984, but I didn't have access to an o-o language until I used the cfront C++ compiler in 1992. I finally got to use Smalltalk in 1995. I had eagerly anticipated o-o technology, and bought into the idea that it would save software development.
Now, I just see o-o as one technique that has some advantages, but it's just one tool in the toolbox. I do most of my work in Python, and I often write standalone functions that are not class members, and I often collect groups of data in tuples or lists where in the past I would have created a class. I still create classes when the data structure is complicated, or I need behavior associated with the data, but I tend to resist it.
I'm actually interested in doing some work in Clojure when I get the time, which doesn't provide o-o facilities, although it can use Java objects if I understand correctly. I'm not ready to say anything like o-o is dead, but personally I'm not the fan I used to be.
In C#, using _notation for private members. I now think it's ugly.
I then changed to this.notation for private members, but found I was inconsistent in using it, so I dropped that too.
I stopped going by the university recommended method of design before implementation. Working in a chaotic and complex system has forced me to change attitude.
Of course I still do code research, especially when I'm about to touch code I've never touched before, but normally I try to focus on implementations that are as small as possible to get something going first. This is the primary goal. Then I gradually refine the logic and let the design just appear by itself. Programming is an iterative process and works very well with an agile approach and with lots of refactoring.
The code will not look at all what you first thought it would look like. Happens every time :)
I used to be big into design-by-contract. This meant putting a lot of error checking at the beginning of all my functions. Contracts are still important, from the perspective of separation of concerns, but rather than try to enforce what my code shouldn't do, I try to use unit tests to verify what it does do.
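As a small, hypothetical illustration of that shift (PriceCalculator is an invented class; JUnit 4 style): instead of a wall of precondition checks at the top of the method, a test states what the method does do for the inputs that matter.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceCalculatorTest {
    @Test
    public void appliesTenPercentDiscountToRegularOrders() {
        PriceCalculator calc = new PriceCalculator();      // hypothetical class under test
        assertEquals(90.0, calc.discount(100.0), 0.001);   // verify behaviour, not forbidden inputs
    }
}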
I would use statics in a lot of methods/classes as it was more concise. When I started writing tests, that practice changed very quickly.
Checked Exceptions
An amazing idea on paper - defines the contract clearly, no room for mistake or forgetting to check for some exception condition. I was sold when I first heard about it.
Of course, it turned out to be such a mess in practice, to the point that we have libraries today like Spring JDBC, which has hiding legacy checked exceptions as one of its main features.
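A minimal sketch of the general technique being referred to (this is not Spring's actual implementation): translate the checked exception into an unchecked one at the boundary, so callers are no longer forced to catch or declare it.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

final class ConnectionFactory {
    static Connection open(String url) {
        try {
            return DriverManager.getConnection(url);   // declares the checked SQLException
        } catch (SQLException e) {
            // the checked contract stops here; callers see an unchecked exception with the cause attached
            throw new IllegalStateException("Could not open connection to " + url, e);
        }
    }
}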
That anything worthwhile was only coded in one particular language. In my case I believed that C was the best language ever and I never had any reason to code anything in any other language... ever.
I have since come to appreciate many different languages and the benefits/functionality they offer. If I want to code something small - quickly - I would use Python. If I want to work on a large project I would code in C++ or C#. If I want to develop a brain tumour I would code in Perl.
When I needed to do some refactoring, I thought it was faster and cleaner to start straightaway and implement the new design, fixing up the connections until they work. Then I realized it's better to do a series of small refactorings to slowly but reliably progress towards the new design.
Perhaps the biggest thing that has changed in my coding practices, as well as in others, is the acceptance of outside classes and libraries downloaded from the internet as the basis for behaviors and functionality in applications. In school at the time I attended college we were encouraged to figure out how to make things better via our own code and rely upon the language to solve our problems. With the advances in all aspects of user interface and service/data consumption this is no longer a realistic notion.
There are certain things which will never change in a language, and having a library that wraps this code in a simpler transaction and in fewer lines of code that I have to write is a blessing. Connecting to a database will always be the same. Selecting an element within the DOM will not change. Sending an email via a server-side script will never change. Having to write this time and again wastes time that I could be using to improve my core logic in the application.
Initializing all class members.
I used to explicitly initialize every class member with something, usually NULL. I have come to realize that this:
normally means that every variable is initialized twice before ever being read
is silly because most languages automatically initialize variables to NULL anyway
actually enforces a slight performance hit in most languages
can bloat code on larger projects
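A tiny Java sketch of the redundancy being described (Customer is a made-up class):

class Customer {
    private String name = null;    // fields already default to null, 0 or false in Java
    private int orderCount = 0;    // so both of these initializers are redundant

    Customer() {
        this.name = null;          // and this assigns name yet again, to the same value
    }
}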
Like you, I also have embraced IoC patterns in reducing coupling between various components of my apps. It makes maintenance and parts-swapping much simpler, as long as I can keep each component as independent as possible. I'm also utilizing more object-relational frameworks such as NHibernate to simplify database management chores.
In a nutshell, I'm using "mini" frameworks to aid in building software more quickly and efficiently. These mini-frameworks save lots of time, and if done right can make an application super simple to maintain down the road. Plug 'n Play for the win!

At what point should architecture become layered?

Obviously, "Hello World" doesn't require a separated, modular front-end and back-end. But any sort of Enterprise-grade project does.
Assuming some sort of spectrum between these points, at which stage should an application be (conceptually, or at a design level) multi-layered? When a database, or some external resource, is introduced? When you find that you're anticipating spaghetti code in your methods/functions?
when a database, or some external resource is introduced.
but also:
always (except for the most trivial of apps) separate AT LEAST presentation tier and application tier
see:
http://en.wikipedia.org/wiki/Multitier_architecture
Layers are a means to keep a design loosely coupled and highly cohesive.
When you start to have a few classes (either implemented or just sketched with UML), they can be grouped logically into layers, or more generally packages or modules. This is called the art of separating the concerns.
The sooner the better: if you do not start layering early enough, you risk never doing it, as the effort can become too great.
Here are some criteria for when to:
Any time you anticipate the need to replace one part of it with a different part.
Any time you find yourself needing to divide work amongst parallel teams.
There is no real answer to this question. It depends largely on your application's needs, and numerous other factors. I'd suggest reading some books on design patterns and enterprise application architecture. These two are invaluable:
Design Patterns: Elements of Reusable Object-Oriented Software
Patterns of Enterprise Application Architecture
Some other books that I highly recommend are:
The Pragmatic Programmer: From Journeyman to Master
Refactoring: Improving the Design of Existing Code
No matter your skill level, reading these will really open your eyes to a world of possibilities.
I'd say in most cases dealing with multiple distinct levels of abstraction in the concepts your code deals with would be a strong signal to mirror this with levels of abstraction in your implementation.
This does not override the scenarios that others have highlighted already though.
I think once you ask yourself "hmm should I layer this" the answer is yes.
I've worked on too many projects that probably started off as proofs of concept/prototypes and ended up being full projects used in production, which are horribly written and just reek of "get it done quick, we'll fix it later." Trust me, you won't fix it later.
The Pragmatic Programmer lists this as the Broken Window Theory.
Try and always do it right from the start. Separate your concerns. Build it with modularity in mind.
And of course try and think of the poor maintenance programmer who might take over when you're done!
Thinking of it in terms of layers is a little limiting. It's what you see in whitepapers about a product, but it's not how products really work. They have "boxes" that depend on each other in various ways, and you can make it look like they fit into layers but you can do this in several different configurations, depending on what information you're leaving out of the diagram.
And in a really well-designed application, the boxes get very small. They are down to the level of individual interfaces and classes.
This is important because whenever you change a line of code, you need to have some understanding of the impact your change will have, which means you have to understand exactly what the code currently does, what its responsibilities are, which means it has to be a small chunk that has a single responsibility, implementing an interface that doesn't cause clients to be dependent on things they don't need (the S and the I of SOLID).
You may find that your application can look like it has two or three simple layers, if you narrow your eyes, but it may not. That isn't really a problem. Of course, a disastrously badly designed application can look like it has layers or tiers if you squint as hard as you can. So those "high level" diagrams of an "architecture" can hide a multitude of sins.
My generic rule of thumb is to at least separate the problem into a model and a view layer, and throw in a controller if there is a possibility of more than one way of handling the model or piping data to the view.
(Or as the first answer, at least the presentation tier and the application tier).
Loose coupling is all about minimising dependencies, so I would say 'layer' when a dependency is introduced. i.e. a database, third party application, etc.
Although 'layer' is probably the wrong term these days. Most of the time I use Dependency Injection (DI) through an Inversion of Control container such as Castle Windsor. This means that I can code on one part of my system without worrying about the rest. It has the side effect of ensuring loose coupling.
I would recommend DI as a general programming principle all of the time so that you have the choice on how to 'layer' your application later.
Give it a look.
R

Do design patterns increase or decrease the complexity of an Application?

I was just looking at this question about SQL, and followed a link about DAO to wikipedia. And it mentions as a disadvantage:
"As with many design patterns, a design pattern increases the complexity of the application." -Wikipedia
Which suddenly made me wonder where this idea came from (because it lacks a citation). Personally, I've always considered that patterns reduce the complexity of an application, but I might be delusional, so I'm wondering whether this claimed complexity is based on something or not.
Thanks.
If the person reading the code is aware of design patterns and their concept and is able to identify design patterns in practical use (not just the book examples) then they really do reduce the complexity.
However, I've found that a lot of junior developers, who haven't heard much about design patterns or weren't aware of them at all, believe their use increases the complexity of the code.
I can understand it: You suddenly have more classes or code to go through to solve what seems to be a simple problem at first. If you're not aware of the benefits of design patterns, hacked solutions always look better.
I love design patterns, but they (apart from simple ones like Singleton) definitely add complexity to an application. They add some dimension to a design that is not intuitively obvious to a novice designer (and not part of the features of the programming language).
Some people might feel patterns reduce complexity because of the benefits they bring in terms of the software's non-functional requirements such as maintainability, extendability, reusability, etc. However, I disagree and see the benefits as a return on complexity investment. Perhaps in some cases patterns reduce complexity, but a theoretical discussion like that sheds more heat than light. Almost none of the answers so far used concrete examples, save https://stackoverflow.com/a/760968/1168342.
To be specific, many patterns increase accidental complexity of a design by introducing new structures (interfaces, methods, etc.) that weren't present in the design before the pattern was applied.
Let's use Visitor as an example.
Visitor is a way of separating operations from an object structure on which they operate.
Before the solution with Visitor, the operations are hard-coded into each Element of the object structure. The challenge for the developer is that adding new operations involves modifying the code in the various elements.
After the application of the Visitor pattern, there is an additional class hierarchy of visitors, which encapsulate the operations.
The flow of control in the solution is definitely more complex, and will be harder to debug (anyone who has implemented Visitor and tried to follow the program flow of double-dispatched calls with accept/visit will know this).
Understanding and maintaining Visitor functions in terms of cohesive units is less complicated than the alternative of coding functions into each of the Elements in the fixed structure that is visited. This is the benefit of the pattern.
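To make the added structure concrete, here is a minimal Java sketch (the shape names are illustrative, not from any particular codebase); the accept/visit pair is the double dispatch just mentioned:

interface Shape {
    void accept(ShapeVisitor v);
}

interface ShapeVisitor {
    void visit(Circle c);
    void visit(Square s);
}

class Circle implements Shape {
    double radius;
    public void accept(ShapeVisitor v) { v.visit(this); }   // second dispatch happens here
}

class Square implements Shape {
    double side;
    public void accept(ShapeVisitor v) { v.visit(this); }
}

// The operation lives in the visitor, not in the elements: adding a new
// operation means adding a new visitor class and leaving Shape untouched.
class AreaVisitor implements ShapeVisitor {
    double total;
    public void visit(Circle c) { total += Math.PI * c.radius * c.radius; }
    public void visit(Square s) { total += s.side * s.side; }
}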
It's difficult to say quantitatively how much increase there is in accidental complexity or how much easier it is to add new operations. I certainly don't agree with answers that make a blanket statement saying in the long-term, complexity is reduced with applying a pattern. It's not like your design "forgets" the double-dispatch added by Visitor's approach, just because you have code which more easily allows operations to be added. The complexity is a price (or tax) you pay to get the benefit in maintainability.
Patterns still have to be applied
Regardless of one's supposed familiarity with patterns, any given pattern must be applied to a solution. That application is going to be different every time (Martin Fowler said patterns are only half-baked solutions). Developers will always have to understand what classes are playing what roles in the existing design, which is subject to the essential complexity (the application problem's complexity) that is often non-trivial.
In the best case, understanding a design pattern applied in an application that's already complex may be trivial, but it's not 0 effort:
Patterns aren't always applied the same way. There are many variants of patterns -- Proxy comes to mind. I'm not sure that everyone agrees about how any given pattern should be applied.
Introducing one pattern (e.g., Strategy to encapsulate algorithms) often leads to other patterns to manage things properly (e.g., Factory to instantiate the concrete Strategies).
Introducing a pattern often leads to more responsibilities. Object cleanup when a Factory is used is not trivial (and also not documented in GoF). How many know about the so-called Lapsed-listener problem?
What happens if there is a change in the assumptions made about the need for the pattern (e.g., there is no longer a need to have multiple encapsulated algorithms provided by the Strategy pattern)? It's going to be extra work to remove the pattern later. If you don't remove it, new developers could be duped by its presence when they come on board. Patterns are intertwined between the classes playing the roles in the pattern. Removal is not trivial.
Erich Gamma gave an anecdote at ECOOP 2006 that designers in one case decided to remove the Abstract Factory pattern from a commercial multi-platform GUI widget framework (the classic Abstract Factory example!). As I remember the anecdote, the multiple-levels of indirection (polymorphic calls) in complex GUIs was a significant performance hit in the client code. Customers complained about GUIs being sluggish, and the "optimization" was to remove the indirections. In this case, performance trumped maintainability; the pattern was only making the coders happy, not the end users.
DAO example
In terms of the DAO example you cite in the question, if you're coding an application that will never need to run with varying databases, then the DAO pattern is an unneeded level of complexity. In general, if your code doesn't need the benefit that a pattern is supposed to provide, applying that pattern will increase your application's complexity unnecessarily.
Revolving door metaphor
Using buildings as a metaphor, let's consider a revolving door as a building design pattern (Wikipedia has a good illustration of one).
You can "see" the additional complexity in such a door. The benefits of revolving doors are in energy savings. They attempt to solve the problem where there are people frequently going in and out of a building, and opening/closing a standard door allows too much air to be exchanged between the inside the outside of the building each time.
It probably wouldn't make sense to install a revolving door as the entrance of a two-bedroom house, because there is not enough traffic to justify the additional complexity. The revolving door would still work in a two-bedroom house. The benefits in terms of energy savings would be small (and might actually be worse because of size and air-tightness relative to a conventional door). The door would surely cost more and would take up more space than a traditional door.
Design patterns often lead to additional levels of abstraction around a problem, and if not handled correctly then too much abstraction can lead to complexity.
However, since design patterns provide a common vocabulary to communicate ideas they also reduce complexity and increase maintainability.
At the end of the day it's a double-edged sword, but I can't imagine a situation where I'd avoid using a design pattern...
There's an infamous disease known as "Small Boy With A Pattern Syndrome" that usually strikes someone who has recently read the GoF book for the first time and suddenly sees patterns everywhere. That can add complexity and unnecessary abstraction.
Patterns are best added to code as a discovery or refactoring to solve a particular problem, in my opinion.
In the short term, design patterns will often increase the complexity of the code. They add extra abstractions in places they might not be strictly necessary. However, in the long term they reduce complexity because future enhancements and changes fit into the patterns in a simple way. Without the patterns, these changes would be much more intrusive and complexity would likely be much higher.
Take for example the Decorator pattern. The first time you use it, it will add complexity, because now you have to define an interface for the object and create another class to add the decoration; the same thing could probably have been done with a simple property. However, when you get to 5 or 20 decorations, the complexity with a decorator is much less than with properties.
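A hedged sketch of that progression in Java (the notifier domain is invented): the first decoration costs an interface plus a wrapper class, but every further decoration is just one more small class rather than one more property or flag.

interface Notifier {
    void send(String message);
}

class EmailNotifier implements Notifier {
    public void send(String message) { System.out.println("email: " + message); }
}

// The decorator base holds the wrapped instance; concrete decorators add one behaviour each.
abstract class NotifierDecorator implements Notifier {
    protected final Notifier inner;
    NotifierDecorator(Notifier inner) { this.inner = inner; }
}

class SmsDecorator extends NotifierDecorator {
    SmsDecorator(Notifier inner) { super(inner); }
    public void send(String message) {
        inner.send(message);
        System.out.println("sms: " + message);
    }
}

class Demo {
    public static void main(String[] args) {
        // decorations stack, instead of EmailNotifier growing a boolean per feature
        Notifier notifier = new SmsDecorator(new EmailNotifier());
        notifier.send("order shipped");
    }
}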
As Grover said, the power of Design patterns is dependent on the ability of programmers to recognize them when they see them. It's like reducing a mathematical problem to a simpler problem, and them solving the simpler one. To someone who doesn't realize this, though, it seems like you've just created another problem.
I think it's always a good idea to document explicitly, using comments and/or descriptive names, when you're using a pattern to solve a problem. This might even educate another programmer who comes across it about the pattern if he wasn't aware of this.
I think it depends on the "audience" i.e. the maintaining developers of the code base. If they are design pattern illiterate then yes it can increase complexity, because most things one doesn't understand are "complex".
If the team is design pattern literate, i.e. they understand the basics and understand the premise behind why design patterns are useful (and as important when they're not) then I think they reduce complexity.
After all, computer science may be a fledgling science, but it's got decades of experience under its belt. The chances are somebody has already solved your problem once before, whether the answer is a design pattern, a data structure or an algorithm.
I rather like this humorous explanation by Dylan Beattie. I recommend the read (if nothing else to waste five minutes on a Friday morning!)
Design patterns work like algorithms for object-oriented programming. They show you how to put objects together in a meaningful relationship that performs a particular task. So I would say yes, they reduce complexity by allowing you to understand the design of the software better, the same way algorithms do in procedural programs.
If I told you that X was using a linked list with a bubble sort, it would be a lot easier to follow along with what the programmer was doing.
Design patterns increase the amount of code and divide it into multiple parts. If the design pattern and the concept behind it are known, the result doesn't seem complex; but code based on a design pattern you don't know does look complex.
I made some research about this topic in the scope of GoF patterns. I observed OO Metric value fluctuations after the design pattern refactorings, you can check it out using this link.
Design patterns definitely add complexity upfront in return for more modular, maintainable, flexible and extensible code in the long run. Perhaps the iterator pattern epitomizes this best.
Using an index to iterate over a list is clearly a very simple and intuitive thing to do, yet the design pattern encapsulates it in an iterator, which is definitely more complex than a simple index.
The trade-off is that, while it is fair to say some (implementation) simplicity is taken away, in return iteration becomes more flexible: the iteration responsibility is removed from the container and objectified into something reusable.
That's design patterns for you.
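To make that trade-off concrete, here is a small sketch (plain Java; the method names are invented): the index loop is simpler to write, but the iterator version works for any Iterable, not just random-access lists.

import java.util.Iterator;
import java.util.List;

class IterationDemo {
    // Index-based: simple and intuitive, but tied to containers that support get(i).
    static int sumByIndex(List<Integer> values) {
        int sum = 0;
        for (int i = 0; i < values.size(); i++) {
            sum += values.get(i);
        }
        return sum;
    }

    // Iterator-based: a little more machinery, but the same code serves any Iterable.
    static int sumByIterator(Iterable<Integer> values) {
        int sum = 0;
        Iterator<Integer> it = values.iterator();
        while (it.hasNext()) {
            sum += it.next();
        }
        return sum;
    }
}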
Design patterns don't increase complexity; they can make things a lot easier to read and maintain. It can be harder for newcomers to integrate into your developer team, but that effort is beneficial as a whole.
The problem is not design patterns as a whole, but their abuse.
They should neither increase nor decrease the complexity.
You should always use an appropriate design for your code. This may use common design patterns or not.
The main benefits to design patterns are
By learning them you have added more design tools to your toolbox
By learning their names, if you use them and put a comment stating the pattern you're using, it helps readers understand your design intent more concisely
When I teach patterns at Hopkins, the two big things I stress are:
Patterns are all about Communication of Intent
Don't use any of the specific patterns as a Golden Hammer; lock them in your toolbox and only pull them out if it makes sense for your application.

How do you plan an application's architecture before writing any code? [closed]

One thing I struggle with is planning an application's architecture before writing any code.
I don't mean gathering requirements to narrow in on what the application needs to do, but rather effectively thinking about a good way to lay out the overall class, data and flow structures, and iterating on those thoughts so that I have a credible plan of action in mind before even opening the IDE. At the moment it is all too easy to just open the IDE, create a blank project, start writing bits and bobs and let the design 'grow out' from there.
I gather UML is one way to do this but I have no experience with it so it seems kind of nebulous.
How do you plan an application's architecture before writing any code? If UML is the way to go, can you recommend a concise and practical introduction for a developer of smallish applications?
I appreciate your input.
I consider the following:
what the system is supposed to do, that is, what is the problem that the system is trying to solve
who is the customer and what are their wishes
what the system has to integrate with
are there any legacy aspects that need to be considered
what are the user interactions
etc...
Then I start looking at the system as a black box and:
what are the interactions that need to happen with that black box
what are the behaviours that need to happen inside the black box, i.e. what needs to happen to those interactions for the black box to exhibit the desired behaviour at a higher level, e.g. receive and process incoming messages from a reservation system, update a database etc.
Then this will start to give you a view of the system that consists of various internal black boxes, each of which can be broken down further in the same manner.
UML is very good to represent such behaviour. You can describe most systems just using two of the many components of UML, namely:
class diagrams, and
sequence diagrams.
You may need activity diagrams as well if there is any parallelism in the behaviour that needs to be described.
A good resource for learning UML is Martin Fowler's excellent book "UML Distilled" (Amazon link - sanitised for the script kiddie link nazis out there (-: ). This book gives you a quick look at the essential parts of each of the components of UML.
Oh, and what I've described is pretty much Ivar Jacobson's approach. Jacobson is one of the Three Amigos of OO. In fact, UML was initially developed by the other two people who make up the Three Amigos, Grady Booch and Jim Rumbaugh.
I really find that a first-off of writing on paper or whiteboard is really crucial. Then move to UML if you want, but nothing beats the flexibility of just drawing it by hand at first.
You should definitely take a look at Steve McConnell's Code Complete, and especially at his giveaway chapter on "Design in Construction".
You can download it from his website:
http://cc2e.com/File.ashx?cid=336
If you're developing for .NET, Microsoft have just published (as a free e-book!) the Application Architecture Guide 2.0b1. It provides loads of really good information about planning your architecture before writing any code.
If you were desperate I expect you could use large chunks of it for non-.NET-based architectures.
I'll preface this by saying that I do mostly web development where much of the architecture is already decided in advance (WebForms, now MVC) and most of my projects are reasonably small, one-person efforts that take less than a year. I also know going in that I'll have an ORM and DAL to handle my business object and data interaction, respectively. Recently, I've switched to using LINQ for this, so much of the "design" becomes database design and mapping via the DBML designer.
Typically, I work in a TDD (test driven development) manner. I don't spend a lot of time up front working on architectural or design details. I do gather the overall interaction of the user with the application via stories. I use the stories to work out the interaction design and discover the major components of the application. I do a lot of whiteboarding during this process with the customer -- sometimes capturing details with a digital camera if they seem important enough to keep in diagram form. Mainly my stories get captured in story form in a wiki. Eventually, the stories get organized into releases and iterations.
By this time I usually have a pretty good idea of the architecture. If it's complicated or there are unusual bits -- things that differ from my normal practices -- or I'm working with someone else (not typical), I'll diagram things (again on a whiteboard). The same is true of complicated interactions -- I may design the page layout and flow on a whiteboard, keeping it (or capturing via camera) until I'm done with that section. Once I have a general idea of where I'm going and what needs to be done first, I'll start writing tests for the first stories. Usually, this goes like: "Okay, to do that I'll need these classes. I'll start with this one and it needs to do this." Then I start merrily TDDing along and the architecture/design grows from the needs of the application.
Periodically, I'll find myself wanting to write some bits of code over again or think "this really smells" and I'll refactor my design to remove duplication or replace the smelly bits with something more elegant. Mostly, I'm concerned with getting the functionality down while following good design principles. I find that using known patterns and paying attention to good principles as you go along works out pretty well.
http://dn.codegear.com/article/31863
I use UML, and find that guide pretty useful and easy to read. Let me know if you need something different.
UML is a notation. It is a way of recording your design, but not (in my opinion) of doing a design. If you need to write things down, I would recommend UML, though, not because it's the "best" but because it is a standard which others probably already know how to read, and it beats inventing your own "standard".
I think the best introduction to UML is still UML Distilled, by Martin Fowler, because it's concise, gives practical guidance on where to use it, and makes it clear you don't have to buy into the whole UML/RUP story for it to be useful.
Doing design is hard. It can't really be captured in one Stack Overflow answer. Unfortunately, my design skills, such as they are, have evolved over the years, so I don't have one source I can refer you to.
However, one model I have found useful is robustness analysis (google for it, but there's an intro here). If you have your use-cases for what the system should do, a domain model of what things are involved, then I've found robustness analysis a useful tool in connecting the two and working out what the key components of the system need to be.
But the best advice is read widely, think hard, and practice. It's not a purely teachable skill, you've got to actually do it.
I'm not smart enough to plan ahead more than a little. When I do plan ahead, my plans always come out wrong, but now I've spent n days on bad plans. My limit seems to be about 15 minutes on the whiteboard.
Basically, I do as little work as I can to find out whether I'm headed in the right direction.
I look at my design for critical questions: when A does B to C, will it be fast enough for D? If not, we need a different design. Each of these questions can be answered with a spike. If the spikes look good, then we have the design and it's time to expand on it.
I code in the direction of getting some real customer value as soon as possible, so a customer can tell me where I should be going.
Because I always get things wrong, I rely on refactoring to help me get them right. Refactoring is risky, so I have to write unit tests as I go. Writing unit tests after the fact is hard because of coupling, so I write my tests first. Staying disciplined about this stuff is hard, and a different brain sees things differently, so I like to have a buddy coding with me. My coding buddy has a nose, so I shower regularly.
Let's call it "Extreme Programming".
"White boards, sketches and Post-it notes are excellent design
tools. Complicated modeling tools have a tendency to be more
distracting than illuminating." From Practices of an Agile Developer
by Venkat Subramaniam and Andy Hunt.
I'm not convinced anything can be planned in advance before implementation. I've got 10 years experience, but that's only been at 4 companies (including 2 sites at one company, that were almost polar opposites), and almost all of my experience has been in terms of watching colossal cluster********s occur. I'm starting to think that stuff like refactoring is really the best way to do things, but at the same time I realize that my experience is limited, and I might just be reacting to what I've seen. What I'd really like to know is how to gain the best experience so I'm able to arrive at proper conclusions, but it seems like there's no shortcut and it just involves a lot of time seeing people doing things wrong :(. I'd really like to give a go at working at a company where people do things right (as evidenced by successful product deployments), to know whether I'm a just a contrarian bastard, or if I'm really as smart as I think I am.
I beg to differ: UML can be used for application architecture, but is more often used for technical architecture (frameworks, class or sequence diagrams, ...), because this is where those diagrams can most easily been kept in sync with the development.
Application Architecture occurs when you take some functional specifications (which describe the nature and flows of operations without making any assumptions about a future implementation), and you transform them into technical specifications.
Those specifications represent the applications you need for implementing some business and functional needs.
So if you need to process several large financial portfolios (functional specification), you may determine that you need to divide that large specification into:
a dispatcher to assign those heavy calculations to different servers
a launcher to make sure all calculation servers are up and running before starting to process those portfolios.
a GUI to be able to show what is going on.
a "common" component to develop the specific portfolio algorithms, independently of the rest of the application architecture, in order to facilitate unit testing, but also some functional and regression testing.
So basically, to think about application architecture is to decide what "groups of files" you need to develop in a coherent way (you cannot develop a launcher, a GUI and a dispatcher in the same group of files: they would not be able to evolve at the same pace).
When an application architecture is well defined, each of its components is usually a good candidate for a configuration component, that is, a group of files which can be versioned as a whole in a VCS (Version Control System), meaning all its files will be labeled together every time you need to record a snapshot of that application (again, it would be hard to label your whole system, since each of its applications cannot be in a stable state at the same time).
I have been doing architecture for a while. I use BPML first to refine the business process, and then use UML to capture various details! The third step is generally an ERD! By the time you are done with the BPML and UML, your ERD will be fairly stable! No plan is perfect and no abstraction is going to be 100%. Plan on refactoring; the goal is to minimize refactoring as much as possible!
I try to break my thinking down into two areas: a representation of the things I'm trying to manipulate, and what I intend to do with them.
When I'm trying to model the stuff I'm trying to manipulate, I come up with a series of discrete item definitions- an ecommerce site will have a SKU, a product, a customer, and so forth. I'll also have some non-material things that I'm working with- an order, or a category. Once I have all of the "nouns" in the system, I'll make a domain model that shows how these objects are related to each other- an order has a customer and multiple SKUs, many skus are grouped into a product, and so on.
These domain models can be represented as UML domain models, class diagrams, and SQL ERD's.
Once I have the nouns of the system figured out, I move on to the verbs- for instance, the operations that each of these items go through to commit an order. These usually map pretty well to use cases from my functional requirements- the easiest way to express these that I've found is UML sequence, activity, or collaboration diagrams or swimlane diagrams.
It's important to think of this as an iterative process; I'll do a little corner of the domain, and then work on the actions, and then go back. Ideally I'll have time to write code to try stuff out as I'm going along- you never want the design to get too far ahead of the application. This process is usually terrible if you think that you are building the complete and final architecture for everything; really, all you're trying to do is establish the basic foundations that the team will be sharing in common as they move through development. You're mostly creating a shared vocabulary for team members to use as they describe the system, not laying down the law for how it's gotta be done.
I find myself having trouble fully thinking a system out before coding it. It's just too easy to only bring a cursory glance to some components which you only later realize are much more complicated than you thought they were.
One solution is to just try really hard. Write UML everywhere. Go through every class. Think how it will interact with your other classes. This is difficult to do.
What I like doing is to make a general overview at first. I don't like UML, but I do like drawing diagrams which get the point across. Then I begin to implement it. Even while I'm just writing out the class structure with empty methods, I often see things that I missed earlier, so then I update my design. As I'm coding, I'll realize I need to do something differently, so I'll update my design. It's an iterative process. The concept of "design everything first, and then implement it all" is known as the waterfall model, and I think others have shown it's a bad way of doing software.
Try Archimate.

Do you use design patterns?

What's the penetration of design patterns in the real world? Do you use them in your day to day job - discussing how and where to apply them with your coworkers - or do they remain more of an academic concept?
Do they actually provide actual value to your job? Or are they just something that people talk about to sound smart?
Note: For the purpose of this question ignore 'simple' design patterns like Singleton. I'm talking about designing your code so you can take advantage of Model View Controller, etc.
Any large program that is well written will use design patterns, even if they aren't named or recognized as such. That's what design patterns are, designs that repeatedly and naturally occur. If you're interfacing with an ugly API, you'll likely find yourself implementing a Facade to clean it up. If you've got messaging between components that you need to decouple, you may find yourself using Observer. If you've got several interchangeable algorithms, you might end up using Strategy.
It's worth knowing the design patterns because you're more likely to recognize them and then converge on a clean solution more quickly. However, even if you don't know them at all, you'll end up creating them eventually (if you are a decent programmer).
And of course, if you are using a modern language, you'll probably be forced to use them for some things, because they're baked into the standard libraries.
In my opinion, the question "Do you use design patterns?", on its own, is a little flawed, because the answer is universally YES.
Let me explain: we, programmers and designers, all use design patterns... we just don't always realise it. I know this sounds cliché, but you don't go to patterns, patterns come to you. You design stuff, it might look like an existing pattern, and you name it that way so everyone understands what you are talking about and the rationale behind your design decision is stronger, knowing it has been discussed ad nauseam before.
I personally use patterns as a communication tool. That's it. They are not design solutions, they are not best practices, they are not tools in a toolbox.
Don't get me wrong, if you are a beginner, books on patterns will show you how a solution is best solved "using" their patterns rather than another flawed design. You will probably learn from the exercise. However, you have to realise that this doesn't mean that every situation needs a corresponding pattern to solve it. Every situation has a quirk here and there that will require you to think about alternatives and take a difficult decision with no perfect answer. That's design.
Anti-patterns, however, are in a totally different class. You actually want to actively avoid anti-patterns. That's why the name anti-pattern is so controversial.
To get back to your original question:
"Do I use design patterns?", Yes!
"Do I actively lean toward design patterns?", No.
Yes. Design patterns can be wonderful when used appropriately. As you mentioned, I am now using Model-View-Controller (MVC) for all of my web projects. It is a very common pattern in the web space which makes server-side code much cleaner and well-organized.
Beyond that, here are some other patterns that may be useful:
MVVM (Model-View-ViewModel): a similar pattern to MVC; used for WPF and Silverlight applications.
Composition: Great for when you need to use a hierarchy of objects.
Singleton: More elegant than using globals for storing items that truly need a single instance. As you mentioned, a simple pattern but it does have its uses.
It is worth noting a design pattern can also highlight a lack of language features and/or deficiencies in a language. For example, iterators are now built in as part of newer languages.
In general design patterns are quite useful but you should not use them everywhere; just where they are a good fit for your needs.
I try to, yes. They do indeed help maintainability and readability of your code. However, there are people who do abuse them, usually (from what I've seen) by forcing a system into a pattern that doesn't exist.
I try to use patterns if they are applicable. I think it's kind of sad seeing developers implement design patterns in code just for the sake of it. For the right task though, design patterns can be very useful and powerful.
There are many design patterns beyond the simple ones that are used in the "real world". A good example: Stack Overflow uses the Model View Controller pattern. I have used class factories multiple times in projects for my employer, and I have seen many already-written projects using them as well.
I am not saying every design pattern is being used but many are.
Yes we do, it usually happens when we start designing something and then someone notices that it resembles an existing pattern. We then take a look at it and see how it would help us achieve our goal.
We also use patterns that are not documented but that emerge from designing a lot.
Mind you, we don't use them a lot.
Yes, Factory, Chain of Responsibility, Command, Proxy, Visitor, and Observer, among others, are in use in a codebase I work with daily. As far as MVC goes, this site seems to use it quite well, and the devs couldn't say enough good things in the latest podcast.
Yes, I use a lot of well known design patterns, but I also end up building some software that I later find out uses a 'named' design pattern. Most elegant, reusable designs could be called a 'pattern'. It's a lot like dance moves. We all know the waltz, and the 2-step, but not everyone has a name for the 'bump and scoot' although most of us do it.
MVC is very well known, so yes, we use design patterns quite a lot. Now, if you're asking about the Gang of Four patterns, there are several that I use because other maintainers will know the design and what we are working towards in the code. There are several, though, that remain fairly obscure for what we do, so if I use one I don't get the full benefits of using a pattern.
Are they important? Yes, because they give you a way of talking about software design in a quick, efficient and generally accepted way. Can you do better with custom solutions? Well, yes (sorta).
The original GoF patterns were pulled from production code, so they catalogued what was already being used in the wild. They aren't purely or even mostly an academic thing.
I find the MVC pattern really useful for isolating your model logic, which can then be reused or worked on without too much trouble. It also helps decouple your classes and makes unit testing easier. I wrote about it recently (yes, shameless plug here...)
Also, I've recently used a factory pattern from a base class to generate and return the proper DataContext class that I needed on the fly, using LINQ.
Bridges are used when trying to glue together two different technologies (like Cocoa and Ruby on the Mac, for example).
I find, however, that whenever I implement a pattern, it's because I knew about it before hand. Some extra thought generally goes into it as I find I must modify the original pattern slightly to accommodate my needs.
You just need to be careful not to become an architecture astronaut!
Yes, design patterns are largely used in the real world - and daily by many of the people I work with.
In my opinion the biggest value provided by design patterns is that they provide a universal, high level language for you to convey software design to other programmers.
For instance, instead of describing your new class as a "utility that creates one of several other classes based on some combination of input criteria", you can simply say it's an "abstract factory" and everyone instantly understands what you're talking about.
Yes, design patterns, or more abstractly, patterns, are part of my life; wherever I look, I begin to see them. Therefore, I am surrounded by them. But, as you know, a little knowledge is a dangerous thing. Therefore, I strongly recommend you read the GoF book.
One of the main problems with design patterns is that most developers just do not get the idea, or do not believe in them. And most of the time they argue about the variables, loops, or switches. But I strongly believe that if you do not speak the pattern language, your software will not go far and you will find yourself in a maintenance nightmare.
As you know, an anti-pattern is also a dangerous thing, and it happens when you have little expertise in design patterns. Refactoring anti-patterns is much harder. As a recommended book about this problem, please read "AntiPatterns: Refactoring Software, Architectures, and Projects in Crisis".
Yes.
We are even using them in my current job: Mainframe coding with COBOL and PL/I.
So far I have seen Adapter, Visitor, Facade, Module, Observer and something very close to Composite and Iterator. Due to the nature of the languages, it's mostly structural patterns that are used. Also, I'm not always sure that the people who use them do so consciously :D
I absolutely use design patterns. At this point I take MVC for granted as a design pattern. My primary reason for using them is that I am humble enough to know that I am likely not the first person to encounter a particular problem. I rarely start a piece of code knowing which pattern I am going to use; I constantly watch the code to see if it naturally develops into an existing pattern.
I am also very fond of Martin Fowler's Patterns of Enterprise Application Architecture. When a problem or task presents itself, I flip to the related section (it's mostly a reference book) and read a few overviews of the patterns. Once I have a better idea of the general problem and the existing solutions, I begin to see the long-term path my code will likely take via the experience of others. I end up making much better decisions.
Design patterns definitely play a big role in all of my "for the future" ideas.