Is design now a subset of refactoring? - language-agnostic

Looking at the cool new principles of software development:
Agile
You Ain't Gonna Need It
Less As A Competitive Advantage
Behaviour-Driven Development
The Evils Of Premature Optimization
The New Way seems to be to dive in and write what you need to achieve the first iteration of scope objectives, and refactor as necessary to have elegant solutions. Your codebase grows incrementally, and never has a big planning/hierarchical design stage. That, to me, suggests that software design (worthy though it is) has been subsumed into refactoring, because that's where the elegant code comes from, not the incremental additions to functionality.
Am I wrong?

Well, the trouble with refactoring is that you need to know good design before you jump in head first. BDD/TDD are supposed to make the design emerge, but without other factors, such as domain knowledge and technical competence, you will end up in trouble.

I'd say that doing it the way you describe is a recipe for disaster; I would still recommend doing the overall design up front. Of course, during the project you will need to change the design; it is never set in stone, and flexibility is a must! (That's where the refactoring has to come in.) I would also recommend doing the more detailed design for a module just before you start coding it.
But a solid general design is worth its weight in gold: It gives all developers a common base from which to start, a common idea or perhaps vision, of the goal.
Without that, everybody will do as he/she thinks best, with the result that everybody does things in a different way. And suddenly you have to refactor a lot, just to align everybody to what has emerged as the app's architecture. The resulting code is ... not very elegant.

Wrong? Partially.
"Your codebase grows incrementally, and never has a big planning/hierarchical design stage."
Correct.
"That, to me, suggests that software design (worthy though it is) has been subsumed into refactoring..."
Not quite correct.
There's a huge gulf between Big Design Up Front (BDUF) and a more Agile design approach.
BDUF dictates that all design is completed before any coding. This is still popular (I just read an RFP yesterday which absolutely required all design be reviewed by the customer before any coding could begin. Sigh.)
Agile suggests that perhaps all design isn't a helpful goal. You need to do enough design that TDD will work. You can't, for example, start TDD until you have a working infrastructure that allows someone to write tests and incrementally evolve a solution knowing that there won't be a weird production deployment problem to solve.
Design is still king. Agile Design is better than monolithic design.
A consequence of Agile Design is YAGNI, DRY and Less-is-More. These don't replace design, they're a consequence of how you prioritize and do design.
BDD and TDD are ways to structure your time so you have focus on what people need, what they do and what really matters. TDD, in particular, focuses on testable behaviors of the software. Not zero-value nuance, but actual behavior.
Premature optimization is interesting, but unrelated. Even Agile teams can run down a low-value rat-hole pursuing a nuance or optimization that doesn't add any value. Premature optimization is a habit of overthinking (== "hand wringing") a technology choice without facts about actual performance.
Agile is supposed to help you focus on the big picture: What actual people will actually do with the actual software and avoid technology rat-holes.
It doesn't replace engineering. It refocuses it.

Reminds me of the following quote:
The goal is clean code that works. [...] First we'll solve the "that works" part of the problem. Then we'll solve the "clean code" part. This is the opposite of architecture-driven development, where you solve "clean code" first, then scramble around trying to integrate into the design the things you learn as you solve the "that works" problem. (Kent Beck, "TDD by Example")

There's nothing in the agile manifesto which says that you're not allowed to think before you act. Of course you can be agile and still design up front. Architecture is best done by design, so before you start coding/refactoring, you should have some ideas as to how your application should be constructed.
The point is you don't have to complete each and every step before moving on. Do as much design as you need to get started.
When you have code, you can refactor as needed, but changing the fundamental architecture through refactoring becomes hard if you start from a simple dummy application every time.

The nature of design is changing. I'd say the new way is to think before coding, but only about what will be implemented in the next iteration. See "Is Design Dead?".
Design for today, code for tomorrow.

Depends what you're designing. If it's a complex algorithm that's going to be the next video compression standard, you can iterate and refactor till the cows come home and it isn't going to get any faster. The performance comes from design.
Similarly, if you are writing a very large application, that will grow through regular releases, you need to put in place an architecture that will support growth, and this will be by design. You can go down a long road by jumping head first into code just to discover a dead end.
Am I wrong?
IMHO, pretty much.
Edit: The reason I make many design decisions early in the process rather than mid flow is that this can often be the cheapest time to do so. For example, if we start writing an application using platform-dependent technology early on, it may be a very expensive decision to reverse. If we take time to consider the platforms we want to support before starting to code, it is much cheaper. We can't and won't get everything right first time, but this does not relieve us of the duty of exploring all important design choices up front. Ever tried refactoring an MS MDI program to MVC? I have, and wouldn't recommend it ;)

New? No. I was reading about agile ten years ago. And agile is just the crystallization of ideas that have been around for longer than that. It's hardly new, it just hasn't diffused everywhere yet.
As for your view of design, I think there's still a place for an overarching idea and some up front design. It's the waterfall notion that you can't make a move before requirements and design are 100% complete that has been discredited everywhere but in the large firms that still cling to it.

Let's see if we can get a definition. I'm going to suggest that there may be a book we could reference. How about the one by Martin Fowler?
"Refactoring: Improving the Design of Existing Code"
Now let's take as an example the TDD mantra:
until done do
Red (we wrote a test and it failed because there was nothing that could pass it)
Green (we designed and wrote some code and the test now succeeds)
Refactor (we need to integrate the design we did with everything else)
end
I know the Agile books tend to just say "write enough code to pass the test", but there's design implicit in that statement. By necessity. Choosing a variable name is a design decision. Not a big one, but a decision nonetheless. Naming a function or method is a slightly bigger one, and so on up the food chain.
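To make the implicit design in that loop concrete, here is a minimal sketch of a single Red/Green step in Java with JUnit 4 (the Invoice class and its methods are made up purely for illustration):

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Red: this test is written first and fails because Invoice doesn't exist yet.
    public class InvoiceTest {
        @Test
        public void totalIsSumOfLineItems() {
            Invoice invoice = new Invoice();
            invoice.addLineItem(10.0);
            invoice.addLineItem(5.0);
            assertEquals(15.0, invoice.total(), 0.001);
        }
    }

    // Green: just enough code to make the test pass. Even here there are design
    // decisions: the class name, the method names, the choice of a double for
    // money (something you might well refactor away later).
    class Invoice {
        private final java.util.List<Double> lineItems = new java.util.ArrayList<>();

        void addLineItem(double amount) {
            lineItems.add(amount);
        }

        double total() {
            return lineItems.stream().mapToDouble(Double::doubleValue).sum();
        }
    }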
There's nothing in Agile that can ensure you always make good design decisions. Nothing in waterfall or any other process either. Agile does assert that you can't figure it all out upfront and tries - with some success in my experience - to give you a set of tools to help you make better decisions throughout the whole exercise.

Related

Is TDD overkill for small projects?

I have been reading quite a bit recently about TDD and such and I'm not quite sold on it just yet. I make a lot of small hobby projects (just me) and I'm concerned that trying to do TDD is overkill for such a thing. Though I have seen small open source projects with like 3 developers that do TDD. (Though I have seen a few one-person projects that also do TDD.)
So is TDD always a good thing to do or at what threshold does it make sense to use?
Small Projects can have a habit of turning into big projects without you realizing, then you wish you'd started with TDD :)
TDD shines in small projects. It's often much easier to adhere to TDD in a small project, and it's a great time to practice and get the discipline required to follow TDD.
In my experience larger projects tend to be the ones that abandon TDD at some threshold. (I'm not suggesting this is a good thing).
I think larger projects tend to abandon it for a couple of reasons:
Developer inexperience --- either in general or with TDD
Time Constraints --- Larger projects are inherently more complex
Added complexity leads to deadline overruns and unit tests tend to get ditched first
This can be exacerbated by an inexperienced team
From my personal experience I can say the following:
Every single time I started one of my personal little hobby projects, I vowed to develop it using TDD.
Every single time I didn't.
Every single time I regretted it.
Everything has a cost-benefit curve.
Ignoring many of the oft-disputed benefits of TDD, TDD is worth it if your implementation will change sufficiently often that the benefit of having an automated test suite outweighs whatever extra cost might be involved in initial development.
I always find a question like this funny. What is the alternative? Writing code that you only compile and never run to verify its correctness? Do you wait until you deploy to production to find out if your class works at all?
If we never had a practice called TDD, or before JUnit was invented back in 1997, was there no code testing? Of course there was. So why is it such a big deal now that you have testing frameworks to help you with this?
Even a small project on a tight deadline won't want to wait until production to find out if it works. Write the tests.
It's not necessary for any project that's "small", but I define "small" as less than one class.
Overkill? Not at all. In addition to the main benefit, which is writing code you can rely on because you've thought about ways it can break, you'll be more disciplined and potentially more productive with test driven development. Pick up any of the Pragmatic Programmer books for tips and inspiration.
I would take the opportunity of using TDD with a small project just to get your feet wet. It would be a good learning experience even if you realize it's not for you.
Something I've heard from one member of our local Agile group is that she doesn't find TDD useful for the very earliest stages of the project, where you're essentially making quick sketches and you're not really sure what shape the thing is taking yet. But as soon as you have some ideas of what the interfaces look like, you can start using tests to help you define them.
TDD is another tool, like documentation, to improve the clarity of the code. This is critical when other people need to work with your code, but many of us find it's also very helpful when looking back at our own code. Ever had a hobby project you picked back up after being away from it for a while, came across a weird bit of code, and wondered "why the heck did I write that?"
I and others use TDD on any project that is more than say a few lines of code.
Once you get the testing bug, it's hard not to use TDD for anything. Personally I've found my code has improved several times over due to TDD.
I believe it's worth it for most any project. TDD and the consequent regression testing enables you not only to determine if components work as you write them, but as you introduce changes and refactorings. Provided your tests are sufficiently complete, you can cover scenarios for infrequent/unlikely edge cases and produce/maintain more reliable code.
Going forwards through the project lifecycle, the continuous testing cycles will save you the manual repetition of tests, and negate the obvious chance of repeating these incorrectly/incompletely.
Well, it is not really the number of people that is the deciding factor for TDD (at least this is what your question seems to imply), but rather the size of the project.
Advantages of TDD are, that all the code you are developing will pretty much be unit-tested. That way you save a lot of hassle when refactoring later. Of course this is really just necessary when your project has a decent size.
In my experience 90% of the time those who are dubious about the benefits have not tried it.
Try it, with a skeptical mind. Measure what you are hoping to gain from it, before and after.
I can point to way less time spent/wasted fixing bugs found in production. I see/measure better productivity (faster time to market), improvements in code quality (across a variety of metrics), closer match to requirements (ie less rework because the requirements were not clear), etc.
I "feel" better about projects using TDD, but then I am "test-infected". Developer morale on projects using TDD is generally higher, as a subjective opinion.
If you don't get those results, don't use it. If you don't care enough about those results to measure them, then use TDD or not as makes you feel better.
TDD has a learning curve. If you are not willing to put the effort in to give it a serious attempt, don't bother.
A small project is great way to give it a serious try without risking much.
When you consider that unexpected, consequential errors can happen as a result of the code you write, TDD makes sense.
I'd say it totally depends on the given time frame. If you've got the time to spend almost twice what you'd usually require, then go for it.
But in my opinion speed is these days one of the most important factors (for competitive companies).
A project developed with good OO code is inherently well-suited for testing, and arguably can acquire a test-driven focus later in the development cycle. I'd actually say that when you're waterfalling on emerging technologies with a limited budget, everything that makes up TDD is completely optional.
I think TDD is worth it no matter the size (EVEN if it's one class - since writing the tests first can help you come up with a more sane design).
The place that I feel it may not be necessary is when you are building a project in which you aren't sure what you want, and you are unlikely to care about maintainability. I find that there aren't ANY projects that fit that category at work, but I have found that occasionally I am developing personal projects in which this is the case. In these cases, I am usually learning a new framework and have no idea what I'm doing from the beginning, so my tests would be more likely to break over time for the wrong reasons, thus decreasing their value.
However, I am also acknowledging that not using TDD is costing me maintainability - once I know what I'm doing, I promptly fall back to red/green/refactor.
Summary
You have a lot of answers above. Your question is years old, but allow me to chime in: Yes, do TDD! Test your code! Be smart about it.
Design-by-Contract
TDD and BDD are best understood in the context of Hoare-logic preconditions and post-conditions (as well as other forms of code-correctness Boolean assertions). The best application of it I have ever used is Eiffel in EiffelStudio.
The Code-Fail-Correct model is okay until one starts to measure how much time developers and QA people spend on correcting bugs.
You can also go hugely wrong with TDD and BDD as well. TDD can end up generating massive code-bloat, where your test code is larger and harder to maintain than your production code. BDD—which is really mostly DbC—can be misunderstood, misapplied, and mismanaged with its own complexities, bloat, and cost-overhead as well.
The Need
The deepest need is for a language specification, compiler, IDE, and testing system where TDD + BDD (aka DbC) is baked in, with all the proper parts in their proper place instead of bolt-on Frankenstein nonsense trying to masquerade as TDD + BDD.
I find it humorous to watch programmers twisting in the wind of trying to shoehorn common implementations of TDD and BDD into mainstream languages that have no sense of Design-by-Contract at all. Everyone interprets TDD + BDD through this language-spec/compiler/IDE lens as though they truly "get" what it is. They never actually see just how silly and distorted it is.
In From the Cold
TDD + BDD (DbC) get distorted just like other technologies and topics. For example: do not attempt to use Java as a lens for understanding Object-Oriented theory. The same is true for C++ or other C-derived languages. Trying to use a language as a means to learn OO is like thinking that knowing your calculator is going to cause you to understand calculus.
The only language specification, compiler, IDE, and testing system I am aware of that is built from a theoretical understanding of TDD and BDD is Eiffel and EiffelStudio. I have been using it for some 20 years. I've been around this block many times. It frustrates me to see you all suffering and twisting about on subjects that (to me) are as clear as a cloudless summer day in springtime.

How to convince team other parts of software development are important?

Sometimes, when I present a part of the software development process to certain people, say the supervisor or the manager, in areas where they don't have experience, such as:
Automated unit tests and integration tests vs. their manual functional testing.
Using code generators, and scripts for repetitive tasks.
I am sometimes met with resistance. Some of the reasons are the following:
They say that's the way we do things here. Our system works and there is no need to add anything to our process.
They are busy being busy. They say their job is to get us projects and our job is to deliver them to their satisfaction. They are satisfied even if the process is manual and repetitive, as long as it's delivered on time.
They are very conservative about code generators. I gave them an estimate that there is a significant time overhead for the first project to use this, plus time to train my teammates, since the approach is relatively new to them. To them, the overhead for the first project overshadows the benefit in the long run. I explained how much more convenient it is for us developers, but they always stick to doing things the old way.
What would be your strategy for this?
Wait for a problem to show up and then make your move.
You have to be a salesman, at the end of the day. You have to tell people why your proposals will make their lives easier.
If you can back up your claims with some sort of time spent/time saved data, you're onto a winner. Another thing is to get yourself a reputation gradually, by agreeing changes be implemented in phases. Implement a simple change on a small piece of the project and prove that it made a difference to them. Then roll it out a bit more, and move onto the next thing like unit testing or code generation. Given time it'll work itself out.
I don't believe you can force people to read books; they'll shelve 'em and think you're being obnoxious. The best thing is to get small results, and use those as stepping stones to be allowed to aim for higher goals as people realise that maybe there are better ways of doing things after all.
If you're really passionate about it, you can always invest a little of your own time, and prepare a short demo (30 mins tops) that shows them how quickly you can create a tiny app without code gen, then the same app with a couple of bits code-genned. The proof of the pudding is in the eating.
I think the only way to convince someone of something is to show the benefits it provides.
It's easier to ask for forgiveness than it is to get permission.
There's no objective return-on-investment style measurements for "improving" a software development process. Software development is inherently hard -- it's knowledge capture -- there must be unknowns. If everything was known, you'd already have the software in hand.
Consequently, you can't ever convince a manager of anything up front.
You can only demonstrate that you are able to do something better, cheaper or faster. When they ask what the secret to your productivity is, you can show them your tools, method or approach.
Until they ask, you don't really have enough evidence to change anyone's mind. When they finally ask, then you don't need to change their mind, you need to show them your solution.
Since they don't want to derail their "do everything by hand" schedule to invest in your tools, you have to build your tools in steps, one project at a time.
"You can get a lot farther with a smile and a gun than you can with just a smile."
- Al Capone
Just kidding, but it's the first thing that crossed my mind :)
The gun is a metaphor (duh), like for a bug that someone spent days figuring out, days that with a good process could have been spent in more fun ways.

How do you plan an application's architecture before writing any code? [closed]

One thing I struggle with is planning an application's architecture before writing any code.
I don't mean gathering requirements to narrow in on what the application needs to do, but rather effectively thinking about a good way to lay out the overall class, data and flow structures, and iterating those thoughts so that I have a credible plan of action in mind before even opening the IDE. At the moment it is all too easy to just open the IDE, create a blank project, start writing bits and bobs and let the design 'grow out' from there.
I gather UML is one way to do this but I have no experience with it so it seems kind of nebulous.
How do you plan an application's architecture before writing any code? If UML is the way to go, can you recommend a concise and practical introduction for a developer of smallish applications?
I appreciate your input.
I consider the following:
what the system is supposed to do, that is, what is the problem that the system is trying to solve
who is the customer and what are their wishes
what the system has to integrate with
are there any legacy aspects that need to be considered
what are the user interactions
etc...
Then I start looking at the system as a black box and:
what are the interactions that need to happen with that black box
what are the behaviours that need to happen inside the black box, i.e. what needs to happen to those interactions for the black box to exhibit the desired behaviour at a higher level, e.g. receive and process incoming messages from a reservation system, update a database etc.
Then this will start to give you a view of the system that consists of various internal black boxes, each of which can be broken down further in the same manner.
UML is very good to represent such behaviour. You can describe most systems just using two of the many components of UML, namely:
class diagrams, and
sequence diagrams.
You may need activity diagrams as well if there is any parallelism in the behaviour that needs to be described.
A good resource for learning UML is Martin Fowler's excellent book "UML Distilled" (Amazon link - sanitised for the script kiddie link nazis out there (-: ). This book gives you a quick look at the essential parts of each of the components of UML.
Oh. What I've described is pretty much Ivar Jacobson's approach. Jacobson is one of the Three Amigos of OO. In fact, UML was initially developed by the other two members of the Three Amigos, Grady Booch and Jim Rumbaugh.
I really find that a first-off of writing on paper or whiteboard is really crucial. Then move to UML if you want, but nothing beats the flexibility of just drawing it by hand at first.
You should definitely take a look at Steve McConnell's Code Complete, and especially at his giveaway chapter on "Design in Construction".
You can download it from his website:
http://cc2e.com/File.ashx?cid=336
If you're developing for .NET, Microsoft have just published (as a free e-book!) the Application Architecture Guide 2.0b1. It provides loads of really good information about planning your architecture before writing any code.
If you were desperate I expect you could use large chunks of it for non-.NET-based architectures.
I'll preface this by saying that I do mostly web development where much of the architecture is already decided in advance (WebForms, now MVC) and most of my projects are reasonably small, one-person efforts that take less than a year. I also know going in that I'll have an ORM and DAL to handle my business object and data interaction, respectively. Recently, I've switched to using LINQ for this, so much of the "design" becomes database design and mapping via the DBML designer.
Typically, I work in a TDD (test driven development) manner. I don't spend a lot of time up front working on architectural or design details. I do gather the overall interaction of the user with the application via stories. I use the stories to work out the interaction design and discover the major components of the application. I do a lot of whiteboarding during this process with the customer -- sometimes capturing details with a digital camera if they seem important enough to keep in diagram form. Mainly my stories get captured in story form in a wiki. Eventually, the stories get organized into releases and iterations.
By this time I usually have a pretty good idea of the architecture. If it's complicated or there are unusual bits -- things that differ from my normal practices -- or I'm working with someone else (not typical), I'll diagram things (again on a whiteboard). The same is true of complicated interactions -- I may design the page layout and flow on a whiteboard, keeping it (or capturing via camera) until I'm done with that section. Once I have a general idea of where I'm going and what needs to be done first, I'll start writing tests for the first stories. Usually, this goes like: "Okay, to do that I'll need these classes. I'll start with this one and it needs to do this." Then I start merrily TDDing along and the architecture/design grows from the needs of the application.
Periodically, I'll find myself wanting to write some bits of code over again or think "this really smells" and I'll refactor my design to remove duplication or replace the smelly bits with something more elegant. Mostly, I'm concerned with getting the functionality down while following good design principles. I find that using known patterns and paying attention to good principles as you go along works out pretty well.
http://dn.codegear.com/article/31863
I use UML, and find that guide pretty useful and easy to read. Let me know if you need something different.
UML is a notation. It is a way of recording your design, but not (in my opinion) of doing a design. If you need to write things down, I would recommend UML, though, not because it's the "best" but because it is a standard which others probably already know how to read, and it beats inventing your own "standard".
I think the best introduction to UML is still UML Distilled, by Martin Fowler, because it's concise, gives practical guidance on where to use it, and makes it clear you don't have to buy into the whole UML/RUP story for it to be useful.
Doing design is hard. It can't really be captured in one StackOverflow answer. Unfortunately, my design skills, such as they are, have evolved over the years and so I don't have one source I can refer you to.
However, one model I have found useful is robustness analysis (google for it, but there's an intro here). If you have your use-cases for what the system should do, a domain model of what things are involved, then I've found robustness analysis a useful tool in connecting the two and working out what the key components of the system need to be.
But the best advice is read widely, think hard, and practice. It's not a purely teachable skill, you've got to actually do it.
I'm not smart enough to plan ahead more than a little. When I do plan ahead, my plans always come out wrong, but now I've spent n days on bad plans. My limit seems to be about 15 minutes on the whiteboard.
Basically, I do as little work as I can to find out whether I'm headed in the right direction.
I look at my design for critical questions: when A does B to C, will it be fast enough for D? If not, we need a different design. Each of these questions can be answered with a spike. If the spikes look good, then we have the design and it's time to expand on it.
I code in the direction of getting some real customer value as soon as possible, so a customer can tell me where I should be going.
Because I always get things wrong, I rely on refactoring to help me get them right. Refactoring is risky, so I have to write unit tests as I go. Writing unit tests after the fact is hard because of coupling, so I write my tests first. Staying disciplined about this stuff is hard, and a different brain sees things differently, so I like to have a buddy coding with me. My coding buddy has a nose, so I shower regularly.
Let's call it "Extreme Programming".
"White boards, sketches and Post-it notes are excellent design
tools. Complicated modeling tools have a tendency to be more
distracting than illuminating." From Practices of an Agile Developer
by Venkat Subramaniam and Andy Hunt.
I'm not convinced anything can be planned in advance before implementation. I've got 10 years experience, but that's only been at 4 companies (including 2 sites at one company, that were almost polar opposites), and almost all of my experience has been in terms of watching colossal cluster********s occur. I'm starting to think that stuff like refactoring is really the best way to do things, but at the same time I realize that my experience is limited, and I might just be reacting to what I've seen. What I'd really like to know is how to gain the best experience so I'm able to arrive at proper conclusions, but it seems like there's no shortcut and it just involves a lot of time seeing people doing things wrong :(. I'd really like to give a go at working at a company where people do things right (as evidenced by successful product deployments), to know whether I'm just a contrarian bastard, or if I'm really as smart as I think I am.
I beg to differ: UML can be used for application architecture, but is more often used for technical architecture (frameworks, class or sequence diagrams, ...), because this is where those diagrams can most easily been kept in sync with the development.
Application Architecture occurs when you take some functional specifications (which describe the nature and flows of operations without making any assumptions about a future implementation), and you transform them into technical specifications.
Those specifications represent the applications you need for implementing some business and functional needs.
So if you need to process several large financial portfolios (functional specification), you may determine that you need to divide that large specification into:
a dispatcher to assign those heavy calculations to different servers
a launcher to make sure all calculation servers are up and running before starting to process those portfolios.
a GUI to be able to show what is going on.
a "common" component to develop the specific portfolio algorithms, independently of the rest of the application architecture, in order to facilitate unit testing, but also some functional and regression testing.
So basically, to think about application architecture is to decide what "group of files" you need to develop in a coherent way (you cannot develop a launcher, a GUI, a dispatcher, ... in the same group of files: they would not be able to evolve at the same pace)
When an application architecture is well defined, each of its components is usually a good candidate for a configuration component, that is, a group of files which can be versioned as a whole in a VCS (Version Control System), meaning all its files will be labeled together every time you need to record a snapshot of that application (again, it would be hard to label your whole system, since each of its applications cannot be in a stable state at the same time)
I have been doing architecture for a while. I use BPML to first refine the business process and then use UML to capture various details! Third step generally is ERD! By the time you are done with BPML and UML your ERD will be fairly stable! No plan is perfect and no abstraction is going to be 100%. Plan on refactoring, goal is to minimize refactoring as much as possible!
I try to break my thinking down into two areas: a representation of the things I'm trying to manipulate, and what I intend to do with them.
When I'm trying to model the stuff I'm trying to manipulate, I come up with a series of discrete item definitions- an ecommerce site will have a SKU, a product, a customer, and so forth. I'll also have some non-material things that I'm working with- an order, or a category. Once I have all of the "nouns" in the system, I'll make a domain model that shows how these objects are related to each other- an order has a customer and multiple SKUs, many SKUs are grouped into a product, and so on.
These domain models can be represented as UML domain models, class diagrams, and SQL ERD's.
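As a rough sketch, those nouns and their relationships could start life as nothing more than a few bare classes (the names below simply restate the e-commerce example, they aren't a prescribed model):

    import java.util.ArrayList;
    import java.util.List;

    class Sku { String code; }

    class Product {                    // many SKUs are grouped into a product
        List<Sku> skus = new ArrayList<>();
    }

    class Customer { String name; }

    class Order {                      // an order has a customer and multiple SKUs
        Customer customer;
        List<Sku> skus = new ArrayList<>();
    }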
Once I have the nouns of the system figured out, I move on to the verbs- for instance, the operations that each of these items go through to commit an order. These usually map pretty well to use cases from my functional requirements- the easiest way to express these that I've found is UML sequence, activity, or collaboration diagrams or swimlane diagrams.
It's important to think of this as an iterative process; I'll do a little corner of the domain, and then work on the actions, and then go back. Ideally I'll have time to write code to try stuff out as I'm going along- you never want the design to get too far ahead of the application. This process is usually terrible if you think that you are building the complete and final architecture for everything; really, all you're trying to do is establish the basic foundations that the team will be sharing in common as they move through development. You're mostly creating a shared vocabulary for team members to use as they describe the system, not laying down the law for how it's gotta be done.
I find myself having trouble fully thinking a system out before coding it. It's just too easy to only bring a cursory glance to some components which you only later realize are much more complicated than you thought they were.
One solution is to just try really hard. Write UML everywhere. Go through every class. Think how it will interact with your other classes. This is difficult to do.
What I like doing is to make a general overview at first. I don't like UML, but I do like drawing diagrams which get the point across. Then I begin to implement it. Even while I'm just writing out the class structure with empty methods, I often see things that I missed earlier, so then I update my design. As I'm coding, I'll realize I need to do something differently, so I'll update my design. It's an iterative process. The concept of "design everything first, and then implement it all" is known as the waterfall model, and I think others have shown it's a bad way of doing software.
Try ArchiMate.

What's your most controversial programming opinion?

This is definitely subjective, but I'd like to try to avoid it becoming argumentative. I think it could be an interesting question if people treat it appropriately.
The idea for this question came from the comment thread from my answer to the "What are five things you hate about your favorite language?" question. I contended that classes in C# should be sealed by default - I won't put my reasoning in the question, but I might write a fuller explanation as an answer to this question. I was surprised at the heat of the discussion in the comments (25 comments currently).
So, what contentious opinions do you hold? I'd rather avoid the kind of thing which ends up being pretty religious with relatively little basis (e.g. brace placing) but examples might include things like "unit testing isn't actually terribly helpful" or "public fields are okay really". The important thing (to me, anyway) is that you've got reasons behind your opinions.
Please present your opinion and reasoning - I would encourage people to vote for opinions which are well-argued and interesting, whether or not you happen to agree with them.
Programmers who don't code in their spare time for fun will never become as good as those that do.
I think even the smartest and most talented people will never become truly good programmers unless they treat it as more than a job. Meaning that they do little projects on the side, or just mess with lots of different languages and ideas in their spare time.
(Note: I'm not saying good programmers do nothing else than programming, but they do more than program from 9 to 5)
The only "best practice" you should be using all the time is "Use Your Brain".
Too many people jumping on too many bandwagons and trying to force methods, patterns, frameworks etc onto things that don't warrant them. Just because something is new, or because someone respected has an opinion, doesn't mean it fits all :)
EDIT:
Just to clarify - I don't think people should ignore best practices, valued opinions etc. Just that people shouldn't just blindly jump on something without thinking about WHY this "thing" is so great, IS it applicable to what I'm doing, and WHAT benefits/drawbacks does it bring?
"Googling it" is okay!
Yes, I know it offends some people out there that their years of intense memorization and/or glorious stacks of programming books are starting to fall by the wayside to a resource that anyone can access within seconds, but you shouldn't hold that against people that use it.
Too often I hear googling answers to problems being criticized, and it really makes no sense. First of all, it must be conceded that everyone needs materials to reference. You don't know everything and you will need to look things up. Conceding that, does it really matter where you got the information? Does it matter if you looked it up in a book, looked it up on Google, or heard it from a talking frog that you hallucinated? No. A right answer is a right answer.
What is important is that you understand the material, use it as the means to an end of a successful programming solution, and the client/your employer is happy with the results.
(although if you are getting answers from hallucinatory talking frogs, you should probably get some help all the same)
Most comments in code are in fact a pernicious form of code duplication.
We spend most of our time maintaining code written by others (or ourselves) and poor, incorrect, outdated, misleading comments must be near the top of the list of most annoying artifacts in code.
I think eventually many people just blank them out, especially those flowerbox monstrosities.
Much better to concentrate on making the code readable, refactoring as necessary, and minimising idioms and quirkiness.
On the other hand, many courses teach that comments are very nearly more important than the code itself, leading to the "this next line adds one to invoiceTotal" style of commenting.
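A made-up illustration of the difference between the two styles (the invoiceTotal example is hypothetical):

    class InvoiceCalculator {
        private static final double FIXED_PROCESSING_FEE = 1.0;

        // The style being complained about: the comment merely restates the code.
        double totalWithFeeCommented(double invoiceTotal) {
            // this next line adds one to invoiceTotal
            invoiceTotal = invoiceTotal + 1;
            return invoiceTotal;
        }

        // The alternative: make the code readable so the comment isn't needed.
        double totalWithFee(double invoiceTotal) {
            return invoiceTotal + FIXED_PROCESSING_FEE;
        }
    }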
XML is highly overrated
I think too many jump onto the XML bandwagon before using their brains...
XML for web stuff is great, as it's designed for it. Otherwise I think some problem definition and design thoughts should precede any decision to use it.
My 5 cents
Not all programmers are created equal
Quite often managers think that DeveloperA == DeveloperB simply because they have same level of experience and so on. In actual fact, the performance of one developer can be 10x or even 100x that of another.
It's politically risky to talk about it, but sometimes I feel like pointing out that, even though several team members may appear to be of equal skill, it's not always the case. I have even seen cases where lead developers were 'beyond hope' and junior devs did all the actual work - I made sure they got the credit, though. :)
I fail to understand why people think that Java is absolutely the best "first" programming language to be taught in universities.
For one, I believe that the first programming language should be such that it highlights the need to learn control flow and variables, not objects and syntax.
For another, I believe that people who have not had experience in debugging memory leaks in C / C++ cannot fully appreciate what Java brings to the table.
Also the natural progression should be from "how can I do this" to "how can I find the library which does that" and not the other way round.
If you only know one language, no matter how well you know it, you're not a great programmer.
There seems to be an attitude that says once you're really good at C# or Java or whatever other language you started out learning then that's all you need. I don't believe it- every language I have ever learned has taught me something new about programming that I have been able to bring back into my work with all the others. I think that anyone who restricts themselves to one language will never be as good as they could be.
It also indicates to me a certain lack of inquisitiveness and willingness to experiment that doesn't necessarily tally with the qualities I would expect to find in a really good programmer.
Performance does matter.
Print statements are a valid way to debug code
I believe it is perfectly fine to debug your code by littering it with System.out.println (or whatever print statement works for your language). Often, this can be quicker than stepping through with a debugger, and you can compare printed outputs against other runs of the app.
Just make sure to remove the print statements when you go to production (or better, turn them into logging statements)
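A minimal sketch of the idea, assuming java.util.logging is available (the class and method names are invented for the example):

    import java.util.logging.Logger;

    public class OrderProcessor {
        private static final Logger LOG = Logger.getLogger(OrderProcessor.class.getName());

        double applyDiscount(double total, double rate) {
            // Quick-and-dirty print debugging while developing:
            System.out.println("applyDiscount: total=" + total + " rate=" + rate);

            double discounted = total * (1.0 - rate);

            // The same information kept for production as a logging statement,
            // which can be switched off or filtered by level:
            LOG.fine(() -> "applyDiscount result=" + discounted);
            return discounted;
        }
    }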
Your job is to put yourself out of work.
When you're writing software for your employer, any software that you create is to be written in such a way that it can be picked up by any developer and understood with a minimal amount of effort. It is well designed, clearly and consistently written, formatted cleanly, documented where it needs to be, builds daily as expected, checked into the repository, and appropriately versioned.
If you get hit by a bus, laid off, fired, or walk off the job, your employer should be able to replace you on a moment's notice, and the next guy could step into your role, pick up your code and be up and running within a week tops. If he or she can't do that, then you've failed miserably.
Interestingly, I've found that having that goal has made me more valuable to my employers. The more I strive to be disposable, the more valuable I become to them.
1) The Business Apps farce:
I think that the whole "Enterprise" frameworks thing is smoke and mirrors. J2EE, .NET, the majority of the Apache frameworks and most abstractions to manage such things create far more complexity than they solve.
Take any regular Java or .NET ORM, or any supposedly modern MVC framework for either which does "magic" to solve tedious, simple tasks. You end up writing huge amounts of ugly XML boilerplate that is difficult to validate and write quickly. You have massive APIs, half of which exist just to integrate the work of the other APIs, interfaces that are impossible to recycle, and abstract classes that are needed only to overcome the inflexibility of Java and C#. We simply don't need most of that.
How about all the different application servers with their own darned descriptor syntax, the overly complex database and groupware products?
The point of this is not that complexity==bad, it's that unnecessary complexity==bad. I've worked in massive enterprise installations where some of it was necessary, but even in most cases a few home-grown scripts and a simple web frontend is all that's needed to solve most use cases.
I'd try to replace all of these enterprisey apps with simple web frameworks, open source DBs, and trivial programming constructs.
2) The n-years-of-experience-required:
Unless you need a consultant or a technician to handle a specific issue related to an application, API or framework, then you don't really need someone with 5 years of experience in that application. What you need is a developer/admin who can read documentation, who has domain knowledge in whatever it is you're doing, and who can learn quickly. If you need to develop in some kind of language, a decent developer will pick it up in less than 2 months. If you need an administrator for X web server, in two days he should have read the man pages and newsgroups and be up to speed. Anything less and that person is not worth what he is paid.
3) The common "computer science" degree curriculum:
The majority of computer science and software engineering degrees are bull. If your first programming language is Java or C#, then you're doing something wrong. If you don't get several courses full of algebra and math, it's wrong. If you don't delve into functional programming, it's incomplete. If you can't apply loop invariants to a trivial for loop, you're not worth your salt as a supposed computer scientist. If you come out with experience in x and y languages and object orientation, it's full of s***. A real computer scientist sees a language in terms of the concepts and syntaxes it uses, and sees programming methodologies as one among many, and has such a good understanding of the underlying philosophies of both that picking new languages, design methods, or specification languages should be trivial.
Getters and Setters are Highly Overused
I've seen millions of people claiming that public fields are evil, so they make them private and provide getters and setters for all of them. I believe this is almost identical to making the fields public, maybe a bit different if you're using threads (but that's generally not the case) or if your accessors have business/presentation logic (which is 'strange' at least).
I'm not in favor of public fields, but I'm against making a getter/setter (or Property) for every one of them, and then claiming that doing so is encapsulation or information hiding... ha!
UPDATE:
This answer has raised some controversy in its comments, so I'll try to clarify it a bit (I'll leave the original untouched since that is what many people upvoted).
First of all: anyone who uses public fields deserves jail time
Now, creating private fields and then using the IDE to automatically generate getters and setters for every one of them is nearly as bad as using public fields.
Many people think:
private fields + public accessors == encapsulation
I say (automatic or not) generation of getter/setter pair for your fields effectively goes against the so called encapsulation you are trying to achieve.
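To illustrate with a made-up Account class: an auto-generated getter/setter pair gives callers the same power over the field as making it public, whereas exposing behaviour instead actually hides something:

    // Nothing is hidden here; callers can still do
    //     account.setBalance(account.getBalance() - 100.0);
    class Account {
        private double balance;

        public double getBalance() { return balance; }
        public void setBalance(double balance) { this.balance = balance; }
    }

    // An encapsulated alternative exposes behaviour, not the field:
    class EncapsulatedAccount {
        private double balance;

        public void withdraw(double amount) {
            if (amount <= 0 || amount > balance) {
                throw new IllegalArgumentException("invalid withdrawal: " + amount);
            }
            balance -= amount;
        }
    }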
Lastly, let me quote Uncle Bob in this topic (taken from chapter 6 of "Clean Code"):
There is a reason that we keep our variables private. We don't want anyone else to depend on them. We want the freedom to change their type or implementation on a whim or an impulse. Why, then, do so many programmers automatically add getters and setters to their objects, exposing their private fields as if they were public?
UML diagrams are highly overrated
Of course there are useful diagrams e.g. class diagram for the Composite Pattern, but many UML diagrams have absolutely no value.
Opinion: SQL is code. Treat it as such
That is, just like your C#, Java, or other favorite object/procedure language, develop a formatting style that is readable and maintainable.
I hate when I see sloppy free-formatted SQL code. If you scream when you see both styles of curly braces on a page, why oh why don't you scream when you see free-formatted SQL or SQL that obscures or obfuscates the JOIN condition?
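As a made-up illustration of the two styles (wrapped in Java string constants here so everything stays in one language; the table and column names are invented):

    class ReportQueries {
        // Free-formatted SQL that buries the JOIN conditions:
        static final String SLOPPY = "select o.id,c.name,sum(l.amount) from orders o,customers c,order_lines l "
                + "where o.customer_id=c.id and l.order_id=o.id group by o.id,c.name";

        // The same query, formatted and maintained like code (Java 15+ text block):
        static final String READABLE = """
                SELECT o.id, c.name, SUM(l.amount)
                FROM orders o
                JOIN order_lines l ON l.order_id = o.id
                JOIN customers c ON c.id = o.customer_id
                GROUP BY o.id, c.name
                """;
    }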
Readability is the most important aspect of your code.
Even more so than correctness. If it's readable, it's easy to fix. It's also easy to optimize, easy to change, easy to understand. And hopefully other developers can learn something from it too.
If you're a developer, you should be able to write code
I did quite a bit of interviewing last year, and for my part of the interview I was supposed to test the way people thought, and how they implemented simple-to-moderate algorithms on a white board. I'd initially started out with questions like:
Given that Pi can be estimated using the function 4 * (1 - 1/3 + 1/5 - 1/7 + ...) with more terms giving greater accuracy, write a function that calculates Pi to an accuracy of 5 decimal places.
It's a problem that should make you think, but shouldn't be out of reach to a seasoned developer (it can be answered in about 10 lines of C#). However, many of our (supposedly pre-screened by the agency) candidates couldn't even begin to answer it, or even explain how they might go about answering it. So after a while I started asking simpler questions like:
Given the area of a circle is given by Pi times the radius squared, write a function to calculate the area of a circle.
Amazingly, more than half the candidates couldn't write this function in any language (I can read most popular languages so I let them use any language of their choice, including pseudo-code). We had "C# developers" who could not write this function in C#.
I was surprised by this. I had always thought that developers should be able to write code. It seems that, nowadays, this is a controversial opinion. Certainly it is amongst interview candidates!
Edit:
There's a lot of discussion in the comments about whether the first question is a good or bad one, and whether you should ask questions as complex as this in an interview. I'm not going to delve into this here (that's a whole new question) apart from to say you're largely missing the point of the post.
Yes, I said people couldn't make any headway with this, but the second question is trivial and many people couldn't make any headway with that one either! Anybody who calls themselves a developer should be able to write the answer to the second one in a few seconds without even thinking. And many can't.
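For reference, one possible way to answer both questions; the original discussion had C# in mind, but this sketch is in Java and the names are purely illustrative:

    public class InterviewAnswers {

        // Area of a circle: Pi times the radius squared.
        static double circleArea(double radius) {
            return Math.PI * radius * radius;
        }

        // Estimate Pi with the series 4 * (1 - 1/3 + 1/5 - 1/7 + ...), adding terms
        // until successive estimates agree to within 0.000001. The series converges
        // slowly, so this runs a couple of million iterations, which is still fast.
        static double estimatePi() {
            double estimate = 0.0;
            double previous;
            double sign = 1.0;
            long denominator = 1;
            do {
                previous = estimate;
                estimate += sign * 4.0 / denominator;
                sign = -sign;
                denominator += 2;
            } while (Math.abs(estimate - previous) > 0.000001);
            return estimate;
        }

        public static void main(String[] args) {
            System.out.println(circleArea(2.0));   // ~12.566
            System.out.println(estimatePi());      // ~3.14159
        }
    }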
The use of hungarian notation should be punished with death.
That should be controversial enough ;)
Design patterns are hurting good design more than they're helping it.
IMO software design, especially good software design, is far too varied to be meaningfully captured in patterns, especially in the small number of patterns people can actually remember - and they're far too abstract for people to really remember more than a handful. So they're not helping much.
And on the other hand, far too many people become enamoured with the concept and try to apply patterns everywhere - usually, in the resulting code you can't find the actual design between all the (completely meaningless) Singletons and Abstract Factories.
Less code is better than more!
If the users say "that's it?", and your work remains invisible, it's done right. Glory can be found elsewhere.
PHP sucks ;-)
The proof is in the pudding.
Unit Testing won't help you write good code
The only reason to have Unit tests is to make sure that code that already works doesn't break. Writing tests first, or writing code to the tests is ridiculous. If you write to the tests before the code, you won't even know what the edge cases are. You could have code that passes the tests but still fails in unforeseen circumstances.
And furthermore, good developers will keep coupling low, which will make the addition of new code unlikely to cause problems with existing stuff.
In fact, I'll generalize that even further,
Most "Best Practices" in Software Engineering are there to keep bad programmers from doing too much damage.
They're there to hand-hold bad developers and keep them from making dumbass mistakes. Of course, since most developers are bad, this is a good thing, but good developers should get a pass.
Write small methods. It seems that programmers love to write loooong methods where they do multiple different things.
I think that a method should be created wherever you can name one.
It's ok to write garbage code once in a while
Sometimes a quick and dirty piece of garbage code is all that is needed to fulfill a particular task. Patterns, ORMs, SRP, whatever... Throw up a Console or Web App, write some inline SQL (feels good), and blast out the requirement.
Code == Design
I'm no fan of sophisticated UML diagrams and endless code documentation. In a high level language, your code should be readable and understandable as is. Complex documentation and diagrams aren't really any more user friendly.
Here's an article on the topic of Code as Design.
Software development is just a job
Don't get me wrong, I enjoy software development a lot. I've written a blog for the last few years on the subject. I've spent enough time on here to have >5000 reputation points. And I work in a start-up doing typically 60 hour weeks for much less money than I could get as a contractor because the team is fantastic and the work is interesting.
But in the grand scheme of things, it is just a job.
It ranks in importance below many things such as family, my girlfriend, friends, happiness etc., and below other things I'd rather be doing if I had an unlimited supply of cash such as riding motorbikes, sailing yachts, or snowboarding.
I think sometimes a lot of developers forget that developing is just something that allows us to have the more important things in life (and to have them by doing something we enjoy) rather than being the end goal in itself.
I also think there's nothing wrong with having binaries in source control... if there is a good reason for it. If I have an assembly I don't have the source for, and it might not necessarily be in the same place on each dev's machine, then I will usually stick it in a "binaries" directory and reference it in a project using a relative path.
Quite a lot of people seem to think I should be burned at the stake for even mentioning "source control" and "binary" in the same sentence. I even know of places that have strict rules saying you can't add them.
Every developer should be familiar with the basic architecture of modern computers. This also applies to developers who target a virtual machine (maybe even more so, because they have been told time and time again that they don't need to worry themselves with memory management etc.)
Software Architects/Designers are Overrated
As a developer, I hate the idea of Software Architects. They are basically people that no longer code full time, read magazines and articles, and then tell you how to design software. Only people that actually write software full time for a living should be doing that. I don't care if you were the world's best coder 5 years ago before you became an Architect, your opinion is useless to me.
How's that for controversial?
Edit (to clarify): I think most Software Architects make great Business Analysts (talking with customers, writing requirements, tests, etc), I simply think they have no place in designing software, high level or otherwise.
There is no "one size fits all" approach to development
I'm surprised that this is a controversial opinion, because it seems to me like common sense. However, there are many entries on popular blogs promoting the "one size fits all" approach to development so I think I may actually be in the minority.
Things I've seen being touted as the correct approach for any project - before any information is known about it - are things like the use of Test Driven Development (TDD), Domain Driven Design (DDD), Object-Relational Mapping (ORM), Agile (capital A), Object Orientation (OO), etc. etc. encompassing everything from methodologies to architectures to components. All with nice marketable acronyms, of course.
People even seem to go as far as putting badges on their blogs such as "I'm Test Driven" or similar, as if their strict adherence to a single approach, whatever the details of the project, is actually a good thing.
It isn't.
Choosing the correct methodologies and architectures and components, etc., is something that should be done on a per-project basis, and depends not only on the type of project you're working on and its unique requirements, but also the size and ability of the team you're working with.

Development Cost of Procedural Programming vs. OOP?

I come from a fairly strong OO background, and the benefits of OOD & OOP are second nature to me, but recently I've found myself in a development shop tied to procedural programming habits. The implementation language has some OOP features, but they are not used in optimal ways.
Update: everyone seems to have an opinion about this topic, as do I, but the question was:
Have there been any good comparative studies contrasting the cost of software development using procedural programming languages versus Object Oriented languages?
Some commenters have pointed out the dubious nature of trying to compare apples to oranges, and I agree that it would be very difficult to measure accurately, though perhaps not entirely impossible.
Almost all of these questions are confounded by the problem that individual programmer productivity varies by an order of magnitude or more; if you happen to have an OO programmer whose productivity is x, and a "procedural" programmer who is a 10x programmer, the procedural programmer is liable to win even if OO is faster in some sense.
There's also the problem that coding productivity is usually only 10-20 percent of the total effort in a realistic project, so higher coding productivity doesn't have much impact; even that hypothetical 10x programmer, or an infinitely fast programmer, can't cut the overall effort by more than 10-20 percent.
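To put hypothetical numbers on that (my own illustration, not from any study): if coding is, say, 15 percent of total effort, even a 10x coder only trims the total by about 13.5 percent.

    # Amdahl's-law style arithmetic for the point above.
    # The 15% coding share is an assumption for illustration only.
    coding_fraction = 0.15
    speedup = 10                      # the hypothetical "10x" programmer

    remaining = (1 - coding_fraction) + coding_fraction / speedup
    print(f"total effort left: {remaining:.1%}")      # 86.5%
    print(f"overall saving:    {1 - remaining:.1%}")  # 13.5%, capped at 15% even as speedup -> infinity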
You might have a look at Fred Brooks' paper "No Silver Bullet".
After poking around with Google I found this paper here. The search terms I used were "productivity object oriented".
The opening paragraph goes on to say:
"Introduction of object-oriented technology does not appear to hinder overall productivity on new large commercial projects, but it neither seems to improve it in the first two product generations. In practice, the governing influence may be the business workflow and not the methodology."
I think you will find that Object Oriented Programming is better in specific circumstances but neutral for everything else. What sold my bosses on converting my company's CAD/CAM application to an object-oriented framework was that I showed precisely the areas in which it would help. The focus wasn't on the methodology as a whole but on how it would help us solve specific problems we had. For us that meant having an extensible framework for adding more shapes, reports, and machine controllers, and using collections to remove the memory limitations of the older design.
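A stripped-down sketch of what I mean by an extensible framework (the names here are illustrative, not the actual CAD/CAM code): new shapes are added by writing a new class, with no changes to the code that draws or reports on them.

    from abc import ABC, abstractmethod
    import math

    class Shape(ABC):
        """Each report or machine-control feature works against this interface only."""
        @abstractmethod
        def area(self):
            ...

    class Circle(Shape):
        def __init__(self, radius):
            self.radius = radius
        def area(self):
            return math.pi * self.radius ** 2

    class Rectangle(Shape):
        def __init__(self, width, height):
            self.width, self.height = width, height
        def area(self):
            return self.width * self.height

    def total_area(shapes):
        # A plain collection replaces the fixed-size arrays of the old design and
        # never needs to change when another Shape subclass is added.
        return sum(s.area() for s in shapes)

    print(total_area([Circle(2.0), Rectangle(3.0, 4.0)]))  # ~24.57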
OO and procedural offer two different ways to develop, and both can be costly if badly managed.
If we suppose that the work is done by the best person in both cases, I think the result might be about equal in terms of cost.
I believe the cost difference will show up in the maintenance phase, where you need to add features and modify existing ones. Procedural projects are harder to cover with automated tests, harder to extend without affecting other parts, and harder to understand piece by piece (because cohesive parts aren't necessarily grouped together).
So I think the OO cost will be lower in the long run compared to procedural.
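A small sketch of that maintenance argument (illustrative only, not from any real project): when state and behaviour are grouped into an object, a test can construct exactly the object it needs; when they live in module-level globals, every test drags the whole module's state along.

    # Procedural style: behaviour depends on module-level state,
    # so a test has to set up and reset these globals.
    tax_rate = 0.2
    orders = []

    def add_order(amount):
        orders.append(amount * (1 + tax_rate))

    # OO style: the same behaviour with its state grouped into one object,
    # so a test just builds an OrderBook with whatever rate it wants.
    class OrderBook:
        def __init__(self, tax_rate):
            self.tax_rate = tax_rate
            self.orders = []

        def add_order(self, amount):
            self.orders.append(amount * (1 + self.tax_rate))

    book = OrderBook(tax_rate=0.2)
    book.add_order(100)
    assert book.orders == [120.0]   # isolated, repeatable check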
I think S.Lott was referring to the "unrepeatable experiment" phenomenon, i.e. you cannot write application X procedurally, then rewind time and write it OO to see what the difference is.
You could write the same app twice in two different ways, but:
you would learn something about the app doing it the first way that would help you the second way, and
you may be better at OO than at procedural, or vice versa, depending on your experience and the nature of the application and the tools chosen,
so there really is no direct basis for comparison.
Empirical studies are likewise useless, for similar reasons - different applications, different teams, etc.
Paradigm shifts are difficult, and a small percentage of programmers may never make the transition.
If you are free to develop your way, then the solution is simple: develop things your way, and when your co-workers notice that you are coding circles around them and your code doesn't break nearly as often, and they ask you how you do it, then teach them OOP (along with TDD and any other good practices you may use).
If not, well, it might be time to polish the resume... ;-)
Good idea. A head-to-head comparison. Write application X in a procedural style, and in an OO style and measure something. Cost to develop. Return on Investment.
What does it mean to write the same application in two styles? It would be a different application, wouldn't it? The procedural people would balk that the OO folks were cheating when they used inheritance or messaging or encapsulation.
There can't be such a comparison. There's no basis for comparing two "versions" of an application. It's like asking if apples or oranges are more cost-effective at being fruit.
Having said that, you have to focus on things other folks can actually see.
Time to build something that works.
Rate of bugs and problems.
If your approach is better, you'll be successful, and people will want to know why.
When you explain that OO leads to your success... well... you've won the argument.
The key is time. How long does it take the company to change the design to add new features or fix existing ones. Any study you make should focus on that area.
My company had an event-driven, procedure-oriented design for CAM software in the mid-90s, created using VB3. It was taking a long time to adapt the software to new machines, and a long time to test the effects of bug fixes and new features.
When VB6 came along, I was able to diagram the current design and a new design that fixed the testing and adaptation problems. The non-technical boss grasped what I was trying to do right away.
The key is to explain WHY OOP will benefit the project. Use things like Refactoring by Fowler and Design Patterns to show how a new design will lower the time to do things. Also include how you get from Point A to Point B. Refactoring helps show how you can have working intermediate stages that can be shipped.
I don't think you'll find a study like that. At the very least you would need to define what you mean by "cost". OO design is somewhat slower up front, so in the short term development may be faster with procedural programming. In the very short term, maybe spaghetti coding is faster still.
But when the project begins to grow, things reverse, because OO design is better suited to managing code complexity.
So in a small project procedural design MAY be cheaper, because it's faster and you don't hit its drawbacks.
But in a big project you'll get stuck very quickly using only a simple paradigm like procedural programming.
I doubt you will find a definitive study. As several people have mentioned this is not a reproducible experiment. You will find anecdotal evidence, a lot of it. Some people may find some statistical studies, but I would examine them carefully. I am not aware of any really good ones.
I will also make another point: there is no such thing as purely object-oriented or purely procedural in the real world. Many if not most object methods are written with procedural code. At the same time, many procedural programs use OO methodologies such as encapsulation (also called abstraction by some).
Don't get me wrong, OO and procedural programs look and are different, but it is a matter of dark gray vs light gray instead of black and white.
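For example (a contrived sketch of my own, not from any particular codebase): this is procedural code, with plain functions and no classes, yet it still encapsulates the account representation behind a small set of functions.

    # Procedural, but encapsulated: callers only go through these functions and
    # never reach into the dict that represents an account.
    def make_account(owner):
        return {"owner": owner, "balance": 0}

    def deposit(account, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        account["balance"] += amount

    def balance(account):
        return account["balance"]

    acct = make_account("alice")
    deposit(acct, 50)
    print(balance(acct))   # 50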
This article says nothing about OOP vs Procedural. But I'd think that you could use similar metrics from your company for a discussion.
I find it interesting as my company is starting to explore the ROWE initiative. In our first session, it was apparent that we don't currently capture enough metrics on outcomes.
So you need to focus on:
1) Is the maintenance of current processes impeding future development?
2) How are different methods going to affect #1?