Do we need to avoid duplication at any cost?

In our application framework, whenever code duplication starts to appear (even to the slightest degree), people immediately want it removed and extracted into a separate function. That is fine when it is done at an early stage of development, but soon that function needs to change in unexpected ways, and then they add an if condition here and another condition there just to keep the function alive.
Is duplication really evil? I feel we can wait for triplication (the same code in three places) before extracting a function, but I am seen as an alien when I suggest this. Do we need to avoid duplication at any cost?

There is a quote from Donald Knuth that pinpoints your question:
The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming.
I think some duplication is needed before you can see which points are the critical ones that actually need to be DRYed.
Duplicated code should generally be avoided, but there are some (less frequent) contexts where abstraction leads to poorer code; for example, when writing test specs, some degree of repetition can keep your code easier to maintain.
You should always ask what another abstraction actually gains you, and if there is no quick win you may leave some duplicated code and add a TODO comment. This lets you code faster, deliver earlier, and DRY only what is most valued.
A similar approach is described by Three Strikes And You Refactor:
The first time you do something, you just do it.
The second time you do something similar, you wince at the duplication, but you do the duplicate thing anyway.
(Or if you're like me, you don't wince just yet -- because your DuplicationRefactoringThreshold is 3 or 4.)
The third time you do something similar, you refactor.
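To make that concrete, here is a minimal sketch in Java (the class and the normalisation rule are invented for illustration) of what the third-time extraction might look like: the same trim-and-lowercase snippet had appeared in three handlers, so it finally gets a name.

    public class CustomerNames {
        // Third occurrence of the same clean-up logic; now it earns its own method.
        static String normalize(String raw) {
            return raw == null ? "" : raw.trim().toLowerCase();
        }

        public static void main(String[] args) {
            System.out.println(normalize("  Alice  "));  // prints "alice"
        }
    }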

This is the holy war between management, developers, and QC.
To end this war you need to establish a common set of concepts for all three of them:
Abstraction vs. Encapsulation, Interface vs. Class, Singleton vs. Static.
You all need to give yourselves time to understand each other's point of view to make this work.
For example, if you are using Scrum, you can run a few sprints of feature development and then dedicate a sprint to abstraction and encapsulation.

Related

When is it time to refactor code?

On one hand:
1. You never get time to do it.
2. "Context switching" is mentally expensive (difficult to leave what you're doing in the middle of it).
3. It usually isn't an easy task.
4. There's always the fear you'll break something that's now working.
On the other:
1. Using that code is error-prone.
2. Over time you might realize that if you had refactored the code the first time you saw it, that would have saved you time in the long run.
So my question is - Practically - When do you decide it's time to refactor your code?
Thanks.
A couple of observations:
On one hand:
1. You never get time to do it.
If you treat re-factoring as something separate from coding (instead of an intrinsic part of coding decently), and if you can't manage time, then yeah, you'll never have time for it.
"Context switching" is mentally expensive (difficult to leave what you're doing in the middle of it).
See previous point above. Refactoring is an active component of good coding practices. If you separate the two as if they were two different tasks, then 1) your coding practices need improvement/maturing, and 2) you will engage in severe context switching if your code is in a severe need of refactoring (again, code quality.)
It usually isn't an easy task.
Only if the code you produce is not amenable to refactoring. That is, code that is hard to refactor exhibits one or more of the following (the list is not exhaustive):
High cyclomatic complexity,
No single responsibility per class (or procedure),
High coupling and/or low cohesion (i.e. poor LCOM metrics),
Poor structure,
Not following the SOLID principles,
No adherence to the Law of Demeter when appropriate,
Excessive adherence to the Law of Demeter when inappropriate,
Programming against implementations instead of interfaces.
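To make the last point concrete, here is a minimal sketch (the class is invented) of depending on an interface rather than a concrete implementation; the backing collection can later change without touching any caller, which is exactly what keeps refactoring cheap.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class AuditLog {
        // Callers only ever see the List interface, not ArrayList.
        private final List<String> entries = new ArrayList<>();

        public void record(String entry) {
            entries.add(entry);
        }

        public List<String> entries() {
            return Collections.unmodifiableList(entries);  // read-only view
        }
    }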
There's always the fear you'll break something that's now working.
Testing? Verification? Analysis? Any of these before being checked into source control (and certainly before being delivered to the users)?
On the other:
1. Using that code is error-prone.
Only if it has never been tested/verified and/or if there is no clear understanding of the conditions and usage patterns under which the potentially error-prone code operates acceptably.
Over time you might realize that if you had refactored the code the first time you saw it, that would have saved you time in the long run.
That realization should not occur over time. Good engineering and a sound work ethic call for that realization to occur while the artifact (be it hardware or software) is in the making.
So my question is - Practically - When do you decide it's time to refactor your code?
Practically, when I'm coding; I detect an area that needs improvement (or something that needs correction after a change on requirements or expectations); and I get an opportunity to improve it without sacrificing a deadline. If I cannot re-factor at that moment, I simply document the perceived defect and create a workable, realistic plan to revisit the artifact for refactoring.
In real life, there will be moments when we code some ugly kludge just to get things running, or because we are drained and tired, or whatever. It's reality. Our job is to make sure that those incidents do not pile up and remain unattended. And the key to this is to refactor as you code, keeping the code simple and with a good, simple, elegant structure. By "elegant" I don't mean "smart-ass" or esoteric, but code that displays what are typically considered readable, simple, composable attributes (and mathematical attributes when they apply practically).
Good code lends itself to refactoring; it displays good metrics; its structure resembles both computer science function composition and mathematical function composition; it has a clear responsibility; it makes its invariants, pre and post-conditions evident; and so on and so on.
Hope it helps.
One of the most common mistakes I see is people associating the word "refactor" with "big change".
Refactoring code does not always have to be big. Even small changes such as changing a bool to a proper enum, or renaming a method to be closer to the actual function vs. the intent is refactoring your code. With the exception of the end of a milestone, I try to make at least a very small refactoring every single time I check in. And you'd be amazed at how quickly this makes a visible difference in the code.
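As a hedged sketch of the bool-to-enum case (the report example is invented), the whole refactoring is a handful of lines, yet call sites go from generate(true) to something a reader can parse at a glance:

    public class Report {
        // Before: generate(true) said nothing at the call site.
        enum Format { SUMMARY, DETAILED }

        static String generate(Format format) {
            return format == Format.DETAILED ? "full report" : "one-line summary";
        }

        public static void main(String[] args) {
            System.out.println(generate(Format.DETAILED));  // intent is explicit
        }
    }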
Bigger changes do take bigger planning though. I try and schedule about 1/2 a day every two weeks during a normal development cycle to tackle a bigger refactoring change. This is enough time to make a substantial improvement to the code base. If the refactoring fails 1/2 a day is not that much of a loss. And it's rarely a total loss because even the failed refactoring will teach you something about your code.
Whenever it smells, I refactor. I may not make it perfect now, but I can at least make a small step towards a better state. And those small changes do add up over time...
If I am in the middle of something when I notice the smell, and fixing it isn't trivial (or I am just before release), I may make a (mental or paper) note to return to it once I am finished with the primary task.
Practice makes one better :-) But if I don't see a solution to a problem, I put it aside and let it brew for a while, discuss it with coworkers, or even post it on SO ;-)
If I don't have unit tests and the fix isn't trivial, I start with the tests. If the tests aren't trivial either, I apply point 2.
I start to refactor as soon as I find I am repeating myself. DRY principles FTW.
Also, if methods/functions get too long, to the point where they look unwieldy, or their purpose is being obscured by the length of the function, I break it into private subfunctions that explain what is really going on.
Lastly, if everything's up and running, and the code is dog-slow, I start to look at refactoring for the sake of performance.
When implementing a new feature I often notice that the task would be much simpler if the code I'm working on was structured in a different way. In this case I usually step back, try to do the refactoring first, and only after this is done I continue implementing the new feature.
I also have a habit to track all potential improvements that come to my mind either in notes or the bug tracker. The ideas bake there for some time, some of them don't feel so compelling anymore, and the reasonable ones are implemented during a day which I dedicate to smaller tasks.
Refactor code when it needs to be refactored. Some symptoms I look for:
duplicate code in similar objects.
duplicate code within the methods of one object.
anytime the requirements have changed twice or more.
anytime somebody says "we will clean that up later".
any time I read through code and shake my head thinking "what goofball did this" (even when the goofball in question is me)
In general, less design and/or less clear requirements mean more opportunities for refactoring.
This might sound like a joke, but really, I only refactor when things "get messy": when a simple task starts taking more time than usual, or when I have to twist my mind around to remember which function is doing what. Also, if the code starts running slow and it's not because I'm running in a development environment (lots of variable output and such), and I can't optimise it, I refactor the code. As you said, it's worth it in the long run.
Still, I always make sure I have enough time to think things through before I start, so I don't end up in this situation.
Cheers!
I usually refactor when one of the following is true:
I have nothing better to do and waiting for the next project to come to my inbox
The additions/changes I'm making to the code cannot work unless, or would be better if, I refactor
I am aesthetically displeased with the way the code is laid out
Martin Fowler, in his book of the same name, suggests you do it the third time you're in a block of code to make another change. The first time in the block, you happen to notice you should refactor, but don't have time. Second time back... same thing. Third time back: now refactor.
Also, I read that the developers of a current release of Smalltalk (squeak.org, I think) say they go through a couple of weeks of intense coding... then they step back and look at what can be refactored.
Personally I have to resist the impulse to refactor as I code or I get 'paralyzed'.

How do you refactor a large messy codebase?

I have a big mess of code. Admittedly, I wrote it myself - a year ago. It's not well commented but it's not very complicated either, so I can understand it -- just not well enough to know where to start as far as refactoring it.
I violated every rule that I have read about over the past year. There are classes with multiple responsibilities, there are indirect accesses (I forget the term - something like foo.bar.doSomething()), and like I said it is not well commented. On top of that, it's the beginnings of a game, so the graphics is coupled with the data, or the places where I tried to decouple graphics and data, I made the data public in order for the graphics to be able to access the data it needs...
It's a huge mess! Where do I start? How would you start on something like this?
My current approach is to take variables and switch them to private and then refactor the pieces that break, but that doesn't seem to be enough. Please suggest other strategies for wading through this mess and turning it into something clean so that I can continue where I left off!
Update two days later: I have been drawing out UML-like diagrams of my classes, and catching some of the "Low Hanging Fruit" along the way. I've even found some bits of code that were the beginnings of new features, but as I'm trying to slim everything down, I've been able to delete those bits and make the project feel cleaner. I'm probably going to refactor as much as possible before rigging my test cases (but only the things that are 100% certain not to impact the functionality, of course!), so that I won't have to refactor test cases as I change functionality. (do you think I'm doing it right or would it, in your opinion, be easier for me to suck it up and write the tests first?)
Please vote for the best answer so that I can mark it fairly! Feel free to add your own answer to the bunch as well, there's still room for you! I'll give it another day or so and then probably mark the highest-voted answer as accepted.
Thanks to everyone who has responded so far!
June 25, 2010: I discovered a blog post which directly answers this question from someone who seems to have a pretty good grasp of programming: (or maybe not, if you read his article :) )
To that end, I do four things when I need to refactor code:
Determine what the purpose of the code was
Draw UML and action diagrams of the classes involved
Shop around for the right design patterns
Determine clearer names for the current classes and methods
Pick yourself up a copy of Martin Fowler's Refactoring. It has some good advice on ways to break down your refactoring problem. About 75% of the book is little cookbook-style refactoring steps you can do. It also advocates automated unit tests that you can run after each step to prove your code still works.
As for a place to start, I would sit down and draw out a high-level architecture of your program. You don't have to get fancy with detailed UML models, but some basic UML is not a bad idea. You need a big picture idea of how the major pieces fit together so you can visually see where your decoupling is going to happen. Just a page or two of some basic block diagrams will help with the overwhelming feeling you have right now.
Without some sort of high level spec or design, you just risk getting lost again and ending up with another unmaintainable mess.
If you need to start from scratch, remember that you never truly start from scratch. You have some code and the knowledge you gained from your first time. But sometimes it does help to start with a blank project and pull things in as you go, rather than put out fires in a messy code base. Just remember not to completely throw out the old, use it for its good parts and pull them in as you go.
What was most important for me on different occasions were unit tests: I took a few days to write tests for the old code and then I was free to refactor with confidence. How exactly is a different question, but having the tests made it possible for me to make real, substantial changes to the code.
I'll second everyone's recommendations for Fowler's Refactoring, but in your specific case you may want to look at Michael Feathers' Working Effectively with Legacy Code, which is really perfect for your situation.
Feathers talks about Characterization Tests, which are unit tests not to assert known behaviour of the system but to explore and define the existing (unclear) behaviour -- in the case where you've written your own legacy code, and fixing it yourself, this may not be so important, but if your design is sloppy then it's quite possible there are parts of the code that work by 'magic' and their behaviour isn't clear, even to you -- in that case, characterization tests will help.
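A rough sketch of a characterization test in JUnit 4 (the PriceCalculator class, its discount rule, and the pinned value are all invented): the assertion records what the legacy code does today, captured by running it once, rather than what any spec says it should do.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class PriceCalculatorCharacterizationTest {
        @Test
        public void totalForTenItemsMatchesCurrentBehaviour() {
            PriceCalculator calc = new PriceCalculator();
            // 83.25 is simply what the existing code produced when run once;
            // the test pins that behaviour so a refactoring cannot change it silently.
            assertEquals(83.25, calc.totalFor(10, 9.25), 0.001);
        }
    }

    // Stand-in for the legacy class under test; in reality it already exists.
    class PriceCalculator {
        double totalFor(int items, double unitPrice) {
            double total = items * unitPrice;
            return items >= 10 ? total * 0.9 : total;  // the undocumented 10% rule
        }
    }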
One great part of the book is the discussion about finding (or creating) seams in your codebase -- seams are natural 'fault lines', if you like, where you can break into the existing system to start testing it, and pulling it towards a better design. Hard to explain but well worth a read.
There's a brief paper where Feathers fleshes out some of the concepts from the book, but it really is well worth hunting down the whole thing. It's one of my favourites.
Just an additional refactoring that is more important than you think: Name things correctly!
This goes for any variable name and method name. If the name does not accurately reflect what the thing is used for, then rename it to something more accurate. This might require several iterations. If you cannot find a short, and entirely accurate name, then that item does too much and you have an excellent candidate for a code snippet that needs to be split. The names also clearly indicate where the cuts are to be made.
Also, document your stuff. Whenever the answer to WHY? is not clearly conveyed by the answer to HOW? (being the code) you will need to add some documentation. Capturing design decisions is probably the most important task as it is very hard to do in code.
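As a small, invented example of putting the WHY next to the HOW: the retry count is plain from the code, but the reason for it is not, so the reason goes in the comment.

    public class Exporter {
        /**
         * Runs the export, retrying up to three times before giving up.
         *
         * WHY: the downstream service drops connections intermittently under
         * load, and the business preferred a delayed export over a failed one.
         * That trade-off is a design decision the code alone cannot convey.
         */
        public boolean export(Runnable job) {
            for (int attempt = 1; attempt <= 3; attempt++) {
                try {
                    job.run();
                    return true;
                } catch (RuntimeException e) {
                    // swallow and retry; see the WHY above
                }
            }
            return false;
        }
    }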
You could always start from "scratch". That doesn't mean scrap it and start from nothing, but try to rethink high-level things from the beginning, since you seem to have learned a lot since the last time you worked on it.
Start from a higher level, and as you build the scaffolding of your new and improved structure, take all the code you can reuse, which will probably be more than you think if you're willing to read through it and make some small changes.
When you're making the changes, be sure to be strict with yourself about following all the good practices you now know, because you will really thank yourself later.
It can be surprisingly refreshing to properly re-make program to do exactly what it did before, only more "cleanly". ;)
As others have mentioned as well, unit-tests are your best friend! They help you ensure that your refactoring works, and if you're starting from "scratch", it's the perfect time to write them.
You're in a much better position than many people facing this problem in that you understand what the code is supposed to do.
Taking variables out of a shared scope, as you're doing, is a great start, in that you're partitioning responsibilities. Ultimately you want each class to express a single responsibility. A few other things you might look at:
Easy targets for refactoring are code that's duplicated in lots of places and long methods.
If you're managing application state through statically initialized singletons or worse, a global state that everything is talking to, consider moving it to a managed initialization system (i.e. a dependency injection framework like spring or guice) or at least make sure that the initialization isn't entangled with the rest of the code.
Centralize and standardize how you're accessing outside resources, especially if you've got things like file locations or urls hardcoded.
Buy an IDE that has good refactoring support. I think IntelliJ is the best, but Eclipse has it now, too.
The unit test idea is key as well. You will want to have a suite of large, overall transactions that will give you the overall behavior of the code.
Once you have those, start creating unit tests for classes and smaller packages. Write the tests to demonstrate proper behavior, make your changes, and re-run the tests to demonstrate that you haven't broken everything.
Track code coverage as you go. You'll want to work it up to 70% or better. For the classes you change, you'll want those to be 70% or better before you make your changes.
Build up that safety net over time and you'll be able to refactor with some confidence.
very slowly :D
No seriously... take it one step at a time. For instance, refactor something only if it affects or helps you write the current bug/feature that you are working on right now and do no more than that. And before you refactor make darn sure that you have some kind of automated test in place that gets run on each build that will actually test what you are writing/refactoring. Even if you don't have unit tests, it is never too late to start adding them for all new and modified code that is being written. Over time, your code base will get better in small increments daily or weekly instead of worse - all without you making monumental heaps of changes.
In my personal opinion and experience, it's not worth it to refactor a (legacy) codebase en masse just for the sake of refactoring. In those cases, it's best to start from scratch and do it right all over again (and you are very rarely afforded the opportunity to do such a thing). Hence, refactoring incrementally is the way to go.
For Java code, my favorite first step is to run Findbugs and then remove all the dead stores, un-used fields, unreachable catch blocks, unused private methods and likely bugs.
Next I run CPD to look for evidence of cut-copy-paste code.
It isn't unusual to be able to reduce the code base by 5% by doing this. It also saves you from refactoring code that is never used.
I think you should use Eclipse as an IDE because it has many plugins and is free of cost. You should follow the MVC pattern, and you should definitely write test cases using JUnit. Eclipse has a plugin for JUnit and provides code refactoring facilities too, which will reduce some of your work. And always remember that just writing code is not the important thing; the main thing is to write clean code. Comment your code everywhere, so that anyone else who reads it, not just you, feels like they are reading an essay.
Refactor the low-hanging fruit. Nibble away at the easy bits, and as you do that, the harder bits will begin to be a little easier. When there aren't any bits left to refactor, you're done.
The refactorings you'll probably find most useful are Rename Method (and even more trivial Renamings like Field, Variable, and Parameter), Extract Method, and Extract Class. For each refactoring you perform, write the necessary unit tests to make the refactoring safe, and run the entire suite of unit tests after each refactoring. It's tempting - and, let's be honest, pretty safe - to rely on the automated refactorings of your IDE, without the tests - but it's good practice and will be good to have the tests into the future as you add functionality to your project.
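Here is a minimal before/after sketch of Extract Method (the invoice example is invented): the inline total calculation gets its own name, which also gives a unit test something small to grab onto.

    public class InvoicePrinter {
        // Before: print() summed the lines and applied tax inline, so the only
        // way to test the arithmetic was through the formatted string.

        // After Extract Method: the calculation is separately nameable and testable.
        static double total(double[] lineAmounts, double taxRate) {
            double sum = 0;
            for (double amount : lineAmounts) {
                sum += amount;
            }
            return sum * (1 + taxRate);
        }

        static String print(double[] lineAmounts, double taxRate) {
            return "Total due: " + total(lineAmounts, taxRate);
        }
    }

A one-line assertion against total() is then enough to keep each further rename or extraction honest.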
You might want to look at Martin Fowler's book Refactoring. This is the book that popularized the term and technique (my thought when taking his course: "I've been doing a lot of this all along, I didn't know it had a name"). A quote from the link:
Refactoring is a controlled technique for improving the design of an existing code base. Its essence is applying a series of small behavior-preserving transformations, each of which is "too small to be worth doing". However, the cumulative effect of each of these transformations is quite significant. By doing them in small steps you reduce the risk of introducing errors. You also avoid having the system broken while you are carrying out the restructuring - which allows you to gradually refactor a system over an extended period of time.
As others have pointed out, unit tests will allow you to refactor with confidence. And start by reducing code duplication. The book will give you lots of other insights.
Here is a catalog of refactorings.
The correct definition of messy code is code that is hard to maintain and change.
For a more mathematical definition, you can check your code with code metrics tools.
This way, you keep the code that is already good enough and find the bad code very quickly.
In my experience this is a very powerful way to improve the quality of your code (especially if your tool can show you the results on each build, or in real time).
Throw it away, build it new.

Being pressured to GOTO the dark-side

We have a situation at work where developers working on a legacy (core) system are being pressured into using GOTO statements when adding new features into existing code that is already infected with spaghetti code.
Now, I understand there may be arguments for using 'just one little GOTO' instead of spending the time on refactoring to a more maintainable solution. The issue is, this isolated 'just one little GOTO' isn't so isolated. At least once every week or so there is a new 'one little GOTO' to add. This codebase is already a horror to work with due to code dating back to or before 1984 being riddled with GOTOs that would make many Pastafarians believe it was inspired by the Flying Spaghetti Monster itself.
Unfortunately the language this is written in doesn't have any ready made refactoring tools, so it makes it harder to push the 'Refactor to increase productivity later' because short-term wins are the only wins paid attention to here...
Has anyone else experienced this issue whereby everybody agrees that we cannot keep adding new GOTOs to jump 2000 lines to a random section, but continually have Analysts insist on doing it just this one time and having management approve it?
tldr;
How can one go about addressing the issue of developers being pressured (forced) to continually add GOTO statements (by add, I mean add to jump to random sections many lines away) because it 'gets that feature in quicker'?
I'm beginning to fear we may lose valuable developers to the raptors over this...
Clarification:
Goto here
alsoThere: No, I'm talking about the kind of goto that jumps 1000 lines out of one subroutine into another one mid way into a while loop. Goto somewhereClose
there: I wasn't even talking about the kind of gotos you can reasonably read over and determine what a program was doing. Goto alsoThere
somewhereClose: This is the sort of code that makes meatballs midpoint: If first time here Goto nextpoint detail:(each one almost completely different) Goto pointlessReturn
here: In this question, I was not talking about the occasionally okay use of a goto. Goto there
tacoBell: and it has just gone back to the drawing board. Goto Jail
elsewhere: When it takes Analysts weeks to decipher what a program is doing each time it is touched, something is deeply wrong with your codebase. In fact, I'm actually up to my hell:if not up-to-date goto 4 rendition of the spec goto detail pointlessReturn: goto tacoBell
Jail: Actually, just a small update with a small victory. I spent 4 hours refactoring a portion of this particular program a single label at a time, saving each iteration in svn as I went. Each step (about 20 of them) was smallish, logical and easy enough to goto bypass nextpoint: spontaneously jump out of your meal and onto you screen through some weird sort of spaghetti-meatball magnetism. Goto elseWhere
bypass: reasonably verify that it should not introduce any logic changes. Using this new more readable version, I've sat down with the analyst and completed almost all of this change now. Goto end
4: first *if first time here goto hell, no second if first time here goto hell, no third if first time here goto hell fourth now up-to-date goto hell
end:
How many bugs have been introduced because of incorrectly written GOTOs? How much money did they cost the company? Turn the issue into something concrete, rather than "this feels bad". Once you can get it recognized as a problem by the people in charge, turn it into a policy like, "no new GOTOs for anything except simplifying the exit logic for a function", or "no new GOTOs for any functions that don't have 100% unit test coverage". Over time, tighten the policies.
GOTOs don't make good code spaghetti, there are a multitude of other factors involved. This linux mailing list discussion can help put a few things into perspective (comments from Linus Torvalds about the bigger picture of using gotos).
Trying to institute a "no goto" policy just for the sake of not having gotos will not achieve anything in the long run, and will not make your code more maintainable. The changes will need to be more subtle and focus on increasing the overall quality of the code; think along the lines of using best practices for the platform/language, unit test coverage, static analysis, etc.
The reality of development is that despite all the flowery words about doing it the right way, most clients are more interested in doing it the fast way. The concept of a code base rapidly moving towards the point of imploding and the resulting fallout on their business is something that they cannot comprehend because that would mean having to think beyond today.
What you have is just one example. How you stand on this will dictate how you do development in the future. I think you have 4 options:
Accept the request and accept that you will always be doing this.
Accept the request, and immediately start looking for a new job.
Refuse to do it, and be prepared to fight to fix the mess.
Resign.
Which option you choose is going to depend on how much you value your job and your self esteem.
Maybe you can use the boy scout pattern: leave the place a little cleaner than you found it. So for every feature request: don't introduce new GOTOs, but remove one.
This doesn't take much time away from improvements and leaves more time to find newly introduced bugs. It may not cost much additional time at all if you remove a GOTO from the part of the code you already had to spend time understanding in order to bring the new feature in.
Argue, that a refactoring of 2 hours will save 20 times 15 minutes in the future, and allow faster and deeper improvements.
Back to principles:
Is it readable?
Does it work?
Is it maintainable?
This is the classic "management" vs. "techies" conflict.
In spite of being on the "techie" team, over the years I have settled firmly the "management" side of this debate.
There are a number of reasons for this:
It's quite possible to have well-written, easy-to-read, properly structured programs with gotos! Ask any assembler programmer; a conditional branch and a primitive do loop are all they have to work with. Just because the "style" is not current doesn't mean it's not well written.
If it is a mess, then it's going to be a real pain to extract the business rules you will need if you are going for a complete rewrite; or, if you are doing a technical refactoring of the program, you will never be sure the behaviour of the refactored program is "correct", i.e. that it does exactly what the old program does.
Return on investment -- sticking to the original programming style and making minimal changes will take minimal effort and quickly satisfy the customer's request. Spending a lot of time and effort refactoring will be more expensive and take longer.
Risk -- rewrites and refactoring are hard to get right; extensive testing of the refactored code is required, and things that look like "bugs" may have been "features" as far as the customer is concerned. A particular problem with "improving" legacy code is that business users may have well-established workarounds that depend on a bug being there, or may exploit the existence of a bug to change business procedures without informing the IT department.
So all in all, management is faced with a decision: "add one little goto", test the change, and get it into production in double-quick time with little risk -- or go in for a major programming effort and have the business scream at them when a new set of bugs crops up.
And if you do decide to refactor, in five years' time some snotty-nosed college graduate will be moaning that your refactored program is not buzzword-compliant any more and demanding to be allowed to spend weeks rewriting it.
If it ain't broke, don't fix it!
PS: Even Joel thinks this is a bad idea: things you should never do
Update:
OK, if you want to refactor and improve the code, you need to go about it properly.
The basic problem is that what you are saying to the client is "I want to spend n weeks working on this program and, if everything goes well, it will do exactly what it does now." -- which is a hard sell, to say the least.
You need to gather long-term data on the number of crashes and outages, the time spent analysing and programming seemingly small changes, the number of change requests that were not done because they were too hard, and the business opportunities lost because the system could not change fast enough. Also gather some academic data on the cost of maintaining well-structured programs vs. letting them sink.
Unless you have a watertight case to present to the bean counters, you will not get the budget. You really have to sell this to business management, not your immediate bosses.
I recently had to work on some code that wasn't legacy per se, but the former developers' habits certainly were and thus GOTO's were everywhere. I don't like GOTO's; they make a hideous mess of things and make debugging a nightmare. Worse yet, replacing them with normal code is not always straightforward.
If you can't unwind your GOTOs, I certainly recommend that you no longer use them.
Unfortunately the language this is written in doesn't have any ready made refactoring tools
Don't you have editors with macro capabilities? Don't you have shell scripts? I've done tons of refactoring over the years, very little of it with refactoring browsers.
The underlying problem here seems to be that you have 'analysts' who describe, presumably necessary, functional changes in terms of adding a goto to some code. And then you have 'programmers' whose job appears to be limited to typing in that change and then complaining about it.
To make anything different, that dysfunctional allocation of responsibilities between those people needs to change. There are a lot of different ways to split up the work. The classic, conventional one (quite likely the official, but ignored, policy at your workplace) is to have analysts write an implementation-independent specification document and have programmers implement it as maintainably as they can.
The problem with that theoretical approach is that it is unworkably unrealistic in many common situations. In particular, doing it 'properly' requires employees seen as junior to win an argument with colleagues who are more senior, better connected socially in the office, and more business-savvy. If the argument 'goto is not implementation-independent, so as an analyst you simply can't say that word' doesn't fly at your workplace, then presumably this is the case.
Far better in many circumstances are alternatives like:
Analysts write client-facing code and developers write infrastructure.
Analysts write executable specifications which are used as reference implementations for unit tests.
Developers create a domain-specific language in which analysts program.
You drop the distinction between one and the other, having only a hybrid role.
If a change to the program requires "just one little goto" I would say that the code was very maintainable.
This is a common problem when dealing with legacy code. Stick to the style the program was originally written in or "modernize" the code. For me the answer is usually to stick with the original style unless you have a really big change that would justify a complete re-write.
Also be aware that several "modern" language features, like Java's "throw" statement or SQL's "ON ERROR", are really GOTOs in disguise.

How to convince your fellow developer to write short methods?

Long methods are evil on several grounds:
They're hard to understand
They're hard to change
They're hard to reuse
They're hard to test
They have low cohesion
They may have high coupling
They tend to be overly complex
How to convince your fellow developer to write short methods? (weapons are forbidden =)
question from agiledeveloper
Ask them to write unit tests for the methods.
That depends on your definitions of "short" and "long".
When I hear someone say "write short methods", I immediately react badly because I've encountered too much spaghetti written by people who think the ideal method is two lines long: One line to do the tiniest possible unit of work followed by one line to call another method. (You say long methods are evil because "they're hard to understand"? Try walking into a project where every trivial action generates a call stack 50 methods deep and trying to figure out which of those 50 layers is the one you need to change...)
On the other hand, if, by "short", you mean "self-contained and limited to a single conceptual function", then I'm all for it. But remember that this can't be measured simply by lines of code.
And, as tydok pointed out, you catch more flies with honey than vinegar. Try telling them why your way is good instead of why their way is bad. If you can do this without making any overt comparisons or references to them or their practices (unless they specifically ask how your ideas would relate to something they're doing), it'll work even better.
You made a list of drawbacks. Try to make a list of what you'll gain by using short methods. Concrete examples. Then try to convince him again.
I read this quote from somewhere:
Write your code as if the person who has to maintain it is a violent psycho, who knows where you live.
In my experience the best way to convince a peer in these cases is by example. Just find opportunities to show them your code and discuss with them the benefits of short functions vs. long functions. Eventually they'll realize what's better spontaneously, without the need to make them feel "bad" programmers.
Code Reviews!
I suggest you try to get some code reviews going. This way you could have a little workshop on best practices and whatever formatting your company adheres to. This adds the context that short methods are a way to make code more readable and easier to understand, and also compliant with the SRP.
If you've tried to explain good design and people just aren't getting it, or are just refusing to get it, then stop trying. It's not worth the effort. All you'll get is a bad rep for yourself. Some people are just hopeless.
Basically what it comes down to is that some programmers just aren't cut out for development. They can understand code that's already written, but they can't create it on their own.
These folks should be steered toward a support role, but they shouldn't be allowed to work on anything new. Support is a good place to see lots of different code, so maybe after a few years they'll come to see the benefits of good design.
I do like the idea of Code Reviews that someone else suggested. These sloppy programmers should not only have their own code reviewed, they should sit in on reviews of good code as well. That will give them a chance to see what good code is. Possibly they've just never seen good code.
To expand upon rvanider's answer, performing cyclomatic complexity analysis on the code did wonders for getting attention on the large-method issue; getting people to change was still a work in progress when I left (there was too much momentum towards big methods).
The tipping point was when we started linking the cyclomatic complexity to the bug database. A CC of over 20 that wasn't a factory was guaranteed to have several entries in the bug database and oftentimes those bugs had a "bloodline" (fix to Bug A caused Bug B; fix to Bug B caused Bug C; etc). We actually had three CC's over 100 (max of 275) and those methods accounted for 40% of the cases in our bug database -- "you know, maybe that 5000 line function isn't such a good idea..."
It was more evident in the project I led when I started there. The goal was to keep CC as low as possible (97% were under 10) and the end result was a product that I basically stopped supporting because the 20 bugs I had weren't worth fixing.
Bug-free software isn't going to happen because of short methods (and this may be an argument you'll have to address) but the bug fixes are very quick and are often free of side-effects when you are working with short, concise methods.
Though writing unit tests would probably cure them of long methods, your company probably doesn't use unit tests. Rhetoric only goes so far and rarely works on developers who are stuck in their ways; show them numbers about how those methods are creating more work and buggy software.
Finding the right blend between function length and simplicity can be complex. Try to apply a metric such as Cyclomatic Complexity to demonstrate the difficulty in maintaining the code in its present form. Nothing beats a non-personal measurement that is based on testing factors such as branch and decision counts.
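As a rough illustration (the shipping example is invented), cyclomatic complexity is essentially one plus the number of decision points in a method, which is the number most static analysis tools report:

    public class ShippingCost {
        // One loop and two ifs are three decision points, so the cyclomatic
        // complexity of this method is 4 (the straight-through path plus three).
        static double cost(double weightKg, boolean express, int items) {
            double total = 0;
            for (int i = 0; i < items; i++) {   // decision point 1
                total += 2.0;
            }
            if (weightKg > 10) {                // decision point 2
                total += 5.0;
            }
            if (express) {                      // decision point 3
                total *= 2;
            }
            return total;
        }
    }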
Not sure where this great quote comes from (it is usually attributed to Brian Kernighan), but:
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it"
Force him to read Code Complete by Steve McConnell. Say that every good developer has to read this.
Get him drunk? :-)
The serious point to this answer is the question, "why do I consistently write short functions, and hate myself when I don't?"
The reason is that I have difficulty understanding complex code, be that long functions, things that maintain and manipulate a lot of state, or that sort of thing. I noticed many years ago that there are a fair number of people out there that are significantly better at dealing with this sort of complexity than I am. Ironically enough, it's probably because of that that I tend to be a better programmer than many of them: my own limitations force me to confront and clean up that sort of code.
I'm sorry I can't really provide a real answer here, but perhaps this can provide some insight to help lead us to an answer.
Force them to read the book "Clean Code", there are many others but this one is new, good and an easy read.
Asking them to write Unit tests for the complex code is a good avenue to take. This person needs to see for himself what that debt that complexity brings when performing maintenance or analysis.
The question I always ask my team is: "It's 11 pm and you have to read this code - can you? Do you understand it under pressure? Can you, over the phone, with no remote login, lead someone to the section where they can fix an error?" If the answer is no, the follow-up is "Can you isolate some of the complexity?"
If you get an argument in return, it's a lost cause. Throw something then.
I would give them 100 lines of code all in one method, and then another 100 lines of code divided up between several methods, and ask them to write down an explanation of what each does.
Time how long it takes to write both explanations and then show them the result.
...Make sure to pick code that will take twice or three times as long to understand if it were all in one method - Main() -
Nothing is better than learning by example.
"Short" and "long" are terms that can be interpreted differently. For one person, short is a two-line method, while someone else will think that any method with no more than 100 lines of code is pretty short.
I think it would be better to state that a single method should not do more than one thing at a time, meaning it should have only one responsibility.
Maybe you could let your fellow developers read something about how to practice the SOLID principles.
I'd normally show them older projects which have well-written methods. I would then step through these methods while explaining the reasons why we developed them that way.
Hopefully, when looking at the bigger picture, they will understand the reasons behind this.
P.S. This exercise could also be used as a mini knowledge transfer on older projects.
Show him how much easier it is to test short methods. Prove that writing short methods will make it easier and faster for him to write the tests for his methods (he is testing these methods, right?)
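A sketch of that argument in code (the discount rule is invented): a short, single-purpose method needs no fixtures or mocks, so each test is one call and one assertion.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class DiscountTest {
        @Test
        public void tenPercentOffAtOrAboveOneHundred() {
            assertEquals(90.0, Discount.apply(100.0), 0.001);
        }

        @Test
        public void noDiscountBelowOneHundred() {
            assertEquals(50.0, Discount.apply(50.0), 0.001);
        }
    }

    // The method under test: small enough that the two tests above cover it completely.
    class Discount {
        static double apply(double amount) {
            return amount >= 100.0 ? amount * 0.9 : amount;
        }
    }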
Bring it up when you are reviewing his code. "This method is rather long, complicated, and seems to be doing four distinct things. Extract method here, here, and here."
Long methods usually mean that the object model is flawed, i.e. one class has too many responsibilities. Chances are that you don't want just more functions, each one shorter, in the same class, but those responsibilies properly assigned to different classes.
No use teaching a pig to sing. It wastes your time and annoys the pig.
Just outshine someone.
When it comes time to fix a bug in the 5000 line routine, then you'll have a ten-line routine and a 4990-line routine. Do this slowly, and nobody notices a sudden change except that things start working better and slowly the big ball of mud evaporates.
You might want to tell them that they might have a really good memory, but you don't. Some people are able to handle much longer methods than others. If you both have to be able to maintain the code, that can only be done if the methods are smaller.
Only do this if they don't have a superiority complex.
[edit]
why is this collecting negative scores?
You could start refactoring every single method they wrote into multiple methods, even while they're currently working on them. Assign extra time to your schedule for "refactoring others' methods to make the code maintainable". Do it the way you think it should be done, and - here comes the educational part - when they complain, tell them you wouldn't have to refactor the methods if they had made them right the first time. This way, your boss learns that you have to correct others' laziness, and your co-workers learn that they should do things differently.
That's at least some theory.

How do you stop interim solutions from lasting forever?

Say there are two possible solutions to a problem: the first is quick but hacky; the second is preferable but would take longer to implement. You need to solve the problem fast, so you decide to get the hack in place as quickly as you can, planning to start work on the better solution afterwards. The trouble is, as soon as the problem is alleviated, it plummets down the to-do list. You're still planning to put in the better solution at some point, but it's hard to justify implementing it right now. Suddenly you find you've spent five years using the less-than-perfect solution, cursing it the while.
Does this sound familiar? I know it's happened more than once where I work. One colleague describes deliberately making a bad GUI so that it wouldn't be accidentally adopted long-term. Do you have a better strategy?
Write a test case which the hack fails.
If you can't write a test which the hack fails, then either there's nothing wrong with the hack after all, or else your test framework is inadequate. If the former, run away quick before you waste your life on needless optimisation. If the latter, seek another approach (either to flagging hacks, or to testing...)
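One hedged way to put this into practice with JUnit 4 (LegacyDates and its limitation are invented): write the test the hack cannot pass, keep it @Ignore'd with a pointer to the tracker while the hack is live, and un-ignore it when the real fix lands.

    import static org.junit.Assert.assertEquals;

    import org.junit.Ignore;
    import org.junit.Test;

    public class DateParsingHackTest {
        // Documents the gap left by the interim fix.
        @Ignore("Known limitation of the interim fix; see the tracking entry")
        @Test
        public void normalizesUsStyleDates() {
            assertEquals("2010-06-25", LegacyDates.normalize("06/25/2010"));
        }
    }

    // Stand-in for the hacked code: it only copes with dates already in ISO form.
    class LegacyDates {
        static String normalize(String raw) {
            return raw;  // the quick fix passes ISO dates through and mangles the rest
        }
    }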
Strategy 1 (almost never selected): Don't implement the kluge. Don't even let people know it's a possibility. Just do it the right way the first time. Like I said, this one is almost never selected, due to time constraints.
Strategy 2 (dishonest): Lie and Cheat. Tell management that there are bugs in the hack, and they could cause major problems later on. Unfortunately, most of the time, the managers just say to wait until the bugs become a problem, then fix the bugs.
Strategy 2a: Same as strategy 2, except there really are bugs. Same problem, though.
Strategy 3 (and my personal favorite): Design the solution whenever you can, and do it well enough that an intern or code-monkey could do it. It's easier to justify spending the small amount of code-monkey money than to justify your own salary, so it might just get done.
Strategy 4: Wait for a rewrite. Keep waiting. Sooner or later (probably later), someone is going to have to rewrite the thing. Might as well do it right then.
Here is a great related article on technical debt.
Basically, it is an analogy of debt with all the technical decisions you make. There is good debt and bad debt... and you have to pick the debt that is going to achieve the goals you want with the least long term cost.
The worst kind of debt is small little accumulating shortcuts that are analogous to credit card debt... each one doesn't hurt, but pretty soon you are in the poor house.
This is a major issue when doing deadline-driven work. I find that adding very detailed comments about why this way was chosen, and some hints at how it should be coded, helps. This way people looking at the code see it and keep it fresh.
Another option that works is to add a bug/feature in your tracking framework (you do have one, right?) detailing the rework. That way it is visible and may force the issue at some point.
The only time you can ever justify fixing these things (because they're not really broken, just ugly) is when you have another feature or bug fix that touches the same section of code, and you might as well re-write it.
You have to do the math on what a developer's time costs. If the software requirements are being met, and the only thing wrong is that the code is embarrassing under the hood, it's not really worth fixing.
Whole companies can go out of business because over-zealous engineers insist on a re-architecture every year or so when they get antsy.
If it's bug-free and meets requirements, it's done. Ship it. Move on.
[Edit]
Of course I'm not advocating that everything be hacked in all the time. You have to design and write code carefully in the normal course of the development process. But when you do end up with hacks that just had to be done quickly, you have to do a cost-benefit analysis on whether or not it's worth it to clean up the code. If over the lifetime of the application you will spend more time coding around a messy hack than you would have fixing it, then of course fix it. But if not, it's way too expensive and risky to re-code a working, bug-free application just because looking at the source makes you ill.
YOU DON'T DO INTERIM SOLUTIONS.
Sometimes I think programmers just need to be told this.
Sorry about that, but seriously--a hacky solution is worthless and even on the first iteration can take longer than doing a portion of the solution correctly.
Please stop leaving me your crap code to maintain. Just ALWAYS CODE IT RIGHT. No matter how long it takes and who yells at you.
When you are sitting there twiddling your thumbs after delivering early while everyone else is debugging their stupid hacks, you'll thank me.
Even if you don't think you are a great programmer, always strive to do the best you can, never take shortcuts--it doesn't cost you ANY time to do it right. I can justify this statement if you don't believe me.
Suddenly you find you've spent five years using the less-than-perfect solution, cursing it the while.
If you're cursing it, why is it at the bottom of the TODO list?
If it's not affecting you, why are you cursing it?
If it is affecting you, then it's a problem that needs to be fixed NOW.
I make sure that I'm vocal about the priority of the long-term fix, ESPECIALLY after the short-term fix has gone in. I detail the reasons why it's a hack and not a good long-term solution, and use those to get the stakeholders (managers, clients, etc.) to understand why it needs to be fixed. Depending on the case, I may even inject a bit of worst-case-scenario fear in there: "If this safety line snaps, the whole bridge could collapse!" And I take responsibility for coming up with a long-term solution and make sure that it gets deployed.
It is a hard call. I have done hacks personally because sometimes you HAVE to get that product out the door and into the customers' hands. However, the way I take care of it is to just do it:
Tell the project lead, or your boss, or the customer: "There are some spots that need to be cleaned up and coded better. I need a week to do it, and it is going to cost less to do it now than it will six months from now, when we need to implement an extension onto the subsystem."
Usually problems like this arise from bad communication with management or the customer. If the solution works for the customer then they see no reason to ask for it to be changed. So they need to be told about the tradeoffs you made beforehand so they can plan extra time to fix the problems after you've implemented the quick solution.
How to solve it depends a bit on why it's a bad solution. If your solution is bad because it's hard to change or maintain, then the first time you have to do maintenance and have a bit more time, that is the right time to upgrade to a better solution. In this case it helps if you told the customer or your boss up front that you took a shortcut. That way they know they can't expect a fast solution next time around. Crippling the UI can be a good way to make sure the customer comes back to get stuff fixed.
If the solution is bad because it's risky or unstable then you really need to talk to the person doing the planning and have some time planned in to fix the problem asap.
Good luck. In my experience this is almost impossible to achieve.
Once you go down the slippery slope of implementing a hack because you are under pressure then you might as well get used to living with it for all time. There is almost NEVER enough time to re-work something that already works, no matter how badly it is implemented internally. What makes you think you will magically have more time "at some later date" to fix the hack?
The only exception I can think of to this rule is if the hack completely prevents you from implementing another piece of functionality that is needed by a customer. Then you have no choice but to do the re-work.
I try to build the hacky solution so that it can be migrated to the long-term approach as painlessly as possible. Say you've got a guy who is building a database in SQL Server because that's his strongest DB, but your corporate standard is Oracle. Build the database with as few non-transferable features (like BIT datatypes) as possible. In this example, it's not hard to avoid BIT types, and it makes the later transition an easier process.
Educate whoever is in charge of making the final decision why the hacky way of doing things is bad in the long-run.
Describe the problem in terms they can relate to.
Include a graph of cost, productivity, and revenue curves.
Teach them about technical debt.
Refactor regularly if you're pushed to go ahead with the quick fix anyway.
Never call it "refactoring" or "going back and cleaning up" in front of non-technical people. Instead, call it "adapting" the system to handle "new features".
Basically, people who don't understand software don't get the concept of revisiting things that already work. The way they look at it, developers are like mechanics who want to keep taking apart and reassembling the entire car every time someone wants to add a feature, which sounds insane to them.
It helps to make analogies to everyday things. Explain to them how when you made the interim solution, you made choices that suited building it quickly, as opposed to being stable, maintainable, etc. It's like choosing to build with wood instead of steel because wood is easier to cut, and thus, you could build the interim solution quicker. The wood, however, simply can not support the foundation of a 20-story building.
We use Java and Hudson for continuous integration. 'Interim solutions' must be commented with:
// TODO: Better solution required.
Every time Hudson runs a build it provides a report of each TODO item so that we have an up to date, highly visible record of any outstanding items that need improved.
Great question. This bothers me a lot, too - and most of the time I'm the sole person responsible for prioritizing issues in my own projects (yep, small business).
I found out that the problem that needs to be fixed is usually just a subset of the problem. IOW, the customer that needs an urgent fix does not need the whole problem to be solved, just a part of it - smaller or larger. That sometimes enables me to create a workaround that is not a solution to the complete problem but just to the customer's subset, and that allows me to leave the bigger issue open in the issue tracker.
That may of course not apply at all to your work environment :(
This reminds me of the story of "CTool". In the beginning CTool was put forward by one of our devs, I'll call him Don, as one possible way to solve the problem we were having. Being an earnest hard-working type, Don plugged away and delivered a working prototype. You know where I am going with this. Overnight, CTool became a part of the company work flow with an entire department depending on it. By the second or third day, bitter complaints started streaming in about CTool's shortcomings. Users questioned Don's competence, commitment and IQ. Don's protests that this was never supposed to be a production app fell on deaf ears. This went on for years. Finally, someone got around to re-writing the app, well after Don had departed. By this time, so much loathing had become attached to the name CTool that naming it CTool version 2 was out of the question. There was even a formal funeral for CTool, somewhat reminiscent of the copier (or was it a printer?) execution scene in Office Space.
Some might say Don deserved the slings and arrows for not making it go right to fix CTool. My only point is that saying you should never hack out a solution is probably unjustifiable in the Real World. But if you are the one to do it, tread cautiously.
Get it in writing (an email). So when it becomes a problem later management doesn't "forget" that it was supposed to be temporary.
Make it visible to the users. The more visible it is the less likely people are going to forget to go back and do it the right way when the crisis is over.
Negotiate before the temp solution is in place for a project, resources, and time lines to get the real fix in. Work for the real solution should probably begin as soon as the temp solution is finished.
You file a second, very descriptive bug against your own "fix" and put a to-do comment right in the affected areas that says, "This area needs a lot of work. See defect #555" (use the right number, of course). People who say "don't put in a hack" don't seem to understand the question. Assume you have a system that needs to be up and running now: your non-hack solution is 8 days of work, your hack is 38 minutes of work, and the hack is there to buy you time to do the real work without losing money while you're doing it.
Now you still have to get your customer or management to agree to schedule the N*100 minutes of time required to do the real fix, in addition to the N minutes needed now for the hack. If you must refuse to implement the hack until you get such agreement, then maybe that's what you have to do, but I've worked with some understanding people in that regard.
The real price of introducing a quick-fix is that when someone else needs to introduce a 2nd quick fix, they will introduce it based on your own quick-fix. So, the longer a quick-fix is in place, the more entrenched it will become. Quite often, a hack takes only a little bit longer than doing things right, until you encounter a 2nd hack which builds on the first.
So, obviously it is (or seems to be) sometimes necessary to introduce a quick fix.
One possible solution, assuming your version control supports it, is to introduce a fork from the source whenever you make such a hack. If people are encouraged to avoid coding new features within these special "get it done" forks, then it will eventually be more work to integrate the new features with the fork than it will be to get rid of the hack. More likely, though, the "good" fork will get discarded. And if you are far enough away from release that making such a fork will not be practical (because it is not worth doing the dual integration mentioned above), then you probably shouldn't even be using a hack anyways.
A very idealistic approach.
A more realistic solution is to keep your program segmented into as many orthogonal components as possible and to occasionally do a complete rewrite of some of the components.
A better question is why the hacky solution is bad. If it is bad because it reduces flexibility, ignore it until you need the flexibility. If it is bad because it impacts the program's behavior, ignore it; eventually it will become a bug fix and WILL be addressed. If it is bad because it looks ugly, ignore it, as long as the hack is localized.
Some solutions I've seen in the past:
Mark it with a comment HACK in the code (or similar scheme such as XXX)
Have an automatic report run and emailed weekly to those that care which counts how many times the HACK comments appear
Add a new entry in your bug tracking system with the line number and description of the right solution (so the knowledge gained from the research before writing the hack isn't lost)
write a test case that demonstrates how the hack fails (if possible) and check it into the appropriate test suite (i.e. so that it throws errors that someone will eventually want to cleanup)
once the hack is installed and the pressure is off, immediately start on the right solution
This is an excellent question. One thing I've noticed as I get more experience: hacks buy you a very short amount of time, and often cost you a huge amount more. Closely related is the 'quick fix' that solves what you think is the problem -- only to find, when it blows up, that it wasn't the problem at all.
Setting aside the debate about whether you should do it, let's assume that you have to. The trick now is to do it in a way that minimizes long-range effects, is easily ripped out later, and makes itself enough of a nuisance that you remember to fix it.
The nuisance part is easy: make it issue a warning every time you execute the kludge.
The ripped-out part can be easy too: I like to do this by putting the kludge behind a subroutine name. That makes it easier to update, since you compartmentalize the code. When you get your permanent solution, your subroutine can either implement it or become a no-op. Sometimes a subclass can work nicely for this too. Don't let other people depend on whatever your quick fix is, though. It's difficult to recommend any particular technique without seeing the situation.
Minimizing long range effects should be easy if the rest of the code is nice. Always go through the published interface, and so on.
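A small sketch of the nuisance and ripped-out ideas together (the tax-rate kludge is invented): the quick fix hides behind one method, and that method nags every time it runs.

    import java.util.logging.Logger;

    public class TaxRates {
        private static final Logger LOG = Logger.getLogger(TaxRates.class.getName());

        // Callers only ever see rateFor(); when the real rate service arrives,
        // this body is the only thing that changes.
        static double rateFor(String region) {
            LOG.warning("Tax-rate kludge in use for region " + region
                    + " - replace with the rate service");
            return 0.20;  // TODO: interim flat rate, see the tracking entry
        }
    }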
Try to make the cost of the hack clear to the business folks. Then they can make an informed decision either way.
You could intentionally write it in a way that is overly restrictive and single-purposed, so that it would require a rewrite to be modified.
We had to do this once - make a short-term demo version that we knew we did not want to keep. The customer wanted it on a Wintel box, so we developed the prototype in SGI/XWindows. (We were fluent in both, so it wasn't a problem.)
Confession:
I have used '#define private public' in C++ in order to read data from some other code layer. It went in as a hack but works well and fixing it has never become a priority. It is now 3 years later...
One of the main reasons hacks do not get removed is the risk that one introduces new bugs while fixing the hack. (Especially when dealing with pre-TDD code bases.)
My answer is a bit different from the others. My experience is that the following practices help you stay agile and move from hackey first iteration/alpha solutions to beta/production ready:
Test Driven Development
Small units of refactoring
Continuous Integration
Good Configuration management
Agile database techniques/database refactoring
And it should go without saying that you have to have stakeholder support to do any of these correctly. But with these practices in place, you have the right tools and processes to quickly change a product in major ways with confidence. Sometimes your ability to change is your ability to manage the risk of the changes, and from the development perspective these tools/techniques give you surer footing.