Test Driven Design - where did I go wrong? - language-agnostic

I am playing with a toy project at home to better understand Test Driven Design. At first things seemed to be going well, and I got into the swing of failing test, code, passing test.
I then came to add a test and realised it would be difficult with my current structure, and furthermore that I should split a particular class which had too many responsibilities. Adding even more responsibilities for the next test was clearly wrong. I decided to put this test aside and refactor what I had. This is where things started to go wrong.
It was difficult to refactor without breaking lots of tests at once, and the only option seemed to be to make many changes and hope I ended up back at something where the tests passed again. The tests themselves were valid; I just had to break nearly all of them while refactoring. The refactoring (which I'm still not that happy with) took me five or six hours before I had returned to all tests passing. The tests did help me along the way.
It feels like I got off the TDD track. What do you think I did wrong?
As this is mostly a learning exercise I'm considering rolling back all that refactoring and trying to move forward again in a better fashion.

Perhaps you went too fast when splitting your class. The steps for the Extract Class Refactoring are as follows:
create the new class
have an instance of that class as a private data member
move fields to the new class, one by one
compile and test after each field
move methods to the new class, one by one, testing after each
That way you won't break a large number of tests while refactoring your class, and you can rely on the tests to make sure nothing gets broken along the way as you split the class.
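To make the intermediate state concrete, here is a rough sketch in Java; the Person/TelephoneNumber names are hypothetical, borrowed from Fowler's classic Extract Class example, not from the question:

class TelephoneNumber {
    private String areaCode;
    private String number;

    void setAreaCode(String areaCode) { this.areaCode = areaCode; }
    void setNumber(String number) { this.number = number; }

    // Behaviour moves last, once the fields it needs have arrived.
    String formatted() { return "(" + areaCode + ") " + number; }
}

class Person {
    // Steps 1-2: the new class exists and is held as a private member.
    private final TelephoneNumber officePhone = new TelephoneNumber();

    // Steps 3-5: fields and methods move one by one; the old API
    // delegates, so the existing tests keep passing after each move.
    void setOfficeAreaCode(String areaCode) { officePhone.setAreaCode(areaCode); }
    void setOfficeNumber(String number) { officePhone.setNumber(number); }
    String getOfficePhone() { return officePhone.formatted(); }
}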
Also, make sure you're testing the behavior, not the implementation.

I wanted to comment on the accepted answer, but my current reputation does not allow it. So here it is as a new answer.
TDD says:
Create a test that fails. Code a little. Make the test pass.
It insists on coding in tiny steps (especially when beginning). View TDD as systematic validation of successive refactorings you perform to build your programs. If you take too big a step, your refactoring will get out of control.
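As a minimal sketch of one such tiny step (a hypothetical Calculator, using JUnit 4):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CalculatorTest {
    // Red: written first, fails until Calculator.add exists.
    @Test
    public void addsTwoNumbers() {
        assertEquals(5, new Calculator().add(2, 3));
    }
}

class Calculator {
    // Green: the simplest code that makes the test pass; refactor next.
    int add(int a, int b) {
        return a + b;
    }
}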

Perhaps you were testing at too low a level. It's hard to say without seeing your code, but typically I test a feature from end to end and make sure all the behaviour I expected to happen, happened. Testing every single method in isolation will give you the web of tests you have created.
You can use tools like NCover and DotCover to check that you haven't missed any code paths.

The only thing "wrong" was to add a test afterwards. In "true" TDD you declare all tests before the actual implementation. I say "true" because that's often only theory. But in practice you still have the safety the tests give you.

This is, sadly, something TDD proponents don't talk about enough and that makes people try TDD and then abandon it.
What I do is what I call "High Level Testing", which consists of avoiding unit tests and doing exclusively high level tests (which we could call "integration tests"). It works pretty well and I avoid the (very important) problem you mentioned. I wrote an article about it a while ago:
http://www.hardcoded.net/articles/high-level-testing.htm
Good luck with TDD, don't give up yet.

For TDD, continuous regression of the test cases is also necessary, so continuous integration with coverage tools (as mentioned above) helps: small changes (i.e. refactorings) can be regression-tested easily, and any code paths that were missed can be found quickly.
I also feel that if tests were not written previously, no time should be wasted deliberating over whether to write them. Tests should be written immediately.

Related

Some questions about the process of BDD from a beginner

These days I've read several articles about BDD to find out what it is all about. Now I have a basic understanding, but I'm still not clear about the whole process.
The following is what I think to do in a BDD process:
All the stakeholders (BA, customer, Dev, QA) sit together to discuss the requirements and write the agreed features on story cards. Here I take a "user registration" feature as an example:
As a user,
I want to register on the system,
so that I can use its services
Create several scenarios in Given/When/Then format, and here is one of them:
Scenario: user successfully registers
Given a registration page
And an un-registered user
When the user fills in username "Jeff" and password "123456"
And clicks on "Register"
Then the user can see a "Success" message
And the user "Jeff" is created in the system
Implement this scenario with some BDD testing framework, say cucumber-jvm, like:
import cucumber.api.java.en.Given;

public class Stepdefs {

    @Given("a registration page")
    public void a_registration_page() throws Throwable {
        // ...
    }

    @Given("an un-registered user")
    public void an_unregistered_user() throws Throwable {
        // ...
    }

    // ...
}
for the steps one by one.
But I soon find myself in trouble: there are pages, models, and maybe databases needed for this scenario; it seems like a lot of things to do.
What should I do now? Do I need to discuss this scenario with all the stakeholders? I don't think the BA/customer/QA really care about the implementation, so is it a good idea to discuss it with some other developers instead?
Suppose that after I discuss it with some other developers, we agree to split it into several small parts. Can we write these small parts as scenarios in the Given/When/Then format, as we just did with cucumber-jvm, or can we use JUnit as we normally do in TDD?
1. If we choose cucumber-jvm, it seems a little heavy for such small parts
2. If we choose JUnit, we need to involve more than one testing framework in the project
3. Would it be best if a single testing framework could do both things? (not sure if one exists)
Suppose I choose option 2, using JUnit for the small tasks
The following is what I will do after this decision:
Now we create new small tests to drive the implementation, like creating a user in the database, as we normally do in TDD (red -> green -> refactor). And we don't worry about the cucumber scenario "user successfully registers" (which is failing) for now; we just leave it there. Right?
We develop more small tests with JUnit, making them red -> green -> refactored. (And the incomplete cucumber test keeps failing.)
Once all the small tests pass, we turn to the cucumber scenario "user successfully registers", complete it, and make sure it finally turns green.
Now we develop another scenario; if it's easy, we can implement it just with cucumber, otherwise we will have to split it up and write several JUnit tests.
There are definitely a lot of misunderstandings here, even very basic ones, because I don't find myself gaining much value from BDD except the "discuss with all stakeholders" part.
Where is my mistake? I'd appreciate any suggestions!
Don't start with logging in; start with the thing that's different to the other systems out there. Why is someone logging in? Why do they want to use the service? Hard-code a user, pretend they're logged in, focus on the value.
If you focus on UI details you tie yourself to the UI very strongly, and it makes the UI hard to change. Instead, look at what capabilities the system is delivering. I don't recommend using a login scenario anyway, but if I did, I'd expect it to look more like:
Given Jeff isn't registered with the site
When he registers with the username "Jeff" and password "123456"
Then his account creation should be confirmed
And he should be invited to log in for the first time.
Look up "declarative vs. imperative" here to see more on this.
If your UI is really in flux, try out the scenario manually until the UI has settled down a bit. It will be easier to automate then. As you move into more stable scenarios it will be better to automate first (TDD-style).
What should you do now? Well, most people don't write class-level tests for the UI, so don't worry about it until you start driving out the controller and presenter layers. It's normally easier to have frameworks in the same language, but two different frameworks is fine. Cucumber / RSpec, JBehave / JUnit, SpecFlow / NUnit are pretty typical combinations. Do the smallest amount you need to get that first scenario working. It won't be much, because you can hard-code a lot of it. The second scenario will start introducing more interesting behaviour, and then you'll start to see your class-level tests emerge.
BTW, BDD started at a class level, so you can do the same thing with the classes; think of an example of how you might use it, write "Given, When, Then" in comments if your framework doesn't work that way already, and then fill in the gaps!
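For instance, a plain JUnit test can carry the Given/When/Then structure in comments; every class name below is hypothetical, included only to keep the sketch self-contained:

import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class RegistrationServiceTest {

    @Test
    public void confirmsAccountCreationForANewUser() {
        // Given a user who isn't registered yet
        InMemoryUserStore store = new InMemoryUserStore();
        RegistrationService service = new RegistrationService(store);

        // When he registers
        boolean confirmed = service.register("Jeff", "123456");

        // Then his account creation should be confirmed
        assertTrue(confirmed);
        assertTrue(store.contains("Jeff"));
    }
}

// Minimal hypothetical collaborators so the example compiles.
class InMemoryUserStore {
    private final java.util.Set<String> users = new java.util.HashSet<>();
    void add(String username) { users.add(username); }
    boolean contains(String username) { return users.contains(username); }
}

class RegistrationService {
    private final InMemoryUserStore store;
    RegistrationService(InMemoryUserStore store) { this.store = store; }
    boolean register(String username, String password) {
        store.add(username);
        return true; // a real implementation would validate first
    }
}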
Yes, your Cucumber scenario will be red throughout, until it isn't.
Ideally you'll be making one last unit test and the Cucumber scenario pass at the same time, rather than just writing a bit of extra code. It's very satisfying to see it finally go green.
The original point of BDD was to get rid of the word "test", since it causes people to think of things like TDD as being about testing. TDD's really about clean design; understanding the responsibilities and behaviour of your code, in the same way that scenarios help you understand the capabilities and behaviour of your system. It should be normal to write both system-level scenarios and class-level tests too.
You're already ahead of all the people who forget to discuss the scenarios before they start coding, though! The conversations with stakeholders are the most important part. You might get value out of including a tester in those conversations. Testers are very good at spotting scenarios that other people miss.
It looks like you're pretty much on the right track where the rest of the process is concerned. You might find some of the other BDD answers in my profile helpful for you too. Congrats and good luck!
I think doing registration/sign_in first is a really good thing to do when you are learning the mechanics of doing BDD. Pretty much everyone understands why you would want to sign into a system, and everyone understands that the system has to know who you are before you can do this, so you have to register first.
Doing this simple task allows you to concentrate on a smaller subset of BDD. By narrowing your focus you can improve quality, whilst being aware that there is much more to learn a little later on.
To write your sign in scenarios you need to focus on two things:
writing scenarios
implementing step definitions
These are the basic mechanics of BDD, but they are only a small part of the overall process. Still I think you'd benefit from working on them because at the moment you are not executing the mechanics very well, which is to be expected because you are new to this.
When you write scenarios you should concentrate on 'what' you are doing and 'why' you are doing it. Scenarios have no need to know anything about 'how' you do things. Anything to do with filling in stuff, clicking on stuff etc. is a smell. When your scenarios only deal with the what and why they become much simpler.
Feature: Registration
A pre-requisite for signing in, see sign_in.feature
Scenario: Register
Given I am a new user
When I register
Then I should be registered
Feature: Sign in
Dependent on registration ...
I want to sign in so I can have personalised content and ...
Scenario: Sign in
Given I am registered
When I sign in
Then I should be signed in
You really don't need much more than this to drive the development of a simple sign_in system. Once you have that running you can deal with some sad paths, e.g.
Scenario: Sign in with bad password
Given I am registered
When I sign in with a bad password
Then I should not be signed in
And I should be told ...
If you implement things nicely, this sad path scenario should be trivial to implement, as all the infrastructure is already in place to sign in; all that is different is that you are using a bad password.
You can see an example of this at https://github.com/diabolo/cuke_up. The way to use this example is to follow the commit history, and in particular notice how I am using the extract_method refactor to take all the code out of the step definitions. Each method I extract is a tool to be reused when writing subsequent scenarios. Building an effective set of tools is the key to productivity when implementing scenarios.
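As a rough sketch of that style with cucumber-jvm (the helper names are mine, not taken from the cuke_up repo): each step definition becomes a one-liner delegating to an extracted method, and those methods are the tools the next scenario reuses.

import cucumber.api.java.en.Given;
import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;

public class SignInStepdefs {

    @Given("I am registered")
    public void i_am_registered() throws Throwable {
        register("test_user", "secret");
    }

    @When("I sign in")
    public void i_sign_in() throws Throwable {
        signIn("test_user", "secret");
    }

    @When("I sign in with a bad password")
    public void i_sign_in_with_a_bad_password() throws Throwable {
        signIn("test_user", "wrong_password");
    }

    @Then("I should be signed in")
    public void i_should_be_signed_in() throws Throwable {
        assertSignedIn("test_user");
    }

    // Extracted "tools": the bad-password scenario costs almost nothing,
    // because it reuses the same helpers with different arguments.
    private void register(String username, String password) { /* drive the app */ }
    private void signIn(String username, String password) { /* drive the app */ }
    private void assertSignedIn(String username) { /* query the app */ }
}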
Nowadays sign_up is so simple because we can rely on a third-party library and its unit tests. This means we can get pretty good results without ever having to worry about the transition to our own code and doing bits of TDD. So for now there really is no need to think about TDD.
So long as you are aware that you are only doing a small subset of BDD, I think you can successfully use this approach to provide foundations for all the extra stuff you have to deal with when working with the things that differentiate your system from others.
To summarize, just focus on
writing simple scenarios
making your step definitions elegant
creating tools (extracted methods) that can be used in the next scenario you write
You have plenty of time to learn the other stuff, and it will be much easier if your basic mechanics are better developed.

Avoiding TDD making big refactorings harder [closed]

I'm still relatively a beginner in TDD, and I often end up in the trap where I've designed myself into a corner at some point when trying to add a new piece of functionality.
Mostly it means that the API that grew out of, say, the first 10 requirements doesn't scale when adding the next requirement, and I realize I have to do a big redesign of the existing functionality, including the structure, to add the new stuff in a nice way.
This is fine, except in this case the API would then change, so all the initial tests would then have to change. This is usually a bigger thing than just renaming methods.
I guess my question is twofold: How should I have avoided getting into this position in the first place, and given that I get into it, what are safe patterns of refactoring the tests and allowing new functionality with a new API to grow?
Edit: Lots of great answers; I will experiment with several techniques. I marked as the solution the answer I felt was most helpful.
How should I have avoided getting into this position in the first place
The most general rule is: write tests with such refactorings in mind. In particular:
The tests should use helper methods whenever they construct anything API-specific (e.g. example objects). This way you have only one place to change if the construction changes (e.g. after adding mandatory fields to the constructed object)
Same goes for checking the output of the API
Tests should be constructed as "diff from default", with the default provided by the above. For example, if your test checks the effect of a method on field x, you should only set the x field in the test, with the rest of the fields taken from the default.
In fact, these are the same rules that apply to code in general.
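For instance, the helper-method and "diff from default" rules often take the shape of a test data builder; this is a minimal sketch with a hypothetical Order class:

class Order {
    final String customer;
    final int quantity;
    Order(String customer, int quantity) {
        this.customer = customer;
        this.quantity = quantity;
    }
}

// The single place that knows how to construct an Order for tests.
// If a mandatory field is added, only the defaults here change.
class OrderBuilder {
    private String customer = "default customer";
    private int quantity = 1;

    OrderBuilder withCustomer(String customer) {
        this.customer = customer;
        return this;
    }

    OrderBuilder withQuantity(int quantity) {
        this.quantity = quantity;
        return this;
    }

    Order build() {
        return new Order(customer, quantity);
    }
}

// In a test that checks the effect of quantity, only quantity is set:
// Order order = new OrderBuilder().withQuantity(5).build();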
what are safe patterns of refactoring the tests and allowing new functionality with a new API to grow?
Whenever you find out that an API change makes you change the tests in multiple places, try to figure out how to move the code into a single place. This follows from the above.
Make your tests small. A good test calls maybe 1-3 methods on the subject of the test and does some assertions on the result. These tests will only need to change when one of those methods changes.
Make your test code clean. If you haven't already, read 'Clean Code' by Robert C. Martin. Apply its rules to your production code AND your test code. This tends to reduce the surface area affected by any refactoring.
Do refactor more often. Instead of (possibly unconsciously) waiting until you must do a large refactoring, do small refactorings a lot.
If you are faced with a huge refactoring, break it down in a couple (or if necessary a couple hundred) tiny refactorings.
In this case, I suggest you get the features locked down, and the iterations short.
Because the iterations are short, features will be grouped together into smaller, isolated groups. It lessens the need to think up of some grand design, which might not be adaptive to the needs of the users. The code for this week, will only work with the code for this week. That lessens the chances of new stuff mucking up the old stuff.
Yeah, it's a problem that is hard to avoid with TDD, since the whole idea behind it is to avoid the over-engineering caused by doing big designs upfront. So with TDD, your design is going to change, and often. The more tests you have, the more effort is required for each refactoring, which effectively discourages it, going against the whole idea behind TDD.
Even though your design will change, your basic requirements will be rather stable, so at the "high level", the way your app works shouldn't change too much. That's why I advise to put all your tests at the "high level" (integration testing, kinda). Low level tests are a liability because you have to change them when you refactor. High level testing is a bit more work, but I think it's worth it in the end.
I wrote an article about that a year ago: http://www.hardcoded.net/articles/high-level-testing.htm
The following might help:
Follow good coding standards and practices, including TDD and utilising design patterns to produce well structured designs. The application as a whole should then be easier to extend to include new features.
Good separation of concerns. If your API is well separated from the other functionality (calculations, database access etc) then changing the functionality without changing the API or vice-versa should be easier.
Use BDD to provide automated tests at a higher level (i.e. more like user tests than unit tests). These should help to give you confidence while refactoring, even if all your unit tests are breaking due to the refactoring.
Use a Dependency Injection Container such as Windsor. By abstracting away your class dependencies, changes to those dependencies create much less re-work (particularly if you have many unit tests) than having them coded into your classes.
You shouldn't find yourself "in a corner" with TDD. The eleventh test shouldn't so seriously change the design that the first ten tests need to change. Think hard about why so many tests need to change - look at it in detail, one by one - and see if you can come up with a way to make the change without breaking your existing tests.
If, for instance, you need to add a parameter to a method that they all call:
you could leave the existing fewer-parameter method in place, delegating to the new method with a default value for the new parameter (sketched after this list);
you could have all the tests call a utility method (or perhaps a setup method) that calls the old method, so you need to change the method call in only one place;
you may be able to let your IDE do all the changes with a single command.
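The first option might look something like this (hypothetical names):

import java.util.Locale;

class ReportRenderer {
    // The new method takes the extra parameter.
    String render(String title, Locale locale) {
        return "[" + locale.getLanguage() + "] " + title;
    }

    // The old signature stays in place and delegates with a default,
    // so the existing ten tests don't need to change.
    String render(String title) {
        return render(title, Locale.getDefault());
    }
}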
TDD and Refactoring work symbiotically; each helps the other. Because you emerge from TDD with comprehensive unit tests, refactoring is safe; because you have the organizational, intellectual, and editing tools to refactor freely, you can keep your tests well synchronized with your design. You say you are a beginner at TDD; perhaps you need to be growing your refactoring skills while you learn TDD.

When is it time to refactor code?

On one hand:
1. You never get time to do it.
2. "Context switching" is mentally expensive (difficult to leave what you're doing in the middle of it).
3. It usually isn't an easy task.
4. There's always the fear you'll break something that's now working.
On the other:
1. Using that code is error-prone.
2. Over time you might realize that if you had refactored the code the first time you saw it, that would have saved you time in the long run.
So my question is - Practically - When do you decide it's time to refactor your code?
Thanks.
A couple of observations:
On one hand:
1. You never get time to do it.
If you treat re-factoring as something separate from coding (instead of an intrinsic part of coding decently), and if you can't manage time, then yeah, you'll never have time for it.
"Context switching" is mentally expensive (difficult to leave what you're doing in the middle of it).
See previous point above. Refactoring is an active component of good coding practices. If you separate the two as if they were two different tasks, then 1) your coding practices need improvement/maturing, and 2) you will engage in severe context switching if your code is in a severe need of refactoring (again, code quality.)
It usually isn't an easy task.
Only if the code you produce is not amenable to refactoring. That is, code that is hard to refactor exhibits one or more of the following (the list is not exhaustive):
High cyclomatic complexity,
No single responsibility per class (or procedure),
High coupling and/or low cohesion (aka poor LCOM metrics),
Poor structure,
Not following the SOLID principles.
No adherence to the Law of Demeter when appropriate.
Excessive adherence to the Law of Demeter when inappropriate.
Programming against implementations instead of interfaces.
There's always the fear you'll break something that's now working.
Testing? Verification? Analysis? Any of these before being checked into source control (and certainly before being delivered to the users)?
On the other:
1. Using that code is error-prone.
Only if it has never been tested/verified and/or if there is no clear understanding of the conditions and usage patterns under which the potentially error-prone code operates acceptably.
Over time you might realize that if you had refactored the code the first time you saw it, that would have saved you time in the long run.
That realization should not occur over time. Good engineering and work ethic call for that realization to occur while the artifact (be it hardware or software) is in the making.
So my question is - Practically - When do you decide it's time to refactor your code?
Practically: when I'm coding, I detect an area that needs improvement (or something that needs correction after a change in requirements or expectations), and I get an opportunity to improve it without sacrificing a deadline. If I cannot re-factor at that moment, I simply document the perceived defect and create a workable, realistic plan to revisit the artifact for refactoring.
In real life, there will be moments when we'll code some ugly kludge just to get things running, or because we are drained and tired, or whatever. It's reality. Our job is to make sure that those incidents do not pile up and remain unattended. And the key to this is to refactor as you code, keeping the code simple and with a good, simple and elegant structure. And by "elegant" I don't mean "smart-ass" or esoteric, but code that displays what are typically considered readable, simple, composable attributes (and mathematical properties when they apply practically).
Good code lends itself to refactoring; it displays good metrics; its structure resembles both computer science function composition and mathematical function composition; it has a clear responsibility; it makes its invariants, pre and post-conditions evident; and so on and so on.
Hope it helps.
One of the most common mistakes I see is people associating the word "refactor" with "big change".
Refactoring code does not always have to be big. Even small changes, such as changing a bool to a proper enum or renaming a method to reflect what it actually does rather than what it was meant to do, are refactorings. With the exception of the end of a milestone, I try to make at least a very small refactoring every single time I check in. And you'd be amazed at how quickly this makes a visible difference in the code.
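For example, a check-in-sized refactoring of the bool-to-enum kind might look like this (hypothetical names):

// Before: user.setState(true) -- true meaning what, exactly?
// void setState(boolean active) { ... }

// After: every call site now reads user.setState(AccountState.ACTIVE).
enum AccountState { ACTIVE, SUSPENDED }

class User {
    private AccountState state = AccountState.ACTIVE;

    void setState(AccountState state) {
        this.state = state;
    }

    boolean isActive() {
        return state == AccountState.ACTIVE;
    }
}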
Bigger changes do take bigger planning though. I try and schedule about 1/2 a day every two weeks during a normal development cycle to tackle a bigger refactoring change. This is enough time to make a substantial improvement to the code base. If the refactoring fails 1/2 a day is not that much of a loss. And it's rarely a total loss because even the failed refactoring will teach you something about your code.
Whenever it smells, I refactor. I may not make it perfect now, but I can at least make a small step towards a better state. And those small changes do add up over time...
If I am in the middle of something when I notice the smell, and fixing it isn't trivial (or I am just before release), I may make a (mental or paper) note to return to it once I am finished with the primary task.
Practice makes one better :-) But if I don't see a solution to a problem, I put it aside and let it brew for a while, discuss it with coworkers, or even post it on SO ;-)
If I don't have unit tests and the fix isn't trivial, I start with the tests. If the tests aren't trivial either, I apply point 2.
I start to refactor as soon as I find I am repeating myself. The DRY principle FTW.
Also, if methods/functions get too long, to the point where they look unwieldy or their purpose is obscured by their length, I break them into private subfunctions that explain what is really going on.
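A small sketch of what that looks like (hypothetical names); the long method shrinks to a summary of what is really going on:

class Invoice { /* line items, customer, etc. */ }

class InvoiceProcessor {
    // Before, this was one long method mixing validation, arithmetic
    // and formatting; now each concern has a named private subfunction.
    String process(Invoice invoice) {
        validate(invoice);
        double total = calculateTotal(invoice);
        return formatReceipt(invoice, total);
    }

    private void validate(Invoice invoice) { /* ... */ }

    private double calculateTotal(Invoice invoice) {
        /* ... */
        return 0.0;
    }

    private String formatReceipt(Invoice invoice, double total) {
        /* ... */
        return "total: " + total;
    }
}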
Lastly, if everything's up and running, and the code is dog-slow, I start to look at refactoring for the sake of performance.
When implementing a new feature I often notice that the task would be much simpler if the code I'm working on was structured in a different way. In this case I usually step back, try to do the refactoring first, and only after this is done I continue implementing the new feature.
I also have a habit to track all potential improvements that come to my mind either in notes or the bug tracker. The ideas bake there for some time, some of them don't feel so compelling anymore, and the reasonable ones are implemented during a day which I dedicate to smaller tasks.
Refactor code when it needs to be refactored. Some symptoms I look for:
duplicate code in similar objects.
duplicate code within the methods of one object.
anytime the requirements have changed twice or more.
anytime somebody says "we will clean that up later".
any time I read through code and shake my head thinking "what goofball did this" (even when the goofball in question is me)
In general, less design and/or less clear requirements mean more opportunities for refactoring.
This might sound like a joke, but really, I only refactor when things "get messy": when a simple task starts taking more time than usual, or when I have to twist my mind around to remember which function is doing what. Also, if the code starts running slow and it's not because I'm running in a development environment (a lot of debug output and such), and I can't optimise it, I refactor the code. As you said, it's worth it in the long run.
Still, I always make sure I have enough time to think things through before I start, so I don't get into this situation.
Cheers!
I usually refactor when one of the following is true:
I have nothing better to do and am waiting for the next project to arrive in my inbox
The additions/changes I'm making to the code cannot work unless, or would be better if, I refactor
I am aesthetically displeased with the way the code is laid out
Martin Fowler, in his book of the same name, suggests you do it the third time you're in a block of code to make another change. First time in the block, you happen to notice you should refactor, but don't have time. Second time back... same thing. Third time back: now refactor.
Also, I read that the developers of a current release of Smalltalk (squeak.org, I think) go through a couple of weeks of intense coding... then they step back and look at what can be refactored.
Personally I have to resist the impulse to refactor as I code or I get 'paralyzed'.

How do you refactor a large messy codebase?

I have a big mess of code. Admittedly, I wrote it myself - a year ago. It's not well commented but it's not very complicated either, so I can understand it -- just not well enough to know where to start as far as refactoring it.
I violated every rule that I have read about over the past year. There are classes with multiple responsibilities, there are indirect accesses (I forget the term - something like foo.bar.doSomething()), and like I said it is not well commented. On top of that, it's the beginnings of a game, so the graphics are coupled with the data; or rather, in the places where I tried to decouple graphics and data, I made the data public so that the graphics could access the data they need...
It's a huge mess! Where do I start? How would you start on something like this?
My current approach is to take variables and switch them to private and then refactor the pieces that break, but that doesn't seem to be enough. Please suggest other strategies for wading through this mess and turning it into something clean so that I can continue where I left off!
Update two days later: I have been drawing out UML-like diagrams of my classes, and catching some of the "Low Hanging Fruit" along the way. I've even found some bits of code that were the beginnings of new features, but as I'm trying to slim everything down, I've been able to delete those bits and make the project feel cleaner. I'm probably going to refactor as much as possible before rigging my test cases (but only the things that are 100% certain not to impact the functionality, of course!), so that I won't have to refactor test cases as I change functionality. (do you think I'm doing it right or would it, in your opinion, be easier for me to suck it up and write the tests first?)
Please vote for the best answer so that I can mark it fairly! Feel free to add your own answer to the bunch as well, there's still room for you! I'll give it another day or so and then probably mark the highest-voted answer as accepted.
Thanks to everyone who has responded so far!
June 25, 2010: I discovered a blog post which directly answers this question from someone who seems to have a pretty good grasp of programming: (or maybe not, if you read his article :) )
To that end, I do four things when I need to refactor code:
Determine what the purpose of the code was
Draw UML and action diagrams of the classes involved
Shop around for the right design patterns
Determine clearer names for the current classes and methods
Pick yourself up a copy of Martin Fowler's Refactoring. It has some good advice on ways to break down your refactoring problem. About 75% of the book is little cookbook-style refactoring steps you can do. It also advocates automated unit tests that you can run after each step to prove your code still works.
As for a place to start, I would sit down and draw out a high-level architecture of your program. You don't have to get fancy with detailed UML models, but some basic UML is not a bad idea. You need a big picture idea of how the major pieces fit together so you can visually see where your decoupling is going to happen. Just a page or two of some basic block diagrams will help with the overwhelming feeling you have right now.
Without some sort of high level spec or design, you just risk getting lost again and ending up with another unmaintainable mess.
If you need to start from scratch, remember that you never truly start from scratch. You have some code and the knowledge you gained from your first time. But sometimes it does help to start with a blank project and pull things in as you go, rather than put out fires in a messy code base. Just remember not to completely throw out the old, use it for its good parts and pull them in as you go.
What was most important for me on different occasions were unit tests: I took a few days to write tests for the old code and then I was free to refactor with confidence. How exactly is a different question, but having the tests made it possible for me to make real, substantial changes to the code.
I'll second everyone's recommendations for Fowler's Refactoring, but in your specific case you may want to look at Michael Feathers' Working Effectively with Legacy Code, which is really perfect for your situation.
Feathers talks about Characterization Tests, which are unit tests written not to assert known behaviour of the system but to explore and pin down the existing (unclear) behaviour. In the case where you've written your own legacy code and are fixing it yourself, this may not be so important; but if your design is sloppy then it's quite possible there are parts of the code that work by 'magic' and whose behaviour isn't clear, even to you. In that case, characterization tests will help.
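A characterization test might look like this (an entirely hypothetical example): the asserted value is whatever the code currently produces, recorded so that refactoring can't change it silently.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Stand-in for a real legacy class whose rules feel like 'magic'.
class LegacyPricing {
    double discountedTotal(double unitPrice, int quantity) {
        double total = unitPrice * quantity;
        return total > 100 ? total * 0.085 : total; // why 0.085? nobody remembers
    }
}

public class LegacyPricingCharacterizationTest {
    // We don't claim 42.5 is *correct*; we ran the code, observed 42.5,
    // and recorded it so the behaviour is locked in while refactoring.
    @Test
    public void characterizesDiscountForLargeOrders() {
        assertEquals(42.5, new LegacyPricing().discountedTotal(50.0, 10), 0.001);
    }
}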
One great part of the book is the discussion about finding (or creating) seams in your codebase -- seams are natural 'fault lines', if you like, where you can break into the existing system to start testing it, and pulling it towards a better design. Hard to explain but well worth a read.
There's a brief paper where Feathers fleshes out some of the concepts from the book, but it really is well worth hunting down the whole thing. It's one of my favourites.
Just an additional refactoring that is more important than you think: Name things correctly!
This goes for any variable name and method name. If the name does not accurately reflect what the thing is used for, then rename it to something more accurate. This might require several iterations. If you cannot find a short and entirely accurate name, then that item does too much, and you have an excellent candidate for a piece of code that needs to be split. The names also clearly indicate where the cuts are to be made.
Also, document your stuff. Whenever the answer to WHY? is not clearly conveyed by the answer to HOW? (i.e. the code), you will need to add some documentation. Capturing design decisions is probably the most important task, as it is very hard to do in code.
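A tiny before/after sketch combining both points (hypothetical names):

// Before: the name says nothing, so every reader has to decode it.
// double calc(double a, double b) { return a + a * b; }

class PriceCalculator {
    // After renaming: WHAT it does is in the names; the WHY -- a design
    // decision the code cannot express -- is captured in the comment.
    /** Invoices must show the gross amount, so VAT is applied here
        rather than at checkout. */
    double grossPrice(double netPrice, double vatRate) {
        return netPrice + netPrice * vatRate;
    }
}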
You could always start from "scratch". That doesn't mean scrap it and start from nothing, but try to rethink high-level things from the beginning, since you seem to have learned a lot since the last time you worked on it.
Start from a higher level, and as you build the scaffolding of your new and improved structure, take all the code you can reuse, which will probably be more than you think if you're willing to read through it and make some small changes.
When you're making the changes, be sure to be strict with yourself about following all the good practices you now know, because you will really thank yourself later.
It can be surprisingly refreshing to properly re-make program to do exactly what it did before, only more "cleanly". ;)
As others have mentioned as well, unit-tests are your best friend! They help you ensure that your refactoring works, and if you're starting from "scratch", it's the perfect time to write them.
You're in a much better position than many people facing this problem in that you understand what the code is supposed to do.
Taking variables out of a shared scope, as you're doing, is a great start, in that you're partitioning responsibilities. Ultimately you want each class to express a single responsibility. A few other things you might look at:
Easy targets for refactoring are code that's duplicated in lots of places and long methods.
If you're managing application state through statically initialized singletons or, worse, a global state that everything is talking to, consider moving it to a managed initialization system (e.g. a dependency injection framework like Spring or Guice), or at least make sure that the initialization isn't entangled with the rest of the code.
Centralize and standardize how you're accessing outside resources, especially if you've got things like file locations or URLs hardcoded (see the sketch after this list).
Buy an IDE that has good refactoring support. I think IntelliJ is the best, but Eclipse has it now, too.
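For the resource-access point, a minimal sketch (the property names and defaults are invented): one class owns every external location, instead of string literals scattered through the game code.

import java.io.FileInputStream;
import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Properties;

// Single, standardized access point for outside resources.
class Resources {
    private final Properties config = new Properties();

    Resources(String configFile) throws IOException {
        try (FileInputStream in = new FileInputStream(configFile)) {
            config.load(in);
        }
    }

    Path saveDirectory() {
        return Paths.get(config.getProperty("save.dir", "saves"));
    }

    String highScoreUrl() {
        return config.getProperty("highscore.url", "http://localhost:8080/scores");
    }
}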
The unit test idea is key as well. You will want to have a suite of large, overall transactions that will give you the overall behavior of the code.
Once you have those, start creating unit tests for classes and smaller packages. Write the tests to demonstrate proper behavior, make your changes, and re-run the tests to demonstrate that you haven't broken everything.
Track code coverage as you go. You'll want to work it up to 70% or better. For the classes you change, you'll want those to be 70% or better before you make your changes.
Build up that safety net over time and you'll be able to refactor with some confidence.
very slowly :D
No seriously... take it one step at a time. For instance, refactor something only if it affects or helps you write the current bug/feature that you are working on right now and do no more than that. And before you refactor make darn sure that you have some kind of automated test in place that gets run on each build that will actually test what you are writing/refactoring. Even if you don't have unit tests, it is never too late to start adding them for all new and modified code that is being written. Over time, your code base will get better in small increments daily or weekly instead of worse - all without you making monumental heaps of changes.
In my personal opinion and experience, it's not worth it to refactor a (legacy) codebase en masse just for the sake of refactoring. In those cases, it's best to start from scratch and do it right all over again (and very rarely are you afforded the opportunity to do such a thing). Hence, refactoring incrementally is the way to go.
For Java code, my favorite first step is to run Findbugs and then remove all the dead stores, un-used fields, unreachable catch blocks, unused private methods and likely bugs.
Next I run CPD to look for evidence of cut-copy-paste code.
It isn't unusual to be able to reduce the code base by 5% by doing this. It also saves you from refactoring code that is never used.
I think you should use Eclipse as an IDE, because it has many plugins and is free of cost. You should follow the MVC pattern, and yes, you must write test cases using JUnit. Eclipse has a plugin for JUnit and provides a code refactoring facility too, which will reduce some of your work. And always remember that just writing code is not the point; the main thing is to write clean code. So add comments everywhere, so that anyone who reads the code, not just you, feels like they are reading an essay.
Refactor the low-hanging fruit. Nibble away at the easy bits, and as you do that, the harder bits will begin to be a little easier. When there aren't any bits left to refactor, you're done.
The refactorings you'll probably find most useful are Rename Method (and even more trivial Renamings like Field, Variable, and Parameter), Extract Method, and Extract Class. For each refactoring you perform, write the necessary unit tests to make the refactoring safe, and run the entire suite of unit tests after each refactoring. It's tempting - and, let's be honest, pretty safe - to rely on the automated refactorings of your IDE, without the tests - but it's good practice and will be good to have the tests into the future as you add functionality to your project.
You might want to look at Martin Fowler's book Refactoring. This is the book that popularized the term and technique (my thought when taking his course: "I've been doing a lot of this all along, I didn't know it had a name"). A quote from the link:
Refactoring is a controlled technique for improving the design of an existing code base. Its essence is applying a series of small behavior-preserving transformations, each of which "too small to be worth doing". However the cumulative effect of each of these transformations is quite significant. By doing them in small steps you reduce the risk of introducing errors. You also avoid having the system broken while you are carrying out the restructuring - which allows you to gradually refactor a system over an extended period of time.
As others have pointed out, unit tests will allow you to refactor with confidence. And start by reducing code duplication. The book will give you lots of other insights.
Here is a catalog of refactorings.
The correct definition of messy code is code that is hard to maintain and change.
For a more mathematical definition, you can check your code with code-metrics tools.
This way, you will keep the code that is already good enough, and quickly find the code that is wrong.
In my experience, this is a very powerful way to improve the quality of your code (especially if your tool can show you the results on each build, or in real time).
Throw it away, build it new.

Maintaining code that is close to software rot

Let's say you're the lucky programmer who inherited code that is close to software rot. Software rot, as defined in The Pragmatic Programmer, is code so ugly (in this case, unrefactored) that it is compared to a broken window: no one wants to fix it, and in turn it can damage a whole house and let criminals thrive in the city.
But it is the same kind of code that Joel Spolsky, on JoelOnSoftware, values because it contains valuable patches that have been debugged throughout its lifetime (even though it can look unstructured and ugly).
How would you maintain this?
Have a look at Working Effectively with Legacy Code by Michael Feathers. Lots of good advice there.
WELC is a great book. You should certainly check it out.
If you don't want to wait for the book to arrive, I can summarise the bits I think are important:
You need to understand your system. Do some throwaway coding to understand the part you need to work on, e.g. be prepared to do some work to get the system under test, knowing that you will probably break it (and to understand what went wrong).
Look for areas where you can break dependencies. Michael Feathers calls these seams. They are points where you can take a bit of the legacy system and refactor it so it becomes testable.
As you work on the system, add tests as you go.
You can do a few things:
Refactor the code to make it more maintainable. If the code is being used for feature development as well then refactoring will make sense.
If the code is legacy code and is touched only for bug fixes then I would suggest you only fix as much as required and when required.
Often, the first impression people get from such inherited legacy code is that it's messy. Give it some time and get comfortable with it. With time, you may come to see valid reasons why the code looks the way it does...
First, make sure that you have a robust test procedure for it, and that it will actually be tested again in depth, by several people (you, QA, ...).
Then, take some time, day after day, to improve the small parts you have to modify. The key is to have management that understands why it takes longer than expected. Explain that you have to refactor and that it is important for both the short and the long term; ask other developers to review the existing code and confirm your arguments.