While solving any programming problem, what is your modus operandi? How do you fix a problem?
Do you write everything you can about the observable behaviors of the bug or problem?
Take me through the mental checklist of actions you take.
(As they say - First, solve the problem. Then, write the code)
Step away from the computer and grab some paper and a pen or pencil if you prefer.
If I'm at the computer, I try to program a solution right then and there, and it normally doesn't work right or it's just crap. Pen and paper force me to think a little more.
First, I go to one bicycle shop, then another. Once I figure out that nobody has already invented that particular bicycle, I figure out appropriate data structures for the domain and the problem, and then map out the algorithms needed for dealing with those data structures in the ways I need.
Divide and conquer. Solve subsets of the problem
This algorithm has never failed me:
Take Action. Often just sitting there and being terrified or miffed by the problem will not help solve it. Also, often, no amount of thinking will solve the problem. So you have to get your hands dirty and grapple with the problem head on.
Test. Under exactly what conditions, input values or states, does the problem occur? Make a mental model of why these particular conditions might cause the problem. Check similar conditions that don't cause the problem. Test enough so that you have a clear understanding of the problem.
Visualise. Put debug code in, dump variable contents, single-step the code, whatever. Do anything that clarifies exactly what is going on where, within the problem conditions.
Simplify. Remove or comment code, poke values into variables, run particular functions with certain values. Try your hardest to get to the nub of the problem by cutting away the chaff, the stuff that has no relevance to the problem at hand. Copy code into a separate project and run it, if you have to, to remove dependencies.
Accept. A great man said: "whatever remains, however improbable, must be the truth". In other words, after simplifying as much as you can, whatever is left must be the problem, no matter how bizarre it may seem at first.
Logic. Double, triple check the logic of the problem. Does it make sense? What would have to be true for it to make sense? Is there something you're missing? Is your understanding of the algorithm wrong? If all else fails, re-engineer the problem away.
As an adjunct to step 3, as a last resort, I often employ the binary search method of finding wayward code. Simply comment out half the code and see if the problem disappears. If it does, the problem must be in the half you commented out (and vice versa). Halve the remaining suspect code and continue.
Google is great for searching for error messages and common problems. Somewhere, someone has usually encountered your problem before and found a solution.
Pencil and paper. Pseudocode and workflow diagrams.
Talk to other developers about it. It really helps when you have to force yourself to simplify the problem for someone else to understand. They may also have another angle. Sometimes it's hard to see the forest for the trees.
Go for a walk. Take your head out of the problem. Take a step back and try to see the bigger picture of what you want to achieve. Make sure the problem you are 'trying' to solve is the one you 'need' to solve.
A big whiteboard is great to work on. Use it to write out workflows and relationships. Talk through what is happening with another team member
Move on. Do something else. Let your subconscious work on the problem. Allow the solution to come to you.
write down the problem
think very hard
write down the answer
I can't believe no one posted this already:
Write up your problem on StackOverflow, and let a bunch of other people solve it for you.
My method, something analytic-synthetic:
Calm down. Take a deep breath. Focus your attention on what you're going to solve. This may include going for a walk, cleaning the whiteboard, getting scratch paper and pencils ordered, some snacks, etc. Avoid stress.
Get a high-level understanding of the problem. If it is a bug, when does it happen? In what circumstances? If it is a new task, explore what results are needed. Collect data and evidence, get acceptance descriptions, maybe documentation or a talk with someone who knows about the issue.
Set up the test playground. Get comfortable with the tools needed. Use the data collected in the previous step to automate something: a reproduction of the bug if that's the case, some failing tests otherwise.
Start synthesizing: summarize what you know and reflect it in code. Execute once and once more. If you are not happy with the results, return to step two with renewed ideas and diverge more: apply tools (in order of cost) that have helped before, e.g. divide and conquer, debugging, multithreading, disassembly, profiling, static analysis tools, metrics, etc. Stay in this loop until you can isolate the problem and pass the over-the-phone test.
Now it's time to fix it, but you already have all the tools set up, so it won't be much trouble. Start writing code, apply refactoring, and enjoy describing your solution in the docs.
Get someone to try your solution. They may send you back to step 2, but that's OK. Refine your solution and redeploy.
I'm interpreting this as fixing a bug, not a design problem.
Isolate the problem. Does it always occur? Does it occur only the first time run on a set of new data? Does it occur with specific values, but not with others?
Is the system generating any error messages that appear related to the problem? Verify that the error messages are not generated when the problem does not occur.
Has anything been changed recently? Those are likely places to start looking.
Identify the gap between what I know is working (e.g. I can start up the app and attempt to do a query) and what I know is not working (e.g. it gives me an error instead of the expected results). Find an intermediate point in the code where it seems possible to look for a problem (does this contain valid data at this point?). This allows me to isolate the problem on one side or the other of the point I looked.
Read the stack traces. If you have a stack trace, find the first line that mentions in-house code. The problem is not in your libraries. Maybe it will turn out to be, but just forget about that possibility at first. The error is in your code. It's not a bug in java, it's not a bug in apache commons HTTP client, it's in code written in your organization. (A hypothetical example appears after these points.)
Think. Come up with something the system could be doing that can cause the symptoms you see. Find a way to validate whether that is what the system is doing.
No possibility the bug is in your code? Google for anything you can think of related. Maybe it is a bug in the library, or poor documentation leading you to use it wrong.
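To make the "first in-house line" advice concrete, here is a purely hypothetical Java stack trace; the package and class names are invented for illustration and are not taken from any real project or library:

    Exception in thread "main" java.lang.NullPointerException
        at thirdparty.http.Connection.send(Connection.java:210)
        at thirdparty.http.Client.get(Client.java:98)
        at com.yourcompany.orders.OrderService.fetchOrder(OrderService.java:42)   <-- first in-house line: start here
        at com.yourcompany.orders.OrderController.show(OrderController.java:17)
        at com.yourcompany.Main.main(Main.java:9)

The top frames belong to a third-party library; OrderService.fetchOrder is the first frame your own team owns, so that is where to start looking.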
Logic.
Break the problem down; use your own brain and knowledge of each component of the system to determine exactly what is happening and why. On that basis you will discover where the problem isn't, and hence determine where it must be.
I stop working on it until tomorrow. I usually solve my problem in the shower the next day. I find stepping away from the issue, and allowing my brain to clear, allows a fresh perspective on the issue.
Answer these three questions in this order:
Q1: What is the desired output?
I don't care if this is a napkin with scribble on it. I want something tangible that shows me what the end result is supposed to look like. If I don't get at least this far then I stop.
Q2: What is the input?
I find out what data I have coming in. Where this data is coming from. What formulas I may need. What dependencies there might be on A happening before B. What permissions, if any, are necessary to get this data. I then ask Question 3.
Q3: Is there enough input to create the output?
If the answer is No then I go back to Q2 and get more input from whoever can give it to me.
For very large problems I break them down in Phases and apply Q1 Q2 and Q3 to each phase.
To paraphrase Douglas Adams, programming is easy. You only need to stare at a blank screen until your forehead bleeds. For people who are squeamish about their foreheads, my ideal architect-and-build for the bigger problems would go something like this. (For smaller problems, like George Jempty I can only recommend Feynman's Algorithm.)
What I write is couched in an on-site business setting but there are analogues in open-source or distributed teams. And I can't pretend that every, or even most, projects pan out this way. This is just the series of events that I dream about, and occasionally come to pass.
Get advanced, concise warning of what the problem is likely to look like. This is not the full, final meeting, but an informal discussion. Uncertainty in certain specification details is fine, as long as the client (or manager) is honest. Then take a piece of paper or text editor, and try to condense what you've learned down to five essential points, and then try to condense those to a single sentence. Be happy you can picture the core problem(s) to be solved without referencing any of your documentation.
Think about it for maybe a couple of hours, maybe playing with code and prototyping, but not with a view to the full architecture: you should even do other stuff, if you've time, or go for a walk. It's great if you can learn about a job an hour before home time in order to deliver a decision around midday the next day, so you get to sleep on it. Spend your time looking at potential libraries, frameworks, data standards. Try to tie together at least two languages or resources (say, Javascript on PHP-generated HTML; or get a Python stub talking RPC to a web service). Flesh out the core problems; zoom in on the details; zoom out to make sure the whole shape is still distinct and makes sense.
Send any questions to the client or manager well in advance of a meeting to discuss both the problem and your proposed solution. Invite as many stakeholders and your programming peers along as is convenient (and as your manager is happy with.) Explain the problem back to them, as you see it, then propose your solution. Explain as much as you can; pitch the technical details at your audience, but also let your explanations fill in more details in your own mental model.
Iterate on 2 and 3 until everyone is happy. Happiness is domain-specific. Your industry might require UML diagrams and line-item quotations, or it might be happy with something jotted on a whiteboard with an almost invisible drywipe marker. Make sure everyone has the same expectations of what you're about to build.
When your client or manager is happy for you to start, clear everything. Close Twitter, instant messenger, IRC and email for an hour or two. Start with the overall structure as you see it. Drop some of your prototype code in and see if it feels right. If it doesn't, change the structure as early as possible. But most of all make sure your colleagues give you a couple of hours of space. Try not to fight fires in this time. Begin with a good heart and cheer, and interest in the project. When you're bogged down later on you'll be glad of the clarity that came out of those first few hours.
How your programming proceeds from there depends on what it actually is, and what tasks the finished code needs to perform. And how you ultimately architect your code, and what external resources you use, will always be dictated by your experience, preference and domain knowledge. But give your project and its stakeholder team the most hopeful, most exciting and most engaged start you can.
Pencil, paper and a whiteboard. If you need more organization, use a tool like MindManager.
Andy Hunt's Pragmatic Thinking and Learning has a lot to say on this question.
Question: How do you eat an elephant?
Answer: One bite at a time.
One technique I like using for really big projects is to get into a room with a whiteboard and a pile of square Post-it Notes.
Write your tasks on the Post-it Notes then start sticking them on the whiteboard.
As you go, you can replace tasks that are too big with multiple notes.
You can shift notes around to change the order that the tasks happen in.
Use different colours to indicate different information; I sometimes use a different colour to indicate stuff that we need to do more research on.
This is a great technique for working with a team. Everybody can see the big picture and can contribute in a highly interactive way.
I think about it. I take anywhere from a couple minutes to a few weeks to mull over the problem and develop a general plan of attack.
Hammer out an initial solution. This solution is probably half-baked and one or more aspects may not work.
Refine that solution. Keep working on the problem till I have something that solves it.
(and this may be done at any step in the process) Ask questions on Stack Overflow to clear up any difficulties I'm having at the moment.
One of my ex-colleagues had a unique modus operandi. Whenever faced with a hard programming problem (e.g. the knapsack problem or some kind of non-standard optimization problem) he would get stoned on weed, claiming his ability to visualize complex state (such as that of a recursive function doing operations on a set passed on the stack) was greatly improved. The only difficulty: the next day he could not understand his own code. So eventually I showed him TDD and he quit smoking...
I write it on a piece of paper and start with my horrible class diagram or flowchart. Then I write it on sticky notes to break it down to "TO DO's".
1 sticky note = 1 task. 1 dumped sticky note = 1 finished task. This works really well for me so far.
Add the problem to StackOverflow, wait about 5-10 minutes and you usually have a brilliant solution! :)
The following applies to a bug rather than building a project from scratch (but even then it could do both if reworded a bit):
Context: What is the problem at hand? What is it preventing, doing wrong, or not doing?
Control: What variables (in the wide sense of the word) are involved? Can the problem be reproduced?
Hypothesise: With enough data on what is occurring or required, it is possible to hypothesise, that is, to draw a mental image of the problem in question.
Evaluate: How much effort, cost, etc, will the correction require? Determine if it's a show stopper or a minor irritant. At this point, it may be too early to tell, but even that is a form of evaluation. This will allow prioritisation.
Plan: How will the problem be approached? Does it require specifications? If so, do them first.
Execute: A.K.A. The fun part.
Test: A.K.A. The not-so-fun-part.
Repeat to satisfaction. Finally:
Feedback: how did it come to be this way? What led us here? Could this have been prevented, and if so, how?
EDIT:
Really summarised: stop, analyse, act.
Probably a gross oversimplification:
But really, this holds 100% true.
CONCEIVE
What are you without an idea? You may have a problem, but first you must define it more explicitly. You have a frozen pizza that you want to eat. You need to cook that pizza! In programming, this is usually your brainstorming session for coming up with a solution from the hip. Here you decide what your approach is.
PLAN
Well, of course you need to cook that pizza! But HOW! Will you use the oven? No. Too easy. You want to build a solar cooker, so you can eat that frozen pizza anywhere that the sun grants you power to do so. This is your design phase. This is your pencil and paper phase. This is where you start to form a cohesive, step-by-step method to implementation.
EXECUTE
Well, you are going to build a solar oven to cook your frozen pizza; you've decided. NOW DO IT. Write code. Test. Commit. Refactor. Commit.
Related question that may be useful:
Helpful points of view, concepts or ways to think about problems every newbie should know
Every problem I've ever had to solve on a computer has had something to do with solving a task in the real world. Therefore, I've learned to look at how I would accomplish something in the real world and map that to the computer problem.
Example:
I need to keep track of my students' grades and come up with a final grade that is an average of all the grades throughout the year.
Well, I'd save the grades in a log (database) and I'd have a page for every student (Field StudentID) and so on...
I always take a problem to a blog first. Stackoverflow would be a good place to start. Why waste your time re-inventing the wheel when someone else may have already solved a similar problem in the past? If anything you will get some good ideas to solve it yourself.
I use the scientific method:
Based on the available information about the programming problem I come up with a hypothesis about what the reason could be.
Then I design / think up an experiment that will reject or confirm the hypothesis. This could be observing something in a debugger or screen/file output. Or changing the program slightly.
If the hypothesis is rejected then repeat 1. The information gathered in 2. may help in coming up with a new hypothesis.
If the hypothesis is confirmed then the hypothesis may be refined/become more specific (repeat 1.). Or it may already be clear what the problem is.
This directed way of finding the problem is much more effective than changing things at random, observing what happens, and trying to (inappropriately) use statistics.
No one has mentioned truth tables! But that's probably because they're usually only mildly helpful ;) (although your mileage may vary) I used one for the first time yesterday in my 8 years of programming.
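If you do reach for one, here is a tiny, self-contained sketch; the De Morgan's-law condition is my own stand-in example, not the bug from the answer above:

    // Prints a truth table showing that !(a && b) and !a || !b agree for every
    // combination of inputs (De Morgan's law) -- the sort of equivalence a truth
    // table settles at a glance.
    public class TruthTable {
        public static void main(String[] args) {
            boolean[] values = { false, true };
            System.out.println("a     b     !(a&&b)  !a||!b");
            for (boolean a : values) {
                for (boolean b : values) {
                    System.out.printf("%-5b %-5b %-8b %-6b%n", a, b, !(a && b), !a || !b);
                }
            }
        }
    }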
Diagramming on whiteboards or paper has always been very helpful for me.
When faced with very weird bugs, like this: JPA stops working after redeploy in glassfish,
I start from scratch. Make a new project. Does it work? Yes. Start to recreate the components of my app one piece at a time. DB. Check. Deploy. Check. Continue until it breaks. If it never breaks, OK: you just recreated your entire app, so discard the old one. When it breaks, you have pinpointed the exact problem.
I think: what am I looking for?
What method best solves this problem?
Implement it with solid logic - no code
Pseudo code
code a rough cut
execute
These are my prioritized methods:
Analyse
a. Try to find the source of your problem
b. Define desired outcome
c. Brainstorm about solutions
Trial and error (if I don't want to analyse)
Google a bit around
a. Of course, look around on stackoverflow
When you get mad, walk away from the PC for a cup of coffee
When you're still mad after 10 cups of coffee, sleep on the problem for a night
GOLDEN TIP
Never give up. Persistence will always win
G'day,
This is related to my question on star developers and to this question regarding telling someone that they're writing bad code but I'm looking at a situation that is more specific.
That is, how do I tell a "star" that their changes to a program that I'd written are poorly made and inconsistently implemented without just sounding like I'm annoyed at someone "playing with my stuff"?
The new functionality that was added had been deliberately left out of the original version of this shell script to keep it as simple as possible until we got an idea of the errors we were going to see with the system under load.
Basically, I'd argued that to try and second guess all error situations was impossible and in fact could leave us heading down a completely wrong path after having done a lot of work.
After seeing what needed to be added, someone dived in and made the additions but unfortunately:
the logic is not consistent
the variable names no longer describe the data they contain
there are almost no comments at all
the way in which the variables are used is not easy to follow and massively decreases readability and hence maintainability.
I always try to approach coding from the Damian Conway point of view: "Always code as if your system is going to be maintained by a psychopath who knows where you live." That is, I try to make it easy to follow and not an advertisement for my own brilliance. "What does this piece of code do?" exercises are fun and are best left to obfuscation contests IMHO.
Any suggestions greatly received.
cheers,
I would just be honest about it. You don't necessarily need to point every little detail that's wrong, but it's worth having a couple of examples of any general points you're going to make. You might want to make notes about other examples that you don't call out in the first brief feedback, in case they challenge your reasoning.
Try to make sure that the feedback is entirely about the code rather than the person. For example:
Good: The argument validation in foo() seems inconsistent with that in bar(). In foo(), a NullPointerException is thrown if the caller passes in null, whereas bar() throws IllegalArgumentException.
Bad: Your argument validation is all over the place. You throw NullPointerException in foo() but IllegalArgumentException in bar(). Please try to be consistent.
Even with a "please," the second form is talking about the developer rather than the code.
Of course in many cases you don't need to worry about being so careful, but if you think they're going to be very sensitive about it, it's worth making the effort. (Read what you've written carefully, if it's written feedback: I accidentally included a "you" in the first version to start with :)
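To picture the kind of inconsistency that feedback describes, here is a hypothetical sketch; foo() and bar() are just the placeholder names from the example above, not code from any real project:

    // Hypothetical sketch: two methods in the same class validating the same kind
    // of argument in two different ways -- the inconsistency the feedback points at.
    public class Validation {

        // foo() rejects null with a NullPointerException...
        public static void foo(String name) {
            if (name == null) {
                throw new NullPointerException("name must not be null");
            }
            System.out.println("foo: " + name.trim());
        }

        // ...while bar() rejects null with an IllegalArgumentException.
        public static void bar(String name) {
            if (name == null) {
                throw new IllegalArgumentException("name must not be null");
            }
            System.out.println("bar: " + name.trim());
        }
    }

Feedback phrased against the two throw statements stays about the code; either exception can be argued for, the point is only that foo() and bar() should agree.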
I've found that most developers (superstar or not) are pretty reasonable about accepting, "No, I didn't implement that feature because it has problem X." It's possible that I've been lucky though.
Coming from the other perspective, I would encourage you to think about it in their shoes. I will describe a "hypothetical" experience.
Some things to keep in mind:
The guy was trying to do something good.
Programmers are terrible at mind reading. They tend to only know what they read.
He may not have been given complete guidance as to what needs to be done (or what doesn't need to be done).
He is likely doing the best he knows how to.
Just keep that in mind and talk to them. Teach them. No need for yelling or pissing contests. Just remember that they are not intentionally trying to make your life difficult.
I see that you've asked a lot of questions about how to deal with certain kinds of developers. It seems to be a common thread for you. You keep asking about how to change people around you. If this is a constant problem for you, then perhaps you are the problem.
Now I know you are asking questions to learn how to deal with people you find difficult, and that's good, however, you keep asking (and getting answers) about how to change people.
It seems to me that you need to change. Work with these people to change the code to what you want it to be. With them. Don't try to get them to do it. Just do it, and tell them what you did and why, and ask suggestions for further improvement, and learn from each other. Play off of each other's experience and strengths. Just my 2 cents.
If you have clearly defined coding standards for the project, point out that the code needs to be changed to meet those standards. The list you have there seems like quite reasonable feedback (though #3 is much argued-over; I would only push to document the really confusing parts as fixing the other three points, hopefully, makes the code less confusing).
If there are any other examples in your repository from this developer that are several months old, show one to him and ask him what it is doing. (Show him this one in a few months.) When he has to zip around to find out what is actually in his variables and deconstruct every line of code to figure out what it is doing, break into a code review / pair programming session right there. Refactor and rename together so that he hopefully begins to see for himself exactly why these things are important.
Frankly, I think this is a political problem, not a coding problem. Specifically...
WHO SAID THIS PERSON WAS A "STAR"? If this is the same person you described in your other question, then you already have your answer there: THIS PERSON IS NO "STAR".
So then you get into the other effects of politics...
Who is claiming this person to be a star? Why can you not just tell the person "this is crap code"? Who is protecting them / defending them were you to do that? Can you do that or would you get blasted / demoted / put on the "to be laid off" pile?
You are asking questions that cannot really be answered in isolation. IF the code is crap, then throw it away and do it correctly yourself. IF there are reasons that you cannot do that, then you need to ask yourself if the benefits of this place outweigh the negatives.
Cheers,
-R
Creating a program and then releasing it to be worked on by other developers is tough. You are throwing your code to the mercy of others' development styles, coding conventions, etc.
Telling those developers that they are coding poorly, after the code is in, is one of the hardest things that you can do. It is best to address your concerns before they ever start working with your code. This can be done in two ways: maintaining a detailed coding standard and requiring that submitted code adheres to it, and maintaining a development road map, not just to outline when new features will be in, but to create dependencies that help avoid such mishaps.
More to your situation, it is important not to criticize or it could cause hostility and worse code coming in. Maybe you can work with that developer to create standards documentation. You will be able to express your ideas about what the standards should be, and you will get their input, without causing any hard feelings.
Always point out the good things in their code, and be sure when discussing the weaknesses that you frame them pointing out the reasons that it will benefit everyone (the developer included), never criticize.
Good luck.
I would do the following:
Make sure he knows that his hard work is appreciated (preferably, this should be the truth)
Ask him if he would mind making a few changes, making it sound like no big deal and easy to fix
Explain the issues, including why they are issues, and suggest specific changes to set him on the right path.
Hopefully, the exercise will help him integrate into the project culture better.
We try to solve these potential 'issues' proactively:
Every 'bigger' project where people work together gets assigned a project 'codelead' (one of the developers). This rotates every project (based on preferences, experience with the particular task ...) so everyone gets to be in the 'contributing' and 'code-project-lead' roles once in a while.
We explicitly made an agreement that these project 'leads' can decide whatever they want to with the code contributions of the others (sort of like a temporary dictatorship: change it, make suggestions, ask people to redo stuff, etc.). The project code 'lead' bears complete responsibility for making the aggregated code work.
With these formalised 'leads' (and the changing roles) I think people have less problems with (constructive) criticism of the parts they contribute.
Yes, keep the feedback as appreciative, professional and technical as possible, back up your concerns with possible "worst case" scenarios so that the disadvantages of those features and/or this particular implementation, become blatantly obvious.
Also, if this is about features/code that are very specific and are not of any use to most users, express your concerns about the code/use ratio - indicating concerns about increased code base complexity etc.
Ideally, present your concerns as open-ended questions - in the sense of: "Though, I am wondering if this way of doing it may work in the long term, due to ...". So that you actually encourage an active dialogue between contributors.
Invite your fellow contributors and user to provide their opinions on these concerns, in fact ask other people/contributors what they are thinking about this addition (in terms of pros & cons, requirements, code quality), do make the statement that you are willing to reconsider your current position if other contributors/users can provide corresponding insight.
You are basically encouraging an informal review that way, asking your community to also look into the proposed additions, so that the advantages and disadvantages can be discussed.
So, whatever the decision will be, it will be one that is community-backed, and not just simply made by you.
You, being the architect of the original design, are also in an excellent position to provide architectural reasons why something is not (yet) suitable for inclusion/deployment.
If stability, complexity or code quality are a real concern, do illustrate how other contributions also had to go through a certain review process in order to be acceptable.
You can also mention how specific code doesn't really align with your current design, or how it may not scale too well with future extension to your current design, similarly you can highlight why certain stuff was left out explicitly.
If you actually like the features or the core idea, be sure to highlight the excellent addition these features would make if properly implemented and integrated, but do also highlight that the existing implementation isn't really appropriate due to a number of reasons.
If you can, do make specific suggestions for improvements, provide examples of how to do things better, and what to avoid and do express that you hope, this can be reworked to be added with the help of your project's community.
Ideally, present your requirements for actually accepting this contribution and do mention the background for your requirements, you may in fact say that you hate some of these requirements yourself.
Preferably, present and discuss instances where you yourself contributed similar code (or even worse code) and that you ended up facing huge issues due to your own code, so that these policies are now in place to prevent such issues. By actually talking about your own bad code, you can actually be very subjective.
Emphasize that you generally appreciate the effort itself, and that you are willing to provide the necessary help and pointers to bring the code in question into a better shape and form. Also, encourage that similar contributions in the future should be properly coordinated within your community, in order to avoid similar issues.
Always think in terms of features and functionality (and remind your contributor to do the same), not code - imagine it like a thorough code review process, where the final code that ends up being committed/accepted, may have hardly anything in common with the original implementation.
This is again a good possibility, to present examples where you yourself developed code that ended up largely reworked, so that much of it is now replaced by a much better implementation.
Similarly, there's always the issue with code that has no active maintainers, so you can just as easily suggest that you feel concerned about code that may end up being unmaintained, you could even ask if the corresponding developer would be willing to help maintain that code, possibly in a separate branch.
In the same sense, always require new code to be accompanied with proper comments, documentation and other updates. In other words, code that adds new or changes existing functionality, should always be accompanied with updates to all relevant documentation.
Ultimately, if you know right away that you cannot and will not accept any of that code in the near future, you can at least invite the developer to branch or even fork your project, possibly in your repository and with your help and guidance, so that you still express your gratitude for their work on your project.
Long methods are evil on several grounds:
They're hard to understand
They're hard to change
They're hard to reuse
They're hard to test
They have low cohesion
They may have high coupling
They tend to be overly complex
How to convince your fellow developer to write short methods? (weapons are forbidden =)
question from agiledeveloper
Ask them to write unit tests for the methods.
That depends on your definitions of "short" and "long".
When I hear someone say "write short methods", I immediately react badly because I've encountered too much spaghetti written by people who think the ideal method is two lines long: One line to do the tiniest possible unit of work followed by one line to call another method. (You say long methods are evil because "they're hard to understand"? Try walking into a project where every trivial action generates a call stack 50 methods deep and trying to figure out which of those 50 layers is the one you need to change...)
On the other hand, if, by "short", you mean "self-contained and limited to a single conceptual function", then I'm all for it. But remember that this can't be measured simply by lines of code.
And, as tydok pointed out, you catch more flies with honey than vinegar. Try telling them why your way is good instead of why their way is bad. If you can do this without making any overt comparisons or references to them or their practices (unless they specifically ask how your ideas would relate to something they're doing), it'll work even better.
You made a list of drawbacks. Try to make a list of what you'll gain by using short methods. Concrete examples. Then try to convince him again.
I read this quote from somewhere:
Write your code as if the person who has to maintain it is a violent psycho, who knows where you live.
In my experience the best way to convince a peer in these cases is by example. Just find opportunities to show them your code and discuss with them the benefits of short functions vs. long functions. Eventually they'll realize what's better spontaneously, without the need to make them feel "bad" programmers.
Code Reviews!
I suggest you try and get some code reviews going. This way you could have a little workshop on best practices and whatever formatting your company adheres to. This adds the context that short methods are a way to make code more readable, easier to understand, and compliant with the SRP.
If you've tried to explain good design and people just aren't getting it, or are just refusing to get it, then stop trying. It's not worth the effort. All you'll get is a bad rep for yourself. Some people are just hopeless.
Basically what it comes down to is that some programmers just aren't cut out for development. They can understand code that's already written, but they can't create it on their own.
These folks should be steered toward a support role, but they shouldn't be allowed to work on anything new. Support is a good place to see lots of different code, so maybe after a few years they'll come to see the benefits of good design.
I do like the idea of Code Reviews that someone else suggested. These sloppy programmers should not only have their own code reviewed, they should sit in on reviews of good code as well. That will give them a chance to see what good code is. Possibly they've just never seen good code.
To expand upon rvanider's answer, performing the cyclomatic complexity analysis on the code did wonders to get attention to the large method issue; getting people to change was still in the works when I left (too much momentum towards big methods).
The tipping point was when we started linking the cyclomatic complexity to the bug database. A CC of over 20 that wasn't a factory was guaranteed to have several entries in the bug database and oftentimes those bugs had a "bloodline" (fix to Bug A caused Bug B; fix to Bug B caused Bug C; etc). We actually had three CC's over 100 (max of 275) and those methods accounted for 40% of the cases in our bug database -- "you know, maybe that 5000 line function isn't such a good idea..."
It was more evident in the project I led when I started there. The goal was to keep CC as low as possible (97% were under 10) and the end result was a product that I basically stopped supporting because the 20 bugs I had weren't worth fixing.
Bug-free software isn't going to happen because of short methods (and this may be an argument you'll have to address) but the bug fixes are very quick and are often free of side-effects when you are working with short, concise methods.
Though writing unit tests would probably cure them of long methods, your company probably doesn't use unit tests. Rhetoric only goes so far and rarely works on developers who are stuck in their ways; show them numbers about how those methods are creating more work and buggy software.
Finding the right blend between function length and simplicity can be complex. Try to apply a metric such as Cyclomatic Complexity to demonstrate the difficulty in maintaining the code in its present form. Nothing beats a non-personal measurement that is based on testing factors such as branch and decision counts.
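As a rough illustration (my own example, not code from the answer above): every extra branch adds to a method's cyclomatic complexity, so splitting decision-heavy code into small named methods brings the per-method number down without changing behaviour.

    // One method carrying every branch: cyclomatic complexity of about 5.
    class Shipping {
        static double shippingCost(double weightKg, boolean express, boolean international) {
            double cost;
            if (weightKg <= 0) {
                throw new IllegalArgumentException("weight must be positive");
            } else if (weightKg < 1) {
                cost = 5.0;
            } else {
                cost = 5.0 + (weightKg - 1) * 2.0;
            }
            if (express) {
                cost *= 1.5;
            }
            if (international) {
                cost += 10.0;
            }
            return cost;
        }

        // The same rules split into named pieces: each method is now a 1, 2 or 3.
        static double shippingCostRefactored(double weightKg, boolean express, boolean international) {
            double cost = baseCost(weightKg);
            cost = applyExpressSurcharge(cost, express);
            return applyInternationalSurcharge(cost, international);
        }

        static double baseCost(double weightKg) {
            if (weightKg <= 0) {
                throw new IllegalArgumentException("weight must be positive");
            }
            return weightKg < 1 ? 5.0 : 5.0 + (weightKg - 1) * 2.0;
        }

        static double applyExpressSurcharge(double cost, boolean express) {
            return express ? cost * 1.5 : cost;
        }

        static double applyInternationalSurcharge(double cost, boolean international) {
            return international ? cost + 10.0 : cost;
        }
    }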
Not sure where this great quote comes from (it is usually attributed to Brian Kernighan), but:
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it"
Force him to read Code Complete by Steve McConnell. Say that every good developer has to read this.
Get him drunk? :-)
The serious point to this answer is the question, "why do I consistently write short functions, and hate myself when I don't?"
The reason is that I have difficulty understanding complex code, be that long functions, things that maintain and manipulate a lot of state, or that sort of thing. I noticed many years ago that there are a fair number of people out there that are significantly better at dealing with this sort of complexity than I am. Ironically enough, it's probably because of that that I tend to be a better programmer than many of them: my own limitations force me to confront and clean up that sort of code.
I'm sorry I can't really provide a real answer here, but perhaps this can provide some insight to help lead us to an answer.
Force them to read the book "Clean Code", there are many others but this one is new, good and an easy read.
Asking them to write Unit tests for the complex code is a good avenue to take. This person needs to see for himself what that debt that complexity brings when performing maintenance or analysis.
The question I always ask my team is: "It's 11 pm and you have to read this code - can you? Do you understand under pressure? Can you, over the phone, no remote login, lead them to the section where they can fix an error?" If the answer is no, the follow up is "Can you isolate some of the complexity?"
If you get an argument in return, it's a lost cause. Throw something then.
I would give them 100 lines of code all under 1 method and then another 100 lines of code divided up between several methods and ask them to write down an explanation of what each does.
Time how long it takes to write both paragraphs and then show them the result.
...Make sure to pick code that would take two or three times as long to understand if it were all in one method, Main().
Nothing is better than learning by example.
"Short" and "long" are terms that can be interpreted differently. For one person, short is a two-line method, while someone else will think that methods with no more than 100 lines of code are pretty short.
I think it would be better to state that a single method should not do more than one thing at the same time, meaning it should only have one responsibility.
Maybe you could let your fellow developers read something about how to practice the SOLID principles.
I'd normally show them older projects which have well written methods. I would then step through these methods while explaining the reasons behind why we developed them that way.
Hopefully when looking at the bigger picture, they would understand the reasons behind this.
PS: This exercise could also double as a mini knowledge transfer on older projects.
Show him how much easier it is to test short methods. Prove that writing short methods will make it easier and faster for him to write the tests for his methods (he is testing these methods, right?)
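A minimal sketch of that argument, assuming JUnit is available and using a made-up calculateDiscount() as the extracted short method:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Once the discount rule lives in its own small method, the test is two asserts;
    // the same rule buried inside a 300-line checkout() method would need a fully
    // populated order, customer and session just to be reached.
    class DiscountTest {

        static double calculateDiscount(double orderTotal) {
            return orderTotal >= 100.0 ? orderTotal * 0.10 : 0.0;
        }

        @Test
        void ordersOfOneHundredOrMoreGetTenPercentOff() {
            assertEquals(10.0, calculateDiscount(100.0), 0.001);
            assertEquals(0.0, calculateDiscount(99.99), 0.001);
        }
    }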
Bring it up when you are reviewing his code. "This method is rather long, complicated, and seems to be doing four distinct things. Extract method here, here, and here."
Long methods usually mean that the object model is flawed, i.e. one class has too many responsibilities. Chances are that you don't want just more functions, each one shorter, in the same class, but those responsibilies properly assigned to different classes.
No use teaching a pig to sing. It wastes your time and annoys the pig.
Just outshine someone.
When it comes time to fix a bug in the 5000 line routine, then you'll have a ten-line routine and a 4990-line routine. Do this slowly, and nobody notices a sudden change except that things start working better and slowly the big ball of mud evaporates.
You might want to tell him that he might have a really good memory, but you don't. Some people are able to handle much longer methods than others. If you both have to be able to maintain the code, it can only be done if the methods are smaller.
Only do this if he doesn't have a superiority complex
[edit]
why is this collecting negative scores?
You could start refactoring every single method they wrote into multiple methods, even when they're currently working on them. Assign extra time to your schedule for "refactoring others' methods to make the code maintainable". Do it like you think it should be done, and - here comes the educational part - when they complain, tell them you wouldn't have to refactor the methods if they had made it right the first time. This way, your boss learns that you have to correct others' laziness, and your co-workers learn that they should do it differently.
That's at least some theory.
Setup
Have you ever had the experience of going into a piece of code to make a seemingly simple change and then realizing that you've just stepped into a wasteland that deserves some serious attention? This usually gets followed up with an official FREAK OUT moment, where the overwhelming feeling of rewriting everything in sight starts to creep up.
It's important to note that this bad code does not necessarily come from others as it may indeed be something we've written or contributed to in the past.
Problem
It's obvious that there is some serious code rot, horrible architecture, etc. that needs to be dealt with. The real problem, as it relates to this question, is that it's not the right time to rewrite the code. There could be many reasons for this:
Currently in the middle of a release cycle, therefore any changes should be minimal.
It's 2:00 AM, and the brain is starting to shut down.
It could have seemingly adverse effects on the schedule.
The rabbit hole could go much deeper than our eyes are able to see at this time.
etc...
Question
So how should we balance the duty of continuously improving the code, while also being a responsible developer? How do we refrain from contributing to the broken window theory, while also being aware of actions and the potential recklessness they may cause?
Update
Great answers! For the most part, there seems to be two schools of thought:
Don't resist the urge as it's a good one to have.
Don't give in to the temptation as it will burn you to the ground.
It would be interesting to know if more people feel any balance exists.
I'm a big fan of making lists!
As soon as the urge takes you to re-write something - spend 10 minutes making a list of the things that need re-writing. Follow all of the alleys that take you further into the code that needs attention and list those, too.
Hopefully within a relatively short space of time you'll have one of two things:
A really long list that completely puts you off wanting to rewrite anything ever again.
A list which actually isn't that long, so why not indulge yourself and go for a re-write?!
I've found that spending two months fixing bugs caused by my "harmless" re-write was enough to cure me of the urge to do these sorts of things without a clear mandate/project plan to do so.
The most memorable project of this kind for me occurred some 4 years ago, when I was called in to a remote office to "help" with a project that was due in 1 week for a major presentation to the client and was not working yet at all. The project had been primarily off-shored to India, and IMO a project management failure resulted in a ton of spaghetti code that was too fragmented to ever work properly in its current form.
After a full day's review, I presented my opinion to the management that the project simply needed wholesale refactoring and reorganization or it would never work properly. The result of this discussion was 6 days of 20 hours work / 4 hours sleep, 2 of which I actually spent sleeping on the couch in the company lobby due to the wasted time when driving back to the hotel.
The major improvements to the code included:
Application of naming standards
Moved into source control
Development of a build process
Documentation of the individual components
Most of the original code was left in place, but simply moved and reorganized / refactored to make it sustainable in the long term. Was it hell week? Sure. Did it make the project more successful? Yep.
I can't live with spaghetti code, and I'll often donate my own personal time to address it.
How to restrain one’s self from the overwhelming urge to rewrite everything?
Become old*
As this has happened to me, I've gradually lost the urge to rewrite everything. Why?
Once you've done it a few times, you realise that you often end up worse off than you started.
Even if you're god's gift to programming and your brilliant rewrite introduces no new bugs, you'll simply fail to notice or implement about 30% of the small edge-case features/bugs which things rely on. This will cost you months of fixing.
Nothing wears down your irrational exuberance like time. Often this is a sad loss, but in this case, it's a win.
*(more experience may also be a suitable substitute if you lack the free time to become old)
The impulse to rewrite is righteous, provided that:
you "freeze" the existing code (with a label)
you start the rewrite attempt in a separate branch
you first prepare some unit tests in order to ascertain the current behavior and make sure you reproduce the existing features...
That said, you have to balance the rewriting process against the measured stability of the legacy code.
"If it is not broken, do not fix it" ;)
It's not quite an answer, but reading the beautiful article The Narcissism of Small Code Differences might help.
You're absolutely right that there's a right time and a wrong time to rewrite (and destabilize) the code.
Reminds me of an engineer I knew who had a habit of diving in and doing major rewriting whenever he felt like it. This drove the QA staff for his product up the wall -- if they reported a bug about something trivial, the bug would get fixed, but the engineer's major rewrite of stuff he noticed while fixing the one bug would introduce fifty other new bugs, which the QA staff then needed to track down.
So one way to cure yourself of the temptation to do rewrites is to walk a mile in the shoes of the QA engineer who has to test, diagnose, and report all those new bugs. Maybe even lend yourself to the QA department for a few months to get a personal feel for the type of work they do (if you've never done that).
Another suggestion would be to make a rule for yourself to write up notes to describe any change. Perhaps post these notes when you check in code changes to source code control. You may be motivated to make the changes as minor as possible this way.
Just don't - The rabbit hole always goes too deep. Make any small local changes that leave the place cleaner than you found it, but leave big refactoring for when you're alert and have plenty of time to burn.
A nice starter on tackling big refactorings in small stages at sourcemaking.com :
you have to make like Hansel and Gretel and nibble around the edges, a little today, a little more tomorrow
reread "Refactoring".
Take a piece of paper and itemize the "Bad Smells" list.
for each smell in BadSmells() {
    print smell.name;
}
Add comments to the code including item(s) from the list.
while( odorPersists() ) {
    // work through the list, laser-focused on one smell at a time
}
Personally, this is where bug tracking software such as JIRA or Bugzilla comes into play for me.
I have a hard time NOT fixing all the broken windows when I see them, but if the situation is bad enough (and time permits) I will open a ticket and either assign it to me or assign it to the group responsible, that way I don't go off on a tangent.
-- I do only what needs to be done right then, but the issue is documented and will be fixed in time.
-- This is not to say that small broken windows should be handled this way; small issues should always be fixed on contact IMHO.
As far as I am concerned, if you have nothing better to do than to rewrite the code then you should make everyone else aware and just do it. Obviously, if it's going to take days to get any sort of result and the changes would mean excessive downtime then it's not a good idea, but eventually that code will become a problem. Make others aware that the code is awful and try to get some sort of movement to rewrite it.
The problem with the above statement is that as a programmer/developer you will ALWAYS have other things to keep yourself busy. Just leave it on the low-priority list of things-to-do, so when you're struggling with some work you can always keep your rhythm going with the rewrite.
Joel has an article about this:
There's a subtle reason that programmers always want to throw away the code and start over. The reason is that they think the old code is a mess. And here is the interesting observation: they are probably wrong.
I've never worked anywhere particularly agile, so what I do is:
1) Figure out whether there's a reasonable fix that doesn't involve major rewrite. If not, then clear some time (perhaps by explaining to others how difficult the fix is) and do the rewrite.
2) There is a reasonable fix without major rewrite. Apply the fix, run the tests, check it in, mark the bug as fixed.
3) Now, raise a new bug/issue (enhancement request), outlining the proposed rewrite and how it would improve the code (simpler? more maintainable? reduces coupling? affects performance or resource use?). Assign it to myself, CC anyone interested in that bit of code.
4) Give people a chance to comment, then prioritise that new bug within my existing tasks. This usually means don't do it now, because most of the time if I have one "proper" bug to fix, then I have at least two. Do it once the critical list is cleared, or next time I need to do something that isn't boring. Or maybe after a couple of days it won't seem worth doing any more, and it won't get done, and I've saved the time I would have spent doing it.
The important thing, I think, is to avoid shaving a yak every time you make a small bugfix.
The trade-off is that if the code you want to rewrite is really bad, then it's easy to under-prioritise the rewrite, and you end up spending so much time maintaining it that you don't have time to replace it with something that would require less maintenance. That needs to be borne in mind. But no matter what priority the rewrite should be, or how your organisation assigns those priorities, fixing bad design in the same area of code is not the same thing as correcting the out-by-one error that caused the crash you originally went in to deal with. This has to be considered at step 1: if the design means there are probably a whole bunch of other out-by-one errors lurking in the code, then just fixing this one is probably not "reasonable". If it really was a typo, but you happened to spot a refactor opportunity because you had the file open, then correcting it and touching nothing else is "reasonable".
Obviously the more agile your shop, the more significant a refactor you can do without it being so disruptive / time-consuming / political as to require a separate task.
If it's some code you've inherited, start making the code your own. Write unit-tests, then refactor.
Write more unit tests just to find out that the code is running perfectly fine.
If you still have the urge to rewrite it, you will have some tests to tell you when your rewritten code starts failing ;)
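A sketch of that idea, often called a characterization test: pin down what the inherited code currently does, odd cases included, so the refactoring has a safety net. The legacyPriceOf() method here is a made-up stand-in for whatever you inherited, and JUnit is assumed:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Characterization tests: record what the inherited code does today,
    // surprising cases included, so a later rewrite that changes behaviour fails loudly.
    class LegacyPricingTest {

        // Stand-in for the inherited code you are tempted to rewrite.
        static int legacyPriceOf(int quantity) {
            if (quantity > 10) {
                return quantity * 9;   // bulk discount
            }
            return quantity * 10;
        }

        @Test
        void smallOrdersPayFullPrice() {
            assertEquals(50, legacyPriceOf(5));
        }

        @Test
        void bulkOrdersGetTheDiscount() {
            assertEquals(99, legacyPriceOf(11));
        }

        @Test
        void zeroAndNegativeQuantitiesAreAcceptedAsIs() {
            // Looks wrong, but it is what the code does today; record it before changing it.
            assertEquals(0, legacyPriceOf(0));
            assertEquals(-10, legacyPriceOf(-1));
        }
    }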
Bad code is not something that can be avoided totally, but it can be isolated by proper abstraction. If it is indeed a wasteland, meaning there are no landfills around to contain it, focusing on the design process is much more effective than trying to make people write perfect code.
I don't think you SHOULD stop yourself from this. Mostly, if you FEEL the need for the big rewrite, it's usually correct to do it. I insanely disagree with Joel Spolsky on this thing...
Though it's one of few places where I disagree with him ... ;)
Stop rewriting when it is good enough. For this you need to know what a good program is for you, not only for your employer. After all, you do programming for life, not for good impressions. Develop sound criteria which would really tell you that you are good, the result is decent, and it is time to go on to the next task. You should like your programs no less than your boss likes them.
Refactor only if your boss/company actually encourages it, otherwise you'll end up doing frequent extra time to bring up code to perfection... Until someone less dedicated touches it again.
Starting assumption: you already "commit early, commit often".
Then there's never a bad time to start a refactoring, because you can easily back out at any point. I find that at the beginning, it always feels like it's going to be easy, but after making a few changes and seeing the effects, I start to realise quite how big a job it's going to be. That is the time to ask whether there is really time to make the changes, or whether living with it until next time you come to this code is the better course.
The willingness to stop, throw away a half-refactored branch, and do it the nasty but quick way where appropriate, is key.
That's for refactoring, where the changes are incremental, and running (or nearly-running) software keeps you well grounded. Rewriting is different, because the time until you figure out it's going to take longer than you thought is so much greater. I may be taking the question too literally, but there's almost never a good time to throw it away and start again.
Say there are two possible solutions to a problem: the first is quick but hacky; the second is preferable but would take longer to implement. You need to solve the problem fast, so you decide to get the hack in place as quickly as you can, planning to start work on the better solution afterwards. The trouble is, as soon as the problem is alleviated, it plummets down the to-do list. You're still planning to put in the better solution at some point, but it's hard to justify implementing it right now. Suddenly you find you've spent five years using the less-than-perfect solution, cursing it all the while.
Does this sound familiar? I know it's happened more than once where I work. One colleague describes deliberately making a bad GUI so that it wouldn't be accidentally adopted long-term. Do you have a better strategy?
Write a test case which the hack fails.
If you can't write a test which the hack fails, then either there's nothing wrong with the hack after all, or else your test framework is inadequate. If the former, run away quick before you waste your life on needless optimisation. If the latter, seek another approach (either to flagging hacks, or to testing...)
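A small sketch of that trick, with a hypothetical parseAmount() standing in for the hack: the test records the behaviour you know is wrong and is parked with a pointer to the real fix, so the debt stays visible in the test suite instead of only in someone's memory. (JUnit is assumed; @Disabled is its standard way to park a known-failing test.)

    import org.junit.jupiter.api.Disabled;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class AmountParserTest {

        // The hack: fine for today's inputs, wrong for anything with a thousands separator.
        static int parseAmount(String raw) {
            return Integer.parseInt(raw.trim());
        }

        @Test
        void plainNumbersStillWork() {
            assertEquals(42, parseAmount(" 42 "));
        }

        // The test the hack fails: parked with a reason, not deleted, so the debt stays visible.
        @Disabled("quick fix cannot handle thousands separators -- enable when the real parser lands")
        @Test
        void amountsWithThousandsSeparatorsAreParsed() {
            assertEquals(1500, parseAmount("1,500"));
        }
    }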
Strategy 1 (almost never selected): Don't implement the kluge. Don't even let people know it's a possibility. Just do it the right way the first time. Like I said, this one is almost never selected, due to time constraints.
Strategy 2 (dishonest): Lie and Cheat. Tell management that there are bugs in the hack, and they could cause major problems later on. Unfortunately, most of the time, the managers just say to wait until the bugs become a problem, then fix the bugs.
Strategy 2a: Same as strategy 2, except there really are bugs. Same problem, though.
Strategy 3 (and my personal favorite): Design the solution whenever you can, and do it well enough that an intern or code-monkey could do it. It's easier to justify spending the small amount of code-monkey money than to justify your own salary, so it might just get done.
Strategy 4: Wait for a rewrite. Keep waiting. Sooner or later (probably later), someone is going to have to rewrite the thing. Might as well do it right then.
Here is a great related article on technical debt.
Basically, it is an analogy of debt with all the technical decisions you make. There is good debt and bad debt... and you have to pick the debt that is going to achieve the goals you want with the least long term cost.
The worst kind of debt is small little accumulating shortcuts that are analogous to credit card debt... each one doesn't hurt, but pretty soon you are in the poor house.
This is a major issue when doing deadline-driven work. I find that adding very detailed comments about why this approach was chosen, with some hints at how it should really be coded, helps. That way people looking at the code see it and keep it fresh.
Another option that works is to add a bug/feature in your tracking system (you do have one, right?) detailing the rework. That way it is visible and may force the issue at some point.
The only time you can ever justify fixing these things (because they're not really broken, just ugly) is when you have another feature or bug fix that touches the same section of code, and you might as well re-write it.
You have to do the math on what a developer's time costs. If software requirements are being met, and the only thing wrong is that the code is embarrassing under the hood, it's not really worth fixing.
Whole companies can go out of business because over-zealous engineers insist on a re-architecture every year or so when they get antsy.
If it's bug-free and meets requirements, it's done. Ship it. Move on.
[Edit]
Of course I'm not advocating that everything be hacked in all the time. You have to design and write code carefully in the normal course of the development process. But when you do end up with hacks that just had to be done quickly, you have to do a cost-benefit analysis on whether or not it's worth it to clean up the code. If over the lifetime of the application you will spend more time coding around a messy hack than you would have fixing it, then of course fix it. But if not, it's way too expensive and risky to re-code a working, bug-free application just because looking at the source makes you ill.
YOU DON'T DO INTERIM SOLUTIONS.
Sometimes I think programmers just need to be told this.
Sorry about that, but seriously--a hacky solution is worthless and even on the first iteration can take longer than doing a portion of the solution correctly.
Please stop leaving me your crap code to maintain. Just ALWAYS CODE IT RIGHT. No matter how long it takes and who yells at you.
When you are sitting there twiddling your thumbs after delivering early while everyone else is debugging their stupid hacks, you'll thank me.
Even if you don't think you are a great programmer, always strive to do the best you can, never take shortcuts--it doesn't cost you ANY time to do it right. I can justify this statement if you don't believe me.
Suddenly you find you've spent five years using the less-than-perfect solution, cursing it the while.
If you're cursing it, why is it at the bottom of the TODO list?
If it's not affecting you, why are you cursing it?
If it is affecting you, then it's a problem that needs to be fixed NOW.
I make sure that I'm vocal about the priority of the long-term fix, ESPECIALLY after the short-term fix has gone in.
I detail the reasons why it's a hack and not a good long-term solution, and use those to get the stakeholders (managers, clients, etc.) to understand why it needs to be fixed. Depending on the case, I may even inject a bit of worst-case-scenario fear: "If this safety line snaps, the whole bridge could collapse!"
I take responsibility for coming up with a long-term solution and make sure that it gets deployed.
It is a hard call. I have done hacks personally because sometimes you HAVE to get that product out the door and into the customers' hands. However, the way I take care of it is to just do it.
Tell the project lead, your boss, or the customer: there are some spots that need to be cleaned up and coded better. I need a week to do it, and it is going to cost less to do it now than it will 6 months from now, when we need to implement an extension onto the subsystem.
Usually problems like this arise from bad communication with management or the customer. If the solution works for the customer then they see no reason to ask for it to be changed. So they need to be told about the tradeoffs you made beforehand so they can plan extra time to fix the problems after you've implemented the quick solution.
How to solve it depends a bit on why it's a bad solution. If your solution is bad because it's hard to change or maintain, then the first time you have to do maintenance and have a bit more time, that is the right time to upgrade to a better solution. In this case it helps if you tell the customer or your boss that you took a shortcut in the first place. That way they know that they can't expect a fast solution next time around. Crippling the UI can be a good way to make sure the customer comes back to get stuff fixed.
If the solution is bad because it's risky or unstable then you really need to talk to the person doing the planning and have some time planned in to fix the problem asap.
Good luck. In my experience this is almost impossible to achieve.
Once you go down the slippery slope of implementing a hack because you are under pressure then you might as well get used to living with it for all time. There is almost NEVER enough time to re-work something that already works, no matter how badly it is implemented internally. What makes you think you will magically have more time "at some later date" to fix the hack?
The only exception I can think of to this rule is if the hack completely prevents you from implementing another piece of functionality that is needed by a customer. Then you have no choice but to do the re-work.
I try to build the hacky solution so that it can be migrated to the long-term way as painlessly as possible. Say you've got a guy who is building a database in SQL Server because that's his strongest DB, but your corporate standard is Oracle. Build the DB with as few non-transferable features (like BIT datatypes) as possible. In this example, it's not hard to avoid BIT types, but it makes transitioning later an easier process.
Educate whoever is in charge of making the final decision why the hacky way of doing things is bad in the long-run.
Describe the problem in terms they can relate to.
Include a graph of cost, productivity, and revenue curves.
Teach them about technical debt.
Regularly refactor as you push forward.
Never call it "refactoring" or "going back and cleaning up" in front of non-technical people. Instead, call it "adapting" the system to handle "new features".
Basically, people who don't understand software don't get the concept of revisiting things that already work. The way they look at it, developers are like mechanics who want to keep taking apart and reassembling the entire car every time someone wants to add a feature, which sounds insane to them.
It helps to make analogies to everyday things. Explain to them how when you made the interim solution, you made choices that suited building it quickly, as opposed to being stable, maintainable, etc. It's like choosing to build with wood instead of steel because wood is easier to cut, and thus, you could build the interim solution quicker. The wood, however, simply can not support the foundation of a 20-story building.
We use Java and Hudson for continuous integration. 'Interim solutions' must be commented with:
// TODO: Better solution required.
Every time Hudson runs a build it provides a report of each TODO item, so that we have an up-to-date, highly visible record of any outstanding items that need improvement.
Great question. This bothers me a lot, too - and most of the time I'm the sole person responsible for prioritizing issues in my own projects (yep, small business).
I found out that the problem that needs to be fixed is usually just a subset of the problem. IOW, the customer that needs an urgent fix does not need the whole problem to be solved, just a part of it - smaller or larger. That sometimes enables me to create a workaround that is not solution to the complete problem but just to the customer's subset and that allows me to leave the bigger issue open in the issue tracker.
That may of course not apply at all to your work environment :(
This reminds me of the story of "CTool". In the beginning CTool was put forward by one of our devs, I'll call him Don, as one possible way to solve the problem we were having. Being an earnest hard-working type, Don plugged away and delivered a working prototype. You know where I am going with this. Overnight, CTool became a part of the company work flow with an entire department depending on it. By the second or third day, bitter complaints started streaming in about CTool's shortcomings. Users questioned Don's competence, commitment and IQ. Don's protests that this was never supposed to be a production app fell on deaf ears. This went on for years. Finally, someone got around to re-writing the app, well after Don had departed. By this time, so much loathing had become attached to the name CTool that naming it CTool version 2 was out of the question. There was even a formal funeral for CTool, somewhat reminiscent of the copier (or was it a printer?) execution scene in Office Space.
Some might say Don deserved the slings and arrows for not pushing to fix CTool properly. My only point is that saying you should never hack out a solution is probably unjustifiable in the Real World. But if you are the one to do it, tread cautiously.
Get it in writing (an email). So when it becomes a problem later management doesn't "forget" that it was supposed to be temporary.
Make it visible to the users. The more visible it is the less likely people are going to forget to go back and do it the right way when the crisis is over.
Negotiate, before the temp solution is in place, for a project, resources, and timelines to get the real fix in. Work on the real solution should probably begin as soon as the temp solution is finished.
You file a second, very descriptive bug against your own "fix" and put a to-do comment right in the affected areas that says, "This area needs a lot of work. See defect #555" (use the right number of course). People who say "don't put in a hack" don't seem to understand the question. Assume you have a system that needs to be up and running now: your non-hack solution is 8 days of work, your hack is 38 minutes of work, and the hack is there to buy you time to do the real work without losing money while you're doing it.
Now you still have to get your customer or management agree to schedule the N*100 minutes of time required to do the real fix in addition to the N minutes needed now to fix it. If you must refuse to implement the hack until you get such agreement, then maybe that's what you have to do, but I've worked with some understanding people in that regard.
The real price of introducing a quick-fix is that when someone else needs to introduce a 2nd quick fix, they will introduce it based on your own quick-fix. So, the longer a quick-fix is in place, the more entrenched it will become. Quite often, working around a hack costs only a little more than doing things right would have, until you encounter a 2nd hack which builds on the first.
So, obviously it is (or seems to be) sometimes necessary to introduce a quick fix.
One possible solution, assuming your version control supports it, is to introduce a fork from the source whenever you make such a hack. If people are encouraged to avoid coding new features within these special "get it done" forks, then it will eventually be more work to integrate the new features with the fork than it will be to get rid of the hack. More likely, though, the "good" fork will get discarded. And if you are far enough away from release that making such a fork will not be practical (because it is not worth doing the dual integration mentioned above), then you probably shouldn't even be using a hack anyways.
A very idealistic approach.
A more realistic solution is to keep your program segmented into as many orthogonal components as possible and to occasionally do a complete rewrite of some of the components.
A better question is why the hacky solution is bad. If it is bad because it reduces flexibility, ignore it until you need the flexibility. If it is bad because it impacts the program's behavior, ignore it; eventually it will become a bug fix and WILL be addressed. If it is bad because it looks ugly, ignore it, as long as the hack is localized.
Some solutions I've seen in the past:
Mark it with a HACK comment in the code (or a similar scheme, such as XXX)
Have an automatic report run and emailed weekly to those who care, counting how many times the HACK comments appear (a rough sketch of such a counter follows this list)
Add a new entry in your bug tracking system with the line number and description of the right solution (so the knowledge gained from the research before writing the hack isn't lost)
Write a test case that demonstrates how the hack fails (if possible) and check it into the appropriate test suite (i.e. so that it throws errors that someone will eventually want to clean up)
once the hack is installed and the pressure is off, immediately start on the right solution
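As a rough sketch of the weekly marker-count report mentioned above, assuming a C++17 toolchain; the file extensions and markers here are illustrative, not any particular team's tool:

// Rough sketch: count outstanding HACK/XXX markers under a source tree.
// Point it at your source root and mail the output from a scheduled CI job.
#include <filesystem>
#include <fstream>
#include <iostream>
#include <string>

int main(int argc, char** argv) {
    const std::string root = (argc > 1) ? argv[1] : ".";
    std::size_t count = 0;
    for (const auto& entry : std::filesystem::recursive_directory_iterator(root)) {
        if (!entry.is_regular_file()) continue;
        const std::string ext = entry.path().extension().string();
        if (ext != ".cpp" && ext != ".h" && ext != ".hpp") continue;
        std::ifstream in(entry.path());
        std::string line;
        while (std::getline(in, line)) {
            if (line.find("HACK") != std::string::npos ||
                line.find("XXX") != std::string::npos)
                ++count;
        }
    }
    std::cout << "Outstanding HACK/XXX markers: " << count << "\n";
    return 0;
}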
This is an excellent question. One thing I've noticed as I get more experience: hacks buy you a very short amount of time, and often cost you a huge amount more. Closely related is the 'quick fix' that solves what you think is the problem, only to find when it blows up that it wasn't the problem at all.
Setting aside the debate about whether you should do it, let's assume that you have to do it. The trick now is to do it in a way that minimizes long-range effects, is easily ripped out later, and makes itself a nuisance so you remember to fix it.
The nuisance part is easy: make it issue a warning every time you execute the kludge.
The ripped-out part can be easy: I like to do this by putting the kludge behind a subroutine name. That makes it easier to update, since you compartmentalize the code. When you get your permanent solution, your subroutine can either implement it or be a no-op. Sometimes a subclass can work nicely for this too. Don't let other people depend on whatever your quick fix is, though. It's difficult to recommend any particular technique without seeing the situation.
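A minimal sketch of that shape, with hypothetical names and a placeholder defect number:

// Hypothetical example: the kludge lives behind one function, nags in the log
// whenever it runs, and can later be replaced or turned into a no-op without
// touching its callers.
#include <iostream>

// KLUDGE: flat 10% discount until the real pricing rules are implemented.
// See defect #1234 (placeholder number) for the proper fix.
double applyDiscountKludge(double price) {
    std::cerr << "WARNING: applyDiscountKludge() is a temporary hack\n";
    return price * 0.9;
}

int main() {
    std::cout << applyDiscountKludge(100.0) << "\n";  // prints 90, plus the nag on stderr
    return 0;
}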
Minimizing long range effects should be easy if the rest of the code is nice. Always go through the published interface, and so on.
Try to make the cost of the hack clear to the business folks. Then they can make an informed decision either way.
You could intentionally write it in a way that is overly restrictive and single-purposed, so that it would require a rewrite to be modified.
We had to do this once - make a short-term demo version that we knew we did not want to keep. The customer wanted it on a Wintel box, so we developed the prototype in SGI/XWindows. (We were fluent in both, so it wasn't a problem.)
Confession:
I have used '#define private public' in C++ in order to read data from some other code layer. It went in as a hack but works well and fixing it has never become a priority. It is now 3 years later...
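For the curious, a minimal illustration of what that hack looks like; the header and member names are hypothetical, and this is a cautionary example rather than a recommendation:

// Cautionary illustration only: redefining the keyword exposes another layer's
// private members to this translation unit. It is technically undefined
// behaviour, but it "works" on common compilers - which is exactly how it
// survives for three years.
#define private public
#include "OtherLayer.h"   // hypothetical header from the other code layer
#undef private

int peekInternalCount(const OtherLayer& layer) {
    return layer.internalCount;   // a normally inaccessible private member
}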
One of the main reasons hacks do not get removed is the risk that one introduces new bugs while fixing the hack. (Especially when dealing with pre-TDD code bases.)
My answer is a bit different from the others. My experience is that the following practices help you stay agile and move from hacky first-iteration/alpha solutions to beta/production-ready:
Test Driven Development
Small units of refactoring
Continuous Integration
Good Configuration management
Agile database techniques/database refactoring
And it should go without saying that you have to have stakeholder support to do any of these correctly. But with these practices in place you have the right tools and processes to quickly change a product in major ways with confidence. Sometimes your ability to change is your ability to manage the risk of the changes, and from the development perspective these tools/techniques give you surer footing.
As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
Joining an existing team with a large codebase already in place can be daunting. What's the best approach?
Broad; try to get a general overview of how everything links together, from the code
Narrow; focus on small sections of code at a time, understanding how they work fully
Pick a feature to develop and learn as you go along
Try to gain insight from class diagrams and uml, if available (and up to date)
Something else entirely?
I'm working on what is currently an approx 20k-line C++ app & library (Edit: small in the grand scheme of things!). In industry I imagine you'd get an introduction from an experienced programmer. However, if this is not the case, what can you do to start adding value as quickly as possible?
--
Summary of answers:
Step through code in debug mode to see how it works
Pair up with someone more familiar with the code base than you, taking turns to be the person coding and the person watching/discussing. Rotate partners amongst team members so knowledge gets spread around.
Write unit tests. Start with an assertion of how you think the code will work. If it turns out as you expected, you've probably understood the code. If not, you've got a puzzle to solve and/or an enquiry to make. (Thanks Donal, this is a great answer)
Go through existing unit tests for functional code, in a similar fashion to above
Read UML, Doxygen generated class diagrams and other documentation to get a broad feel of the code.
Make small edits or bug fixes, then gradually build up
Keep notes, and don't jump in and start developing; it's more valuable to spend time understanding than to generate messy or inappropriate code.
This post is a partial duplicate of the-best-way-to-familiarize-yourself-with-an-inherited-codebase.
Start with some small task if possible, and debug the code around your problem.
Stepping through code in debug mode is the easiest way to learn how something works.
Another option is to write tests for the features you're interested in. Setting up the test harness is a good way of establishing what dependencies the system has and where its state resides. Each test starts with an assertion about the way you think the system should work. If it turns out to work that way, you've achieved something and you've got some working sample code to reproduce it. If it doesn't work that way, you've got a puzzle to solve and a line of enquiry to follow.
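A minimal sketch of what one of those tests might look like, assuming a hypothetical InvoiceCalculator class from the legacy code (the names and the expected behaviour are made up):

// Characterization-test sketch: the assertion records how I *think* the system
// behaves; running it either confirms the theory or hands me a puzzle.
#include <cassert>
#include "InvoiceCalculator.h"   // hypothetical header from the codebase

int main() {
    InvoiceCalculator calc;      // setting this up reveals the dependencies
    calc.addItem("widget", /*quantity=*/3, /*unitPriceCents=*/999);
    assert(calc.totalCents() == 2997);   // my current theory of how totals work
    return 0;
}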
One thing that I usually suggest to people, which has not yet been mentioned, is that it is important to become a competent user of the existing code base before you can be a developer of it. When new developers come into our large software project, I suggest that they spend time becoming expert users before diving in to work on the code.
Maybe that's obvious, but I have seen a lot of people try to jump into the code too quickly because they are eager to start making progress.
This is quite dependent on what sort of learner and what sort of programmer you are, but:
Broad first - you need an idea of scope and size. This might include skimming docs/uml if they're good. If it's a long term project and you're going to need a full understanding of everything, I might actually read the docs properly. Again, if they're good.
Narrow - pick something manageable and try to understand it. Get a "taste" for the code.
Pick a feature - possibly a different one to the one you just looked at if you're feeling confident, and start making some small changes.
Iterate - assess how well things have gone and see if you could benefit from repeating an early step in more depth.
Pairing with strict rotation.
If possible, while going through the documentation/codebase, try to employ pairing with strict rotation. Meaning, two of you sit together for a fixed period of time (say, a 2 hour session), then you switch pairs, one person will continue working on that task while the other moves to another task with another partner.
In pairs you'll both pick up a piece of knowledge, which can then be fed to other members of the team when the rotation occurs. What's also good about this is that when a new pair is brought together, the one who worked on the task (in this case, investigating the code) can then summarise and explain the concepts in a more easily understood way. As time progresses everyone should be at a similar level of understanding, and hopefully you avoid the "Oh, only John knows that bit of the code" syndrome.
From what I can tell about your scenario, you have a good number for this (3 pairs), however, if you're distributed, or not working to the same timescale, it's unlikely to be possible.
I would suggest running Doxygen on it to get an up-to-date class diagram, then going broad-in for a while. This gives you a quickie big picture that you can use as you get up close and dirty with the code.
I agree that it depends entirely on what type of learner you are. Having said that, I've been at two companies which had very large code-bases to begin with. Typically, I work like this:
If possible, before looking at any of the functional code, I go through unit tests that are already written. These can generally help out quite a lot. If they aren't available, then I do the following.
First, I largely ignore implementation and look only at header files, or just the class interfaces. I try to get an idea of what the purpose of each class is. Second, I go one level deep into the implementation starting with what seems to be the area of most importance. This is hard to gauge, so occasionally I just start at the top and work my way down in the file list. I call this breadth-first learning. After this initial step, I generally go depth-wise through the rest of the code. The initial breadth-first look helps to solidify/fix any ideas I got from the interface level, and then the depth-wise look shows me the patterns that have been used to implement the system, as well as the different design ideas. By depth-first, I mean you basically step through the program using the debugger, stepping into each function to see how it works, and so on. This obviously isn't possible with really large systems, but 20k LOC is not that many. :)
Work with another programmer who is more familiar with the system to develop a new feature or to fix a bug. This is the method that I've seen work out the best.
I think you need to tie this to a particular task. When you have time on your hands, go for whichever approach you are in the mood for.
When you have something that needs to get done, give yourself a narrow focus and get it done.
Get the team to put you on bug fixing for two weeks (if you have two weeks). They'll be happy to get someone to take responsibility for that, and by the end of the period you will have spent so much time problem-solving with the library that you'll probably know it pretty well.
If it has unit tests (I'm betting it doesn't), start small and make sure the unit tests don't fail. If you stare at the entire codebase at once your eyes will glaze over and you will feel overwhelmed.
If there are no unit tests, you need to focus on the feature that you want. Run the app and look at the results of things that your feature should affect. Then start looking through the code trying to figure out how the app creates the things you want to change. Finally change it and check that the results come out the way you want.
You mentioned it is an app and a library. First change the app and stick to using the library as a user. Then after you learn the library it will be easier to change.
From a top-down approach, the app probably has a main loop or a main GUI that controls all the action. It is worth understanding the main control flow of the application, so read the code to give yourself a broad overview of the main flow of the app. If it is a GUI app, sketch out on paper which screens there are and how to get from one screen to another. If it is a command-line app, map out how the processing is done.
Even in companies it is not unusual to have this approach. Often no one fully understands how an application works. And people don't have time to show you around. They prefer specific questions about specific things so you have to dig in and experiment on your own. Then once you get your specific question you can try to isolate the source of knowledge for that piece of the application and ask it.
Start by understanding the 'problem domain' (is it a payroll system? inventory? real time control or whatever). If you don't understand the jargon the users use, you'll never understand the code.
Then look at the object model; there might already be a diagram or you might have to reverse engineer one (either manually or using a tool as suggested by Doug). At this stage you could also investigate the database (if any); it should follow the object model, but it may not, and that's important to know.
Have a look at the change history or bug database, if there's an area that comes up a lot, look into that bit first. This doesn't mean that it's badly written, but that it's the bit everyone uses.
Lastly, keep some notes (I prefer a wiki).
The existing guys can use it to sanity check your assumptions and help you out.
You will need to refer back to it later.
The next new guy on the team will really thank you.
I had a similar situation. I'd say you go like this:
If it's a database-driven application, start from the database and try to make sense of each table, its fields, and then its relation to the other tables.
Once you're fine with the underlying store, move up to the ORM layer. Those tables must have some kind of representation in code.
Once done with that, move on to how and where these objects come from. An interface? What interface? Any validations? What preprocessing takes place on them before they go to the datastore?
This would familiarize you better with the system. Remember that trying to write or understand unit tests is only possible when you know very well what is being tested and why it needs to be tested in only that way.
And in the case of a large application that is not database-driven, I'd recommend another approach:
What is the main goal of the system?
What are the major components of the system that solve this problem?
What interactions do the components have among themselves? Make a graph that depicts component dependencies. Ask someone already working on it. The components must be exchanging something with each other, so try to figure those out as well (e.g. IO might be returning a File object back to the GUI, and so on).
Once comfortable with this, dive into the component that has the fewest dependencies on the others. Now study how that component is further divided into classes and how they interact with each other. This way you've got the hang of a single component in full.
Move to the next least-dependent component.
At the very end, move to the core component, which typically has dependencies on many of the other components you've already tackled.
While looking at the core component, you might be referring back to the components you examined earlier, so don't worry, keep working hard!
For the first strategy:
Take the example of this stackoverflow site, for instance. Examine the datastore: what is being stored, how it is stored, what representations those items have in the code, and how and where they are presented in the UI. Where do they come from, and what processing takes place on them on their way back to the datastore?
For the second one:
Take a word processor, for example. What components are there? IO, UI, Page and the like. How do these interact with each other? Move along as you learn further.
Be relaxed. Written code is someone's mindset, frozen logic and thinking style, and it takes time to read that mind.
First, if you have team members available who have experience with the code you should arrange for them to do an overview of the code with you. Each team member should provide you with information on their area of expertise. It is usually valuable to get multiple people explaining things, because some will be better at explaining than others and some will have a better understanding than others.
Then, you need to start reading the code for a while without any pressure (a couple of days or a week if your boss will provide that). It often helps to compile/build the project yourself and be able to run the project in debug mode so you can step through the code. Then, start getting your feet wet, fixing small bugs and making small enhancements. You will hopefully soon be ready for a medium-sized project, and later, a big project. Continue to lean on your team-mates as you go - often you can find one in particular who is willing to mentor you.
Don't be too hard on yourself if you struggle - that's normal. It can take a long time, maybe years, to understand a large code base. Actually, it's often the case that even after years there are still some parts of the code that are still a bit scary and opaque. When you get downtime between projects you can dig in to those areas and you'll often find that after a few tries you can figure even those parts out.
Good luck!
You may want to consider looking at source code reverse engineering tools. There are two tools that I know of:
SWAG Kit (Linux only)
Bauhaus (academic and commercial)
Both tools offer similar feature sets that include static analysis that produces graphs of the relations between modules in the software.
This mostly consists of call graphs and type/class dependencies. Viewing this information should give you a good picture of how the parts of the code relate to one another. Using this information, you can dig into the actual source for the parts that you are most interested in and that you need to understand/modify first.
I find that just jumping into code can be a bit overwhelming. Try to read as much documentation on the design as possible. This will hopefully explain the purpose and structure of each component. It's best if an existing developer can take you through it, but that isn't always possible.
Once you are comfortable with the high-level structure of the code, try to fix a bug or two. This will help you get to grips with the actual code.
I like all the answers that say you should use a tool like Doxygen to get a class diagram, and first try to understand the big picture. I totally agree with this.
That said, this largely depends on how well factored the code is to begin with. If it's a gigantic mess, it's going to be hard to learn. If it's clean and organized properly, it shouldn't be that bad.
See this answer on how to use test coverage tools to locate the code for a feature of interest, without knowing anything about where that feature is, or how it is spread across many modules.
(shameless marketing ahead)
You should check out nWire. It is an Eclipse plugin for navigating and visualizing large codebases. Many of our customers use it to break in new developers by printing out visualizations of the major flows.