I'm in the process of going back over some of the more minor TODOs in my code. One of them is in a class that handles partial dates, e.g. Jan 2001. It works fine for dates that will be seen in our system (1990 - 2099) and gracefully fails for other dates.
The TODO I've left for myself is that I don't handle dates in the century 2100 and beyond. I don't really think it's worth the effort fixing this particular problem, but I am cognisant of the Y2K bugs. If we were in 2080 already, I think I'd feel differently and would fix the bug.
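To give a concrete picture, here's a minimal sketch of the kind of guard I mean (names are hypothetical, not the actual code):
// Hypothetical partial date (month + year) with a hard-coded validity window.
public final class PartialDate {
    private static final int MIN_YEAR = 1990;
    private static final int MAX_YEAR = 2099; // the TODO: 2100 and beyond are rejected

    private final int month; // 1..12
    private final int year;

    public PartialDate(int month, int year) {
        if (month < 1 || month > 12 || year < MIN_YEAR || year > MAX_YEAR) {
            // the "graceful failure": reject early with a clear message
            throw new IllegalArgumentException("Unsupported partial date: " + month + "/" + year);
        }
        this.month = month;
        this.year = year;
    }
}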
So how long does code last? How far ahead should we plan for our systems to keep running?
Update
Ok, thanks for all your input. I think I'm going for the option of leaving the TODO in the code and doing nothing. The thoughts I found most interesting were:
#Adrian - Eternity. I think that's the most correct assumption; your point about VMs is a good one.
#jan-hancic - It depends, yes it does.
#chris-ballance - I'm guessing I'll be dead by the time this restriction is hit, so they can come defile my grave if they want, but I'll be dead, so I'll just haunt their asses.
The reason I decided to do nothing was simple. It added negligible business value; the other things that needed looking at did add value, so I'll do them first. If I get the time I'll fix it, but really it'll be nothing more than an academic exercise.
Longer than you expect.
Eternity.
Given the trend of old systems kept running in virtual machines, we must assume that all useful code will run forever. There are many systems that have been running since the '60s, e.g. backend code in the financial sector, and there seems to be no indication that these systems will ever get replaced. (And in the meantime, the frontend is being replaced every other year with the latest fad in web technology. So, the closer your code is to the core of your system, the more likely it is to run forever.)
You can't have a general answer here. Depends on what kind of project you are building.
If you are writing software for a space probe then you might want to code it so that it will work for the next 100 years and more.
But if you are programming a special Xmas offer for your company's web page, a few weeks should be enough ...
Assume that whoever will maintain the code is a psychopath and has your home address.
Nobody really knows. Professional programming has been around for 30-40 years, so nobody really knows if code is going to last for 100 years. But if the Y2K bug is any indication, a lot of code is going to stick around for a lot longer than the programmer intended. Keep in mind that even if you take that into account, it could still stick around longer than you expected. No matter how much you prepare, it might still outlive its intended life expectancy.
My advice is to not plan for code to last 100 years. Instead, try to make sure all your code will work for the same length of time; that is, one part should not fail in 2 years while another part would last 100 years. Remember, you should always fix the weakest link first, so there is no point making the strongest link stronger.
Sometimes, code lasts longer than you think. But, more important is the slippery slope argument. Once you forgive yourself a bit of non-bullet-proofness, you may be tempted to optimize further and skimp on logical correctness, until it finally bites you.
By the way, I recommend having an issue ID (such as a FogBugz case number) in every TODO comment, so that people can actually subscribe to and track the TODO.
In Dan Bernstein's immortal words: Don't contribute to the Y10K problem!
I don't think the code will last that long.
Think about all the inventions and progress made in the last 90 years.
In 2100 we won't have to write code by hand.
There will be some kind of brain-machine interface.
Well, we recently made a timestamp format where time is stored in an unsigned 64-bit integer as microseconds from 1970. It will last until the year 586912, which should be enough.
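As a rough sanity check on that figure (a sketch in Java, which can treat a long as unsigned; it assumes 365-day years):
// Roughly when an unsigned 64-bit microsecond counter starting at 1970 runs out.
public class TimestampHorizon {
    public static void main(String[] args) {
        long maxMicros = -1L; // all 64 bits set, i.e. 2^64 - 1 read as unsigned
        long seconds = Long.divideUnsigned(maxMicros, 1_000_000L);
        long years = seconds / 31_536_000L; // 365 * 24 * 60 * 60 seconds per year
        System.out.println(1970 + years); // prints 586912
    }
}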
Coding for "forever" is unnecessary - of course you could use BigIntegers and such everywhere, but why? Just be prepared for more than 5 or 10 years. Twenty-year-old production code is not unusual nowadays, and I suspect that the average life cycle will get even longer in the near future.
It depends on how much business value the code has and how much resources it takes to write it from scratch. The more value and resources the longer it lasts. Ten years and more is typical for commercial "works, don't touch it" code.
I always try to code as if my applications must work "forever". I am very sure I won't be around anymore in 2100, but knowing my software has a built-in expiration date doesn't make me feel good. If you know about such things, try to avoid them! You never know; some unknown programmer in the future may be grateful.
Right up until the time that it breaks, or otherwise ceases to be useful, and then for a bit longer after that.
The essential things are:
How good is your internal date class (get a very robust library version and stick to it!)
It's not just the passage of time, but also the growth in the range of inputs your users want. For example, maybe you have 30-year mortgage inputs now, but next month someone decides to input a 99-year lease with maturity 2110, or a 100-year Disney bond!
If you accept 2-digit year inputs with a date window, think very carefully about how that window is applied to start and end dates, and give lots of immediate feedback (a sketch of one such window follows below).
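For illustration, a minimal sliding-window sketch (the pivot of 30 is an arbitrary example, not a recommendation):
// Expands a 2-digit year using a fixed pivot: 30..99 -> 1930..1999, 00..29 -> 2000..2029.
static int expandTwoDigitYear(int yy) {
    if (yy < 0 || yy > 99) throw new IllegalArgumentException("not a 2-digit year: " + yy);
    return (yy >= 30) ? 1900 + yy : 2000 + yy;
}
So 95 becomes 1995 and 10 becomes 2010; the trap is applying the same pivot blindly to both a start date and an end date decades in the future.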
Here are my two cents:
When we design a project, we usually declare it to last "at least" 5 years. Usually no more than 10 years before we re-design it and build it all over. (We're talking about mid-large size projects here).
What usually happens is that the new project you build is supposed to replace the old one, usually technology-wise (e.g. moving from mainframe to Windows, or VB to .NET), but this replacement project never ends. So your client ends up working with 2 systems at once, and that leftover system is what is later referred to as "legacy".
If you wait long enough, a third project will rise causing the client to work with 3 systems at once and so on...
But to answer your question, I would bet on 5-10 years before a redesign, and unless your dates are supposed to reach long into the future - no need to worry about the 2100 limitation.
IMHO it comes down to craftsmanship: the pride we take in our work, coding to a standard we would not be ashamed for another real coder to see.
In the case of dates like this, you've stated that it gracefully fails after 2100. This sounds like you can remove the TODO without a bad conscience, because you have built in a response that will allow the cause of failure to be easily diagnosed and fixed in the (however likely or unlikely) circumstance that a failure occurs.
There are examples of code running on older machines which is 40 or 50 years old.
(Interesting bits in this thread: http://developers.slashdot.org/developers/08/05/11/1759213.shtml).
You've got to ask yourself about the nature of the problem you're solving but generally speaking even "quick fixes" will be around for years so you could realistically be looking at a decade or more for code intended to have a decent shelf life.
The other things you need to think about are:
1) What is the "active life" of the application - that is, when it's going to be in use and processing data?
2) What is the "inactive life" of the application - that is, when it's not going to be used day to day but might be used for retrieving and viewing old records? For instance, UK audit law means that records need to be available for 7 years, so that's potentially 7 years from last system use.
3) What is the range of future data it needs to handle? For instance, say you're taking down credit card expiry dates - you can have a card which won't expire for a decade. Can you handle that date? (A sketch of this sort of constraint follows below.)
The answers to these questions will generally lead you to the assumption that you should never knowingly write code which has date constraints beyond those the OS/Language you're using dictates.
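To illustrate point 3, here's a sketch of an innocent-looking validation that quietly embeds a date constraint (the ten-year horizon is invented for the example):
// A card-expiry check that hard-codes an assumption about how far ahead dates can be.
import java.time.YearMonth;

class ExpiryCheck {
    static void validateCardExpiry(YearMonth expiry) {
        YearMonth horizon = YearMonth.now().plusYears(10); // assumed issuing horizon
        if (expiry.isAfter(horizon)) {
            throw new IllegalArgumentException("Expiry beyond supported horizon: " + expiry);
        }
    }
}
The day an issuer ships a card valid for eleven years, this code starts rejecting legitimate input.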
The question isn't "How long does code last?" but rather "How long will things in my code affect an application?"
Even if your code is replaced, it's possible that it will get replaced with code that does the exact same thing. To some extent, this is the direct cause of the Y2K problem. More to the point, it is the direct cause of the Y2038 problem.
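For the record, a quick sketch of where that 32-bit limit sits (using java.time just to print the date):
// The last second representable in a signed 32-bit time_t.
import java.time.Instant;

public class Y2038Demo {
    public static void main(String[] args) {
        System.out.println(Instant.ofEpochSecond(Integer.MAX_VALUE)); // 2038-01-19T03:14:07Z
        // One second later overflows 32 bits; on affected systems it wraps around to 1901.
    }
}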
Also keep in mind what you mean by "last".
For example, the original UNIX operating system was developed 30 years ago. But during those 30 years, the product has evolved over time.
Though, it wouldn't surprise me if a little bit of the original code still exists in it today.
So think of it 2 ways: 1) do you ever anticipate the code being touched in the future? 2) the product/code will evolve if you have support and involvement.
My current shop has a large code base that runs financial applications with complex business rules. Some of these rules are encoded in stored procedures, some in triggers, and some in 3gl and 4gl application code. There is running code from the late 90's, and none of it in your "traditional" Legacy languages like COBOL or FORTRAN. As one could imagine, it's a steaming pile of spaghetti code, most created before TDD meant anything, so people are reluctant to touch anything.
I have had occasion to be brought in on contract more than a decade after the fact to consult on porting code to a new platform (OS/2 just isn't that popular these days!). When in doubt, assume that your code will live longer than you will. At the very least, document the heck out of limitations like this; fix them unless that takes tremendously more work than to document them.
In 1995 I started work at a new job, on an 8-year-old code base.
So its inception date was 1987 or thereabouts.
The code is probably still in service. That's what, 23 years?
The company has moved a few times, but they probably kept the software (because it works).
If it's still in service now, it will still be in service in a decade or so.
That's not surprising, particularly for high-tech code, mostly in C.
In 1999 I started at a new job; that codebase had antecedents back to 1984.
The new version I designed in the 2000s is still in service, with design elements like data file formats carried over from the previous one (and so on back), making it a development programme spanning over 26 years.
So the year 2038 problem is starting to loom a little as those 32-bit signed time_t values roll over.
Remember that one of the major bonuses of modern programming is reuse.
So that means the code you write originally to solve one problem may get re-purposed and used in a completely different scenario years later (maybe even without your knowledge, by a team mate).
As a side note:
One of the major pluses of automated unit testing is that it tests code you can't even remember is there in a system! :)
As long as people can continue to bill for support with people willing to pay for it.
3-5 years max. After that you have moved on to another job and left your crappy code behind.
Related
How do you deal with a client who has different time estimates for the software product than yours?
I am going to describe a scenario that is not mine, but that captures broadly the same problem. I am working as a subcontractor to a large company that has a programming department. The software project we are working on is in an area that the department believes it has a handle on, but because their expertise and mine are very different, we tend to get different results.
Example: At the start of the project I suggested one way of development which they rubbished as being unrealistically difficult and suggested integrating a different framework (one they are familiar with) with the programming language we are using (Python) to get more or less the same result.
Their estimate for this integration: less than a week (they haven't done the integration before).
My estimate for the integration: above two weeks.
Using my suggested way to get the result needed (including using matplotlib among other libraries used elsewhere within the project): 45 minutes. This is not an estimate, the bit was actually finished in 45 minutes.
Example: for the software to be integrated with their internal system, they needed to provide a web service for me to use. They provided a broken one, though it does work with their internal tool (it doesn't work with mainstream .NET or Java packages, among other options). They maintain that it is my fault that the integration has taken longer than the time estimated.
The problem is not that they don't know; the problem is that they have enough knowledge about programming to be dangerous (in my opinion). Are there any guidelines for how to deal with this type of situation? A way to manage expectations? Or maybe I shouldn't get involved in such projects from the start, and in that case, what are the telltale signs?
If a client isn't happy with a time estimate, don't do the work. If they think they can do it better or faster, tell them to go ahead.
The one thing I never allow is for my estimates to be modified. That's something that caught me out early on in my career but we learn our lessons.
If clients were so good at doing the work, they wouldn't be hiring me. I'd simply point out that they hired me for my expertise, so why are they disregarding it? Of course, if they were to allow the scope of the project to change (i.e., less work), that would be another matter, and one up for discussion.
If you didn't lock in exactly what they were meant to provide as part of the deal, then it's a "he says, she says" situation and, unfortunately, the customer controls the purse strings. However, often, the greatest power you can have is the ability to just walk away.
No-one says you have to do the job.
Of course, all that advice above is worth every cent you paid for it :-)
I don't know your specific circumstances.
Or maybe I shouldn't get involved in such projects from the start, and in that case, what are the telltale signs?
My answer for sure. If you can avoid those projects, do it.
Some signs: people thinking they know how to do things when you can tell they can't. The "oh no, let's not use this perfectly suitable tool because I don't know it" attitude is a major indicator that the person is technically challenged.
First of all, it is no fun to be in such an environment.
So, if you like to have fun at your job, and you do not need to take this job for extenuating financial reasons, then simply do not take the job that is not fun.
Since that is hardly realistic in many cases, you will end up with the job and need to manage the situation as best you can. One way is to make sure there is a paper trail documenting your objections and concerns with the plan. Try not to be overtly negative, but try to be constructive and present valid alternatives. Here you will need to feel out the political landscape, determine if the 'boss' will be appreciative or threatened by your commentary, and act accordingly.
Many times there are other issues that management is dealing with that you are not aware of. Be cautious of this fact, and maybe ask the management team if this is the case, again without being condescending or negative.
Finally, if you have alternatives that take less time than the meetings it would take to discuss them, just try it in a sandbox, and show it off. This would go a long way to 'proving' your points. Caution here is that you could be accused of not being a team player, or of wasting resources, or not following direction. Make sure this is mitigated by doing these types of things on your own time, or after careful consideration of how long you are spending on these things as well as how vested your boss seems to be on the alternatives.
hth
I ran into the same problem with integration. "Example: for the software to be integrated with their internal system, they needed to provide a web service for me to use... They maintain that it is my fault that the integration has taken longer than the time estimated."
Wow, very similar to what I was experiencing with a client. The best thing I can suggest is to keep good documentation. In the end that is what saved me. When it came to finger pointing, I had all of the emails and facts in order and was prepared to defend myself.
One thing I would suggest is to separate out a target/goal from an estimate. I would not change my estimate unless it involved actually removing features, or something was revealed that would make the work easier. Tell them you will try to hit the target any way you can and that you care about the business goal; however, your estimate will not change. If it's getting nowhere and they are just dense, then smile and nod and take it, if it's the only gig around.
I was just writing about this in my blog:
How to estimate the WRONG way
I understand that this question could be answered with a simple sentence and that it may be viewed as subjective; however, I am a young student interested in pursuing a career in programming, and I wondered how long it took some of you to get to the level of experience you have now.
I ask this because I am currently working on building an application in Java on the Android platform, and it bothers me that I am constantly having to look up how to write a certain section of code in my application, such as writing to a database, or how an if statement should be structured.
My question really is, how long did it take for you to become experienced enough to actually know exactly how your next line of code was going to look, before you even wrote it?
The speed at which you can quickly recall language syntax, common library functions, and best practice patterns is directly proportional to the amount of time you spend using them.
In other words you will find yourself getting faster the more you do it.
I have been a C++ programmer for the last 20 years. It has taken me that long to get to this expertise level. I'm mostly a Windows programmer, and I keep the MSDN website up on one of my monitors all the time.
Doesn't matter how long you've been doing it. You will never know everything from memory. Don't sweat it.
I've been programming for almost half of my life and I still can't always recall simple syntax, let alone entire tracts of code for more complex tasks. If you ask me, that's what reference books and Google are for.
A far more important skill to have is the general knowledge of programming in any language, i.e. recursion, looping, object oriented design, working with APIs, error handling etc... Once you have all that down, you can apply it to any language and platform.
I can tell you that after 25 years there are lines of code where I don't know exactly how they're going to look.
Want an example? I've been programming in Java since last century, and I can honestly still make a mistake over whether to type hashcode() or hashCode().
Why? Because actually typing such a method name yourself is so last century. Your intention is to override Object's hashCode() method, so you use programming by intention.
You hit Ctrl-O then h and you get a list of the methods starting with an 'h' that you can override. Then you hit enter. As a bonus, the "@Override" gets inserted for you too.
4 keys. 4, to get this:
@Override
public int hashCode() {
    return super.hashCode(); // IDE-generated skeleton, ready to be filled in
}
And honestly, whether hashCode takes an uppercase 'c' or not... I couldn't care less. That is not what a hashcode is about, and my intention is not to know all the inconsistencies languages and API designers came up with. My intention is to override the method that gives back an object's hashcode, and my (modern) IDE allows me to get that skeleton in four keypresses, including hitting enter.
Another example: there are people who really do type this countless times a day:
for (int i = 0; i < ; i++) {
}
or the more tricky:
for (int i = ; i >= 0; i--) {
}
Note that in the latter case I can still mess up and type "i++" instead of "i--" (a 'thinko', as it's called).
But I don't care at all, because I type "fi<tab>" (three keys) and I get the first one, or "fir<tab>" (four keys, "for i (in) reverse") and I get the second one. You ain't beating that (especially seeing that I'm a touch typist, so I type these three or four keys fast). In addition to speed, as an added bonus, the autocompletion won't mess up your "i--".
In many cases I don't know exactly the line I'll get: sure, I know it "more or less", and that's exactly the way it should be.
Don't sweat it too much. As others have mentioned it eventually gets easier to write code without looking things up so much as you work with a particular language over time.
HOWEVER
There are a few reasons that even veteran programmers find themselves constantly using reference material:
(1) Unlike days of yore, most projects now require you to use a number of languages to get the job done. For example, a single web-site project may require C#, XML, JavaScript, SQL, HTML, XHTML, RegEx and CSS, all in the same project. Switching between some of these languages can really throw you for a loop sometimes, because many of them are just similar enough to be familiar, but just different enough to make you forget the subtle differences in syntax.
(2) Just when you start getting comfortable that you know a language inside and out, the vendor will release a new update that changes everything you knew about it. For example ASP->ASP.NET.
I still look up simple things fairly frequently and I've been at this almost 20 years. The important thing is that you understand the underlying concepts and principles.
It took me 12 years of professional programming to get where I am today. You will always improve when working with a programming language, even if you have been working with it for the past 5 years.
About your question, it depends. I think that you should be comfortable with the syntax after a week, comfortable with the main libraries after a month, and comfortable with the platform after 6 months.
But when you get there, don't stop!
If you code every day with the same language, you'll probably have the common language elements and patterns memorized in a month or two. But there are plenty of things that you'll probably never memorize, simply because you don't use them often enough, and also because modern IDEs can help you so much that you don't really need to remember everything if you utilize all their features (like code snippets, shortcuts, and IntelliSense).
I've been coding for 15 years, and doing C# for the last three, and I still use the MSDN reference material every day. However, as far as the basic building blocks of the language are concerned, I had them memorized in the first month or two.
Also, the more often you code, the better you'll commit it to memory.
There's a false assumption going on here I think...
At my job I end up working all over the stack in different languages and platforms. If I'm away from a project for 6 months I end up having to look at code to get even basic things done. The advantage of experience is reducing the re-ramp up time on productivity though.
So, instead of it taking weeks or months to get back to a point where I can write 80% of the code from memory it takes a few or several days (if that sometimes). I've been programming for about 5 years now. I'm just now getting to the point of being able to visualize small applications in their entirety.
As long as you're working on solving a problem that you haven't already solved (numerous times) you'll probably always have to look up code.
If it can be done, I'm guessing it takes longer than 5 years for most people, unless they work in one language, with one editor, and in one area (e.g. C#, Visual Studio, file system operations). My company isn't big enough to employ someone that specialized.
Don't be downhearted by having to consult documentation all the time, man; that's what it's there for. Over time you get used to syntax and things like that, but don't sweat it if you can't remember library methods or ways to connect to a DB.
Over time (with experience) you might remember these off the top of your head but in reality there's nothing wrong with taking a quick consultation of the documentation to refresh the memory.
Also remember that technology is ever changing so it's good to keep the mind fresh with new ways of thinking/ways to do things.
The question is not 100% clear. One of the best programmers I know doesn't remember anything and needs to look up printf formatting strings almost daily. On the other hand, if you are having a hard time figuring out how to write that for (int i = 0; i < len; i++) loop after doing this for 6 months -- that doesn't sound right.
The idea that one could learn every bit of code from even a single language and then type it plainly from memory is pretty far out. The number of pre-defined functions for, say, PHP or Java alone is immense.
But that being said, it's important to learn the programming structures and know them the best you can. Structures like foreach, if-then-else, switch, etc. are really the things that need to be internalized thoroughly. Also, conceptual things like object orientation (not just "using" objects like mysqli, but understanding things like controlling code, client code, bottom-up and top-down architecture) are the real things that make good programmers great. For myself, I know I don't have the capacity to learn every defined function provided by language writers, so I instead learn whether or not something can be done (and of course still try on occasion to do things that can't be done, lol). If you know that, then it's a matter of Google and books to find the "specific" mechanisms for how.
Cheers, my friend.
Not an answer per se, but I just wanted to say that as a novice myself, this q&a is one of the most useful things I've read on SO. It seems that yes, experts can probably code the basic stuff from memory, but even they revert to book for complex problems, and for beginners, that's what the book is for.
I feel like I'm just beginning
You should use code snippets if you use a certain piece of code repeatedly. It doesn't bother me if I cannot recall some piece of code from memory.
For me, it depends heavily on what I'm writing. For example, I doubt that most people ever quite memorize all the parameters to some Windows functions. I may know that I need to call CreateFile on the next line, but don't know all the parameters in order until they're typed in (with help from Intellisense, and sometimes MSDN).
With something that's doing simple computation, I'm a lot more likely to be limited by my typing speed (but I'm a fairly poor typist, so thinking faster than I can type doesn't take much).
That means it's really a question of how much of the time you can type the next line of code without referring to something -- at first, it's a pretty small percentage, and over time it grows. I doubt it ever gets to 100% for anybody though. I, for one, don't think I'd want it to -- that would indicate I wasn't working on new and different things...
If you use eclipse and java, you may find yourself already there.
Other combinations may be a little slower to a lot slower.
Java has the advantage that it's pretty easy for the environment to build an entire parse tree while you are typing. At any place in your code, typing ctrl-space can give you an entire list of valid options.
Also syntax errors are always underlined.
If you want to go hard-core though, I started in C before the days of decent editors, and it took me about a year before I could type in more than a few lines and not get a compile error.
I don't know about memorization. Repetition is the mother of all learning, and that applies to all aspects of life. Look at the experienced accountant vs. the novice when filing taxes: who looks up stuff more? But what I did discover recently is that I navigate documentation much more quickly and have a sense for going directly to where my question is answered. I've got a sixth sense - I can see the code! Seriously, it all comes with experience. Still, when learning something completely new, there's no shame in looking up how to do certain things. That's what separates humans, right: learning from others. The more you work on something, the better you become.
There is absolutely nothing wrong with having to lookup the documentation every time you want to write some code. I got lucky in that once I use a certain function, I don't forget it very easily. However, most of the time, especially when I'm coding in a language that I haven't used in a while,
I start out by writing a flowchart of the algorithm that I want to code - just the pure logic. The most important reason for doing this is so that I don't lose my train of thought and forget the algorithm that solves my problem in the midst of technical problems like syntax and "what library functions exist?"
Then I look at the documentation and check to see if there are simple function calls that will help me accomplish each task in my algorithm
If such functions do not exist, then I either modify my algorithm to accommodate what functions the language does provide, or write helper functions to fill in the gaps.
I only start coding NOW, which is not too difficult to do anymore, because I already have all the relevant functions written down. So now it's just a question of translating pure logic into syntactically accurate code.
Proper syntax usually does not elude me, but if that does happen (VERY rare), Google provides very nice code snippets if you ask nicely.
Hope this helps
May the Force be with you
I've been programming Java for about 5 years now, and I have never had any trouble remembering syntax. I can <brag>write almost all the java.util.*, java.io.*, java.lang.* and javax.swing.* stuff from memory</brag>, but does it help me? Not very much. It doesn't make me a better programmer than someone who can't!
I'm using NetBeans, which greatly helps with working with libraries. Also, the documentation is right in the place where you need it. Sometimes it's quite unnecessary, but sometimes you wish the "auto complete" screen would pop up faster!
The best thing as a student is to concentrate on what you are doing, not how fast you are doing it. Looking things up isn't bad, as it'll help your so-called "unconscious mind" process what you are really trying to accomplish. Having such breaks, e.g. by looking up a certain piece of documentation or a syntax reference, may even make you better at programming (no proof for this, though).
The question is quite subjective.
With the great many IDEs available and the "newbie" tutorials on getting started, it won't take you long before you're off writing your own apps.
That said, unless you have a "thirst" for how stuff works kinda attitude all the time towards everything, you won't go far. In this field, you really have to have a passion for what you do to be great.
... It bothers me that I am continually having to look things up... How long did it take you to get to the level where you are now?
For me, and for most of the students I teach, the answer depends on two variables:
How many lines of code have I written?
Do I use the language or library every day?
(Reading other people's code is very helpful for learning a language and learning how to think in a language, but for me at least, it hasn't helped me become a fluent writer of programs in a language. Only writing code does that.) So my first comment is that time should be measured in lines of code written, not hours or years.
(Ray Bradbury once advised aspiring writers of fiction to write a thousand words a day six days a week, and after they've written a million words they might start to know something about their craft. This is good advice for programmers too.)
As for my own experience, across a half dozen languages that I currently know well or once knew well, it's been pretty consistent for me that
After writing 100 lines I am continually looking things up in the manual and don't really know what I am doing.
After writing 1,000 lines I use the manual occasionally and am starting to learn how to think in the language.
After writing 10,000 lines I am about as good as I'm going to get without making special efforts.
After writing 25,000 lines I probably will not need the manual again.
It's also true that
To learn to write 100 lines I had to read 100 to 1,000 lines that someone else wrote.
For the first 1,000 lines I write it is good for me to read 2,000 lines someone else wrote. For the next 1,000 lines it is good for me to read 1,000 lines someone else wrote.
After I've written 5,000 lines I learn the most by reading code written by world experts or by people who designed the language and understand what is there. I no longer have much to learn by reading just any program.
On the other hand, my experience about when I stop having to refer continually to the documentation is much less consistent.
I find it especially hard when two languages are very similar; I will never stop needing the ksh manual to tell me what is different from sh or bash, and I will never stop needing the Haskell manual to tell me what is different from Standard ML (though the need grows less with each additional 1,000 lines of Haskell that I write). I also find it interesting that while I have written over 35,000 lines of Lua code, and I will never need the manual again for a language question, I have to look up libraries and API functions almost every time I write something longer than 500 lines. (I've written a lot of short Lua programs and a couple of long ones, and I don't use the language every day, although I definitely use it at least several days each month.)
As for the unspoken question, when are you personally going to get better?, take some advice from Watts Humphrey: measure your own performance and track it over time. I think if each day you count the number of times you had to stop and look things up, and graph that against number of lines of code written or edited (which you can get from source-code control), you will be pleasantly surprised by objective evidence of improvement. And I think once you have such evidence, you will be able to focus more on continuing to improve, and not so much on where you are now or where you hope to be in a year.
It's true that after some years of programming you'll be able to remember a lot of things without having to check the "manual". For me this is not an important milestone in your programming life, though; the really important moment is when you reach the point where you don't know how to do something... but you're sure it can be done, and you know where and how to research it :-)
You've made a very important step by participating on this site. Exchanging ideas and helping each other is an excellent way of learning.
Author Malcolm Gladwell believes that ten thousand hours is a good benchmark for the amount of practice required to become world class in many fields of endeavour. I think that sort of number applies to programming as well. This isn't quite what you asked, though; being able to code competently certainly requires familiarity with your environment (language constructs, system libraries, third-party libraries and perhaps something of the concepts underpinning them), but there are many soft skills involved which are harder to describe and can only really be acquired through practice.
As others have said, being good at programming is not about typing code from memory; it's about recognising patterns, understanding systems, solving problems. It's about choosing the right tools for the job (languages, libraries, algorithms, whatever) and being able to make proper use of them.
In all the jobs I've had, it's about adaptability and flexibility; you might have to learn a new language or pick up somebody else's poorly documented code tomorrow, and a good programmer will be able to take this in their stride.
I've been coding professionally for nearly 10 years now; there's all sorts of code I use semi-regularly which I look up the options for at least some of the time. There are too many commands with too many options in too many languages for me to remember each and every last one in detail and Google is quite good enough at getting the information I want.
That said - there are some bits of routine code which I use all the time but can count on one hand the number of times I've written - the exact syntax for populating a dataset in .NET, for example. One of the skills I've most come to value over this time, and which saves me the most time, is spotting when some code can be quickly and easily moved into utility libraries. If it's fiddly but routine, consider this approach to save yourself hassle and improve your overall code quality.
In the context of this question (Java, C++, JavaScript), the languages are still evolving. I can't speak for other languages.
The language standard/specification changes over time
Libraries are added to supplement the language constructs, e.g. Boost, Google Collections, Apache Commons, jQuery
Applications will rarely be bound to a single aspect of a language
Across organizations/projects, coding standards change
A project I worked upon recommended against using primitives
When unfamiliar with the constructs used, I put in pseudo-code flagged with //TODO first, then go in and find the actual API to use, as in the sketch below.
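For example (everything here is invented for illustration):
// Pseudo-code first; the TODOs mark where the real constructs go once looked up.
double total = 0;
for (Order order : orders) {
    // TODO: check the project's coding standard before using a primitive accumulator
    // TODO: find the actual money/amount API used in this codebase
    total += order.amount();
}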
IMO, the answer to your question is - there is no definite answer.
As a Java programmer, the sheer size of the runtime library makes it impossible to remember everything. Swing is big, there is an XSLT engine (which contains TWO languages), the concurrency support evolves and grows.
The direct access to the Javadoc API from within Eclipse combined with code completion makes it possible to find the information you cannot remember but you know is there, quickly and efficiently.
I have found javaalmanac.com (which has been reworked into the more convoluted http://www.exampledepot.com/egs/index.html) invaluable in presenting short, concise and above all correct programs doing just one small thing. Strongly recommended.
Most weeks I program in Java, C#, Python, PHP, JavaScript, SQL, Smarty, Django Templating and occasionally C++ and Objective C. I'm a student so this is partially school work and partially my part-time job. Instead of learning syntax I've learned what concepts to look for.
Seeing patterns and concepts is key, once you know the concepts and what to look for the syntax is secondary.
I find that even when I am just being exposed to a language I can accomplish a lot just by looking for 'what ought to be there'
Why should you be able to remember all of this stuff? Personally I embrace the fact that I can't remember all of this stuff and simply try and remember where to find the information that I need. I find that much more useful. This takes the form of blogging, taking notes, keeping large amounts of 'sample' code and reusable libraries and writing about code that I find useful and interesting; oh and lots of books, some of which I hardly ever 'need' and some of which I hardly ever really read but they've been skimmed and I know where they live.
Technologies come and go and there's just no way I could have kept all of the things that I may one day find useful in my head; so I page them out and just keep the index in memory... For example; 9 years ago I was doing some stuff with Java and CORBA and whatever. There's no way that I could drop back into that now without the notes that I wrote up for my website back when I was doing it: http://www.lenholgate.com/blog/2001/02/corba---enumeration.html. Likewise I have code that I use on a daily basis that has been kicking around since 1997 or earlier. I don't remember how to type it in, I have it in a file with tests (if I'm lucky) and docs (if I'm even luckier).
Whilst I realise that most of what I'm talking about is 'big stuff' I also often have to go and look at some of my old code to simply work out how to structure a typedef...
Of course the day to day stuff will come with time and practice; but you need to work out that you have to page some of it out in a form that you can reload later very quickly. Embrace the fact that your memory is never going to be able to hold it all and outsource it :)
Here's what I'm wondering. Every night that our 3-month-old baby lets us sleep, I jump to my computer and start coding my hobby projects. I have about 20 different projects that I'm working on: different types of projects, from C++ games to web apps, along with some contributions to open source projects. It's truly a passion, and has been for a lot of years.
Yet, when I look back, I see that I haven't been able to fully complete one of my hobby projects. I've always done the prototypes and set up the most important features, but with time, instead of finishing my project, I end up switching to another project that seems "so much cooler" at the moment. Hence I usually end up with buggy and incomplete games that have no ending or story, 3D engines that have the fastest PolygonDraw routine ever yet fail to implement anything else, etc... The list is long. I must have written an unfinished Pong a hundred different times!
I've been told that the remedy is to write specs for my hobby projects.
On one hand, I write a lot of specs at work. I know how crucial they are for defining a product's roadmap and staying on schedule. On the other hand, specs and hobby projects just don't quite seem to go together! It seems to me that the learning curve of building a game is actually what makes it fun, not the game itself. Hence the fun of losing time restructuring an entire engine, the fun of creating the most useless features, and so on...
So here comes the question: Do you ever write specifications for your hobby projects? How are they different from the ones at work? How do you manage to complete your hobby projects?
I'd be glad to know while I work on my new project: a piano sonata generator :)
I don't think writing specs is the solution to your problem. Clearly, your "hobby projects" are things that you find fun. You write the fun parts but then avoid the not fun parts that would be necessary to complete something.
If you're just "programming for fun" then good, you're succeeding. I don't think writing specs is fun.
If you really want to "finish" something, the best way isn't to write a spec, it's to not jump to another project when the fun factor dips.
It is all about 'Self project management' ... even for fun.
I feel for you ... I used to have many repos that tended to all get stuck at around revision 200 or so.
Here is what used to happen: because I didn't do enough planning, after around 200 commits things would get messy and need a rewrite... then interest would disappear because it seemed like too much hassle.
I learned to write my own specs for personal use to:
Give me focus to get the job done, and not go off into feature-creep lane
Remind me what I am working towards
Have great ideas before I get coding
Keep things fun for a longer time
For me, writing my own specs is vital to getting anything done!
You wouldn't start a business without a plan, would you?
For personal projects I have tons of moleskine books filled with rough specs and ideas. When they mature, they migrate from the note books into real documents and the coding begins.
BIG EDIT: On a drive for personal efficiency, and to get projects finished, I read "Getting Things Done"... Despite all the hippy crap about 'psyche' and various levels of mind (which I'm sure is not based on any science), the tips are very good.
I don't get too complicated, but listing out all of the features and requirements that you want included in your application really does help. As with most hobby projects you often don't just sit down and code them straight through for 2 months and finish them. It's an hour here, two hours there, etc. Basically it's very common to forget what you were working on last and what the original purpose of this super great idea for an application was.
If you spend a few hours writing down specs and requirements, it will be very valuable to you 6 months down the road, when you get some free time or your ADD switches back to that project and you try to remember what it was this super great idea was supposed to do.
I just found out recently that writing specs is really the thing I need to get my projects done.
I've been a bit like you, tons of projects, jumping from one to the other and never getting things finished. Until about 6 months ago, when I started to actually write specs and have a kind of roadmap for my projects.
All I can say is that it actually works: you break your projects into smaller steps, just like a race with checkpoints, and when you start to mark the checkpoints as done, it feels good and addictive, and your focus stays on the finish line.
This way, you get to keep only 1 or 2 projects at the same time, but actually finish them. And of course, you have the extra and pretty valuable bonus of keeping up with the project even if you don't touch it for about a month or more. The specs will always be there to remind you of the goals and purposes of your project.
This is just my personal experience, and I believe that you should give it a try. Hopefully it will work out for you too.
I've been able to do some hobby projects and finish some of them. I try to finish them all, but some I just can't muster.
The reason, I think, is that the number of details needed to finish a project is so large that it goes from a passion project to a chore of a project.
What helped me finish most of mine is that they stayed a passion until only the finishing touches were left. So I just plowed through them.
Will a spec help? To some degree, yes. It gets you further into the project, but almost always there's a point where the passion fades and you look for the next shiny object.
It doesn't work for me! In fact, whenever I'm writing up specs I'm generally making the projects even bigger, and less likely to be finished.
Sometimes the best way to do it is to just do it.
Ze Frank explains this much better than me:
http://www.zefrank.com/theshow/archives/2006/07/071106.html (video link with swearing)
EDIT: Just to add. If you are finding you want to leave your half-finished project for a new, grand idea... do it! Don't look back!
Completion is not a requirement for your own pet projects. Nobody will blame you for not finishing stuff that barely anyone else would even bother starting.
The reason you started was because of passion. That is very important. You should not force yourself to 'slog through' in your free time. You will drain your passion which is your most vital resource.
I usually write a first set of specs when I get started.
I'm also a big fan of paper thinking, so I'll draw screens, UML diagrams, flow charts, design elements... It's just a matter of defining the scope of your project and being able to see what you had in mind. It really helps me think.
These documents will be my specs for the whole project. I will add others as I go, but I'm not trying to maintain the old ones as much as I would if it were a work project: I know where I'm going, and I can keep track of the changes by looking at my code.
Of course, some of my hobby projects are done collaboratively. In these cases, I write down more specs in order to have a better communication with my team and I try to keep documents such as DB Diagrams up to date.
I also have several hobby projects that I have not finished. I have about 10 and have written a specification for exactly one of them, the largest in scope (also a game).
I have finished neither the ones without specifications nor the one with. I think this is because I never publish the work or show it to anyone, so it remains full of bugs and never 'finished'.
I suppose this means that whether or not you have a spec will not affect the success of the project as much as other factors, like having time, motivation, help, and confidence.
The single best thing I've ever found to help move towards completion is to have someone else working on the project with you. Find a friend (or two) who is interested in the same thing and design/code it with them. Not only do you have someone to bounce ideas off of, but you've also got someone to motivate you, not to mention progress is twice as fast so you'll hopefully finish before you give up :)
Of course, it requires source control, but you were already using that for your projects, right? :)
Do you want to finish them?
I think it's reasonable to never finish a hobby project. You can just keep working on it as long as you live. Aciddose has been working on his virtual instrument xhip for years, stubbornly never getting to 1.0, making the instrument patches people program worthless from one release to the next. Yet he and the users of his softsynth seem to be having a grand time.
Maybe if you just aim for a "release" and not being "finished" you'll be more satisfied. Betas let you keep dreaming.
Yes and no. I write notes in a notebook as I'm thinking about it, and add to it as I implement it. It is a somewhat different process from work projects where someone else may have to see the spec.
I finish about half of what I start.
I've helped with development on a range of systems from safety critical avionics to throwaway personal projects like a Sudoku solver. Obviously with the avionics systems, specifications were critical to the safe operation of the system and to prevent killing somebody, but I've never bothered with my personal projects.
I think this is because specs are generally boring to read and write. Joel wrote an interesting article about this, and how to make them better if you do write them:
Painless Functional Specifications
Unfortunately I haven't had the guts to try making my specs more fun to read at work yet.
Maybe instead of writing specs you should try working on some projects for or with other people? That could provide some external motivation. I do some web development for my cousin's drive-in theater, and if they need a feature they won't stop asking me about it until I finish it.
The single biggest piece of advice I could give you would be to get something out there - make the spec for your first version small enough that you actually feel you can complete it, even though it won't have nearly all the features you want.
Once you get something out there, the pressure from users of your software will be enough to hopefully keep you going on it. It also ensures that the direction you take in development is the same direction your users want you to go.
If you don't actually get any users, then don't feel so bad about dropping the project - if nobody is interested, it probably isn't worth pursuing.
If pressure from your users isn't enough to keep you focused, then open source it. If there's enough interest in it, somebody else will pick it up where you left off, and you are free to move on to bigger and better things.
Unfortunately, after writing specs for the core of the DIFL engine (don't bother looking it up, as there's no trace of it outside my home systems), I still didn't finish it up.
Short answer: developing specifications for a hobby project is neither necessary nor sufficient to guarantee completion.
That being said...
I keep an engineering notebook for all of my personal projects. I use the notebook to capture all sorts of things about the projects on which I work. This includes project motivation, valuable resources leveraged during the project, things developed over the course of the project that might potentially be reused later, key insights gained, etc. etc. It also includes, more to your question, specifications for most of the projects. I employ an agile/lean approach to creating these specifications which, for me, is compelling from a cost/benefit perspective.
btw...I have many, many personal projects that did not culminate in a complete working system. Some of these I might get around to completing 'someday maybe'. I consciously chose to stop working on some of the others because they had served their purpose (e.g. introduced me to a new technology, helped me better understand a language feature, etc.) Continuing to crank away at projects like these would have led to diminishing returns so I chose to reallocate my time to projects I felt were higher leverage.
The real question is: what is your hobby? Is it finishing a project, or tinkering. If getting the last ten yards is a chore, you have to decide if it's worth it to you. Writing detailed specs will work; so will self-flagellation if you're into that sort of self-discipline. Nothing will make it easy if it's against your make-up, so you have to decide whether the end-goal is worth anything to you.
And, just to demonstrate that there is nothing programming-specific about this point, you might really like this guy. One of the main points in his work is that conceptual artists, such as Picasso and Da Vinci never really cared about the final execution--the idea was everything, and, having asserted it, they were strangely content with someone else finishing the actual work or leaving the sketch unfinished and unpublished.
I'm not sure that writing specs is the solution to your problems (or mine, which seem similar); however, in cases where I want to make something more than a throwaway experiment, there are a few things that help me slightly without taking the fun out of it.
Specs really are quite tight and technical, but for a hobby approach you could write up something similar but much looser that outlines some of the things you would like to feature and shows how they fit together, in a sort of design draft. Though not as detailed or restrictive as a proper spec, it might help keep the tinkering heading in the right direction.
Secondly, you could break it down and, depending on your time allowance, maybe add a few goals in. If you focus on building one part of the project at a time, breaking it into subprojects that can be linked together at the end, it gives a feeling of progress as you move from part to part, rather than feeling like you have been working on the same thing for ages and can't be bothered any more. It works best if you tick it off on a list, since usually that has to happen at least mentally anyway.
That said, if your goal is to play with certain concepts and not actually create a final product, then you probably won't create one, because you aren't working towards it. One way might be to take the above-mentioned idea of breaking it up, and then find a way of adding something personally interesting to each part that bores you, maybe turning it into a challenge or something.
I'm not particularly experienced, still learning, but this is how I keep my tinkering together (unless I hit a total block caused by inexperience) and how I've approached many multimedia and web projects on a hobby basis in past years. And the guy who said to open-source it when you get bored and let someone else pick it up had a good idea, if you want to see your code used but have satisfied your personal goals.
I have much the same problem. One thing I've noticed that HAS helped, though, is lowering my ambitions. Like WAY WAY low. Writing a spec is one way to rein in the ambitions, if you have some kind of limiting rule for the spec, like "the spec can only be one page", "the spec can be no longer than 300 words", or "spec only something that I can get done in one day of coding". Getting the balance right can take some practice. If you go with the last limit, you can impose the rule of MANDATORY dismissal of the project if you can't finish it in one day.
The nice thing about this, is it limits you to achievable goals. This might sound really stupid or wrong at first. Or maybe it sounds reasonable, but you just can't help it, you wanna do amazing things, not ordinary things! Not small things that you can only get done in a few hours!
but keep this in mind:
“A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system.”
—John Gall
It is SO MUCH easier to make that ambitious project, if you already have a FINISHED and WORKING project to base it on. Then the "more complex thing" CAN be a project that fits in a day. This is the ideal and philosophy I'm working towards, because I think it has the best chance of succeeding. Looking at past successful projects, the vast majority of them evolved in this way, whether it was intentional or not.
What helps me a lot is to split a new feature into small tasks that could each be done in an evening's hack session. Then, if I have time, I simply pick one task from the list and finish it. This is often enough to get "in the flow" and do "just one more".
I do this only for one feature at a time so I don't get distracted by all the other cool things I could add to my application.
I constantly write specs for my projects: at work, at university, and in my free time. The biggest weakness of a programmer is his or her memory, so I find it good to keep myself busy during my thinking time by writing down every thought in some sort of structured document. Before you know it, you've written a full database schema or a Requirements Specification.
At the moment I'm working on improving my SQL skills, and I've been spending a lot of the free time between writing queries writing down my experiences. After a couple of tweaks I had a decent document outlining what needed to be done.
I think the core problem is not the lack of specs, but rather that finishing something (anything) is hard.
It is hard work. It may seem as if your program is 90% done, but doing those last 10% (rooting out all the bugs, getting the application to release quality, writing documentation, etc.) requires as much work as the first 90%. And if you want to be serious about marketing your program, answering support emails and fixing other people's bugs, that's more work still, and perhaps not the kind of work you are most interested in.
It is also hard mentally. An unfinished project has unlimited potential. It is an empty canvas where you can project your unbridled ambitions, lofty ideals and revolutionary thoughts. Once it is finished and made real you have to see it for what it is. Limited. Flawed. Never as pretty as the idea that spawned it.
That said, finishing something can also be very rewarding. You learn a lot, get a reality check on your ideas, enjoy the satisfaction of having completed something, and get to see what other people think of your work.
Some advice:
Make sure that you really want to finish the project. I.e., that the rewards are worth all the hard work. (If not, then accept that fact and remain a happy tinkerer.)
Find ways of motivating yourself through the "boring" parts. Specs, maybe, if they keep you focused. But find whatever works for you, whether it is ticking off todo items, rewarding yourself with a cookie or dreaming of fame and fortune.
Release early, release often. The more you save up for a "big release", the bigger the chance that the release never happens.
First release, then rewrite. When you feel the urge to do a major rewrite, do a release first, then do the rewrite (if you are still up for it). Software is never perfect. If you strive for perfection without any pressure to release your half-baked (but existing) code, then you will never be done.
Most hobby projects of mine don't really get finished either. As long as I'm working on something and learning, though, I don't think that's a problem. Currently I'm not writing specs, but I am practicing TDD, and I bring it up because the tests I write are the specs. Some days I'll sit down and just create a bunch of tests outlining what the software should do. Other days I make those tests pass. It's enjoyable in that I don't have to keep the code all in my head, and at any point I can sit down and make further progress by fixing the broken tests. Things just work; it's kind of surreal.
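To make that concrete, here is a minimal sketch of what "tests as the spec" can look like, in JUnit 4. The Cart class and its methods are invented for this illustration, not something from the answer above; the test names are the spec, and a later session makes them pass.

import java.util.ArrayList;
import java.util.List;
import org.junit.Test;
import static org.junit.Assert.*;

// A stand-in class invented for this sketch; in real test-first work it
// would not exist yet, and the failing tests would be the to-do list.
class Cart {
    private final List<String> items = new ArrayList<>();
    void add(String item) { items.add(item); }
    int itemCount() { return items.size(); }
}

public class CartSpec {
    // Each test name documents one behaviour the software should have.
    @Test
    public void newCartIsEmpty() {
        assertEquals(0, new Cart().itemCount());
    }

    @Test
    public void addingAnItemIncreasesTheCount() {
        Cart cart = new Cart();
        cart.add("book");
        assertEquals(1, cart.itemCount());
    }
}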
Joel's article about Evidence Based Scheduling works for me, though I implemented it differently.
The idea is to break the project into small tasks, give each an estimate, and then forecast when the project will finish based on how long the finished tasks actually took.
You may think your project will take years to finish, but the forecast may say it's just two months away or less. And if you work more and finish tasks quickly, you will see the finish date move earlier.
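Joel's full method runs Monte Carlo simulations over your history of estimate-versus-actual ratios; the following is only a much-simplified sketch of the core idea, with invented numbers, to show the mechanics of a velocity-based forecast.

import java.util.List;

// Simplified Evidence Based Scheduling sketch: velocity is derived from
// finished tasks (estimated / actual), then the remaining estimates are
// scaled by it. All figures below are hypothetical.
public class Forecast {

    record Task(double estimatedHours, double actualHours, boolean done) {}

    static double forecastRemainingHours(List<Task> tasks) {
        double estimatedDone = 0, actualDone = 0, estimatedRemaining = 0;
        for (Task t : tasks) {
            if (t.done()) {
                estimatedDone += t.estimatedHours();
                actualDone += t.actualHours();
            } else {
                estimatedRemaining += t.estimatedHours();
            }
        }
        // A velocity below 1.0 means tasks take longer than estimated.
        double velocity = (actualDone == 0) ? 1.0 : estimatedDone / actualDone;
        return estimatedRemaining / velocity;
    }

    public static void main(String[] args) {
        List<Task> tasks = List.of(
            new Task(4, 6, true),   // finished: took 1.5x the estimate
            new Task(2, 3, true),   // finished: also 1.5x
            new Task(8, 0, false),  // still to do
            new Task(5, 0, false)
        );
        // (8 + 5) / (6 / 9) = 19.5 hours, not the 13 you might hope for.
        System.out.printf("Forecast: %.1f hours remaining%n",
                          forecastRemainingHours(tasks));
    }
}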
I think the most motivating thing for moving forward is seeing the goal you are running towards come closer.
Plus: create something you will use later. Using stuff gives you incentive to improve it later.
Setup
Have you ever had the experience of going into a piece of code to make a seemingly simple change, and then realizing that you've just stepped into a wasteland that deserves some serious attention? This usually gets followed by an official FREAK OUT moment, where the overwhelming urge to rewrite everything in sight starts to creep in.
It's important to note that this bad code does not necessarily come from others; it may well be something we wrote or contributed to in the past.
Problem
It's obvious that there is some serious code rot, horrible architecture, etc. that needs to be dealt with. The real problem, as it relates to this question, is that it's not the right time to rewrite the code. There could be many reasons for this:
Currently in the middle of a release cycle, therefore any changes should be minimal.
It's 2:00 AM and the brain is starting to shut down.
It could have seemingly adverse effects on the schedule.
The rabbit hole could go much deeper than our eyes are able to see at this time.
etc...
Question
So how should we balance the duty of continuously improving the code with being a responsible developer? How do we refrain from contributing to the broken window theory, while also staying aware of our actions and the recklessness they may invite?
Update
Great answers! For the most part, there seem to be two schools of thought:
Don't resist the urge as it's a good one to have.
Don't give in to the temptation as it will burn you to the ground.
It would be interesting to know if more people feel any balance exists.
I'm a big fan of making lists!
As soon as the urge takes you to re-write something - spend 10 minutes making a list of the things that need re-writing. Follow all of the alleys that take you further into the code that needs attention and list those, too.
Hopefully within a relatively short space of time you'll have one of two things:
A really long list that completely puts you off wanting to rewrite anything ever again.
A list which actually isn't that long, so why not indulge yourself and go for a re-write?!
I've found that spending two months fixing bugs caused by my "harmless" re-write was enough to cure me of the urge to do these sorts of things without a clear mandate/project plan to do so.
The most memorable project of this kind for me occurred some 4 years ago, when I was called in to a remote office to "help" with a project that was due in 1 week for a major presentation to the client and was not working at all yet. The project had been primarily off-shored to India, and, IMO, a project-management failure resulted in a ton of spaghetti code that was too fragmented to ever work properly in its current form.
After a full day's review, I presented my opinion to management that the project simply needed wholesale refactoring and reorganization or it would never work properly. The result of that discussion was 6 days of 20 hours' work and 4 hours' sleep, 2 of which I actually spent sleeping on the couch in the company lobby because of the time wasted driving back to the hotel.
The major improvements to the code included:
Application of naming standards
Migration into source control
Development of a build process
Documentation of the individual components
Most of the original code was left in place, but simply moved and reorganized / refactored to make it sustainable in the long term. Was it hell week? Sure. Did it make the project more successful? Yep.
I can't live with spaghetti code, and I'll often donate my own personal time to address it.
How to restrain one’s self from the overwhelming urge to rewrite everything?
Become old*
As this has happened to me, I've gradually lost the urge to rewrite everything. Why?
Once you've done it a few times, you realise that you often end up worse off than you started.
Even if you're god's gift to programming and your brilliant rewrite introduces no new bugs, you'll simply fail to notice or implement about 30% of the small edge-case features/bugs that things rely on. This will cost you months of fixing.
Nothing wears down your irrational exuberance like time. Often this is a sad loss, but in this case, it's a win.
*(more experience may also be a suitable substitute if you lack the free time to become old)
The impulse to rewrite is righteous, provided that:
you "freeze" the existing code (with a label)
you start the rewrite attempt in a separate branch
you first prepare some unit tests to pin down the current behavior and make sure you reproduce the existing features...
That said, you have to balance the rewriting process against the measured stability of the legacy code.
"If it is not broken, do not fix it" ;)
It's not quite an answer, but reading the beautiful article The Narcissism of Small Code Differences might help.
You're absolutely right that there's a right time and a wrong time to rewrite (and destabilize) the code.
Reminds me of an engineer I knew who had a habit of diving in and doing major rewriting whenever he felt like it. This drove the QA staff for his product up the wall: if they reported a bug about something trivial, the bug would get fixed, but the engineer's major rewrite of stuff he noticed while fixing that one bug would introduce fifty other new bugs, which the QA staff then needed to track down.
So one way to cure yourself of the temptation to do rewrites is to walk a mile in the shoes of the QA engineer who has to test, diagnose, and report all those new bugs. Maybe even lend yourself to the QA department for a few months to get a personal feel for the type of work they do (if you've never done that).
Another suggestion would be to make a rule for yourself to write up notes to describe any change. Perhaps post these notes when you check in code changes to source code control. You may be motivated to make the changes as minor as possible this way.
Just don't - The rabbit hole always goes too deep. Make any small local changes that leave the place cleaner than you found it, but leave big refactoring for when you're alert and have plenty of time to burn.
A nice starter on tackling big refactorings in small stages at sourcemaking.com :
you have to make like Hansel and Gretel and nibble around the edges, a little today, a little more tomorrow
reread "Refactoring".
Take a piece of paper and itemize the "Bad Smells" list.
(for each smell in BadSmells() {
print smell.name;
}
Add comments to the code including item(s) from the list.
while( odorPersists() ) {
Work through the list, laser-focused on one smell at a time.
}
Personally, this is where bug-tracking software such as JIRA or Bugzilla comes into play for me.
I have a hard time NOT fixing all the broken windows when I see them, but if the situation is bad enough (and time permits) I will open a ticket and either assign it to myself or assign it to the group responsible, so that I don't go off on a tangent.
-- I do only what needs to be done right then, yet the issue is documented and will be fixed in time.
-- This is not to say that small broken windows should be handled this way; small issues should always be fixed on contact IMHO.
As far as I am concerned, if you have nothing better to do than to rewrite the code then you should make everyone else aware and just do it. Obviously, if it's going to take days to get any sort of result and the changes would mean excessive downtime then it's not a good idea, but eventually that code will become a problem. Make others aware that the code is awful and try to get some sort of movement to rewrite it.
The problem with the above statement is that as a programmer/developer you will ALWAYS have other things to keep yourself busy. Just leave it on the low-priority list of things-to-do, so when you're struggling with some work you can always keep your rhythm going with the rewrite.
Joel has an article about this:
There's a subtle reason that programmers always want to throw away the code and start over. The reason is that they think the old code is a mess. And here is the interesting observation: they are probably wrong.
I've never worked anywhere particularly agile, so what I do is:
1) Figure out whether there's a reasonable fix that doesn't involve a major rewrite. If not, then clear some time (perhaps by explaining to others how difficult the fix is) and do the rewrite.
2) If there is a reasonable fix without a major rewrite: apply the fix, run the tests, check it in, mark the bug as fixed.
3) Now, raise a new bug/issue (enhancement request), outlining the proposed rewrite and how it would improve the code (simpler? more maintainable? reduces coupling? affects performance or resource use?). Assign it to myself, CC anyone interested in that bit of code.
4) Give people a chance to comment, then prioritise that new bug within my existing tasks. This usually means don't do it now, because most of the time if I have one "proper" bug to fix, then I have at least two. Do it once the critical list is cleared, or next time I need to do something that isn't boring. Or maybe after a couple of days it won't seem worth doing any more, and it won't get done, and I've saved the time I would have spent doing it.
The important thing, I think, is to avoid shaving a yak every time you make a small bugfix.
The trade-off is that if the code you want to rewrite is really bad, then it's easy to under-prioritise the rewrite, and you end up spending so much time maintaining it that you don't have time to replace it with something that would require less maintenance. That needs to be borne in mind. But no matter what priority the rewrite should be, or how your organisation assigns those priorities, fixing bad design in the same area of code is not the same thing as correcting the out-by-one error that caused the crash you originally went in to deal with. This has to be considered at step 1: if the design means there are probably a whole bunch of other out-by-one errors lurking in the code, then just fixing this one is probably not "reasonable". If it really was a typo, but you happened to spot a refactor opportunity because you had the file open, then correcting it and touching nothing else is "reasonable".
Obviously the more agile your shop, the more significant a refactor you can do without it being so disruptive / time-consuming / political as to require a separate task.
If it's some code you've inherited, start making the code your own. Write unit-tests, then refactor.
Write more unit tests just to find out that the code is running perfectly fine.
If you still have the urge to rewrite it, you will at least have some tests to tell you when your rewritten code is failing ;)
Bad code is not something that can be avoided totally, but it can be isolated by proper abstraction. If it is indeed a wasteland, meaning there are no landfills around to contain it, then focusing on the design process is much more effective than trying to make people write perfect code.
I don't think you SHOULD stop yourself from this. Mostly, if you FEEL the need for the big rewrite, it's correct to do it. I wildly disagree with Joel Spolsky on this thing...
Though it's one of few places where I disagree with him ... ;)
Stop rewriting when it is good enough. For this you need to know what a good program is for you, not only for your employer. After all, you program for life, not for making a good impression. Develop sound criteria that really tell you that you are good, the result is decent, and it is time to go on to the next task. You should like your programs no less than your boss likes them.
Refactor only if your boss/company actually encourages it; otherwise you'll end up putting in frequent extra hours to bring the code up to perfection... until someone less dedicated touches it again.
Starting assumption: you already "commit early, commit often".
Then there's never a bad time to start a refactoring, because you can easily back out at any point. I find that at the beginning it always feels like it's going to be easy, but after making a few changes and seeing the effects, I start to realise quite how big a job it's going to be. That is the time to ask whether there is really time to make the changes, or whether living with it until the next time you come to this code is the better course.
The willingness to stop, throw away a half-refactored branch, and do it the nasty but quick way where appropriate, is key.
That's for refactoring, where the changes are incremental, and running (or nearly-running) software keeps you well grounded. Rewriting is different, because the time until you figure out it's going to take longer than you thought is so much greater. I may be taking the question too literally, but there's almost never a good time to throw it away and start again.
Say there are two possible solutions to a problem: the first is quick but hacky; the second is preferable but would take longer to implement. You need to solve the problem fast, so you decide to get the hack in place as quickly as you can, planning to start work on the better solution afterwards. The trouble is, as soon as the problem is alleviated, it plummets down the to-do list. You're still planning to put in the better solution at some point, but it's hard to justify implementing it right now. Suddenly you find you've spent five years using the less-than-perfect solution, cursing it the while.
Does this sound familiar? I know it's happened more than once where I work. One colleague describes deliberately making a bad GUI so that it wouldn't be accidentally adopted long-term. Do you have a better strategy?
Write a test case which the hack fails.
If you can't write a test which the hack fails, then either there's nothing wrong with the hack after all, or else your test framework is inadequate. If the former, run away quick before you waste your life on needless optimisation. If the latter, seek another approach (either to flagging hacks, or to testing...)
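As a hypothetical illustration of that advice (the class, method, and behaviour below are all invented, not taken from the answer): the checked-in test documents exactly what the real fix must restore, and it stays red, or marked as a known failure, until the hack is removed.

import org.junit.Test;
import static org.junit.Assert.*;

// Invented stand-in for the hacked code: it truncates long names because a
// downstream report broke on them and the release could not wait.
class NameFormatter {
    static String format(String name) {
        // HACK: widen the report column instead, then delete this branch.
        return name.length() > 20 ? name.substring(0, 20) : name;
    }
}

public class HackRegressionTest {
    // This is the test the hack fails; it pins down the behaviour the
    // proper fix must restore, so the shortcut cannot be quietly forgotten.
    @Test
    public void longNamesAreNotTruncated() {
        String name = "a-very-long-customer-name-indeed";
        assertEquals(name, NameFormatter.format(name));
    }
}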
Strategy 1 (almost never selected): Don't implement the kluge. Don't even let people know it's a possibility. Just do it the right way the first time. Like I said, this one is almost never selected, due to time constraints.
Strategy 2 (dishonest): Lie and Cheat. Tell management that there are bugs in the hack, and they could cause major problems later on. Unfortunately, most of the time, the managers just say to wait until the bugs become a problem, then fix the bugs.
Strategy 2a: Same as strategy 2, except there really are bugs. Same problem, though.
Strategy 3 (and my personal favorite): Design the solution whenever you can, and do it well enough that an intern or code-monkey could do it. It's easier to justify spending the small amount of code-monkey money than to justify your own salary, so it might just get done.
Strategy 4: Wait for a rewrite. Keep waiting. Sooner or later (probably later), someone is going to have to rewrite the thing. Might as well do it right then.
Here is a great related article on technical debt.
Basically, it is an analogy of debt with all the technical decisions you make. There is good debt and bad debt... and you have to pick the debt that is going to achieve the goals you want with the least long term cost.
The worst kind of debt is small little accumulating shortcuts that are analogous to credit card debt... each one doesn't hurt, but pretty soon you are in the poor house.
This is a major issue when doing deadline-driven work. I find that adding very detailed comments about why this way was chosen, plus some hints at how it should eventually be coded, helps. That way people looking at the code see it and keep it fresh.
Another option that works is to add a bug/feature in your tracking framework (you do have one, right?) detailing the rework. That way it is visible and may force the issue at some point.
The only time you can ever justify fixing these things (because they're not really broken, just ugly) is when you have another feature or bug fix that touches the same section of code, and you might as well re-write it.
You have to do the math on what a developer's time costs. If the software requirements are being met, and the only thing wrong is that the code is embarrassing under the hood, it's not really worth fixing.
Whole companies can go out of business because over-zealous engineers insist on a re-architecture every year or so when they get antsy.
If it's bug-free and meets requirements, it's done. Ship it. Move on.
[Edit]
Of course I'm not advocating that everything be hacked in all the time. You have to design and write code carefully in the normal course of the development process. But when you do end up with hacks that just had to be done quickly, you have to do a cost-benefit analysis on whether or not it's worth it to clean up the code. If over the lifetime of the application you will spend more time coding around a messy hack than you would have fixing it, then of course fix it. But if not, it's way too expensive and risky to re-code a working, bug-free application just because looking at the source makes you ill.
YOU DON'T DO INTERIM SOLUTIONS.
Sometimes I think programmers just need to be told this.
Sorry about that, but seriously--a hacky solution is worthless and even on the first iteration can take longer than doing a portion of the solution correctly.
Please stop leaving me your crap code to maintain. Just ALWAYS CODE IT RIGHT. No matter how long it takes and who yells at you.
When you are sitting there twiddling your thumbs after delivering early while everyone else is debugging their stupid hacks, you'll thank me.
Even if you don't think you are a great programmer, always strive to do the best you can, never take shortcuts--it doesn't cost you ANY time to do it right. I can justify this statement if you don't believe me.
Suddenly you find you've spent five years using the less-than-perfect solution, cursing it the while.
If you're cursing it, why is it at the bottom of the TODO list?
If it's not affecting you, why are you cursing it?
If it is affecting you, then it's a problem that needs to be fixed NOW.
I make sure that I'm vocal about the priority of the long-term fix, ESPECIALLY after the short-term fix has gone in.
I detail the reasons why it's a hack and not a good long-term solution, and I use those to get the stakeholders (managers, clients, etc.) to understand why it needs to be fixed. Depending on the case, I may even inject a bit of worst-case-scenario fear in there: "If this safety line snaps, the whole bridge could collapse!"
I take responsibility for coming up with a long-term solution and make sure that it gets deployed.
It is a hard call. I have done hacks personally, because sometimes you HAVE to get that product out the door and into the customers' hands. However, the way I take care of it afterwards is to just do it.
Tell the project lead, your boss, or the customer: there are some spots that need to be cleaned up and coded better. I need a week to do it, and it is going to cost less to do it now than it will 6 months from now, when we need to implement an extension onto the subsystem.
Usually problems like this arise from bad communication with management or the customer. If the solution works for the customer then they see no reason to ask for it to be changed. So they need to be told about the tradeoffs you made beforehand so they can plan extra time to fix the problems after you've implemented the quick solution.
How to solve it depends a bit on why it's a bad solution. If your solution is bad because it's hard to change or maintain, then the first time you have to do maintenance and have a bit more time, that is the right time to upgrade to a better solution. In this case it helps if you tell the customer or your boss that you took a shortcut in the first place, so they know they can't expect a fast solution next time around. Crippling the UI can be a good way to make sure the customer comes back to get stuff fixed.
If the solution is bad because it's risky or unstable then you really need to talk to the person doing the planning and have some time planned in to fix the problem asap.
Good luck. In my experience this is almost impossible to achieve.
Once you go down the slippery slope of implementing a hack because you are under pressure then you might as well get used to living with it for all time. There is almost NEVER enough time to re-work something that already works, no matter how badly it is implemented internally. What makes you think you will magically have more time "at some later date" to fix the hack?
The only exception I can think of to this rule is if the hack completely prevents you from implementing another piece of functionality that is needed by a customer. Then you have no choice but to do the re-work.
I try to build the hacky solution so that it can be migrated to the long-term way as painlessly as possible. Say you've got a guy who is building a database in SQL Server because that's his strongest DB, but your corporate standard is Oracle. Have him build the DB with as few non-transferable features (like BIT datatypes) as possible. In this example it's not hard to avoid BIT types, but doing so makes the later transition an easier process.
Educate whoever is in charge of making the final decision why the hacky way of doing things is bad in the long-run.
Describe the problem in terms they can relate to.
Include a graph of cost, productivity, and revenue curves.
Teach them about technical debt.
Regularly refactor if you're pushed forward.
Never call it "refactoring" or "going back and cleaning up" in front of non-technical people. Instead, call it "adapting" the system to handle "new features".
Basically, people who don't understand software don't get the concept of revisiting things that already work. The way they look at it, developers are like mechanics who want to keep taking apart and reassembling the entire car every time someone wants to add a feature, which sounds insane to them.
It helps to make analogies to everyday things. Explain to them how when you made the interim solution, you made choices that suited building it quickly, as opposed to being stable, maintainable, etc. It's like choosing to build with wood instead of steel because wood is easier to cut, and thus, you could build the interim solution quicker. The wood, however, simply can not support the foundation of a 20-story building.
We use Java and Hudson for continuous integration. 'Interim solutions' must be commented with:
// TODO: Better solution required.
Every time Hudson runs a build, it produces a report of each TODO item, so that we have an up-to-date, highly visible record of any outstanding items that need improvement.
Great question. This bothers me a lot, too - and most of the time I'm the sole person responsible for prioritizing issues in my own projects (yep, small business).
I found out that the problem that needs to be fixed is usually just a subset of the problem. IOW, the customer who needs an urgent fix does not need the whole problem to be solved, just a part of it, smaller or larger. That sometimes enables me to create a workaround that is not a solution to the complete problem, just to the customer's subset, and that allows me to leave the bigger issue open in the issue tracker.
That may of course not apply at all to your work environment :(
This reminds me of the story of "CTool". In the beginning CTool was put forward by one of our devs, I'll call him Don, as one possible way to solve the problem we were having. Being an earnest hard-working type, Don plugged away and delivered a working prototype. You know where I am going with this. Overnight, CTool became a part of the company work flow with an entire department depending on it. By the second or third day, bitter complaints started streaming in about CTool's shortcomings. Users questioned Don's competence, commitment and IQ. Don's protests that this was never supposed to be a production app fell on deaf ears. This went on for years. Finally, someone got around to re-writing the app, well after Don had departed. By this time, so much loathing had become attached to the name CTool that naming it CTool version 2 was out of the question. There was even a formal funeral for CTool, somewhat reminiscent of the copier (or was it a printer?) execution scene in Office Space.
Some might say Don deserved the slings and arrows for not making it go right to fix CTool. My only point is that saying you should never hack out a solution is probably unjustifiable in the Real World. But if you are the one to do it, tread cautiously.
Get it in writing (an email). So when it becomes a problem later management doesn't "forget" that it was supposed to be temporary.
Make it visible to the users. The more visible it is the less likely people are going to forget to go back and do it the right way when the crisis is over.
Before the temp solution goes in, negotiate for a project, resources, and a timeline to get the real fix in. Work on the real solution should probably begin as soon as the temp solution is finished.
You file a second, very descriptive bug against your own "fix" and put a to-do comment right in the affected areas that says: "This area needs a lot of work. See defect #555" (using the right number, of course). People who say "don't put in a hack" don't seem to understand the question. Assume you have a system that needs to be up and running now: the non-hack solution is 8 days of work, the hack is 38 minutes of work, and the hack is there to buy you time to do the real work without losing money while you do it.
Now you still have to get your customer or management agree to schedule the N*100 minutes of time required to do the real fix in addition to the N minutes needed now to fix it. If you must refuse to implement the hack until you get such agreement, then maybe that's what you have to do, but I've worked with some understanding people in that regard.
The real price of introducing a quick-fix is that when someone else needs to introduce a 2nd quick fix, they will introduce it based on your own quick-fix. So, the longer a quick-fix is in place, the more entrenched it will become. Quite often, a hack takes only a little bit longer than doing things right, until you encounter a 2nd hack which builds on the first.
So, obviously it is (or seems to be) sometimes necessary to introduce a quick fix.
One possible solution, assuming your version control supports it, is to introduce a fork from the source whenever you make such a hack. If people are encouraged to avoid coding new features within these special "get it done" forks, then it will eventually be more work to integrate the new features with the fork than it will be to get rid of the hack. More likely, though, the "good" fork will get discarded. And if you are far enough away from release that making such a fork will not be practical (because it is not worth doing the dual integration mentioned above), then you probably shouldn't even be using a hack anyways.
A very idealistic approach.
A more realistic solution is to keep your program segmented into as many orthogonal components as possible and to occasionally do a complete rewrite of some of the components.
A better question is why the hacky solution is bad. If it is bad because it reduces flexibility, ignore it until you need the flexibility. If it is bad because it impacts the program's behavior, ignore it; it will eventually become a bug fix and WILL be addressed. If it is bad because it looks ugly, ignore it, as long as the hack is localized.
Some solutions I've seen in the past:
Mark it with a comment HACK in the code (or similar scheme such as XXX)
Have an automatic report run and emailed weekly to those who care, counting how many HACK comments appear (a minimal scanner along these lines is sketched after this list)
Add a new entry in your bug tracking system with the line number and description of the right solution (so the knowledge gained from the research before writing the hack isn't lost)
Write a test case that demonstrates how the hack fails (if possible) and check it into the appropriate test suite (i.e., so that it throws errors that someone will eventually want to clean up)
Once the hack is installed and the pressure is off, immediately start on the right solution
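For the weekly report idea in the second item above, a scanner can be as small as the following sketch. The source root, file extension, and marker strings are assumptions, and wiring the output into email or a CI job is left out.

import java.io.IOException;
import java.nio.file.*;
import java.util.concurrent.atomic.AtomicInteger;

// Walks a source tree and lists every line carrying a HACK or XXX marker,
// then prints a total suitable for pasting into a weekly report.
public class HackCounter {
    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : "src");
        AtomicInteger count = new AtomicInteger();
        try (var files = Files.walk(root)) {
            files.filter(p -> p.toString().endsWith(".java"))
                 .forEach(p -> {
                     try {
                         for (String line : Files.readAllLines(p)) {
                             if (line.contains("HACK") || line.contains("XXX")) {
                                 System.out.println(p + ": " + line.trim());
                                 count.incrementAndGet();
                             }
                         }
                     } catch (IOException e) {
                         System.err.println("could not read " + p);
                     }
                 });
        }
        System.out.println(count.get() + " hack marker(s) found");
    }
}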
This is an excellent question. One thing I've noticed as I get more experience: hacks buy you a very short amount of time, and often cost you a huge amount more. Closely related is the 'quick fix' that solves what you think is the problem, only to find when it blows up that it wasn't the problem at all.
Setting aside the debate about whether you should do it, let's assume that you have to do it. The trick now is to do it in a way that minimizes long-range effects, is easily ripped out later, and makes itself a nuisance so you remember to fix it.
The nuisance part is easy: make it issue a warning every time you execute the kludge.
The ripped-out part can be easy: I like to do this by putting the kludge behind a subroutine name. That makes it easier to update, since you compartmentalize the code. When you get your permanent solution, your subroutine can either implement it or become a no-op. Sometimes a subclass can work nicely for this too. Don't let other people depend on whatever your quick fix is, though. It's difficult to recommend any particular technique without seeing the situation.
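A minimal sketch of those two tricks together (the shipping example and all names are invented): the kludge is compartmentalized behind one method, and it nags on every call so it can't be silently forgotten. Swapping in the permanent solution later means touching only that one method.

// Hypothetical example: a flat shipping rate wedged in until the real
// carrier-rate lookup exists. Callers only ever see cost(), so the
// permanent fix replaces one method and nothing else.
public class ShippingCost {

    static double cost(double weightKg) {
        return flatRateKludge(weightKg);
    }

    // KLUDGE: ignores weight entirely; replace with the carrier lookup.
    private static double flatRateKludge(double weightKg) {
        System.err.println("WARNING: flat-rate shipping kludge in use");
        return 9.95;
    }

    public static void main(String[] args) {
        System.out.println(cost(2.5)); // warns on every call
    }
}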
Minimizing long range effects should be easy if the rest of the code is nice. Always go through the published interface, and so on.
Try to make the cost of the hack clear to the business folks. Then they can make an informed decision either way.
You could intentionally write it in a way that is overly restrictive and single-purposed, so that it would require a rewrite to be modified.
We had to do this once - make a short term demo version that we knew we did not want to keep. The customer wanted it on a winTel box, so we developed the prototype in SGI/XWindows. (We were fluent in both, so it wasn't a problem).
Confession:
I have used '#define private public' in C++ in order to read data from some other code layer. It went in as a hack but works well and fixing it has never become a priority. It is now 3 years later...
One of the main reasons hacks do not get removed is the risk that one introduces new bugs while fixing the hack. (Especially when dealing with pre-TDD code bases.)
My answer is a bit different from the others. My experience is that the following practices help you stay agile and move from hacky first-iteration/alpha solutions to beta/production-ready ones:
Test Driven Development
Small units of refactoring
Continuous Integration
Good Configuration management
Agile database techniques/database refactoring
And it should go without saying that you need stakeholder support to do any of these correctly. But with these practices in place you have the right tools and processes to change a product in major ways, quickly and with confidence. Sometimes your ability to change is your ability to manage the risk of the changes, and from the development perspective these tools and techniques give you surer footing.