What code metric(s) convince you that provided code is "crappy"? [closed] - language-agnostic

Code lines per file, methods per class, cyclomatic complexity and so on. Developers resist and work around most, if not all, of them! There is a good Joel article on it (no time to find it now).
What code metric(s) do you recommend for automatically identifying "crappy code"?
What can convince most developers (you can't convince all of us with some crappy metric! :O) ) that their code is "crap"?
Only metrics that can be measured automatically count!

Not an automated solution, but I find WTFs per minute useful.
(source: osnews.com)

No metrics regarding coding-style are part of such a warning.
For me it is about static analysis of the code, which can truly be 'on' all the time:
cyclomatic complexity (detected by Checkstyle)
dependency cycle detection (through FindBugs, for instance)
critical errors detected by, for instance, FindBugs
I would put test coverage in a second step, as such tests can take time.
Do not forget that "crappy" code is not detected by metrics alone, but by the combination and evolution (as in "trend") of metrics: see the What is the fascination with code metrics? question.
That means you not only have to recommend code metrics to "automatically identify crappy code", but you also have to recommend the right combination and trend analysis to go along with those metrics.
On a side note, I do share your frustration ;), and I do not share the point of view of tloach (in the comments of another answer): "Ask a vague question, get a vague answer", he says... your question deserves a specific answer.

Number of warnings the compiler spits out when I do a build.

Number of commented out lines per line of production code. Generally it indicates a sloppy programmer that doesn't understand version control.

Developers are always concerned with metrics being used against them, and calling code "crappy" is not a good start. This is important: if you are worried about your developers gaming the metrics, then don't use them for anything that works to their advantage or disadvantage.
The way this works best is to not let the metric tell you where the code is crappy, but to use the metric to determine where you need to look. You look by having a code review, and the decision of how to fix the issue is between the developer and the reviewer. I would also err on the side of the developer against the metric. If the code still pops on the metric but the reviewers think it is good, leave it alone.
But it is important to keep in mind this gaming effect when your metrics start to improve. Great, I now have 100% coverage but are the unit tests any good? The metric tells me I am ok, but I still need to check it out and look at what got us there.
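For instance, a made-up test like this keeps the coverage number high while verifying nothing at all (hypothetical Java sketch, not from the original post):

import org.junit.Test;

public class CoverageGamingExample {
    // Hypothetical method under "test".
    static int add(int a, int b) { return a + b; }

    // This test executes add(), so coverage tools report the line as covered,
    // but it asserts nothing, so it can never fail and proves nothing.
    @Test
    public void coversButProvesNothing() {
        add(2, 2);
    }
}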
Bottom line, the human trumps the machine.

number of global variables.

Non-existent tests (revealed by code coverage). It's not necessarily an indicator that the code is bad, but it's a big warning sign.
Profanity in comments.

Metrics alone do not identify crappy code. However they can identify suspicious code.
There are a lot of metrics for OO software. Some of them can be very useful:
Average method size (in LOC/statements or in complexity). Large methods can be a sign of bad design.
Number of methods overridden by a subclass. A large number indicates bad class design.
Specialization index (number of overridden methods * nesting level / total number of methods). High numbers indicate possible problems in the class diagram.
There are a lot more viable metrics, and they can be calculated using tools. This can be a nice help in identifying crappy code.
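To make the specialization index concrete, here is a minimal sketch of the formula described above (a hypothetical helper, not taken from any particular metrics tool):

public class SpecializationIndex {
    // Specialization index as described above:
    // (number of overridden methods * inheritance nesting level) / total methods.
    // Higher values suggest a subclass that rewrites too much of its parent.
    public static double compute(int overriddenMethods, int nestingLevel, int totalMethods) {
        if (totalMethods == 0) {
            return 0.0; // avoid division by zero for empty classes
        }
        return (overriddenMethods * nestingLevel) / (double) totalMethods;
    }
}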

global variables
magic numbers
code/comment ratio
heavy coupling (for example, in C++ you can measure this by looking at class relations or the number of cpp/header files that cross-include each other)
const_cast or other types of casting within the same code-base (not w/ external libs)
large portions of code commented-out and left in there

My personal favourite warning flag: comment-free code. Usually it means the coder hasn't stopped to think about it; plus it automatically makes the code hard to understand, so it ups the crappy ratio.

At first sight: cargo cult application of code idioms.
As soon as I have a closer look: obvious bugs and misconceptions by the programmer.

My bet: combination of cyclomatic complexity(CC) and code coverage from automated tests(TC).
CC | TC
2 | 0% - good anyway, cyclomatic complexity too small
10 | 70% - good
10 | 50% - could be better
10 | 20% - bad
20 | 85% - good
20 | 70% - could be better
20 | 50% - bad
...
crap4j - a possible tool (for Java) and concept explanation... still in search of a C#-friendly tool :(
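For reference, the published C.R.A.P. formula behind crap4j combines exactly these two inputs; a rough Java sketch, assuming coverage is given as a fraction between 0 and 1:

public class CrapScore {
    // Rough sketch of the C.R.A.P. score used by crap4j:
    // CRAP(m) = CC(m)^2 * (1 - coverage(m))^3 + CC(m)
    // where coverage is the fraction (0.0 to 1.0) of the method exercised by tests.
    // Scores above roughly 30 are usually treated as suspect.
    public static double crap(int cyclomaticComplexity, double coverage) {
        double uncovered = 1.0 - coverage;
        return Math.pow(cyclomaticComplexity, 2) * Math.pow(uncovered, 3)
                + cyclomaticComplexity;
    }
}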

Ratio of worthless comments to meaningful comments:
'Set i to 1'
Dim i as Integer = 1

I don't believe there is any such metric. With the exception of code that actually doesn't do what it's supposed to (which is a whole extra level of crappiness) 'crappy' code means code that is hard to maintain. That usually means it's hard for the maintainer to understand, which is always to some extent a subjective thing, just like bad writing. Of course there are cases where everyone agrees the writing (or the code) is crappy, but it's very hard to write a metric for it.
Plus everything is relative. Code doing a highly complex function, in minimal memory, optimized for every last cycle of speed, will look very bad compared with a simple function under no restrictions. But it's not crappy - it's just doing what it has to.

Unfortunately there is not a metric that I know of. Something to keep in mind is no matter what you choose the programmers will game the system to make their code look good. I have seen that everywhere any kind of "automatic" metric is put into place.

A lot of conversions to and from strings. Generally it's a sign that the developer isn't clear about what's going on and is merely trying random things until something works. For example, I've often seen code like this:
object num = GetABoxedInt();
// long myLong = (long) num; // throws exception
long myLong = Int64.Parse(num.ToString());
when what they really wanted was:
long myLong = (long)(int)num;

I am surprised no one has mentioned crap4j.

Watch out for ratio of Pattern classes vs. standard classes. A high ratio would indicate Patternitis
Check for magic numbers not defined as constants
Use a pattern matching utility to detect potentially duplicated code
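As a rough illustration of the last point, a duplicate detector can be as crude as hashing normalized windows of lines and reporting repeats; real tools (CPD, Simian and the like) are far more robust, so treat this as a sketch only:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DuplicateFinder {
    private static final int WINDOW = 6; // lines per compared chunk

    // Normalizes whitespace, joins every WINDOW consecutive lines into a key,
    // and reports any window that has been seen before in the same file.
    public static List<String> findDuplicates(List<String> lines) {
        Map<String, Integer> firstSeen = new HashMap<>();
        List<String> duplicates = new ArrayList<>();
        for (int i = 0; i + WINDOW <= lines.size(); i++) {
            StringBuilder chunk = new StringBuilder();
            for (int j = i; j < i + WINDOW; j++) {
                chunk.append(lines.get(j).trim().replaceAll("\\s+", " ")).append('\n');
            }
            String key = chunk.toString();
            if (firstSeen.containsKey(key)) {
                duplicates.add("lines starting at " + (i + 1)
                        + " repeat lines starting at " + (firstSeen.get(key) + 1));
            } else {
                firstSeen.put(key, i);
            }
        }
        return duplicates;
    }
}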

Sometimes, you just know it when you see it. For example, this morning I saw:
void mdLicense::SetWindows(bool Option) {
_windows = (Option ? true: false);
}
I just had to ask myself 'why would anyone ever do this?'.

Code coverage has some value, but otherwise I tend to rely more on code profiling to tell if the code is crappy.

Ratio of comments that include profanity to comments that don't.
Higher = better code.

Lines of comments / Lines of code
value > 1 -> bad (too many comments)
value < 0.1 -> bad (not enough comments)
Adjust numeric values according to your own experience ;-)
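A crude way to compute that ratio (a hypothetical sketch that only recognizes // line comments, so treat the thresholds above as approximate):

import java.util.List;

public class CommentRatio {
    // Counts lines starting with "//" as comments and every other non-blank
    // line as code. Block comments and trailing comments are ignored, so this
    // is only a first approximation of "lines of comments / lines of code".
    public static double ratio(List<String> lines) {
        int comments = 0;
        int code = 0;
        for (String line : lines) {
            String trimmed = line.trim();
            if (trimmed.isEmpty()) continue;
            if (trimmed.startsWith("//")) comments++;
            else code++;
        }
        return code == 0 ? 0.0 : (double) comments / code;
    }
}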

I take a multi-tiered approach with the first tier being reasonable readability offset only by the complexity of the problem being solved. If it can't pass the readability test I usually consider the code less than good.

TODO: comments in production code. Simply shows that the developer does not execute tasks to completion.

Methods with 30 arguments. On a web service. That is all.

Well, there are various ways to judge whether or not code is good code. Some of them are:
Cohesiveness: If a block of code, whether a class or a method, is found to be serving multiple purposes, it is low in cohesiveness. Code low in cohesiveness tends to be low in reusability, and in turn low in maintainability.
Code complexity: One can use McCabe cyclomatic complexity (the number of decision points) to determine code complexity. High code complexity indicates code that is hard to work with (difficult to read and understand).
Documentation: Code without enough documentation also contributes to low software quality from the perspective of the code's usability.
Check out the following page to read about a checklist for code review.
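To illustrate the complexity point, McCabe's number for a single method is roughly one plus the number of decision points; a naive keyword-counting sketch (real tools work on the parse tree, so this is an approximation only):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NaiveComplexity {
    // Decision-point keywords; counting raw source text overcounts (strings,
    // comments) and misses ?:, && and ||, hence "naive".
    private static final Pattern DECISION =
            Pattern.compile("\\b(if|for|while|case|catch)\\b");

    // Estimate: 1 + number of decision points in the method's source.
    public static int estimate(String methodSource) {
        Matcher m = DECISION.matcher(methodSource);
        int decisionPoints = 0;
        while (m.find()) {
            decisionPoints++;
        }
        return decisionPoints + 1;
    }
}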

This hilarious blog post on The Code C.R.A.P Metric could be useful.

Related

How long should it take for someone to be able to type code from memory? [closed]

I understand that this question could be answered with a simple sentence and that it may be viewed as subjective; however, I am a young student who is interested in pursuing a career in programming, and I wondered how long it took some of you to get to the level of experience you are at now.
I ask this because I am currently working on building an application in Java on the Android platform and it bothers me that I am constantly having to look up how to write a certain section of code in my application such as writing to a database, or how the if statement should be structured.
My question really is, how long did it take for you to become experienced enough to actually know exactly how your next line of code was going to look, before you even wrote it?
The speed at which you can quickly recall language syntax, common library functions, and best practice patterns is directly proportional to the amount of time you spend using them.
In other words you will find yourself getting faster the more you do it.
I have been a C++ programmer for the last 20 years. It has taken me that long to get to this expertise level. I'm mostly a Windows programmer, and I keep the msdn website up on one of my monitors all the time.
Doesn't matter how long you've been doing it. You will never know everything from memory. Don't sweat it.
I've been programming for almost half of my life and I still can't always recall simple syntax, let alone entire tracts of code for more complex tasks. If you ask me, that's what reference books and Google are for.
A far more important skill to have is the general knowledge of programming in any language, i.e. recursion, looping, object oriented design, working with APIs, error handling etc... Once you have all that down, you can apply it to any language and platform.
I can tell you that after 25 years there are lines of code where I don't know exactly what they're going to look like.
Want an example? I've been programming in Java since last century and I can honestly still make a mistake if I were to type hashcode() or hashCode().
Why? Because actually typing such a method name yourself is so last century. Your intention is to override Object's hashCode() method, so you use programming by intention.
You hit Ctrl-O then h and you get a list of the methods starting with an 'h' that you can override. Then you hit enter. As a bonus, the "@Override" gets inserted for you too.
4 keys. 4, to get this:
@Override
public int hashCode() {
}
And honestly, whether hashCode takes an uppercase 'c' or not... I couldn't care less. That is not what a hashcode is about, and my intention is not to know all the inconsistencies language and API designers came up with. My intention is to override the method that gives back an object's hashcode, and my (modern) IDE allows me to get that skeleton in four keypresses, including hitting enter.
Another example: there are people who do really type this countless times a day:
for (int i = 0; i < ; i++) {
}
or the more tricky:
for (int i = ; i >= 0; i--) {
}
Note that in the latter case I can still mess up and type "i++" instead of "i--" (a 'thinko', as it's called).
But I don't care at all, because I type "fi<tab>" (three keys) and I get the first one, or "fir<tab>" (four keys, "for i (in) reverse") and I get the second one. You ain't beating that (especially since I'm a touch typist, so I type these three or four keys fast). In addition to speed, as an added bonus the autocompletion won't mess up your "i--".
In many cases I don't know exactly the line I'll get: sure, I know it "more or less", and that's exactly the way it should be.
Don't sweat it too much. As others have mentioned it eventually gets easier to write code without looking things up so much as you work with a particular language over time.
HOWEVER
There are a few reasons that even veteran programmers find themselves constantly using reference material:
(1) Unlike days of yore, most projects now require you to use a number of languages to get the job done. For example, a single web-site project may require C#, XML, JavaScript, SQL, HTML, XHTML, RegEx and CSS all in the same project. Switching between some of these languages can really throw you for a loop sometimes, because many of them are just similar enough to be familiar, but just different enough to make you forget the subtle differences in syntax.
(2) Just when you start getting comfortable that you know a language inside and out, the vendor will release a new update that changes everything you knew about it. For example ASP->ASP.NET.
I still look up simple things fairly frequently and I've been at this almost 20 years. The important thing is that you understand the underlying concepts and principles.
It took me 12 years to get where I am at today, which is my experience in professional programming. You will always improve when working with some programming language, even if you have been working with it for the past 5 years.
About your question, it depends. I think that you should be comfortable with the syntax after a week, comfortable with the main libraries after a month, and comfortable with the platform after 6 months.
But when you get there, don't stop!
If you code every day with the same language, you'll probably have the common language elements and patterns memorized in a month or two. But there are plenty of things that you'll probably never memorize, simply because you don't use them often enough, and also because modern IDE's can help you so much that you don't really need to remember everything if you utilize all their features (like code snippets, shortcuts, intellisense).
I've been coding for 15 years, and doing C# for the last three, and I still use the MSDN reference material every day. However, as far as the basic building blocks of the language are concerned, I had them memorized in the first month or two.
Also, the more often you code, the better you'll commit it to memory.
There's a false assumption going on here I think...
At my job I end up working all over the stack in different languages and platforms. If I'm away from a project for 6 months I end up having to look at code to get even basic things done. The advantage of experience is reducing the re-ramp up time on productivity though.
So, instead of it taking weeks or months to get back to a point where I can write 80% of the code from memory it takes a few or several days (if that sometimes). I've been programming for about 5 years now. I'm just now getting to the point of being able to visualize small applications in their entirety.
As long as you're working on solving a problem that you haven't already solved (numerous times) you'll probably always have to look up code.
If it can be done, I'm guessing it takes longer than 5 years for most people, unless they work in one language with one editor and in one area. (ex: C#, Visual Studio, file system operations) My company isn't big enough to employ someone that specific.
Don't be downhearted by having to consult documentation all the time man, that's what it's there for. Over time you get used to syntax and things like that but don't sweat it if you can't remember library methods or ways to connect to a DB.
Over time (with experience) you might remember these off the top of your head but in reality there's nothing wrong with taking a quick consultation of the documentation to refresh the memory.
Also remember that technology is ever changing so it's good to keep the mind fresh with new ways of thinking/ways to do things.
Question is not 100% clear. One of the best programmers I know doesn't remember anything and needs to look up printf formatting strings almost daily. On the other hand if you are having hard time figuring out how to write that for (int i = 0; i < len; i++) loop after doing this for 6 months -- that doesn't sound right.
The idea that one could memorize every bit of code from even a single language and then type it plainly from memory is pretty far out. The amount of pre-defined functions for, say, PHP or Java alone is immense.
But that being said, it's important to learn the programming structures and know them the best you can. Structures like foreach, if/then/else, switch, etc. are really the things that need to be integrated thoroughly. Also, conceptual things like object orientation (not just "using" objects like mysqli, but understanding things like controlling code, client code, bottom-up and top-down architecture) are the real things that make good programmers great. For myself, I know that I don't have the capacity to learn every defined function provided by language writers, so instead I learn whether or not something can be done (and of course still try on occasion to do things that can't be done, lol). If you know that, then it's a matter of Google and books to find the "specific" mechanisms for how.
cheers my friend.
Not an answer per se, but I just wanted to say that as a novice myself, this q&a is one of the most useful things I've read on SO. It seems that yes, experts can probably code the basic stuff from memory, but even they revert to book for complex problems, and for beginners, that's what the book is for.
I feel like I'm just beginning
You should use code snippets if you are using a certain piece of code repeatedly. It doesn't bother me if I cannot remember some piece of code from memory.
For me, it depends heavily on what I'm writing. For example, I doubt that most people ever quite memorize all the parameters to some Windows functions. I may know that I need to call CreateFile on the next line, but don't know all the parameters in order until they're typed in (with help from Intellisense, and sometimes MSDN).
With something that's doing simple computation, I'm a lot more likely to be limited by my typing speed (but I'm a fairly poor typist, so thinking faster than I can type doesn't take much).
That means it's really a question of how much of the time you need to refer to something to type the next line of code -- at first, it's a pretty small percentage, and over time it grows. I doubt it ever gets to 100% for anybody though. I, for one, don't think I'd want it to -- that would indicate I wasn't working on new and different things...
If you use eclipse and java, you may find yourself already there.
Other combinations may be anywhere from a little slower to a lot slower.
Java has the advantage that it's pretty easy for the environment to build an entire parse tree while you are typing. At any place in your code, typing ctrl-space can give you an entire list of valid options.
Also syntax errors are always underlined.
If you want to go hard core though, I started in C before the day of decent editors and it took me about a year before I typed in more than a few lines and didn't get a compile error.
I don't know about memorization. Repetition is the mother of all learning, and that applies to all aspects of life. Look at the experienced accountant vs. the novice when filing taxes: who looks up stuff more? But what I did discover recently is that I navigate documentation much quicker and have a sense for going directly to where my question is answered. I've got a sixth sense - I can see the code! Seriously, it all comes with experience. Still, when learning something completely new, there's no shame in looking up how to do certain things. That's what separates humans, right: learning from others. The more you work on something, the better you become.
There is absolutely nothing wrong with having to lookup the documentation every time you want to write some code. I got lucky in that once I use a certain function, I don't forget it very easily. However, most of the time, especially when I'm coding in a language that I haven't used in a while,
I start out by writing a flowchart of the algorithm that I want to code - just the pure logic. The most important reason for doing this is so that I don't lose my train of thought and forget the algorithm that solves my problem in the midst of technical problems like syntax and < what library functions exist? >
Then I look at the documentation and check to see if there are simple function calls that will help me accomplish each task in my algorithm
If such functions do not exist, then I either modify my algorithm to accommodate what functions the language does provide or write helper functions to fill in the gaps.
I only start coding NOW, which is not too difficult to do anymore, because I already have all the relevant functions written down. So now it's just a question of translating pure logic into syntactically accurate code.
Proper syntax usually does not elude me, but if that does happen (VERY rare), Google provides very nice code snippets if you ask nicely.
Hope this helps
May the Force be with you
I'm programming Java now for about 5 years, and I never have had any trouble remembering syntax. I can <brag>write almost all java.util.*, java.io.*, java.lang.* and javax.swing.* stuff out of my memory</brag>, but does it help me? Not very much. It doesn't make me a better programmer than someone who can't!
I'm using Netbeans, which greatly helps working with libraries. Also, the documentation, just in the place where you need it. Sometimes, it's quite unnecessary, but sometimes you'd wish the "auto complete" screen would popup faster!
The best thing as a student is to concentrate on what you are doing, not how fast you are doing it. Looking up things isn't bad; it'll help your so-called "unconscious mind" process what you are really trying to accomplish. Having such breaks, e.g. by looking up certain documentation or a syntax reference, may even make you better at programming (no proof for this, though).
Question is quite subjective.
With the great many IDE's available and the "newbie" tutorials on getting started, it won't take you long before you're off writing your own apps.
That said, unless you have a "thirst" for how stuff works kinda attitude all the time towards everything, you won't go far. In this field, you really have to have a passion for what you do to be great.
... It bothers me that I am continually having to look things up... How long did it take you to get to the level where you are now?
For me, and for most of the students I teach, the answer depends on two variables:
How many lines of code have I written?
Do I use the language or library every day?
(Reading other people's code is very helpful for learning a language and learning how to think in a language, but for me at least, it hasn't helped me become a fluent writer of programs in a language. Only writing code does that.) So my first comment is that time should be measured in lines of code written, not hours or years.
(Ray Bradbury once advised aspiring writers of fiction to write a thousand words a day six days a week, and after they've written a million words they might start to know something about their craft. This is good advice for programmers too.)
As for my own experience, across a half dozen languages that I currently know well or once knew well, it's been pretty consistent for me that
After writing 100 lines I am continually looking things up in the manual and don't really know what I am doing.
After writing 1,000 lines I use the manual occasionally and am starting to learn how to think in the language.
After writing 10,000 lines I am about as good as I'm going to get without making special efforts.
After writing 25,000 lines I probably will not need the manual again.
It's also true that
To learn to write 100 lines I had to read 100 to 1,000 lines that someone else wrote.
For the first 1,000 lines I write it is good for me to read 2,000 lines someone else wrote. For the next 1,000 lines it is good for me to read 1,000 lines someone else wrote.
After I've written 5,000 lines I learn the most by reading code written by world experts or by people who designed the language and understand what is there. I no longer have much to learn by reading just any program.
On the other hand, my experience about when I stop having to refer continually to the documentation is much less consistent.
I find it especially hard when two languages are very similar; I will never stop needing the ksh manual to tell me what is different from sh or bash, and I will never stop needing the Haskell manual to tell me what is different from Standard ML (though the need grows less with each additional 1,000 lines of Haskell that I write). I also find it interesting that while I have written over 35,000 lines of Lua code, and I will never need the manual again for a language question, I have to look up libraries and API functions almost every time I write something longer than 500 lines. (I've written a lot of short Lua programs and a couple of long ones, and I don't use the language every day, although I definitely use it at least several days each month.)
As for the unspoken question, when are you personally going to get better?, take some advice from Watts Humphrey: measure your own performance and track it over time. I think if each day you count the number of times you had to stop and look things up, and graph that against number of lines of code written or edited (which you can get from source-code control), you will be pleasantly surprised by objective evidence of improvement. And I think once you have such evidence, you will be able to focus more on continuing to improve, and not so much on where you are now or where you hope to be in a year.
It's true that after some years of programming you'll be able to remember a lot of things without having to check the "manual". For me this is not an important milestone in your programming life, though; the really important moment is when you reach the point where you don't know how to do something... but you're sure it can be done and you know where and how to research it :-)
You made a very important step participating on this site. Exchanging ideas and helping each other is an excellent way of learning.
Sociologist Malcolm Gladwell believes that ten thousand hours is a good benchmark for the amount of practice required to become world class at many fields of endeavour. I think that sort of number applies to programming as well. This isn't quite what you asked, though; being able to code competently certainly requires familiarity with your environment (language constructs, system libraries, third party libraries and perhaps something of the concepts underpinning them), but there are many soft skills involved which are harder to describe and can only really be acquired through practice.
As others have said, being good at programming is not about typing code from memory; it's about recognising patterns, understanding systems, solving problems. It's about choosing the right tools for the job (languages, libraries, algorithms, whatever) and being able to make proper use of them.
In all the jobs I've had, it's about adaptability and flexibility; you might have to learn a new language or pick up somebody else's poorly documented code tomorrow, and a good programmer will be able to take this in their stride.
I've been coding professionally for nearly 10 years now; there's all sorts of code I use semi-regularly which I look up the options for at least some of the time. There are too many commands with too many options in too many languages for me to remember each and every last one in detail and Google is quite good enough at getting the information I want.
That said - there are some bits of routine code which I use all the time but can count on one hand the number of times I've written - the exact syntax for populating a dataset in .Net for example. One of the skills I've most come to value over this time and which saves me the most time is spotting when some code can be quickly and easily moved into utility libraries. If it's fiddly but routine, consider this approach to save yourself hassle and improve your overall code quality.
In the context of this question (Java, C++, JavaScript), the languages are still evolving. I can't say about other languages.
The language standard/specification changes over time
Libraries are added to supplement the language constructs, e.g. Boost, Google Collections, Apache Commons, jQuery
Applications will rarely be bound to a single aspect of a language
Across organizations/projects, coding standards change
A project I worked on recommended against using primitives
When unfamiliar with the constructs used, I put in pseudo-code flagged with //TODO first... then go in and find the actual API to use.
IMO, the answer to your question is - there is no definite answer.
As a Java programmer the sheer size of the runtime library makes it impossible to remember everything. Swing is big, there is an XSLT engine (which contains TWO languages), the Concurrent support evolves and grows.
The direct access to the Javadoc API from within Eclipse combined with code completion makes it possible to find the information you cannot remember but you know is there, quickly and efficiently.
I have found the javaalmanac.com (which has been reworked into the more convoluted http://www.exampledepot.com/egs/index.html) invaluable in presenting short, concise and above all correct programs doing just one small thing. Strongly recommended.
Most weeks I program in Java, C#, Python, PHP, JavaScript, SQL, Smarty, Django Templating and occasionally C++ and Objective C. I'm a student so this is partially school work and partially my part-time job. Instead of learning syntax I've learned what concepts to look for.
Seeing patterns and concepts is key, once you know the concepts and what to look for the syntax is secondary.
I find that even when I am just being exposed to a language I can accomplish a lot just by looking for 'what ought to be there'
Why should you be able to remember all of this stuff? Personally I embrace the fact that I can't remember all of this stuff and simply try and remember where to find the information that I need. I find that much more useful. This takes the form of blogging, taking notes, keeping large amounts of 'sample' code and reusable libraries and writing about code that I find useful and interesting; oh and lots of books, some of which I hardly ever 'need' and some of which I hardly ever really read but they've been skimmed and I know where they live.
Technologies come and go and there's just no way I could have kept all of the things that I may one day find useful in my head; so I page them out and just keep the index in memory... For example; 9 years ago I was doing some stuff with Java and CORBA and whatever. There's no way that I could drop back into that now without the notes that I wrote up for my website back when I was doing it: http://www.lenholgate.com/blog/2001/02/corba---enumeration.html. Likewise I have code that I use on a daily basis that has been kicking around since 1997 or earlier. I don't remember how to type it in, I have it in a file with tests (if I'm lucky) and docs (if I'm even luckier).
Whilst I realise that most of what I'm talking about is 'big stuff' I also often have to go and look at some of my old code to simply work out how to structure a typedef...
Of course the day to day stuff will come with time and practice; but you need to work out that you have to page some of it out in a form that you can reload later very quickly. Embrace the fact that your memory is never going to be able to hold it all and outsource it :)

How to convince your fellow developer to write short methods?

Long methods are evil on several grounds:
They're hard to understand
They're hard to change
They're hard to reuse
They're hard to test
They have low cohesion
They may have high coupling
They tend to be overly complex
How to convince your fellow developer to write short methods? (weapons are forbidden =)
question from agiledeveloper
Ask them to write unit tests for the methods.
That depends on your definitions of "short" and "long".
When I hear someone say "write short methods", I immediately react badly because I've encountered too much spaghetti written by people who think the ideal method is two lines long: One line to do the tiniest possible unit of work followed by one line to call another method. (You say long methods are evil because "they're hard to understand"? Try walking into a project where every trivial action generates a call stack 50 methods deep and trying to figure out which of those 50 layers is the one you need to change...)
On the other hand, if, by "short", you mean "self-contained and limited to a single conceptual function", then I'm all for it. But remember that this can't be measured simply by lines of code.
And, as tydok pointed out, you catch more flies with honey than vinegar. Try telling them why your way is good instead of why their way is bad. If you can do this without making any overt comparisons or references to them or their practices (unless they specifically ask how your ideas would relate to something they're doing), it'll work even better.
You made a list of drawbacks. Try to make a list of what you'll gain by using short methods. Concrete examples. Then try to convince him again.
I read this quote from somewhere:
Write your code as if the person who has to maintain it is a violent psycho, who knows where you live.
In my experience the best way to convince a peer in these cases is by example. Just find opportunities to show them your code and discuss with them the benefits of short functions vs. long functions. Eventually they'll realize what's better spontaneously, without the need to make them feel "bad" programmers.
Code Reviews!
I suggest you try and get some code reviews going. This way you could have a little workshop on best practices and whatever formatting your company adheres to. This adds the context that short methods are a way to make code more readable and easier to understand, and also compliant with the SRP.
If you've tried to explain good design and people just aren't getting it, or are just refusing to get it, then stop trying. It's not worth the effort. All you'll get is a bad rep for yourself. Some people are just hopeless.
Basically what it comes down to is that some programmers just aren't cut out for development. They can understand code that's already written, but they can't create it on their own.
These folks should be steered toward a support role, but they shouldn't be allowed to work on anything new. Support is a good place to see lots of different code, so maybe after a few years they'll come to see the benefits of good design.
I do like the idea of Code Reviews that someone else suggested. These sloppy programmers should not only have their own code reviewed, they should sit in on reviews of good code as well. That will give them a chance to see what good code is. Possibly they've just never seen good code.
To expand upon rvanider's answer, performing the cyclomatic complexity analysis on the code did wonders to get attention to the large method issue; getting people to change was still in the works when I left (too much momentum towards big methods).
The tipping point was when we started linking the cyclomatic complexity to the bug database. A CC of over 20 that wasn't a factory was guaranteed to have several entries in the bug database and oftentimes those bugs had a "bloodline" (fix to Bug A caused Bug B; fix to Bug B caused Bug C; etc). We actually had three CC's over 100 (max of 275) and those methods accounted for 40% of the cases in our bug database -- "you know, maybe that 5000 line function isn't such a good idea..."
It was more evident in the project I led when I started there. The goal was to keep CC as low as possible (97% were under 10) and the end result was a product that I basically stopped supporting because the 20 bugs I had weren't worth fixing.
Bug-free software isn't going to happen because of short methods (and this may be an argument you'll have to address) but the bug fixes are very quick and are often free of side-effects when you are working with short, concise methods.
Though writing unit tests would probably cure them of long methods, your company probably doesn't use unit tests. Rhetoric only goes so far and rarely works on developers who are stuck in their ways; show them numbers about how those methods are creating more work and buggy software.
Finding the right blend between function length and simplicity can be complex. Try to apply a metric such as Cyclomatic Complexity to demonstrate the difficulty in maintaining the code in its present form. Nothing beats a non-personal measurement that is based on testing factors such as branch and decision counts.
Not sure where this great quote comes from (it's usually attributed to Brian Kernighan), but:
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it"
Force him to read Code Complete by Steve McConnell. Say that every good developer has to read this.
Get him drunk? :-)
The serious point to this answer is the question, "why do I consistently write short functions, and hate myself when I don't?"
The reason is that I have difficulty understanding complex code, be that long functions, things that maintain and manipulate a lot of state, or that sort of thing. I noticed many years ago that there are a fair number of people out there that are significantly better at dealing with this sort of complexity than I am. Ironically enough, it's probably because of that that I tend to be a better programmer than many of them: my own limitations force me to confront and clean up that sort of code.
I'm sorry I can't really provide a real answer here, but perhaps this can provide some insight to help lead us to an answer.
Force them to read the book "Clean Code", there are many others but this one is new, good and an easy read.
Asking them to write unit tests for the complex code is a good avenue to take. This person needs to see for himself the debt that complexity brings when performing maintenance or analysis.
The question I always ask my team is: "It's 11 pm and you have to read this code - can you? Do you understand it under pressure? Can you, over the phone, with no remote login, lead them to the section where they can fix an error?" If the answer is no, the follow-up is "Can you isolate some of the complexity?"
If you get an argument in return, it's a lost cause. Throw something then.
I would give them 100 lines of code all under 1 method and then another 100 lines of code divided up between several methods and ask them to write down an explanation of what each does.
Time how long it takes to write both paragraphs and then show them the result.
...Make sure to pick code that will take two or three times as long to understand if it were all under one method - Main() -
Nothing is better than learning by example.
Short or long are terms that can be interpreted differently. For one person, a short method is two lines, while someone else will think that methods with no more than 100 lines of code are pretty short.
I think it would be better to state that a single method should not do more than one thing at a time, meaning it should only have one responsibility.
Maybe you could let your fellow developers read something about how to practice the SOLID principles.
I'd normally show them older projects which have well written methods. I would then step through these methods while explaining the reasons behind why we developed them that way.
Hopefully when looking at the bigger picture, they would understand the reasons behind this.
ps. Also, this exercise could double as a mini knowledge transfer on older projects.
Show him how much easier it is to test short methods. Prove that writing short methods will make it easier and faster for him to write the tests for his methods (he is testing these methods, right?)
Bring it up when you are reviewing his code. "This method is rather long, complicated, and seems to be doing four distinct things. Extract method here, here, and here."
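A small invented before/after can back up that review comment; the types and helpers here are hypothetical, and the point is only that the extracted version names each responsibility:

// Before: one method doing three unrelated things (validate, total, persist).
void processOrder(Order order) {
    if (order.getItems().isEmpty()) {
        throw new IllegalArgumentException("empty order");
    }
    double total = 0;
    for (Item item : order.getItems()) {
        total += item.getPrice();
    }
    database.save(order, total);
}

// After: each responsibility extracted into a short, named, testable method.
void processOrder(Order order) {
    validate(order);
    double total = computeTotal(order);
    persist(order, total);
}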
Long methods usually mean that the object model is flawed, i.e. one class has too many responsibilities. Chances are that you don't want just more functions, each one shorter, in the same class, but those responsibilities properly assigned to different classes.
No use teaching a pig to sing. It wastes your time and annoys the pig.
Just outshine someone.
When it comes time to fix a bug in the 5000 line routine, then you'll have a ten-line routine and a 4990-line routine. Do this slowly, and nobody notices a sudden change except that things start working better and slowly the big ball of mud evaporates.
You might want to tell him that he might have a really good memory, but you don't. Some people are able to handle much longer methods than others. If you both have to be able to maintain the code, that can only be done if the methods are smaller.
Only do this if he doesn't have a superiority complex
[edit]
why is this collecting negative scores?
You could start refactoring every single method they wrote into multiple methods, even when they're currently working on them. Assign extra time to your schedule for "refactoring others' methods to make the code maintainable". Do it like you think it should be done, and - here comes the educational part - when they complain, tell them you wouldn't have to refactor the methods if they had made them right the first time. This way, your boss learns that you have to correct others' laziness, and your co-workers learn that they should do it differently.
That's at least some theory.

Should code be short/concise? [closed]

When writing a mathematical proof, one goal is to continue compressing the proof. The proof gets more elegant but not necessarily more readable. Compression translates to better understanding, as you weed out unnecessary characters and verbosity.
I often hear developers say you should make your code foot print as small as possible. This can very quickly yield unreadable code. In mathematics, it isn't such an issue since the exercise is purely academic. However, in production code where time is money, having people try to figure out what some very concise code is doing doesn't seem to make much sense. For a little more verbose code, you get readability and savings.
At what point do you stop compressing software code?
I try to reach a level of verbosity where my program statements read like a sentence any programmer could understand. This does mean heavily refactoring my code such that it's all short pieces of a story, so each action would be described in a separate method (an even further level might be to another class).
Meaning I would not reduce my number of characters just because it can be expressed in fewer. That's what code-golf competitions are for.
My rule is say what you mean. One common way I see people go wrong is "strength reduction." Basically, they replace the concept they are thinking with something that seems to skip steps. Unfortunately, they are leaving concepts out of their code, making it harder to read.
For example, changing
for (int i = 0; i < n; i++)
foo[i] = ...
to
int *p = foo, *q = foo + n;
while (p < q)
    *p++ = ...;
is an example of a strength reduction that seems to save steps, but it leaves out the fact that foo is an array, making it harder to read.
Another common one is using bool instead of an enum.
enum {
MouseDown,
MouseUp
};
Having this be
bool IsMouseDown;
leaves out the fact that this is a state machine, making the code harder to maintain.
So my rule of thumb would be, in your implementation, don't dig down to a lower level than the concepts you are trying to express.
You can make code smaller by seeing redundancy and eliminating it, or by being clever. Do the former and not the latter.
Here's a good article by Steve McConnell - Best Practices http://www.stevemcconnell.com/ieeesoftware/bp06.htm
I think short/concise are two results from well written code. There are many aspects to make code good and many results from well written code, realize the two are different. You don't plan for a small foot print, you plan for a function that is concise and does a single thing extremely well - this SHOULD lead to a small foot print (but may not). Here's a short list of what I would focus on when writing code:
single focused functions - a function should do only one thing, a simple delivery; multi-featured functions are buggy and not easily reusable
loosely coupled - don't reach out from inside one function to global data and don't rely heavily on other functions
precise naming - use meaningful, precise variable names; cryptic names are just that
keep the code simple and not complex - don't overuse language-specific technical wows; they're good for impressing others but difficult to understand and maintain - if you do add something 'special', comment it so at least people can appreciate it prior to cursing you out
comment evenly - too many comments will be ignored and go out of date, too few have no meaning
formatting - take pride in how the code looks; properly indented code helps
work with the mind of a code maintenance person - think what it would be like to maintain the code you're writing
don't be afraid or too lazy to refactor - nothing is perfect the first time, clean up your own mess
One way to find a balance is to seek for readability and not concise-ness. Programmers are constantly scanning code visually to see what is being done, and so the code should as much as possible flow nicely.
If the programmer is scanning code and hits a section that is hard to understand, or takes some effort to visually parse and understand, it is a bad thing. Using common well understood constructs is important, stay away from the vague and infrequently used unless necessary.
Humans are not compilers. Compilers can eat the stuff and keep moving on. Obscure code is not mentally consumed by humans as quickly as clearly understood code.
At times it is very hard to produce readable code in a complicated algorithm, but for the most part, human readability is what we should look for, and not cleverness. I don't think length of code is really a measure of clearness either, because sometimes a more verbose method is more readable than a concise method, and sometimes a concise method is more readable than a long one.
Also, comments should only supplement, and should not describe your code, your code should describe itself. If you have to comment a line because it isn't obvious what is done, that is bad. It takes longer for most experienced programmers to read an English explanation than it does to read the code itself. I think the book Code Complete hammers this one home.
As far as object names go, the thinking on this has gone through an evolution with the introduction of new programming languages.
If you take the "curly brace" languages, starting with C, brevity was considered the soul of wit. So, you would have a variable to hold a loan value named "lv", for instance. The idea was that you were typing a lot of code, so keep the keystrokes to a minimum.
Then along came the Microsoft-sanctioned "Hungarian notation", where the first letters of a variable name were meant to indicate its underlying type. One might use "fLV", or some such, to indicate that the loan value was represented by a float variable.
With Java, and then C#, the paradigm has become one of clarity. A good name for a loan value variable would be "loanValue". I believe part of the reason for this is the command-completion feature in most modern editors. Since it's not necessary to type an entire name anymore, you might as well use as many characters as needed to be descriptive.
This is a good trend. Code needs to be intelligible. Comments are often added as an afterthought, if at all. They are also not updated as code is updated, so they become out of date. Descriptive, well-chosen, variable names are the first, best and easiest way to let others know what you were coding about.
I had a computer science professor who said "As engineers, we are constantly creating types of things that never existed before. The names that we give them will stick, so we should be careful to name things meaningfully."
There needs to be a balance between short, sweet source code and performance. If it is nice source and runs the fastest, then good; but if, for the sake of nice source, it runs like a dog, then bad.
Strive to refactor until the code itself reads well. You'll discover your own mistakes in the process, the code will be easier to grok for the "next guy", and you won't be burdened by maintaining (and later forgetting to change) in comments what you're already expressed in code.
When that fails... sure, leave me a comment.
And don't tell me "what" in the comment (that's what the code is for), tell me "why".
As opposed to long/rambling? Sure!
But it gets to the point where it's so short and so concise that it's hard to understand, then you've gone too far.
Yes. Always.
DRY: Don't Repeat Yourself. That will give you a code that is both concise and secure. Writing the same code several times is a good way to make it hard to maintain.
Now that does not mean you should make a function of any blocks of code looking remotely alike.
A very common error (horror?), for instance, is factorizing code that does nearly the same thing and handling the differences between occurrences by adding a flag to the function's API. This may look innocuous at first, but it generates code flow that is hard to understand and bug prone, and even harder to refactor.
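A small invented example of that trap, merging two nearly identical exporters behind a boolean flag (all helper names here are hypothetical):

// Merged behind a flag: every reader now has to trace both paths at once.
void exportReport(Report report, boolean asPdf) {
    if (asPdf) {
        writePdfHeader(report);
    } else {
        writeCsvHeader(report);
    }
    writeRows(report);
    if (asPdf) {
        writePdfFooter(report);
    }
}

// Clearer: two small functions that share only the genuinely common part.
void exportPdfReport(Report report) {
    writePdfHeader(report);
    writeRows(report);
    writePdfFooter(report);
}

void exportCsvReport(Report report) {
    writeCsvHeader(report);
    writeRows(report);
}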
If you follow common refactoring rules (looking about code smells) your code will become more and more concise as a side effect as many code smells are about detecting redundancy.
On the other hand, if you try to make the code as short as possible without following any meaningful guidelines, at some point you will have to stop because you just won't see any more ways to reduce the code.
Just imagine if the first step were removing all useless whitespace... after that step, code in most programming languages would become so hard to read that you wouldn't have much chance of finding any other possible enhancement.
The example above is quite a caricature, but not so far from what you get when trying to optimise for size without following any sensible guideline.
There's no exact line that can be drawn to distinguish between code that is glib and code that is flowery. Use your best judgment. Have others look at your code and see how easily they can understand it. But remember, correctness is the number 1 goal.
The need for small code footprints is a throwback to the days of assembly language and the first slightly higher-level languages... there, small code footprints were a real and pressing need. These days, though, it's not so much of a necessity.
That said, I hate verbose code. Where I work, we write code that reads as much as possible like a natural language, without any extra grammar or words. And we don't abbreviate anything unless it's a very common abbreviation.
Company.get_by_name("ABC")
makeHeaderTable()
is about as terse as we go.
In general, I make things obvious and easy to work with. If concision/shortness serves me in that end, all the better. Often short answers are the clearest, so shortness is a byproduct of obvious.
There are a couple points to my mind that determine when to stop optimizing:
Whether it is worth spending time performing optimizations. If you have people spending weeks and not finding anything, are there better uses of those resources?
What the order of optimization priority is. There are a few different factors that one could care about when it comes to code: execution time, execution space (both while running and just the compiled code), scalability, stability, how many features are implemented, etc. Part of this is the trade-off of time and space, but it can also be a question of where some code goes, e.g. can middleware execute ad hoc SQL commands or should those be routed through stored procedures to improve performance?
I think the main point is that there is a moderation that most good solutions will have.
Code optimizations have little to do with coding style. The fact that the file contains x fewer spaces or new lines than at the beginning does not make it better or faster, at least at the execution stage - you format the code with whitespace characters that are usually ignored by the compiler. It even makes the code worse, because it becomes unreadable for other programmers and yourself.
It is much more important for the code to be short and clean in its logical structure, such as testing conditions, control flow, assumptions, error handling or the overall programming interface. Of course, I would also include here smart and useful comments + the documentation.
There is not necessarily a correlation between concise code and performance. This is a myth. In mature languages like C/C++ the compilers are capable of optimizing the code very effectively. There is little cause in such languages to assume that more concise code is better-performing code. Newer, less performance-optimized languages like Ruby lack the compiler optimization features of C/C++ compilers, but there is still little reason to believe that concise code performs better. The reality is that we never know how well code will perform in production until it gets into production and is profiled. Simple, innocuous functions can be huge performance bottlenecks if called from enough locations within the code. In highly concurrent systems the biggest bottlenecks are generally caused by poor concurrency algorithms or excessive locking. These issues are rarely solved by writing "concise" code.
The bottom line is this: Code that performs poorly can always be refactored once profiling determines it is the bottleneck. Code can only be effectively refactored if it is easy to understand. Code that is written to be "concise" or "clever" is often more difficult to refactor and maintain.
Write your code for human readability then refactor for performance when necessary.
My two cents...
Code should be short, concrete, and concentrated. You can always explain your ideas with many words in the comments.
You can make your code as short or compact as you like as long as you comment it. This way your code can be optimized but still make sense. I tend to stay somewhere in the middle, with descriptive variables and methods and sparse comments if anything is still unclear.

Metrics for measuring successful refactoring [closed]

Are there objective metrics for measuring code refactoring?
Would running findbugs, CRAP or checkstyle before and after a refactoring be a useful way of checking whether the code was actually improved rather than just changed?
I'm looking for metrics that can be determined and tested for, to help improve the code review process.
The number of failed unit tests must be less than or equal to zero :)
Depending on your specific goals, metrics like cyclomatic complexity can provide an indicator for success. In the end every metric can be subverted, since they cannot capture intelligence and/or common sense.
A healthy code review process might do wonders though.
Would running findbugs, CRAP or checkstyle before and after a refactoring be a useful way of checking if the code was actually improved rather than just changed?
Actually, as I have detailed in the question "What is the fascination with code metrics?", the trend of any metrics (findbugs, CRAP, whatever) is the true added value of metrics.
It (the evolution of metrics) allows you to prioritize the main fixing action you really need to make to your code (as opposed to blindly try to respect every metric out there)
A tool like Sonar can be very useful in this domain (monitoring of metrics).
Sal adds in the comments:
The real issue is on checking what code changes add value rather than just adding change
For that, test coverage is very important, because only tests (unit tests, but also larger "functional tests") will give you a valid answer.
But refactoring should not be done without a clear objective anyway. Doing it only because it would be "more elegant" or even "easier to maintain" may not in itself be a good enough reason to change the code.
There should be other measures like some bugs which will be fixed in the process, or some new functions which will be implemented much faster as a result of the "refactored" code.
In short, the added value of a refactoring is not solely measured with metrics, but should also be evaluated against objectives and/or milestones.
Code size. Anything that reduces it without breaking functionality is an improvement in my book (removing comments and shortening identifiers would not count, of course)
No matter what you do just make sure this metric thing is not used for evaluating programmer performance, deciding promotion or anything like that.
I would stay away from metrics for measuring refactoring success (aside from #unit test failures == 0). Instead, I'd go with code reviews.
It doesn't take much work to find obvious targets for refactoring: "Haven't I seen that exact same code before?" For the rest, you should create certain guidelines around what not to do, and make sure your developers know about them. Then they'll be able to find places where the other developer didn't follow the standards.
For higher-level refactorings, the more senior developers and architects will need to look at code in terms of where they see the code base moving. For instance, it may be perfectly reasonable for the code to have a static structure today; but if they know or suspect that a more dynamic structure will be required, they may suggest using a factory method instead of using new, or extracting an interface from a class because they know there will be another implementation in the next release.
None of these things would benefit from metrics.
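As a hedged illustration of the factory-method suggestion above (all class and function names are hypothetical): routing construction through one small factory means call sites do not have to change when the anticipated second implementation arrives.

    import json

    class CsvExporter:
        def export(self, rows):
            return "\n".join(",".join(str(v) for v in row) for row in rows)

    class JsonExporter:
        # the anticipated "next release" implementation slots in here
        def export(self, rows):
            return json.dumps(rows)

    def make_exporter(fmt="csv"):
        # Factory method: call sites depend on this one function, not on a
        # class name, so adding JsonExporter did not require touching them.
        exporters = {"csv": CsvExporter, "json": JsonExporter}
        return exporters[fmt]()

    # Call sites stay unchanged when a new exporter is registered:
    print(make_exporter("csv").export([[1, 2], [3, 4]]))   # 1,2 / 3,4
    print(make_exporter("json").export([[1, 2], [3, 4]]))  # [[1, 2], [3, 4]]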
Yes, several measures of code quality can tell you if a refactoring improves the quality of your code.
Duplication. In general, less duplication is better. However, duplication finders that I've used sometimes identify duplicated blocks that are merely structurally similar but have nothing to do with one another semantically and so should not be deduplicated. Be prepared to suppress or ignore those false positives.
Code coverage. This is by far my favorite metric in general, but it's only indirectly related to refactoring. You can and should raise low coverage by writing more tests, but that's not refactoring. However, you should monitor code coverage while refactoring (as with any other change to the code) to be sure it doesn't go down. Refactoring can improve code coverage by removing untested copies of duplicated code.
Size metrics such as lines of code, total and per class, method, function, etc. A Jeff Atwood post lists a few more. If a refactoring reduces lines of code while maintaining clarity, quality has increased. Unusually long classes, methods, etc. are likely to be good targets for refactoring. Be prepared to use judgement in deciding when a class, method, etc. really does need to be longer than usual to get its job done.
Complexity metrics such as cyclomatic complexity. Refactoring should try to decrease complexity and not increase it without a well thought out reason. Methods/functions with high complexity are good refactoring targets.
Robert C. Martin's package-design metrics: Abstractness, Instability and Distance from the abstractness-instability main sequence. He described them in his article on Stability in C++ Report and his book Agile Software Development, Principles, Patterns, and Practices. JDepend is one tool that measures them. Refactoring that improves package design should minimize D (see the sketch after this list).
I have used and continue to use all of these to monitor the quality of my software projects.
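For the package-design metrics in particular, here is a minimal Python sketch of how the headline numbers combine, assuming a tool such as JDepend has already counted, per package, the afferent couplings (Ca), efferent couplings (Ce), abstract types and total types:

    def instability(ca, ce):
        # I = Ce / (Ca + Ce): 0 = maximally stable, 1 = maximally unstable
        return ce / (ca + ce) if (ca + ce) else 0.0

    def abstractness(abstract_types, total_types):
        # A = abstract types / total types
        return abstract_types / total_types if total_types else 0.0

    def distance(ca, ce, abstract_types, total_types):
        # D' = |A + I - 1|: distance from the "main sequence"; smaller is better
        return abs(abstractness(abstract_types, total_types)
                   + instability(ca, ce) - 1)

    # Example package: 2 incoming deps, 6 outgoing deps, 1 abstract type of 10
    print(round(distance(ca=2, ce=6, abstract_types=1, total_types=10), 2))  # 0.15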
There are two outcomes you want from refactoring. You want the team to maintain a sustainable pace and you want zero defects in production.
Refactoring takes place on the code and the unit tests built during Test Driven Development (TDD). Refactoring can be small and completed on a piece of code necessary to finish a story card. Or, refactoring can be large and require a technical story card to address technical debt. The story card can be placed on the product backlog and prioritized with the business partner.
Furthermore, as you write unit tests while doing TDD, you will continue to refactor the tests as the code is developed.
Remember, in agile, the management practices as defined in SCRUM will provide you with collaboration and ensure you understand the needs of the business partner and that the code you have developed meets the business need. However, without proper engineering practices (as defined by Extreme Programming) your project will lose sustainable pace. Many agile projects that did not employ engineering practices ended up in need of rescue. On the other hand, teams that were disciplined and employed both the management and the engineering agile practices were able to sustain delivery indefinitely.
So, if your code is released with many defects or your team loses velocity, then refactoring and the other engineering practices (TDD, pairing, automated testing, simple evolutionary design, etc.) are not being properly employed.
I see the question from the code-smell point of view. Smells can be treated as indicators of quality problems, and hence the volume of identified smell instances can reveal the quality of the code.
Smells can be classified based on their granularity and their potential impact. For instance, there could be implementation smells, design smells, and architectural smells. You need to identify smells at all granularity levels before and after to show the gain from a refactoring exercise. In fact, refactoring could be guided by identified smells.
Examples:
Implementation smells: Long method, Complex conditional, Missing default case, Complex method, Long statement, and Magic numbers.
Design smells: Multifaceted abstraction, Missing abstraction, Deficient encapsulation, Unexploited encapsulation, Hub-like modularization, Cyclically-dependent modularization, Wide hierarchy, and Broken hierarchy. More information about design smells can be found in this book.
Architecture smells: Missing layer, Cyclical dependency in packages, Violated layer, Ambiguous Interfaces, and Scattered Parasitic Functionality. Find more information about architecture smells here.
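As a small, hedged before/after sketch of two of the implementation smells listed above (magic numbers and a complex conditional); all names and numbers are invented for the example:

    # Before: magic numbers and one dense conditional
    def shipping_cost_before(weight, country, express):
        if (weight > 20 and country != "US") or (express and weight > 10) or country == "XX":
            return weight * 4.75 + 12
        return weight * 2.5

    # After: named constants and an intention-revealing helper
    HEAVY_INTL_LIMIT_KG = 20
    HEAVY_EXPRESS_LIMIT_KG = 10
    SURCHARGE_RATE = 4.75
    SURCHARGE_FLAT = 12
    BASE_RATE = 2.5
    EMBARGOED = {"XX"}

    def needs_surcharge(weight, country, express):
        heavy_international = weight > HEAVY_INTL_LIMIT_KG and country != "US"
        heavy_express = express and weight > HEAVY_EXPRESS_LIMIT_KG
        return heavy_international or heavy_express or country in EMBARGOED

    def shipping_cost(weight, country, express):
        if needs_surcharge(weight, country, express):
            return weight * SURCHARGE_RATE + SURCHARGE_FLAT
        return weight * BASE_RATE

The behaviour is identical, but the second version names the decision the code is making, which is what a smell-guided refactoring is after.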

Should I prepare my code for future changes? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 4 years ago.
Improve this question
Should I prepare my code for possible/predicted future changes, so that it's easier to make these changes, even if I don't really know whether these changes will ever be required?
I am likely to get lynched for my opinion on this, but here I go.
While I have had this hammered into me over years of reading idealistic articles and sitting through far too many seminars and lectures categorically stating the nirvana-like benefits of it, I too had similar questions in my mind. This line of thought can lead to massive over-engineering of the code, adding many man-hours to design, development and testing estimates and increasing cost and overhead, when in reality the payoff rarely materializes. How many times have you actually reused your code or a library? If it is going to be used in many places, across numerous projects, then yes, you should.
However, most of the time this is not the case. You will often find it more economical (in time and money) to only refactor your code for reuse and configurability when you actually know that you are going to use it again. The rest of the time the real benefits are lost.
This is not, I repeat NOT, an excuse to write sloppy, poorly designed, poorly documented code. This should be a fundamental that is so wholly ingrained in you that you could not break it, but writing a class for reuse is a waste most of the time as it will never get reused.
There are obvious exceptions to this. If you are writing third party libraries then obviously this is not the case and reuse and expansion should be key to your design. Certain other types of code should be obvious for reuse (Logging, Configuration etc.)
I asked a similar question here: Code Reusability: Is it worth it? It might help.
Within reason, and certainly if it's not much effort.
I don't think you can always apply this, as it can make you over-engineer everything and then it takes too long and you don't make much money. Consider how likely the client is to implement something, how much extra it will take to prepare for it now and how much time it will save later.
If it requires a lot of work, but makes sense to save money, you could raise it with the client.
I seem to be in disagreement with a lot of people here, who say always - but I've seen a lot of cases where effort has been put into making future features easy to implement ... and they've never been implemented. If a client hasn't paid for the time spent making a feature easy to implement, that's money straight off your bottom line.
Edit: I think it's relevant to point out that I'm coming from an agency environment. If you are working on code for yourself, you can probably predict future development with a greater level of certainty, and so it's probably feasible to do this in more cases.
yagni.
http://en.wikipedia.org/wiki/YAGNI (*inserted by friendly editor :-) *)
fix the bugs in that horrendous code you're writing today.
refactor when the time comes.
If you work in a refactoring-friendly language I'd say NO. (In other languages I'm not sure.)
You should make your code as loosely coupled as possible and keep things as simple as possible. Stay specific and don't generalize to unknown use cases.
This will make your code base prepared for the things the future will bring.
(And frankly, most anticipations of the future tend to be sufficiently off-mark not to warrant coding for it today)
Edit: It also depends on what you're doing. Designing APIs for external users is not the same as developing a web app for your company.
Yes -- by doing less.
You won't know what the future requirements for your code will be. The best preparation for the future is not to implement anything that isn't needed right away, and to have good unit-test coverage of everything you do implement.
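A minimal sketch of that "do less" approach, with hypothetical names: implement only today's requirement and pin it down with a unit test, rather than building a speculative extension layer "for the future".

    import unittest

    def apply_discount(price, percent):
        # Today's requirement: a flat percentage discount. Nothing more.
        return round(price * (1 - percent / 100), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_ten_percent_off(self):
            self.assertEqual(apply_discount(200.0, 10), 180.0)

        def test_zero_discount(self):
            self.assertEqual(apply_discount(99.99, 0), 99.99)

    if __name__ == "__main__":
        unittest.main()

If a tiered-discount scheme really arrives later, the test makes the function safe to extend then; until it does, there is nothing extra to maintain.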
Scalability in your code is one thing you should always consider.
The more time you spend today catering for scalable solutions, the less time you will spend in the future when actually expanding.
Predicted or very likely changes - yes, generally it's good to have them in mind.
"Take anything that might ever happen in the universe into account" - no. You don't know what could happen, trying to cover for everything unknown is just over engineering.
Remember that most of your code will be changed/refactored. If you know that you will have to change your code within the next week, prepare it. But don't start making everything exchangeable and modular by default. Don't create a framework just because of a "maybe in the future"; if three lines of code do the job for now, keep the three lines.
But think twice if the system behind it makes refactoring difficult (databases, for example).
One thing I learned in my mere year of coding for the company I work for: everything you do, no matter how perfect you think it is, will come back to haunt you for an update, or will need to be altered because client X suddenly decided they don't like it.
Now I make my code highly customizable, so when that day comes to do some adjustments it will be ready in no time and I can continue with my work.
In a word, yes.
In a few more words, you should always make your code as readable as possible, include comments, and always assume that at some time in the future, you will be called upon, or someone else will be, to modify the code.
If that someone in the future comes across a block of code, uncommented, unformatted, with no indication of what it does or should do, then they will curse you forever :)
No, never. Write good code that is easy to reuse/refactor but preparing for half thought out enhancements is, imo, the brother of premature optimisation; you'll likely end up doing things you don't need or that push you down a certain (possibly non-optimal) design path at a future date. As mfx says, do the minimum required now and unit test everything; it makes extending code a doddle.
In two words: yes, always.
What you describe is part and parcel of being a good developer.
See On Writing Maintainable Code
The obvious answer is YES. But the more important question is how!
Orthogonality, Reversibility, Flexible architecture, Decoupling, and Metaprogramming are some of the keywords that address this problem. Check out chapters 2 and 5 of "The Pragmatic Programmer"
I find it is generally a better strategy to design a "change-accommodating" architecture than to try to anticipate the specific changes that might (or might not) happen. It is a good exercise, though, to ask "What may change in the future?", and then resist the temptation to prematurely implement potentially unnecessary features, but rather keep such possibilities in mind when creating the application architecture.
I find that there is a parallel here to something I heard on test-driven development recently. The person talking about it had observed that while at first it could be a little annoying to always write unit tests and think about how your code can be written to be testable, it turned out that at some point it just begins to come naturally to write test friendly code.
The point is that if you always write with modifiability in mind, you might end up doing it more or less by reflex, thereby making the overhead of writing the extra code very small. If you can reach a state where high-quality, extendable code is what comes naturally to you, I think that would definitely be worth the initial cost. I do still believe, though, that you must do it in moderation and that sometimes it's just not the right thing to do for a given customer.
Two simple principles to follow:
KISS
Always avoid dependencies
That's a good start.
Yes, but only by writing maintainable, easily refactored code.
You should definitely NOT try to guess what might be required in the future. Such efforts are not only pointless and time-wasting for your current targets, but are more often than not counterproductive when any future changes become apparent.
This is really important. It needs to be done well, but takes experience.
If I count up the number of edits (via "diff") after implementing a typical requirements change, numbers like 10-50 are common.
If I make a mistake on any of them, I've inserted a bug.
So personally, I always try to design to keep that number down.
I also try to leave instructions in the code for how to make anticipated changes. If I'm maintaining code, I really appreciate it if the prior author also did this.
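A small, hedged sketch of what such an instruction can look like (the renderer names are invented for the example): the comment tells the next maintainer exactly where the anticipated change goes.

    def render_text(report):
        return str(report)

    def render_csv(report):
        return ",".join(str(v) for v in report)

    # To add a new output format, write a render_<name> function above and
    # register it here. Nothing else in the codebase needs to change.
    RENDERERS = {
        "text": render_text,
        "csv": render_csv,
    }

    def render(report, fmt="text"):
        return RENDERERS[fmt](report)

    print(render([1, 2, 3], "csv"))  # "1,2,3"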
To balance the effort with the benefits is the skill of design.
Not all our code needs to be flexible. Some things will not be changing.
No wasted effort. Finding the right parts to devote our attention to.
Tricky.
Yes, always think of where your code may need to evolve in the future. In the current project I am working on there are thousands of files, and every single one of them has at least one revision to it. Even leaving aside bug fixes, plenty of those revisions make way for additional software features.
I wouldn't change my code to prepare for an unknown future feature.
But I would refactor to get the best solution to the current problem.
You can't design against an unknown (future), and as other people have said, trying to build a flexible design can easily lead to over-engineering, so I think the best pragmatic approach is to think in terms of avoiding things that you know will make it harder to maintain your code in the future. Every time you make a design decision, just ask yourself whether you are making it harder to change things later, and if so, what you are going to do to limit the problem.
Obvious things that will always cause problems:
Scattered configuration information - you need to be able to check and change this stuff easily
Untested code - you need tests to make changes with confidence
Mingling of storage and output concerns with the core logic - you will switch database instances and output formats, if only for testing
Complex architecture - you need to have a clear mental model of the system
Arrangements that require manual intervention or updates to keep them running
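As a hedged sketch of the third point in that list (all names are invented for the example): keeping the core logic free of storage and output concerns means the database and the output format can be swapped, if only for testing, without touching the rule itself.

    import io
    import sys

    def total_with_tax(amounts, tax_rate):
        # Core logic: a pure function, no database access, no printing.
        return round(sum(amounts) * (1 + tax_rate), 2)

    class InMemoryOrderStore:
        # Stand-in storage; a real database adapter would expose the same method.
        def __init__(self, orders):
            self._orders = orders

        def amounts_for(self, customer_id):
            return self._orders.get(customer_id, [])

    def report(store, customer_id, tax_rate, out):
        # Output kept at the edge: `out` can be stdout, a file, a StringIO...
        total = total_with_tax(store.amounts_for(customer_id), tax_rate)
        out.write(f"{customer_id}: {total}\n")

    store = InMemoryOrderStore({"c1": [10.0, 5.5]})
    report(store, "c1", 0.2, sys.stdout)  # production-like output
    buffer = io.StringIO()
    report(store, "c1", 0.2, buffer)      # trivially testable in memory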