How to convince your fellow developer to write short methods? - language-agnostic

Long methods are evil on several grounds:
They're hard to understand
They're hard to change
They're hard to reuse
They're hard to test
They have low cohesion
They may have high coupling
They tend to be overly complex
How to convince your fellow developer to write short methods? (weapons are forbidden =)
question from agiledeveloper

Ask them to write unit tests for the methods.
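One way to make the point land: a short, single-purpose method is trivial to unit test, while a long one is not. A minimal sketch, assuming JUnit 4 and made-up names:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class OrderTotalTest {

    // A short, single-purpose method: one input, one output, no setup needed.
    static int totalCents(int[] lineItemCents) {
        int total = 0;
        for (int cents : lineItemCents) {
            total += cents;
        }
        return total;
    }

    @Test
    public void sumsLineItems() {
        assertEquals(60, totalCents(new int[] {10, 20, 30}));
    }
}

A 300-line processOrder() that also hits the database, formats an invoice and sends email cannot be tested like this without standing up every one of those collaborators first - writing the test is usually what makes that cost visible.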

That depends on your definitions of "short" and "long".
When I hear someone say "write short methods", I immediately react badly because I've encountered too much spaghetti written by people who think the ideal method is two lines long: One line to do the tiniest possible unit of work followed by one line to call another method. (You say long methods are evil because "they're hard to understand"? Try walking into a project where every trivial action generates a call stack 50 methods deep and trying to figure out which of those 50 layers is the one you need to change...)
On the other hand, if, by "short", you mean "self-contained and limited to a single conceptual function", then I'm all for it. But remember that this can't be measured simply by lines of code.
And, as tydok pointed out, you catch more flies with honey than vinegar. Try telling them why your way is good instead of why their way is bad. If you can do this without making any overt comparisons or references to them or their practices (unless they specifically ask how your ideas would relate to something they're doing), it'll work even better.

You made a list of drawbacks. Try to make a list of what you'll gain by using short methods. Concrete examples. Then try to convince him again.

I read this quote from somewhere:
Write your code as if the person who has to maintain it is a violent psycho, who knows where you live.

In my experience the best way to convince a peer in these cases is by example. Just find opportunities to show them your code and discuss with them the benefits of short functions vs. long functions. Eventually they'll realize what's better on their own, without being made to feel like "bad" programmers.

Code Reviews!
I suggest you try and get some code reviews going. This way you could have a little workshop on best practices and whatever formatting your company adheres to. This adds the context that short methods are a way to make code more readable, easier to understand, and compliant with the SRP.

If you've tried to explain good design and people just aren't getting it, or are just refusing to get it, then stop trying. It's not worth the effort. All you'll get is a bad rep for yourself. Some people are just hopeless.
Basically what it comes down to is that some programmers just aren't cut out for development. They can understand code that's already written, but they can't create it on their own.
These folks should be steered toward a support role, but they shouldn't be allowed to work on anything new. Support is a good place to see lots of different code, so maybe after a few years they'll come to see the benefits of good design.
I do like the idea of Code Reviews that someone else suggested. These sloppy programmers should not only have their own code reviewed, they should sit in on reviews of good code as well. That will give them a chance to see what good code is. Possibly they've just never seen good code.

To expand upon rvanider's answer, performing cyclomatic complexity analysis on the code did wonders for drawing attention to the large-method issue; getting people to change was still a work in progress when I left (there was too much momentum towards big methods).
The tipping point was when we started linking the cyclomatic complexity to the bug database. A CC of over 20 that wasn't a factory was guaranteed to have several entries in the bug database and oftentimes those bugs had a "bloodline" (fix to Bug A caused Bug B; fix to Bug B caused Bug C; etc). We actually had three CC's over 100 (max of 275) and those methods accounted for 40% of the cases in our bug database -- "you know, maybe that 5000 line function isn't such a good idea..."
It was more evident in the project I led when I started there. The goal was to keep CC as low as possible (97% were under 10) and the end result was a product that I basically stopped supporting because the 20 bugs I had weren't worth fixing.
Bug-free software isn't going to happen because of short methods (and this may be an argument you'll have to address) but the bug fixes are very quick and are often free of side-effects when you are working with short, concise methods.
Though writing unit tests would probably cure them of long methods, your company probably doesn't use unit tests. Rhetoric only goes so far and rarely works on developers who are stuck in their ways; show them numbers about how those methods are creating more work and buggy software.

Finding the right blend between function length and simplicity can be complex. Try to apply a metric such as Cyclomatic Complexity to demonstrate the difficulty in maintaining the code in its present form. Nothing beats a non-personal measurement that is based on testing factors such as branch and decision counts.
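As a rough, tool-agnostic illustration of how the metric works (hypothetical method; cyclomatic complexity counted here as decision points plus one):

// Three ifs plus one && give four decision points, so CC is roughly 5.
static String shippingBand(int weightKg, boolean express, boolean overseas) {
    if (weightKg <= 0) {
        return "invalid";
    }
    if (express && overseas) {
        return "priority-international";
    }
    if (express) {
        return "priority";
    }
    return "standard";
}
// A method that keeps accumulating branches like these climbs past any sane
// threshold quickly; splitting it spreads the decisions across units that can
// be understood (and tested) one at a time.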

Not sure where this great quote comes from (it's usually attributed to Brian Kernighan), but:
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it"

Force him to read Code Complete by Steve McConnell. Say that every good developer has to read this.

Get him drunk? :-)
The serious point to this answer is the question, "why do I consistently write short functions, and hate myself when I don't?"
The reason is that I have difficulty understanding complex code, be that long functions, things that maintain and manipulate a lot of state, or that sort of thing. I noticed many years ago that there are a fair number of people out there that are significantly better at dealing with this sort of complexity than I am. Ironically enough, it's probably because of that that I tend to be a better programmer than many of them: my own limitations force me to confront and clean up that sort of code.
I'm sorry I can't really provide a real answer here, but perhaps this can provide some insight to help lead us to an answer.

Force them to read the book "Clean Code"; there are many others, but this one is new, good, and an easy read.

Asking them to write unit tests for the complex code is a good avenue to take. This person needs to see for himself the debt that complexity brings when it comes time for maintenance or analysis.
The question I always ask my team is: "It's 11 pm and you have to read this code - can you? Can you understand it under pressure? Can you, over the phone, with no remote login, lead someone to the section where they can fix an error?" If the answer is no, the follow-up is "Can you isolate some of the complexity?"
If you get an argument in return, it's a lost cause. Throw something then.

I would give them 100 lines of code all under 1 method and then another 100 lines of code divided up between several methods and ask them to write down an explanation of what each does.
Time how long it takes them to write both explanations, and then show them the results.
...Make sure to pick code that will take two or three times as long to understand if it's all in one method (Main(), say).
Nothing is better than learning by example.

"Short" and "long" are terms that can be interpreted differently. For one person, short means a two-line method, while someone else will think that methods with no more than 100 lines of code are pretty short.
I think it would be better to state that a single method should not do more than one thing at the same time, meaning it should only have one responsibility.
Maybe you could let your fellow developers read something about how to practice the SOLID principles.

I'd normally show them older projects which have well written methods. I would then step through these methods while explaining the reasons behind why we developed them that way.
Hopefully when looking at the bigger picture, they would understand the reasons behind this.
P.S. This exercise could also double as a mini knowledge transfer on older projects.

Show him how much easier it is to test short methods. Prove that writing short methods will make it easier and faster for him to write the tests for his methods (he is testing these methods, right?)
Bring it up when you are reviewing his code. "This method is rather long, complicated, and seems to be doing four distinct things. Extract method here, here, and here."
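It can help to have a tiny before/after on hand during the review. A hypothetical sketch of that Extract Method step (names invented, not anyone's real code):

// Before: validation, pricing and formatting all in one method.
static String invoiceLine(java.util.List<Integer> priceCents, String customer) {
    if (priceCents.isEmpty()) {
        throw new IllegalArgumentException("empty order");
    }
    int total = 0;
    for (int p : priceCents) {
        total += p;
    }
    return customer + " owes " + (total / 100.0);
}

// After extracting: the top method reads like a summary, and each piece
// can be changed and tested on its own.
static String invoiceLineRefactored(java.util.List<Integer> priceCents, String customer) {
    requireNonEmpty(priceCents);
    return format(customer, totalCents(priceCents));
}

static void requireNonEmpty(java.util.List<Integer> priceCents) {
    if (priceCents.isEmpty()) throw new IllegalArgumentException("empty order");
}

static int totalCents(java.util.List<Integer> priceCents) {
    int total = 0;
    for (int p : priceCents) total += p;
    return total;
}

static String format(String customer, int totalCents) {
    return customer + " owes " + (totalCents / 100.0);
}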

Long methods usually mean that the object model is flawed, i.e. one class has too many responsibilities. Chances are that you don't want just more functions, each one shorter, in the same class, but those responsibilities properly assigned to different classes.
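A hedged sketch of what that reassignment might look like (invented example): rather than more, shorter methods on the same overloaded class, the second responsibility gets its own class.

// Before: a single Report class both gathers rows and renders HTML.
// After: data and rendering live apart, so each can change independently.
class ReportData {
    final java.util.List<String> rows = new java.util.ArrayList<>();

    void add(String row) {
        rows.add(row);
    }
}

class HtmlReportRenderer {
    String render(ReportData data) {
        StringBuilder html = new StringBuilder("<ul>");
        for (String row : data.rows) {
            html.append("<li>").append(row).append("</li>");
        }
        return html.append("</ul>").toString();
    }
}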

No use teaching a pig to sing. It wastes your time and annoys the pig.
Just outshine someone.
When it comes time to fix a bug in the 5000 line routine, then you'll have a ten-line routine and a 4990-line routine. Do this slowly, and nobody notices a sudden change except that things start working better and slowly the big ball of mud evaporates.

You might want to tell them that he might have a really good memory, but you don't. Some people are able to handle much longer methods than others. If you both have to be able to maintain the code, it can only be done if the methods are smaller.
Only do this if he doesn't have a superiority complex.

You could start refactoring every single method they wrote into multiple methods, even when they're currently working on them. Assign extra time to your schedule for "refactoring others' methods to make the code maintainable". Do it like you think it should be done, and - here comes the educational part - when they complain, tell them you wouldn't have to refactor the methods if they had done it right the first time. This way, your boss learns that you have to correct others' laziness, and your co-workers learn that they should do it differently.
That's the theory, at least.


How do you refactor a large messy codebase?

I have a big mess of code. Admittedly, I wrote it myself - a year ago. It's not well commented but it's not very complicated either, so I can understand it -- just not well enough to know where to start as far as refactoring it.
I violated every rule that I have read about over the past year. There are classes with multiple responsibilities, there are indirect accesses (I forget the term - something like foo.bar.doSomething()), and like I said it is not well commented. On top of that, it's the beginnings of a game, so the graphics are coupled with the data - or rather, in the places where I tried to decouple graphics and data, I made the data public so that the graphics could access the data they need...
It's a huge mess! Where do I start? How would you start on something like this?
My current approach is to take variables and switch them to private and then refactor the pieces that break, but that doesn't seem to be enough. Please suggest other strategies for wading through this mess and turning it into something clean so that I can continue where I left off!
Update two days later: I have been drawing out UML-like diagrams of my classes, and catching some of the "Low Hanging Fruit" along the way. I've even found some bits of code that were the beginnings of new features, but as I'm trying to slim everything down, I've been able to delete those bits and make the project feel cleaner. I'm probably going to refactor as much as possible before rigging my test cases (but only the things that are 100% certain not to impact the functionality, of course!), so that I won't have to refactor test cases as I change functionality. (do you think I'm doing it right or would it, in your opinion, be easier for me to suck it up and write the tests first?)
Please vote for the best answer so that I can mark it fairly! Feel free to add your own answer to the bunch as well, there's still room for you! I'll give it another day or so and then probably mark the highest-voted answer as accepted.
Thanks to everyone who has responded so far!
June 25, 2010: I discovered a blog post which directly answers this question from someone who seems to have a pretty good grasp of programming: (or maybe not, if you read his article :) )
To that end, I do four things when I need to refactor code:
Determine what the purpose of the code was
Draw UML and action diagrams of the classes involved
Shop around for the right design patterns
Determine clearer names for the current classes and methods
Pick yourself up a copy of Martin Fowler's Refactoring. It has some good advice on ways to break down your refactoring problem. About 75% of the book is little cookbook-style refactoring steps you can do. It also advocates automated unit tests that you can run after each step to prove your code still works.
As for a place to start, I would sit down and draw out a high-level architecture of your program. You don't have to get fancy with detailed UML models, but some basic UML is not a bad idea. You need a big picture idea of how the major pieces fit together so you can visually see where your decoupling is going to happen. Just a page or two of some basic block diagrams will help with the overwhelming feeling you have right now.
Without some sort of high level spec or design, you just risk getting lost again and ending up with another unmaintainable mess.
If you need to start from scratch, remember that you never truly start from scratch. You have some code and the knowledge you gained from your first time. But sometimes it does help to start with a blank project and pull things in as you go, rather than put out fires in a messy code base. Just remember not to completely throw out the old, use it for its good parts and pull them in as you go.
What was most important for me on different occasions were unit tests: I took a few days to write tests for the old code and then I was free to refactor with confidence. How exactly is a different question, but having the tests made it possible for me to make real, substantial changes to the code.
I'll second everyone's recommendations for Fowler's Refactoring, but in your specific case you may want to look at Michael Feathers' Working Effectively with Legacy Code, which is really perfect for your situation.
Feathers talks about Characterization Tests, which are unit tests not to assert known behaviour of the system but to explore and define the existing (unclear) behaviour -- in the case where you've written your own legacy code, and fixing it yourself, this may not be so important, but if your design is sloppy then it's quite possible there are parts of the code that work by 'magic' and their behaviour isn't clear, even to you -- in that case, characterization tests will help.
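A minimal sketch of a characterization test, assuming JUnit 4 and a made-up "legacy" method: the expected values are copied from one observed run, not from a spec, so the test pins down today's behaviour before you start changing anything.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class LegacyDiscountCharacterizationTest {

    // Stand-in for the tangled legacy code; nobody remembers why the
    // rounding works the way it does.
    static long legacyDiscountCents(long cents, int items) {
        return items > 2 ? (cents * 95) / 100 : cents;
    }

    @Test
    public void pinsDownCurrentBehaviour() {
        // Values observed by running the code once, then pasted in here.
        assertEquals(950, legacyDiscountCents(1000, 3));
        assertEquals(400, legacyDiscountCents(400, 1));
    }
}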
One great part of the book is the discussion about finding (or creating) seams in your codebase -- seams are natural 'fault lines', if you like, where you can break into the existing system to start testing it, and pulling it towards a better design. Hard to explain but well worth a read.
There's a brief paper where Feathers fleshes out some of the concepts from the book, but it really is well worth hunting down the whole thing. It's one of my favourites.
Just an additional refactoring that is more important than you think: Name things correctly!
This goes for any variable name and method name. If the name does not accurately reflect what the thing is used for, then rename it to something more accurate. This might require several iterations. If you cannot find a short and entirely accurate name, then that item does too much and you have an excellent candidate for a piece of code that needs to be split. The names also clearly indicate where the cuts are to be made.
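A small invented illustration of that last point: when the only honest name is a long one, the name itself shows where to cut. A method honestly named loadConfigAndConnect() is really two methods:

static java.util.Properties loadConfig(java.nio.file.Path path) throws java.io.IOException {
    java.util.Properties props = new java.util.Properties();
    try (java.io.InputStream in = java.nio.file.Files.newInputStream(path)) {
        props.load(in);
    }
    return props;
}

static java.net.Socket connect(java.util.Properties props) throws java.io.IOException {
    String host = props.getProperty("host");
    int port = Integer.parseInt(props.getProperty("port"));
    return new java.net.Socket(host, port);
}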
Also, document your stuff. Whenever the answer to WHY? is not clearly conveyed by the answer to HOW? (being the code) you will need to add some documentation. Capturing design decisions is probably the most important task as it is very hard to do in code.
You could always start from "scratch". That doesn't mean scrap it and start from nothing, but try to rethink high-level things from the beginning, since you seem to have learned a lot since the last time you worked on it.
Start from a higher level, and as you build the scaffolding of your new and improved structure, take all the code you can reuse, which will probably be more than you think if you're willing to read through it and make some small changes.
When you're making the changes, be sure to be strict with yourself about following all the good practices you now know, because you will really thank yourself later.
It can be surprisingly refreshing to properly re-make program to do exactly what it did before, only more "cleanly". ;)
As others have mentioned as well, unit-tests are your best friend! They help you ensure that your refactoring works, and if you're starting from "scratch", it's the perfect time to write them.
You're in a much better position than many people facing this problem in that you understand what the code is supposed to do.
Taking variables out of a shared scope, as you're doing, is a great start, in that you're partitioning responsibilities. Ultimately you want each class to express a single responsibility. A few other things you might look at:
Easy targets for refactoring are code that's duplicated in lots of places and long methods.
If you're managing application state through statically initialized singletons or, worse, a global state that everything talks to, consider moving it to a managed initialization system (e.g. a dependency injection framework like Spring or Guice), or at least make sure that the initialization isn't entangled with the rest of the code.
Centralize and standardize how you're accessing outside resources, especially if you've got things like file locations or URLs hardcoded (a small sketch follows this answer).
Buy an IDE that has good refactoring support. I think IntelliJ is the best, but Eclipse has it now, too.
The unit test idea is key as well. You will want to have a suite of large, overall transactions that will give you the overall behavior of the code.
Once you have those, start creating unit tests for classes and smaller packages. Write the tests to demonstrate proper behavior, make your changes, and re-run the tests to demonstrate that you haven't broken everything.
Track code coverage as you go. You'll want to work it up to 70% or better. For the classes you change, you'll want those to be 70% or better before you make your changes.
Build up that safety net over time and you'll be able to refactor with some confidence.
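For the "centralize outside resources" point above, a hypothetical sketch (names invented) of what that could look like in a game project:

// One small class owns every external location, so a move to another
// environment (or a test) only has to change one place.
final class Resources {
    private final java.util.Properties props;

    Resources(java.util.Properties props) {
        this.props = props;
    }

    java.nio.file.Path saveDir() {
        return java.nio.file.Paths.get(props.getProperty("save.dir", "saves"));
    }

    String highScoreUrl() {
        return props.getProperty("highscore.url", "http://localhost/scores");
    }
}
// Callers ask Resources instead of hardcoding paths and URLs in a dozen
// places; a test can pass in a Properties object pointing at a temp directory.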
very slowly :D
No seriously... take it one step at a time. For instance, refactor something only if it affects or helps you write the current bug/feature that you are working on right now and do no more than that. And before you refactor make darn sure that you have some kind of automated test in place that gets run on each build that will actually test what you are writing/refactoring. Even if you don't have unit tests, it is never too late to start adding them for all new and modified code that is being written. Over time, your code base will get better in small increments daily or weekly instead of worse - all without you making monumental heaps of changes.
In my personal opinion and experience, it's not worth it to refactor a (legacy) codebase en masse just for the sake of refactoring. In those cases, it's best to just start from scratch and do it right all over again (and very rarely are you afforded the opportunity to do such a thing). Hence, refactoring incrementally is the way to go.
For Java code, my favorite first step is to run Findbugs and then remove all the dead stores, un-used fields, unreachable catch blocks, unused private methods and likely bugs.
Next I run CPD to look for evidence of cut-copy-paste code.
It isn't unusual to be able to reduce the code base by 5% by doing this. It also saves you from refactoring code that is never used.
I think you should use Eclipse as your IDE, because it has many plugins and is free. Follow the MVC pattern, and write test cases using JUnit. Eclipse has a JUnit plugin and provides refactoring support too, which will reduce some of your work. Always remember that just writing code is not the point; the main thing is to write clean code. Comment it so that anyone reading it, not just you, feels like they're reading an essay.
Refactor the low-hanging fruit. Nibble away at the easy bits, and as you do that, the harder bits will begin to be a little easier. When there aren't any bits left to refactor, you're done.
The refactorings you'll probably find most useful are Rename Method (and even more trivial Renamings like Field, Variable, and Parameter), Extract Method, and Extract Class. For each refactoring you perform, write the necessary unit tests to make the refactoring safe, and run the entire suite of unit tests after each refactoring. It's tempting - and, let's be honest, pretty safe - to rely on the automated refactorings of your IDE, without the tests - but it's good practice and will be good to have the tests into the future as you add functionality to your project.
You might want to look at Martin Fowler's book Refactoring. This is the book that popularized the term and technique (my thought when taking his course: "I've been doing a lot of this all along, I didn't know it had a name"). A quote from the link:
Refactoring is a controlled technique for improving the design of an existing code base. Its essence is applying a series of small behavior-preserving transformations, each of which "too small to be worth doing". However the cumulative effect of each of these transformations is quite significant. By doing them in small steps you reduce the risk of introducing errors. You also avoid having the system broken while you are carrying out the restructuring - which allows you to gradually refactor a system over an extended period of time.
As others have pointed out, unit tests will allow you to refactor with confidence. And start by reducing code duplication. The book will give you lots of other insights.
Here is a catalog of refactorings.
The correct definition of messy code is code that is hard to maintain and change.
For a more measurable definition, you can check your code with code-metrics tools.
That way you keep the code that is already good enough, and you find the problem code very fast.
In my experience, this is a very powerful way to improve the quality of your code (especially if your tool can show you the results on each build, or in real time).
Throw it away, build it new.

How long should it take for someone to be able to type code from memory? [closed]

I understand that this question could be answered with a simple sentence and that it may be viewed as subjective; however, I am a young student who is interested in pursuing a career in programming, and I wondered how long it took some of you to get to the level of experience you're at now.
I ask this because I am currently working on building an application in Java on the Android platform and it bothers me that I am constantly having to look up how to write a certain section of code in my application such as writing to a database, or how the if statement should be structured.
My question really is, how long did it take for you to become experienced enough to actually know exactly how your next line of code was going to look, before you even wrote it?
The speed at which you can quickly recall language syntax, common library functions, and best practice patterns is directly proportional to the amount of time you spend using them.
In other words you will find yourself getting faster the more you do it.
I have been a C++ programmer for the last 20 years. It has taken me that long to get to this expertise level. I'm mostly a Windows programmer, and I keep the msdn website up on one of my monitors all the time.
Doesn't matter how long you've been doing it. You will never know everything from memory. Don't sweat it.
I've been programming for almost half of my life and I still can't always recall simple syntax, let alone entire tracts of code for more complex tasks. If you ask me, that's what reference books and Google are for.
A far more important skill to have is the general knowledge of programming in any language, i.e. recursion, looping, object oriented design, working with APIs, error handling etc... Once you have all that down, you can apply it to any language and platform.
I can tell you that after 25 years there are lines of code where I don't know exactly what they're going to look like.
Want an example? I've been programming in Java since the last century and I can honestly still make a mistake if I were to type hashcode() or hashCode().
Why? Because actually typing such a method name yourself is so last century. Your intention is to override Object's hashCode() method, so you use programming by intention.
You hit Ctrl-O then h and you get a list of the methods starting with an 'h' that you can override. Then you hit enter. As a bonus, the "@Override" gets inserted for you too.
4 keys. 4, to get this:
@Override
public int hashCode() {
    return super.hashCode(); // generated skeleton - replace with the real implementation
}
And honestly, whether hashCode takes an uppercase 'c' or not... I couldn't care less. This is not what a hashcode is about, and my intention is not to know all the inconsistencies languages and API designers came up with. My intention is to override the method that gives back an object's hashcode, and my (modern) IDE allows me to get that skeleton in four keypresses, including hitting enter.
Another example: there are people who do really type this countless times a day:
for (int i = 0; i < ; i++) {
}
or the more tricky:
for (int i = ; i >= 0; i--) {
}
Note in that latter case I can still mess up and type "i++" instead of "i--" (a 'thinko', as it's called).
But I don't care at all, because I type "fi<tab>" (three keys) and I get the first one, or "fir<tab>" (four keys, "for i (in) reverse") and I get the second one. You ain't beating that (especially since I'm a touch typist, so I type these three or four keys fast). In addition to speed, as an added bonus, the autocompletion won't mess up your "i--".
In many cases I don't know exactly which line I'll get: sure, I know it "more or less", and that's exactly the way it should be.
Don't sweat it too much. As others have mentioned it eventually gets easier to write code without looking things up so much as you work with a particular language over time.
HOWEVER
There are a few reasons that even veteran programmers find themselves constantly using reference material:
(1) Unlike days of yore, most projects now require you to use a number of languages to get the job done. For example, a single web-site project may require C#, XML, JavaScript, SQL, HTML, XHTML, RegEx and CSS all in the same project. Switching between some of these languages can really throw you for a loop sometimes, because many of them are just similar enough to be familiar, but just different enough to make you forget the subtle differences in syntax.
(2) Just when you start getting comfortable that you know a language inside and out, the vendor will release a new update that changes everything you knew about it. For example ASP->ASP.NET.
I still look up simple things fairly frequently and I've been at this almost 20 years. The important thing is that you understand the underlying concepts and principles.
It took me 12 years of professional programming to get where I am today. You will always improve when working with some programming language, even if you have been working with it for the past 5 years.
About your question, it depends. I think that you should be comfortable with the syntax after a week, comfortable with the main libraries after a month, and comfortable with the platform after 6 months.
But when you get there, don't stop!
If you code every day with the same language, you'll probably have the common language elements and patterns memorized in a month or two. But there are plenty of things that you'll probably never memorize, simply because you don't use them often enough, and also because modern IDE's can help you so much that you don't really need to remember everything if you utilize all their features (like code snippets, shortcuts, intellisense).
I've been coding for 15 years, and doing C# for the last three, and I still use the MSDN reference material every day. However, as far as the basic building blocks of the language are concerned, I had them memorized in the first month or two.
Also, the more often you code, the better you'll commit it to memory.
There's a false assumption going on here I think...
At my job I end up working all over the stack in different languages and platforms. If I'm away from a project for 6 months I end up having to look at code to get even basic things done. The advantage of experience is reducing the re-ramp up time on productivity though.
So, instead of it taking weeks or months to get back to a point where I can write 80% of the code from memory it takes a few or several days (if that sometimes). I've been programming for about 5 years now. I'm just now getting to the point of being able to visualize small applications in their entirety.
As long as you're working on solving a problem that you haven't already solved (numerous times) you'll probably always have to look up code.
If it can be done, I'm guessing it takes longer than 5 years for most people, unless they work in one language, with one editor, and in one area (e.g. C#, Visual Studio, file system operations). My company isn't big enough to employ someone that specialized.
Don't be downhearted by having to consult documentation all the time man, that's what it's there for. Over time you get used to syntax and things like that but don't sweat it if you can't remember library methods or ways to connect to a DB.
Over time (with experience) you might remember these off the top of your head but in reality there's nothing wrong with taking a quick consultation of the documentation to refresh the memory.
Also remember that technology is ever changing so it's good to keep the mind fresh with new ways of thinking/ways to do things.
The question is not 100% clear. One of the best programmers I know doesn't remember anything and needs to look up printf formatting strings almost daily. On the other hand, if you are having a hard time figuring out how to write that for (int i = 0; i < len; i++) loop after doing this for 6 months -- that doesn't sound right.
The idea that one could memorize every bit of code from even a single language and then type it plainly from memory is pretty far out. The amount of pre-defined functions for, say, PHP or Java alone is immense.
But that being said, it's important to learn the programming structures and know them the best you can. Structures like foreach, if-then-else, switch, etc. are really the things that need to be internalized thoroughly. Also, conceptual things like object orientation (not just "using" objects like mysqli, but understanding things like controlling code, client code, bottom-up and top-down architecture) are the real things that make good programmers great. For myself, I know that I don't have the capacity to learn every defined function that's provided by language writers, so I instead learn whether or not something can be done (and of course still try on occasion to do things that can't be done, lol). If you know that, then it's a matter of Google and books to find the "specific" mechanisms for how.
cheers my friend.
Not an answer per se, but I just wanted to say that as a novice myself, this q&a is one of the most useful things I've read on SO. It seems that yes, experts can probably code the basic stuff from memory, but even they revert to book for complex problems, and for beginners, that's what the book is for.
I feel like I'm just beginning
You should use code snippets if you are using a certain piece of code repeatedly. It doesn't bother me if I cannot remember some piece of code from memory.
For me, it depends heavily on what I'm writing. For example, I doubt that most people ever quite memorize all the parameters to some Windows functions. I may know that I need to call CreateFile on the next line, but don't know all the parameters in order until they're typed in (with help from Intellisense, and sometimes MSDN).
With something that's doing simple computation, I'm a lot more likely to be limited by my typing speed (but I'm a fairly poor typist, so thinking faster than I can type doesn't take much).
That means it's really a question of how much of the time you need to refer to something to type the next line of code -- at first, it's a pretty small percentage, and over time it grows. I doubt it ever gets to 100% for anybody though. I, for one, don't think I'd want it to -- that would indicate I wasn't working on new and different things...
If you use eclipse and java, you may find yourself already there.
Other combinations may be a little slower to a lot slower.
Java has the advantage that it's pretty easy for the environment to build an entire parse tree while you are typing. At any place in your code, typing ctrl-space can give you an entire list of valid options.
Also syntax errors are always underlined.
If you want to go hard core though, I started in C before the days of decent editors, and it took me about a year before I typed in more than a few lines and didn't get a compile error.
I don't know about memorization. Repetition is the mother of all learning, and that applies to all aspects of life. Look at the experienced accountant vs. the novice when filing taxes: who looks up stuff more? But what I did discover recently is that I navigate documentation much quicker and have a sense for going directly to where my question is answered. I've got a sixth sense - I can see the code! Seriously, it all comes with experience. Still, when learning something completely new, there's no shame in looking up how to do certain things. That's what separates humans, right - learning from others. The more you work on something, the better you become.
There is absolutely nothing wrong with having to look up the documentation every time you want to write some code. I got lucky in that once I use a certain function, I don't forget it very easily. However, most of the time, especially when I'm coding in a language that I haven't used in a while:
I start out by writing a flowchart of the algorithm that I want to code - just the pure logic. The most important reason for doing this is so that I don't lose my train of thought and forget the algorithm that solves my problem in the midst of technical problems like syntax and < what library functions exist? >
Then I look at the documentation and check to see if there are simple function calls that will help me accomplish each task in my algorithm.
If such functions do not exist, then I either modify my algorithm to accommodate what functions the language does provide, or write helper functions to fill in the gaps.
I only start coding NOW, which is not too difficult to do anymore, because I already have all the relevant functions written down. So now it's just a question of translating pure logic into syntactically accurate code.
Proper syntax usually does not elude me, but if that does happen (VERY rare), Google provides very nice code snippets if you ask nicely.
Hope this helps
May the Force be with you
I'm programming Java now for about 5 years, and I never have had any trouble remembering syntax. I can <brag>write almost all java.util.*, java.io.*, java.lang.* and javax.swing.* stuff out of my memory</brag>, but does it help me? Not very much. It doesn't make me a better programmer than someone who can't!
I'm using Netbeans, which greatly helps working with libraries. Also, the documentation is right there in the place where you need it. Sometimes it's quite unnecessary, but sometimes you wish the "auto complete" screen would pop up faster!
The best thing as a student is to concentrate on what you are doing, not how fast you are doing it. Looking up things isn't bad; it'll help your so-called "unconscious mind" process what you are really trying to accomplish. Having such breaks, e.g. by looking up a certain piece of documentation or syntax reference, may even make you better at programming (no proof for this, though).
Question is quite subjective.
With the great many IDE's available and the "newbie" tutorials on getting started, it won't take you long before you're off writing your own apps.
That said, unless you have a "thirst" for how stuff works kinda attitude all the time towards everything, you won't go far. In this field, you really have to have a passion for what you do to be great.
... It bothers me that I am continually having to look things up... How long did it take you to get to the level where you are now?
For me, and for most of the students I teach, the answer depends on two variables:
How many lines of code have I written?
Do I use the language or library every day?
(Reading other people's code is very helpful for learning a language and learning how to think in a language, but for me at least, it hasn't helped me become a fluent writer of programs in a language. Only writing code does that.) So my first comment is that time should be measured in lines of code written, not hours or years.
(Ray Bradbury once advised aspiring writers of fiction to write a thousand words a day six days a week, and after they've written a million words they might start to know something about their craft. This is good advice for programmers too.)
As for my own experience, across a half dozen languages that I currently know well or once knew well, it's been pretty consistent for me that
After writing 100 lines I am continually looking things up in the manual and don't really know what I am doing.
After writing 1,000 lines I use the manual occasionally and am starting to learn how to think in the language.
After writing 10,000 lines I am about as good as I'm going to get without making special efforts.
After writing 25,000 lines I probably will not need the manual again.
It's also true that
To learn to write 100 lines I had to read 100 to 1,000 lines that someone else wrote.
For the first 1,000 lines I write it is good for me to read 2,000 lines someone else wrote. For the next 1,000 lines it is good for me to read 1,000 lines someone else wrote.
After I've written 5,000 lines I learn the most by reading code written by world experts or by people who designed the language and understand what is there. I no longer have much to learn by reading just any program.
On the other hand, my experience about when I stop having to refer continually to the documentation is much less consistent.
I find it especially hard when two languages are very similar; I will never stop needing the ksh manual to tell me what is different from sh or bash, and I will never stop needing the Haskell manual to tell me what is different from Standard ML (though the need grows less with each additional 1,000 lines of Haskell that I write). I also find it interesting that while I have written over 35,000 lines of Lua code, and I will never need the manual again for a language question, I have to look up libraries and API functions almost every time I write something longer than 500 lines. (I've written a lot of short Lua programs and a couple of long ones, and I don't use the language every day, although I definitely use it at least several days each month.)
As for the unspoken question, when are you personally going to get better?, take some advice from Watts Humphrey: measure your own performance and track it over time. I think if each day you count the number of times you had to stop and look things up, and graph that against number of lines of code written or edited (which you can get from source-code control), you will be pleasantly surprised by objective evidence of improvement. And I think once you have such evidence, you will be able to focus more on continuing to improve, and not so much on where you are now or where you hope to be in a year.
It's true that after some years of programming you'll be able to remember a lot of things without having to check the "manual". For me this is not an important milestone in your programming life though; the really important moment is when you reach the point where you don't know how to do something... but you're sure it can be done, and you know where and how to research it :-)
You made a very important step by participating on this site. Exchanging ideas and helping each other is an excellent way of learning.
Author Malcolm Gladwell believes that ten thousand hours is a good benchmark for the amount of practice required to become world class in many fields of endeavour. I think that sort of number applies to programming as well. This isn't quite what you asked, though; being able to code competently certainly requires familiarity with your environment (language constructs, system libraries, third-party libraries and perhaps something of the concepts underpinning them), but there are many soft skills involved which are harder to describe and can only really be acquired through practice.
As others have said, being good at programming is not about typing code from memory; it's about recognising patterns, understanding systems, solving problems. It's about choosing the right tools for the job (languages, libraries, algorithms, whatever) and being able to make proper use of them.
In all the jobs I've had, it's about adaptability and flexibility; you might have to learn a new language or pick up somebody else's poorly documented code tomorrow, and a good programmer will be able to take this in their stride.
I've been coding professionally for nearly 10 years now; there's all sorts of code I use semi-regularly which I look up the options for at least some of the time. There are too many commands with too many options in too many languages for me to remember each and every last one in detail and Google is quite good enough at getting the information I want.
That said - there are some bits of routine code which I use all the time but can count on one hand the number of times I've written - the exact syntax for populating a dataset in .Net for example. One of the skills I've most come to value over this time and which saves me the most time is spotting when some code can be quickly and easily moved into utility libraries. If it's fiddly but routine, consider this approach to save yourself hassle and improve your overall code quality.
In the context of this question (Java, C++, JavaScript), the languages are still evolving. I can't say about other languages.
The language standard/specification changes over time
Libraries are added to supplement the language constructs, e.g. Boost, Google Collections, Apache Commons, jQuery
Applications will rarely be bound to a single aspect of a language
Across organizations/projects, coding standards change
A project I worked upon recommended against using primitives
When unfamiliar with the constructs used, I put in pseudo-code flagged with //TODO first .. then go in and find the actual API to use.
IMO, the answer to your question is - there is no definite answer.
As a Java programmer the sheer size of the runtime library makes it impossible to remember everything. Swing is big, there is an XSLT engine (which contains TWO languages), the Concurrent support evolves and grows.
The direct access to the Javadoc API from within Eclipse combined with code completion makes it possible to find the information you cannot remember but you know is there, quickly and efficiently.
I have found the javaalmanac.com (which has been reworked into the more convoluted http://www.exampledepot.com/egs/index.html) invaluable in presenting short, concise and above all correct programs doing just one small thing. Strongly recommended.
Most weeks I program in Java, C#, Python, PHP, JavaScript, SQL, Smarty, Django Templating and occasionally C++ and Objective C. I'm a student so this is partially school work and partially my part-time job. Instead of learning syntax I've learned what concepts to look for.
Seeing patterns and concepts is key, once you know the concepts and what to look for the syntax is secondary.
I find that even when I am just being exposed to a language, I can accomplish a lot just by looking for 'what ought to be there'.
Why should you be able to remember all of this stuff? Personally I embrace the fact that I can't remember all of this stuff and simply try and remember where to find the information that I need. I find that much more useful. This takes the form of blogging, taking notes, keeping large amounts of 'sample' code and reusable libraries and writing about code that I find useful and interesting; oh and lots of books, some of which I hardly ever 'need' and some of which I hardly ever really read but they've been skimmed and I know where they live.
Technologies come and go and there's just no way I could have kept all of the things that I may one day find useful in my head; so I page them out and just keep the index in memory... For example; 9 years ago I was doing some stuff with Java and CORBA and whatever. There's no way that I could drop back into that now without the notes that I wrote up for my website back when I was doing it: http://www.lenholgate.com/blog/2001/02/corba---enumeration.html. Likewise I have code that I use on a daily basis that has been kicking around since 1997 or earlier. I don't remember how to type it in, I have it in a file with tests (if I'm lucky) and docs (if I'm even luckier).
Whilst I realise that most of what I'm talking about is 'big stuff' I also often have to go and look at some of my old code to simply work out how to structure a typedef...
Of course the day to day stuff will come with time and practice; but you need to work out that you have to page some of it out in a form that you can reload later very quickly. Embrace the fact that your memory is never going to be able to hold it all and outsource it :)

Should I prepare my code for future changes? [closed]

Should I prepare my code for possible/predicted future changes so that it's easier to make these changes, even if I don't really know whether these changes will ever be required?
I am likely to get lynched for my opinion on this, but here I go.
While I have had this hammered into me over years of reading idealistic articles and sitting through far too many seminars and lectures categorically stating the nirvana-like benefits of this, I too had similar questions in my mind. This line of thought can lead to massive over-engineering of the code, adding many man-hours to design, development and testing estimates and increasing cost and overheads, when in reality the reuse rarely happens. How many times have you actually reused your code or a library? If it is going to be used in many places, through numerous projects, then yes, you should.
However, most of the time this is not the case. You will often find it more economical (in time and money) to only refactor your code for reuse and configurability when you actually know that you are going to use it again. The rest of the time the real benefits are lost.
This is not, I repeat NOT, an excuse to write sloppy, poorly designed, poorly documented code. This should be a fundamental that is so wholly ingrained in you that you could not break it, but writing a class for reuse is a waste most of the time as it will never get reused.
There are obvious exceptions to this. If you are writing third party libraries then obviously this is not the case and reuse and expansion should be key to your design. Certain other types of code should be obvious for reuse (Logging, Configuration etc.)
I asked a similar question here: Code Reusability: Is it worth it? It might help.
Within reason, and certainly if it's not much effort.
I don't think you can always apply this, as it can make you over-engineer everything and then it takes too long and you don't make much money. Consider how likely the client is to implement something, how much extra it will take to prepare for it now and how much time it will save later.
If it requires a lot of work, but makes sense to save money, you could raise it with the client.
I seem to be in disagreement with a lot of people here, who say always - but I've seen a lot of things where effort has been put into make future features easy to implement ... but they've never been implemented. If a client hasn't paid for the time spent making the feature easy to implement, that's money straight off your bottom line.
Edit: I think it's relevant to point out that I'm coming from an agency environment. If you are working on code for yourself, you can probably predict future development with a greater level of certainty, and so it's probably feasible to do this in more cases.
yagni.
http://en.wikipedia.org/wiki/YAGNI
fix the bugs in that horrendous code you're writing today.
refactor when the time comes.
If you work in a refactoring-friendly language I'd say NO. (In other languages I'm not sure.)
You should make your code as loosely coupled as possible and keep things as simple as possible. Stay specific and don't generalize to unknown use cases.
This will make your code base prepared for the things the future will bring.
(And frankly, most anticipations of the future tend to be sufficiently off-mark not to warrant coding for it today)
Edit: It also depends on what you're doing. Designing APIs for external users is not the same as developing a web app for your company.
Yes -- by doing less.
You won't know the future requirements for your code. The best preparation for the future is not to implement anything that's not needed right away, and to have good unit-test coverage of everything you do implement.
Scalability in your code is one thing you should always consider.
The more time you spend today catering for scalable solutions, the less time you will spend in the future when actually expanding.
Predicted or very likely changes - yes, generally it's good to have them in mind.
"Take anything that might ever happen in the universe into account" - no. You don't know what could happen, trying to cover for everything unknown is just over engineering.
Remember that most of your code will be changed/refactored. If you know that you will have to change your code within the next week, prepare it. But don't start making everything exchangeable and modular by default. Don't create a framework just because of a "maybe in the future" when three lines of code do the job for now.
But think twice if the system behind it makes refactoring difficult (databases, for example).
One thing I learned in my mere year of coding for the company I work for: everything you do, no matter how perfect you think it is, will come back to haunt you for an update, or will need to be altered because client X suddenly decided they don't like it.
Now I make my code highly customizable, so when the day comes to do some adjustments, it will be ready in no time and I can continue with my work.
In a word, yes.
In a few more words, you should always make your code as readable as possible, include comments, and always assume that at some time in the future, you will be called upon, or someone else will be, to modify the code.
If that someone in the future comes across a block of code, uncommented, unformatted, with no indication of what it does or should do, then they will curse you forever :)
No, never. Write good code that is easy to reuse/refactor but preparing for half thought out enhancements is, imo, the brother of premature optimisation; you'll likely end up doing things you don't need or that push you down a certain (possibly non-optimal) design path at a future date. As mfx says, do the minimum required now and unit test everything; it makes extending code a doddle.
In two words: yes, always.
What you describe is part and parcel of being a good developer.
See On Writing Maintainable Code
The obvious answer is YES. But the more important question is how!
Orthogonality, Reversibility, Flexible architecture, Decoupling, and Metaprogramming are some of the keywords that address this problem. Check out chapters 2 and 5 of "The Pragmatic Programmer"
I find it is generally a better strategy to design a "change-accommodating" architecture than trying to specify specific changes that might (or might not) happen. It is a good exercise, though, to ask "What may change in the future?", and then resist the temptation to prematurely implement potentially unnecessary features, but rather have such possibilities in mind when creating the application architecture.
I find that there is a parallel here to something I heard on test-driven development recently. The person talking about it had observed that while at first it could be a little annoying to always write unit tests and think about how your code can be written to be testable, it turned out that at some point it just begins to come naturally to write test friendly code.
The point is that if you always write with modifiability in mind, you might end up doing it more or less by reflex, thereby making the overhead of writing the extra code very small. If you can reach a state where high-quality, extendable code is what comes naturally to you, I think that would definitely be worth the initial cost. I do still believe, though, that you must do it in moderation and that sometimes it's just not the right thing to do for a given customer.
Two simple principles to follow:
KISS
Always avoid dependencies
That's a good start.
Yes, but only by writing maintainable, easily refactored code.
You should definitely NOT try to guess what might be required in the future. Such efforts are not only pointless and time-wasting for your current targets, but are more often than not counterproductive when any future changes become apparent.
This is really important. It needs to be done well, but takes experience.
If I count up the number of edits (via "diff") after implementing a typical requirements change, numbers like 10-50 are common.
If I make a mistake on any of them, I've inserted a bug.
So personally, I always try to design to keep that number down.
I also try to leave instructions in the code for how to make anticipated changes. If I'm maintaining code, I really appreciate it if the prior author also did this.
To balance the effort with the benefits is the skill of design.
Not all our code needs to be flexible. Some things will not be changing.
No wasted effort. Finding the right parts to devote our attention to.
Tricky.
Yes, always think of where your code may need to evolve in the future. In the current project I am working on there are thousands of files, and every single one of them has at least one revision to it. Even leaving aside bug fixes, plenty of those revisions are to make way for additional software features.
I wouldn't change my code to prepare for an unknown future feature.
But I would refactor to get the best solution to the current problem.
You can't design against an unknown (future), and as other people have said, trying to build a flexible design can easily lead to overengineering, so I think that the best pragmatic approach is think in terms of avoiding things that you know will make it harder for you to maintain your code in future. Every time that you make a design decision, just ask yourself whether you are making it harder to change things in future, and if so, what you are going to do to limit the problem.
Obvious things that will always cause problems:
Scattered configuration information - you need to be able to check and change this stuff easily
Untested code - you need tests to make changes with confidence
Mingling of storage and output concerns with the core logic - you will switch database instances and output formats, if only for testing (a small sketch follows this list)
Complex architecture - you need to have a clear mental model of the system
Arrangements that require manual intervention or updates to keep them running
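A small invented sketch of the storage/output point from that list: keep the decision logic separate from the rendering, and both the output format and the storage can change without touching the core.

// Pure logic: no database, no printing - trivially testable.
static int overdueCount(java.util.List<java.time.LocalDate> dueDates, java.time.LocalDate today) {
    int overdue = 0;
    for (java.time.LocalDate due : dueDates) {
        if (due.isBefore(today)) {
            overdue++;
        }
    }
    return overdue;
}

// The output concern lives on its own; swap it for JSON or plain text later.
static String renderHtml(int overdue) {
    return "<p>Overdue items: " + overdue + "</p>";
}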

What's your most controversial programming opinion?

This is definitely subjective, but I'd like to try to avoid it becoming argumentative. I think it could be an interesting question if people treat it appropriately.
The idea for this question came from the comment thread from my answer to the "What are five things you hate about your favorite language?" question. I contended that classes in C# should be sealed by default - I won't put my reasoning in the question, but I might write a fuller explanation as an answer to this question. I was surprised at the heat of the discussion in the comments (25 comments currently).
So, what contentious opinions do you hold? I'd rather avoid the kind of thing which ends up being pretty religious with relatively little basis (e.g. brace placing) but examples might include things like "unit testing isn't actually terribly helpful" or "public fields are okay really". The important thing (to me, anyway) is that you've got reasons behind your opinions.
Please present your opinion and reasoning - I would encourage people to vote for opinions which are well-argued and interesting, whether or not you happen to agree with them.
Programmers who don't code in their spare time for fun will never become as good as those that do.
I think even the smartest and most talented people will never become truly good programmers unless they treat it as more than a job. Meaning that they do little projects on the side, or just mess with lots of different languages and ideas in their spare time.
(Note: I'm not saying good programmers do nothing else than programming, but they do more than program from 9 to 5)
The only "best practice" you should be using all the time is "Use Your Brain".
Too many people jumping on too many bandwagons and trying to force methods, patterns, frameworks etc onto things that don't warrant them. Just because something is new, or because someone respected has an opinion, doesn't mean it fits all :)
EDIT:
Just to clarify - I don't think people should ignore best practices, valued opinions etc. Just that people shouldn't just blindly jump on something without thinking about WHY this "thing" is so great, IS it applicable to what I'm doing, and WHAT benefits/drawbacks does it bring?
"Googling it" is okay!
Yes, I know it offends some people out there that their years of intense memorization and/or glorious stacks of programming books are starting to fall by the wayside to a resource that anyone can access within seconds, but you shouldn't hold that against people that use it.
Too often I hear googling answers to problems being treated as grounds for criticism, and it really makes no sense. First of all, it must be conceded that everyone needs materials to reference. You don't know everything and you will need to look things up. Conceding that, does it really matter where you got the information? Does it matter if you looked it up in a book, looked it up on Google, or heard it from a talking frog that you hallucinated? No. A right answer is a right answer.
What is important is that you understand the material, use it as the means to an end of a successful programming solution, and the client/your employer is happy with the results.
(although if you are getting answers from hallucinatory talking frogs, you should probably get some help all the same)
Most comments in code are in fact a pernicious form of code duplication.
We spend most of our time maintaining code written by others (or ourselves) and poor, incorrect, outdated, misleading comments must be near the top of the list of most annoying artifacts in code.
I think eventually many people just blank them out, especially those flowerbox monstrosities.
Much better to concentrate on making the code readable, refactoring as necessary, and minimising idioms and quirkiness.
On the other hand, many courses teach that comments are very nearly more important than the code itself, leading to the "this next line adds one to invoiceTotal" style of commenting.
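A small, hypothetical Java illustration of the difference (the surcharge constant is invented for the example): the first method carries a comment that merely restates the code, the second puts the intent into a name so no comment is needed.

class InvoiceExample {
    private static final int LATE_PAYMENT_SURCHARGE = 1; // invented for illustration

    int commentedStyle(int invoiceTotal) {
        invoiceTotal = invoiceTotal + 1; // add one to invoiceTotal -- duplicates the code and can rot
        return invoiceTotal;
    }

    int selfDocumentingStyle(int invoiceTotal) {
        // Intent lives in the name, not in a comment.
        return invoiceTotal + LATE_PAYMENT_SURCHARGE;
    }
}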
XML is highly overrated
I think too many jump onto the XML bandwagon before using their brains...
XML for web stuff is great, as it's designed for it. Otherwise I think some problem definition and design thought should precede any decision to use it.
My 5 cents
Not all programmers are created equal
Quite often managers think that DeveloperA == DeveloperB simply because they have same level of experience and so on. In actual fact, the performance of one developer can be 10x or even 100x that of another.
It's politically risky to talk about it, but sometimes I feel like pointing out that, even though several team members may appear to be of equal skill, it's not always the case. I have even seen cases where lead developers were 'beyond hope' and junior devs did all the actual work - I made sure they got the credit, though. :)
I fail to understand why people think that Java is absolutely the best "first" programming language to be taught in universities.
For one, I believe that the first programming language should be one that highlights the need to learn control flow and variables, not objects and syntax.
For another, I believe that people who have not had experience in debugging memory leaks in C / C++ cannot fully appreciate what Java brings to the table.
Also the natural progression should be from "how can I do this" to "how can I find the library which does that" and not the other way round.
If you only know one language, no matter how well you know it, you're not a great programmer.
There seems to be an attitude that says once you're really good at C# or Java or whatever other language you started out learning then that's all you need. I don't believe it- every language I have ever learned has taught me something new about programming that I have been able to bring back into my work with all the others. I think that anyone who restricts themselves to one language will never be as good as they could be.
It also indicates to me a certain lack of inquisitiveness and willingness to experiment that doesn't necessarily tally with the qualities I would expect to find in a really good programmer.
Performance does matter.
Print statements are a valid way to debug code
I believe it is perfectly fine to debug your code by littering it with System.out.println (or whatever print statement works for your language). Often, this can be quicker than stepping through with a debugger, and you can compare printed outputs against other runs of the app.
Just make sure to remove the print statements when you go to production (or better, turn them into logging statements)
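As a rough Java sketch of the "better, turn them into logging statements" option (the class and method names are made up), java.util.logging lets the same diagnostic output stay in the code but be dialled down in production instead of deleted:

import java.util.logging.Level;
import java.util.logging.Logger;

class OrderProcessor {
    private static final Logger LOG = Logger.getLogger(OrderProcessor.class.getName());

    double applyDiscount(double total, double rate) {
        // Quick-and-dirty print debugging while chasing a problem:
        System.out.println("applyDiscount: total=" + total + " rate=" + rate);

        // The same information as a FINE-level log message, which stays silent
        // unless the logger is configured to show it:
        LOG.log(Level.FINE, "applyDiscount: total={0} rate={1}", new Object[]{total, rate});

        return total * (1 - rate);
    }
}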
Your job is to put yourself out of work.
When you're writing software for your employer, any software that you create is to be written in such a way that it can be picked up by any developer and understood with a minimal amount of effort. It is well designed, clearly and consistently written, formatted cleanly, documented where it needs to be, builds daily as expected, checked into the repository, and appropriately versioned.
If you get hit by a bus, laid off, fired, or walk off the job, your employer should be able to replace you on a moment's notice, and the next guy could step into your role, pick up your code and be up and running within a week tops. If he or she can't do that, then you've failed miserably.
Interestingly, I've found that having that goal has made me more valuable to my employers. The more I strive to be disposable, the more valuable I become to them.
1) The Business Apps farce:
I think that the whole "Enterprise" frameworks thing is smoke and mirrors. J2EE, .NET, the majority of the Apache frameworks and most abstractions to manage such things create far more complexity than they solve.
Take any regular Java or .NET ORM, or any supposedly modern MVC framework for either which does "magic" to solve tedious, simple tasks. You end up writing huge amounts of ugly XML boilerplate that is difficult to validate and write quickly. You have massive APIs, half of which exist just to integrate the work of the other APIs, interfaces that are impossible to recycle, and abstract classes that are needed only to overcome the inflexibility of Java and C#. We simply don't need most of that.
How about all the different application servers with their own darned descriptor syntax, the overly complex database and groupware products?
The point of this is not that complexity==bad, it's that unnecessary complexity==bad. I've worked in massive enterprise installations where some of it was necessary, but even there, in most cases a few home-grown scripts and a simple web frontend are all that's needed to solve most use cases.
I'd try to replace all of these enterprisey apps with simple web frameworks, open source DBs, and trivial programming constructs.
2) The n-years-of-experience-required:
Unless you need a consultant or a technician to handle a specific issue related to an application, API or framework, then you don't really need someone with 5 years of experience in that application. What you need is a developer/admin who can read documentation, who has domain knowledge in whatever it is you're doing, and who can learn quickly. If you need to develop in some kind of language, a decent developer will pick it up in less than 2 months. If you need an administrator for X web server, in two days he should have read the man pages and newsgroups and be up to speed. Anything less and that person is not worth what he is paid.
3) The common "computer science" degree curriculum:
The majority of computer science and software engineering degrees are bull. If your first programming language is Java or C#, then you're doing something wrong. If you don't get several courses full of algebra and math, it's wrong. If you don't delve into functional programming, it's incomplete. If you can't apply loop invariants to a trivial for loop, you're not worth your salt as a supposed computer scientist. If you come out with experience in x and y languages and object orientation, it's full of s***. A real computer scientist sees a language in terms of the concepts and syntaxes it uses, and sees programming methodologies as one among many, and has such a good understanding of the underlying philosophies of both that picking new languages, design methods, or specification languages should be trivial.
Getters and Setters are Highly Overused
I've seen millions of people claiming that public fields are evil, so they make them private and provide getters and setters for all of them. I believe this is almost identical to making the fields public, maybe a bit different if you're using threads (but that's generally not the case) or if your accessors have business/presentation logic (something 'strange' at least).
I'm not in favor of public fields, but against making a getter/setter (or Property) for every one of them, and then claiming that doing that is encapsulation or information hiding... ha!
UPDATE:
This answer has raised some controversy in its comments, so I'll try to clarify it a bit (I'll leave the original untouched since that is what many people upvoted).
First of all: anyone who uses public fields deserves jail time
Now, creating private fields and then using the IDE to automatically generate getters and setters for every one of them is nearly as bad as using public fields.
Many people think:
private fields + public accessors == encapsulation
I say that (automatic or not) generation of a getter/setter pair for your fields effectively goes against the so-called encapsulation you are trying to achieve.
Lastly, let me quote Uncle Bob on this topic (taken from chapter 6 of "Clean Code"):
There is a reason that we keep our variables private. We don't want anyone else to depend on them. We want the freedom to change their type or implementation on a whim or an impulse. Why, then, do so many programmers automatically add getters and setters to their objects, exposing their private fields as if they were public?
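To make the distinction concrete, here is a small Java sketch (the classes are invented for illustration). The first version just mirrors its field through accessors, so callers are coupled to the representation almost as tightly as with a public field; the second exposes behaviour and keeps the representation free to change.

// Accessor-mirror style: the internal representation leaks through get/set,
// so changing the field later still breaks every caller.
class WalletWithAccessors {
    private long cents;
    public long getCents() { return cents; }
    public void setCents(long cents) { this.cents = cents; }
}

// Behaviour-oriented style: callers ask for operations, not for the field,
// so the representation can change (say, to BigDecimal) without touching them.
class Wallet {
    private long cents;
    public void deposit(long amountInCents) {
        if (amountInCents < 0) throw new IllegalArgumentException("negative deposit");
        cents += amountInCents;
    }
    public boolean canAfford(long priceInCents) { return cents >= priceInCents; }
}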
UML diagrams are highly overrated
Of course there are useful diagrams e.g. class diagram for the Composite Pattern, but many UML diagrams have absolutely no value.
Opinion: SQL is code. Treat it as such
That is, just like your C#, Java, or other favorite object/procedure language, develop a formatting style that is readable and maintainable.
I hate when I see sloppy free-formatted SQL code. If you scream when you see both styles of curly braces on a page, why don't you scream when you see free-formatted SQL or SQL that obscures or obfuscates the JOIN condition?
Readability is the most important aspect of your code.
Even more so than correctness. If it's readable, it's easy to fix. It's also easy to optimize, easy to change, easy to understand. And hopefully other developers can learn something from it too.
If you're a developer, you should be able to write code
I did quite a bit of interviewing last year, and for my part of the interview I was supposed to test the way people thought, and how they implemented simple-to-moderate algorithms on a white board. I'd initially started out with questions like:
Given that Pi can be estimated using the function 4 * (1 - 1/3 + 1/5 - 1/7 + ...) with more terms giving greater accuracy, write a function that calculates Pi to an accuracy of 5 decimal places.
It's a problem that should make you think, but shouldn't be out of reach to a seasoned developer (it can be answered in about 10 lines of C#). However, many of our (supposedly pre-screened by the agency) candidates couldn't even begin to answer it, or even explain how they might go about answering it. So after a while I started asking simpler questions like:
Given the area of a circle is given by Pi times the radius squared, write a function to calculate the area of a circle.
Amazingly, more than half the candidates couldn't write this function in any language (I can read most popular languages so I let them use any language of their choice, including pseudo-code). We had "C# developers" who could not write this function in C#.
I was surprised by this. I had always thought that developers should be able to write code. It seems that, nowadays, this is a controversial opinion. Certainly it is amongst interview candidates!
Edit:
There's a lot of discussion in the comments about whether the first question is a good or bad one, and whether you should ask questions as complex as this in an interview. I'm not going to delve into this here (that's a whole new question) apart from to say you're largely missing the point of the post.
Yes, I said people couldn't make any headway with this, but the second question is trivial and many people couldn't make any headway with that one either! Anybody who calls themselves a developer should be able to write the answer to the second one in a few seconds without even thinking. And many can't.
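For the curious, here is one way the first question might be answered. This is a Java sketch rather than the C# the poster had in mind, and it reads "accurate to 5 decimal places" as "error below 1e-5", which is only one reasonable interpretation. It relies on the fact that for an alternating series the truncation error is bounded by the first omitted term.

public class PiEstimator {
    static double estimatePi(double tolerance) {
        double sum = 0.0;
        int sign = 1;
        long k = 0;
        // Stop once the next term (which bounds the remaining error) is small enough.
        while (4.0 / (2 * k + 1) > tolerance) {
            sum += sign * 4.0 / (2 * k + 1);
            sign = -sign;
            k++;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.printf("%.5f%n", estimatePi(0.00001)); // roughly 3.14159
    }
}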
The use of hungarian notation should be punished with death.
That should be controversial enough ;)
Design patterns are hurting good design more than they're helping it.
IMO software design, especially good software design is far too varied to be meaningfully captured in patterns, especially in the small number of patterns people can actually remember - and they're far too abstract for people to really remember more than a handful. So they're not helping much.
And on the other hand, far too many people become enamoured with the concept and try to apply patterns everywhere - usually, in the resulting code you can't find the actual design between all the (completely meaningless) Singletons and Abstract Factories.
Less code is better than more!
If the users say "that's it?", and your work remains invisible, it's done right. Glory can be found elsewhere.
PHP sucks ;-)
The proof is in the pudding.
Unit Testing won't help you write good code
The only reason to have Unit tests is to make sure that code that already works doesn't break. Writing tests first, or writing code to the tests is ridiculous. If you write to the tests before the code, you won't even know what the edge cases are. You could have code that passes the tests but still fails in unforeseen circumstances.
And furthermore, good developers will keep coupling low, which will make the addition of new code unlikely to cause problems with existing stuff.
In fact, I'll generalize that even further,
Most "Best Practices" in Software Engineering are there to keep bad programmers from doing too much damage.
They're there to hand-hold bad developers and keep them from making dumbass mistakes. Of course, since most developers are bad, this is a good thing, but good developers should get a pass.
Write small methods. It seems that programmers love to write loooong methods where they do multiple different things.
I think that a method should be created wherever you can name one.
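As an illustration of "a method should be created wherever you can name one", here is a hypothetical Java before/after (all the types are invented): the inline version does more than one nameable thing; the refactored version gives each step its name.

// Minimal invented types so the example compiles.
interface LineItem { double price(); int quantity(); }
interface Customer { boolean isLoyaltyMember(); }
interface Order { java.util.List<LineItem> items(); Customer customer(); }

class OrderService {
    // Before: one method doing several nameable things in a row.
    double checkoutInline(Order order) {
        double total = 0;
        for (LineItem item : order.items()) {
            total += item.price() * item.quantity();
        }
        if (order.customer().isLoyaltyMember()) {
            total *= 0.95;
        }
        return total;
    }

    // After: each nameable step has become a method with that name.
    double checkout(Order order) {
        return applyLoyaltyDiscount(order, subtotal(order));
    }

    private double subtotal(Order order) {
        double total = 0;
        for (LineItem item : order.items()) {
            total += item.price() * item.quantity();
        }
        return total;
    }

    private double applyLoyaltyDiscount(Order order, double total) {
        return order.customer().isLoyaltyMember() ? total * 0.95 : total;
    }
}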
It's ok to write garbage code once in a while
Sometimes a quick and dirty piece of garbage code is all that is needed to fulfill a particular task. Patterns, ORMs, SRP, whatever... Throw up a Console or Web App, write some inline SQL (feels good), and blast out the requirement.
Code == Design
I'm no fan of sophisticated UML diagrams and endless code documentation. In a high level language, your code should be readable and understandable as is. Complex documentation and diagrams aren't really any more user friendly.
Here's an article on the topic of Code as Design.
Software development is just a job
Don't get me wrong, I enjoy software development a lot. I've written a blog for the last few years on the subject. I've spent enough time on here to have >5000 reputation points. And I work in a start-up doing typically 60 hour weeks for much less money than I could get as a contractor because the team is fantastic and the work is interesting.
But in the grand scheme of things, it is just a job.
It ranks in importance below many things such as family, my girlfriend, friends, happiness etc., and below other things I'd rather be doing if I had an unlimited supply of cash such as riding motorbikes, sailing yachts, or snowboarding.
I think sometimes a lot of developers forget that developing is just something that allows us to have the more important things in life (and to have them by doing something we enjoy) rather than being the end goal in itself.
I also think there's nothing wrong with having binaries in source control... if there is a good reason for it. If I have an assembly I don't have the source for, and it might not necessarily be in the same place on each dev's machine, then I will usually stick it in a "binaries" directory and reference it in a project using a relative path.
Quite a lot of people seem to think I should be burned at the stake for even mentioning "source control" and "binary" in the same sentence. I even know of places that have strict rules saying you can't add them.
Every developer should be familiar with the basic architecture of modern computers. This also applies to developers who target a virtual machine (maybe even more so, because they have been told time and time again that they don't need to worry themselves with memory management etc.)
Software Architects/Designers are Overrated
As a developer, I hate the idea of Software Architects. They are basically people that no longer code full time, read magazines and articles, and then tell you how to design software. Only people that actually write software full time for a living should be doing that. I don't care if you were the world's best coder 5 years ago before you became an Architect; your opinion is useless to me.
How's that for controversial?
Edit (to clarify): I think most Software Architects make great Business Analysts (talking with customers, writing requirements, tests, etc), I simply think they have no place in designing software, high level or otherwise.
There is no "one size fits all" approach to development
I'm surprised that this is a controversial opinion, because it seems to me like common sense. However, there are many entries on popular blogs promoting the "one size fits all" approach to development so I think I may actually be in the minority.
Things I've seen being touted as the correct approach for any project - before any information is known about it - are things like the use of Test Driven Development (TDD), Domain Driven Design (DDD), Object-Relational Mapping (ORM), Agile (capital A), Object Orientation (OO), etc. etc. encompassing everything from methodologies to architectures to components. All with nice marketable acronyms, of course.
People even seem to go as far as putting badges on their blogs such as "I'm Test Driven" or similar, as if their strict adherence to a single approach, whatever the details of the project, is actually a good thing.
It isn't.
Choosing the correct methodologies and architectures and components, etc., is something that should be done on a per-project basis, and depends not only on the type of project you're working on and its unique requirements, but also the size and ability of the team you're working with.

How do you stop interim solutions from lasting forever?

Say there are two possible solutions to a problem: the first is quick but hacky; the second is preferable but would take longer to implement. You need to solve the problem fast, so you decide to get the hack in place as quickly as you can, planning to start work on the better solution afterwards. The trouble is, as soon as the problem is alleviated, it plummets down the to-do list. You're still planning to put in the better solution at some point, but it's hard to justify implementing it right now. Suddenly you find you've spent five years using the less-than-perfect solution, cursing it all the while.
Does this sound familiar? I know it's happened more than once where I work. One colleague describes deliberately making a bad GUI so that it wouldn't be accidentally adopted long-term. Do you have a better strategy?
Write a test case which the hack fails.
If you can't write a test which the hack fails, then either there's nothing wrong with the hack after all, or else your test framework is inadequate. If the former, run away quick before you waste your life on needless optimisation. If the latter, seek another approach (either to flagging hacks, or to testing...)
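A minimal JUnit-style sketch of the idea, with entirely invented names: the test encodes the real requirement the hack ignores, so it keeps failing (and keeps the hack visible) until the proper solution goes in.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ReportFormatterTest {

    // Hypothetical interim hack: hard-codes a US date format and ignores the locale.
    static class ReportFormatter {
        private final java.util.Locale locale;
        ReportFormatter(java.util.Locale locale) { this.locale = locale; }
        String formatDate(int year, int month, int day) {
            return month + "/" + day + "/" + year; // the shortcut
        }
    }

    @Test
    public void formatsDatesForGermanLocale() {
        ReportFormatter formatter = new ReportFormatter(java.util.Locale.GERMANY);
        // Fails as long as the hack ignores the locale; passes once the real fix lands.
        assertEquals("24.12.2008", formatter.formatDate(2008, 12, 24));
    }
}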
Strategy 1 (almost never selected): Don't implement the kluge. Don't even let people know it's a possibility. Just do it the right way the first time. Like I said, this one is almost never selected, due to time constraints.
Strategy 2 (dishonest): Lie and Cheat. Tell management that there are bugs in the hack, and they could cause major problems later on. Unfortunately, most of the time, the managers just say to wait until the bugs become a problem, then fix the bugs.
Strategy 2a: Same as strategy 2, except there really are bugs. Same problem, though.
Strategy 3 (and my personal favorite): Design the solution whenever you can, and do it well enough that an intern or code-monkey could do it. It's easier to justify spending the small amount of code-monkey money than to justify your own salary, so it might just get done.
Strategy 4: Wait for a rewrite. Keep waiting. Sooner or later (probably later), someone is going to have to rewrite the thing. Might as well do it right then.
Here is a great related article on technical debt.
Basically, it is an analogy of debt with all the technical decisions you make. There is good debt and bad debt... and you have to pick the debt that is going to achieve the goals you want with the least long term cost.
The worst kind of debt is small little accumulating shortcuts that are analogous to credit card debt... each one doesn't hurt, but pretty soon you are in the poor house.
This is a major issue when doing deadline-driven work. I find that adding very detailed comments about why this way was chosen, and some hints at how it should be coded, helps. This way people looking at the code see it and keep it fresh.
Another option that will work is to add a bug/feature in your tracking system (you do have one, right?) detailing the rework. That way it is visible and may force the issue at some point.
The only time you can ever justify fixing these things (because they're not really broken, just ugly) is when you have another feature or bug fix that touches the same section of code, and you might as well re-write it.
You have to do the math on what a developer's time costs. If software requirements are being met, and the only thing wrong is that the code is embarrassing under the hood, it's not really worth fixing.
Whole companies can go out of business because over-zealous engineers insist on a re-architecture every year or so when they get antsy.
If it's bug-free and meets requirements, it's done. Ship it. Move on.
[Edit]
Of course I'm not advocating that everything be hacked in all the time. You have to design and write code carefully in the normal course of the development process. But when you do end up with hacks that just had to be done quickly, you have to do a cost-benefit analysis on whether or not it's worth it to clean up the code. If over the lifetime of the application you will spend more time coding around a messy hack than you would have fixing it, then of course fix it. But if not, it's way too expensive and risky to re-code a working, bug-free application just because looking at the source makes you ill.
YOU DON'T DO INTERIM SOLUTIONS.
Sometimes I think programmers just need to be told this.
Sorry about that, but seriously--a hacky solution is worthless and even on the first iteration can take longer than doing a portion of the solution correctly.
Please stop leaving me your crap code to maintain. Just ALWAYS CODE IT RIGHT. No matter how long it takes and who yells at you.
When you are sitting there twiddling your thumbs after delivering early while everyone else is debugging their stupid hacks, you'll thank me.
Even if you don't think you are a great programmer, always strive to do the best you can, never take shortcuts--it doesn't cost you ANY time to do it right. I can justify this statement if you don't believe me.
Suddenly you find you've spent five years using the less-than-perfect solution, cursing it all the while.
If you're cursing it, why is it at the bottom of the TODO list?
If it's not affecting you, why are you cursing it?
If it is affecting you, then it's a problem that needs to be fixed NOW.
I make sure that I'm vocal about the priority of the long-term fix, ESPECIALLY after the short-term fix has gone in.
I detail the reasons why it's a hack and not a good long-term solution and use those to get the stakeholders (managers, clients, etc.) to understand why it needs to be fixed. Depending on the case, I may even inject a bit of worst-case-scenario fear in there: "If this safety line snaps, the whole bridge could collapse!"
I take responsibility for coming up with a long-term solution and make sure that it gets deployed.
It is a hard call. I have done hacks personally because sometimes you HAVE to get that product out the door and into the customers' hands. However, the way that I take care of it is to just do it.
Tell the project lead or your boss, or the customer: There are some spots that need to be cleaned up and coded better. I need a week to do it, and it is going to cost less to do it now than it will 6 months from now, when we need to implement an extension onto the subsystem.
Usually problems like this arise from bad communication with management or the customer. If the solution works for the customer then they see no reason to ask for it to be changed. So they need to be told about the tradeoffs you made beforehand so they can plan extra time to fix the problems after you've implemented the quick solution.
How to solve it depends a bit on why it's a bad solution. If your solution is bad because it's hard to change or maintain, then the first time you have to do maintenance and have a bit more time, that is the right time to upgrade to a better solution. In this case it helps if you tell the customer or your boss that you took a shortcut in the first place. That way they know that they can't expect a fast solution next time around. Crippling the UI can be a good way to make sure the customer comes back to get stuff fixed.
If the solution is bad because it's risky or unstable then you really need to talk to the person doing the planning and have some time planned in to fix the problem asap.
Good luck. In my experience this is almost impossible to achieve.
Once you go down the slippery slope of implementing a hack because you are under pressure then you might as well get used to living with it for all time. There is almost NEVER enough time to re-work something that already works, no matter how badly it is implemented internally. What makes you think you will magically have more time "at some later date" to fix the hack?
The only exception I can think of to this rule is if the hack completely prevents you from implementing another piece of functionality that is needed by a customer. Then you have no choice but to do the re-work.
I try to build the hacky solution so that it can be migrated to the long-term way as painlessly as possible. Say you've got a guy who is building a database in SQL Server because that's his strongest DB, but your corporate standard is Oracle. Build the DB with as few non-transferable features (like Bit datatypes) as possible. In this example, it's not hard to avoid bit types, but it makes transitioning later an easier process.
Educate whoever is in charge of making the final decision why the hacky way of doing things is bad in the long-run.
Describe the problem in terms they can relate to.
Include a graph of cost, productivity, and revenue curves.
Teach them about technical debt.
Regularly refactor if you're pushed forward.
Never call it "refactoring" or "going back and cleaning up" in front of non-technical people. Instead, call it "adapting" the system to handle "new features".
Basically, people who don't understand software don't get the concept of revisiting things that already work. The way they look at it, developers are like mechanics who want to keep taking apart and reassembling the entire car every time someone wants to add a feature, which sounds insane to them.
It helps to make analogies to everyday things. Explain to them how when you made the interim solution, you made choices that suited building it quickly, as opposed to being stable, maintainable, etc. It's like choosing to build with wood instead of steel because wood is easier to cut, and thus, you could build the interim solution quicker. The wood, however, simply can not support the foundation of a 20-story building.
We use Java and Hudson for continuous integration. 'Interim solutions' must be commented with:
// TODO: Better solution required.
Every time Hudson runs a build it provides a report of each TODO item, so that we have an up-to-date, highly visible record of any outstanding items that need improvement.
Great question. This bothers me a lot, too - and most of the time I'm the sole person responsible for prioritizing issues in my own projects (yep, small business).
I found out that the problem that needs to be fixed is usually just a subset of the problem. IOW, the customer that needs an urgent fix does not need the whole problem to be solved, just a part of it - smaller or larger. That sometimes enables me to create a workaround that is not a solution to the complete problem but just to the customer's subset, and that allows me to leave the bigger issue open in the issue tracker.
That may of course not apply at all to your work environment :(
This reminds me of the story of "CTool". In the beginning CTool was put forward by one of our devs, I'll call him Don, as one possible way to solve the problem we were having. Being an earnest hard-working type, Don plugged away and delivered a working prototype. You know where I am going with this. Overnight, CTool became a part of the company work flow with an entire department depending on it. By the second or third day, bitter complaints started streaming in about CTool's shortcomings. Users questioned Don's competence, commitment and IQ. Don's protests that this was never supposed to be a production app fell on deaf ears. This went on for years. Finally, someone got around to re-writing the app, well after Don had departed. By this time, so much loathing had become attached to the name CTool that naming it CTool version 2 was out of the question. There was even a formal funeral for CTool, somewhat reminiscent of the copier (or was it a printer?) execution scene in Office Space.
Some might say Don deserved the slings and arrows for not making it go right to fix CTool. My only point is that saying you should never hack out a solution is probably unjustifiable in the Real World. But if you are the one to do it, tread cautiously.
Get it in writing (an email). So when it becomes a problem later management doesn't "forget" that it was supposed to be temporary.
Make it visible to the users. The more visible it is the less likely people are going to forget to go back and do it the right way when the crisis is over.
Negotiate before the temp solution is in place for a project, resources, and time lines to get the real fix in. Work for the real solution should probably begin as soon as the temp solution is finished.
You file a second very descriptive bug against your own "fix" and put a to-do comment right in the affected areas that says, "This area needs a lot of work. See defect #555" (use the right number of course). People who say "don't put in a hack" don't seem to understand the question. Assume you have a system that needs to be up and running now, your non-hack solution is 8 days of work, your hack is 38 minutes of work, the hack is there to buy you time to do the work and not lose money while you're doing it.
Now you still have to get your customer or management agree to schedule the N*100 minutes of time required to do the real fix in addition to the N minutes needed now to fix it. If you must refuse to implement the hack until you get such agreement, then maybe that's what you have to do, but I've worked with some understanding people in that regard.
The real price of introducing a quick-fix is that when someone else needs to introduce a 2nd quick fix, they will introduce it based on your own quick-fix. So, the longer a quick-fix is in place, the more entrenched it will become. Quite often, a hack takes only a little bit longer than doing things right, until you encounter a 2nd hack which builds on the first.
So, obviously it is (or seems to be) sometimes necessary to introduce a quick fix.
One possible solution, assuming your version control supports it, is to introduce a fork from the source whenever you make such a hack. If people are encouraged to avoid coding new features within these special "get it done" forks, then it will eventually be more work to integrate the new features with the fork than it will be to get rid of the hack. More likely, though, the "good" fork will get discarded. And if you are far enough away from release that making such a fork will not be practical (because it is not worth doing the dual integration mentioned above), then you probably shouldn't even be using a hack anyways.
A very idealistic approach.
A more realistic solution is to keep your program segmented into as many orthogonal components as possible and to occasionally do a complete rewrite of some of the components.
A better question is why the hacky solution is bad. If it is bad because it reduces flexibility, ignore it until you need flexibility. If it is bad because it impacts the programs behavior, ignore it and eventually it will become a bug fix and WILL be addressed. If it is bad because it looks ugly, ignore it, as long as the hack is localized.
Some solutions I've seen in the past:
Mark it with a comment HACK in the code (or similar scheme such as XXX)
Have an automatic report run and emailed weekly to those that care which counts how many times the HACK comments appear
Add a new entry in your bug tracking system with the line number and description of the right solution (so the knowledge gained from the research before writing the hack isn't lost)
write a test case that demonstrates how the hack fails (if possible) and check it into the appropriate test suite (i.e. so that it throws errors that someone will eventually want to cleanup)
once the hack is installed and the pressure is off, immediately start on the right solution
This is an excellent question. One thing I've noticed as I get more experience: hacks buy you a very short amount of time, and often cost you a huge amount more. Closely related is the 'quick fix' that solves what you think is the problem -- only to find, when it blows up, that it wasn't the problem at all.
Setting aside the debate about whether you should do it, let's assume that you have to do it. The trick now is to do it in a way that minimizes long-range effects, is easily ripped out later, and makes itself a nuisance so you remember to fix it.
The nuisance part is easy: make it issue a warning every time you execute the kludge.
The ripped-out part can be easy: I like to do this by putting the kludge behind a subroutine name. That makes it easier to update since you compartmentalize the code. When you get your permanent solution, your subroutine can either implement it or be a no-op. Sometimes a subclass can work nicely for this too. Don't let other people depend on whatever your quick fix is, though. It's difficult to recommend any particular technique without seeing the situation.
Minimizing long-range effects should be easy if the rest of the code is nice. Always go through the published interface, and so on.
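A rough Java sketch of "behind a subroutine name" plus the warning (the tax example and its names are invented): callers depend on the method name only, the body can be swapped for the real implementation later, and every execution nags in the log until then.

import java.util.logging.Logger;

class TaxCalculator {
    private static final Logger LOG = Logger.getLogger(TaxCalculator.class.getName());

    double taxFor(double amount) {
        return kludgeFlatRate(amount); // swap in the real implementation here later
    }

    // The kludge lives behind one name so nothing else depends on its details,
    // and it announces itself on every call so it is hard to forget.
    private double kludgeFlatRate(double amount) {
        LOG.warning("KLUDGE: flat 10% tax rate in use - replace with per-region rules");
        return amount * 0.10;
    }
}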
Try to make the cost of the hack clear to the business folks. Then they can make an informed decision either way.
You could intentionally write it in a way that is overly restrictive and single-purposed and would require a rewrite to be modified.
We had to do this once - make a short term demo version that we knew we did not want to keep. The customer wanted it on a winTel box, so we developed the prototype in SGI/XWindows. (We were fluent in both, so it wasn't a problem).
Confession:
I have used '#define private public' in C++ in order to read data from some other code layer. It went in as a hack but works well and fixing it has never become a priority. It is now 3 years later...
One of the main reasons hacks do not get removed is the risk that one introduces new bugs while fixing the hack. (Especially when dealing with pre-TDD code bases.)
My answer is a bit different from the others. My experience is that the following practices help you stay agile and move from hackey first iteration/alpha solutions to beta/production ready:
Test Driven Development
Small units of refactoring
Continuous Integration
Good Configuration management
Agile database techniques/database refactoring
And it should go without saying that you have to have stakeholder support to do any of these correctly. But with these practices in place you have the right tools and processes to quickly change a product in major ways with confidence. Sometimes your ability to change is your ability to manage the risk of the changes, and from the development perspective these tools/techniques give you surer footing.