I understand that this question could be answered with a simple sentence and that it may be viewed as subjective; however, I am a young student interested in pursuing a career in programming, and I wondered how long it took some of you to reach the level of experience you have now.
I ask this because I am currently working on building an application in Java on the Android platform and it bothers me that I am constantly having to look up how to write a certain section of code in my application such as writing to a database, or how the if statement should be structured.
My question really is, how long did it take for you to become experienced enough to actually know exactly how your next line of code was going to look, before you even wrote it?
The speed at which you can quickly recall language syntax, common library functions, and best practice patterns is directly proportional to the amount of time you spend using them.
In other words you will find yourself getting faster the more you do it.
I have been a C++ programmer for the last 20 years. It has taken me that long to get to this expertise level. I'm mostly a Windows programmer, and I keep the msdn website up on one of my monitors all the time.
Doesn't matter how long you've been doing it. You will never know everything from memory. Don't sweat it.
I've been programming for almost half of my life and I still can't always recall simple syntax, let alone entire tracts of code for more complex tasks. If you ask me, that's what reference books and Google are for.
A far more important skill to have is the general knowledge of programming in any language, i.e. recursion, looping, object oriented design, working with APIs, error handling etc... Once you have all that down, you can apply it to any language and platform.
I can tell you that after 25 years there are still lines of code where I don't know exactly how they're going to look.
Want an example? I've been programming in Java since last century, and I can honestly still make a mistake over whether to type hashcode() or hashCode().
Why? Because actually typing such a method name yourself is so last century. Your intention is to override Object's hashCode() method, so you use programming by intention.
You hit Ctrl-O, then h, and you get a list of the methods starting with 'h' that you can override. Then you hit Enter. As a bonus, the "@Override" gets inserted for you too.
4 keys. 4, to get this:
@Override
public int hashCode() {
}
And honestly, whether hashCode takes an uppercase 'c' or not... I couldn't care less. This is not what a hashcode is about, and my intention is not to know all the inconsistencies languages and API designers came up with. My intention is to override the method that gives back an object's hashcode, and my (modern) IDE allows me to get that skeleton in four keypresses, including hitting Enter.
Another example: there are people who do really type this countless times a day:
for (int i = 0; i < ; i++) {
}
or the more tricky:
for (int i = ; i >= 0; i--) {
}
Note that in the latter case I can still mess up and type "i++" instead of "i--" (a 'thinko', as it's called).
But I don't care at all, because I type "fi<tab>" (three keys) and I get the first one, or "fir<tab>" (four keys, "for i (in) reverse") and I get the second one. You ain't beating that (especially since I'm a touch typist, so I type these three or four keys fast). In addition to speed, as an added bonus the autocompletion won't mess up the "i--".
In many cases I don't know exactly what line I'll get: sure, I know it "more or less", and that's exactly the way it should be.
Don't sweat it too much. As others have mentioned it eventually gets easier to write code without looking things up so much as you work with a particular language over time.
HOWEVER
There are a few reasons that even veteran programmers find themselves constantly using reference material:
(1) Unlike days of yore, most projects now require you to use a number of languages to get the job done. For example, a single web-site project may require C#, XML, JavaScript, SQL, HTML, XHTML, RegEx, and CSS all at once. Switching between some of these languages can really throw you for a loop, because many of them are just similar enough to be familiar but just different enough to make you forget the subtle differences in syntax.
(2) Just when you start getting comfortable that you know a language inside and out, the vendor will release a new update that changes everything you knew about it. For example ASP->ASP.NET.
I still look up simple things fairly frequently and I've been at this almost 20 years. The important thing is that you understand the underlying concepts and principles.
It took me 12 years to get where I am at today, which is my experience in professional programming. You will always improve when working with some programming language, even if you have been working with it for the past 5 years.
About your question, it depends. I think that you should be comfortable with the syntax after a week, comfortable with the main libraries after a month, and comfortable with the platform after 6 months.
But when you get there, don't stop!
If you code every day with the same language, you'll probably have the common language elements and patterns memorized in a month or two. But there are plenty of things that you'll probably never memorize, simply because you don't use them often enough, and also because modern IDEs can help you so much that you don't really need to remember everything if you use all their features (like code snippets, shortcuts, and IntelliSense).
I've been coding for 15 years, and doing C# for the last three, and I still use the MSDN reference material every day. However, as far as the basic building blocks of the language are concerned, I had them memorized in the first month or two.
Also, the more often you code, the better you'll commit it to memory.
There's a false assumption going on here I think...
At my job I end up working all over the stack in different languages and platforms. If I'm away from a project for 6 months I end up having to look at code to get even basic things done. The advantage of experience is reducing the re-ramp up time on productivity though.
So, instead of it taking weeks or months to get back to a point where I can write 80% of the code from memory it takes a few or several days (if that sometimes). I've been programming for about 5 years now. I'm just now getting to the point of being able to visualize small applications in their entirety.
As long as you're working on solving a problem that you haven't already solved (numerous times) you'll probably always have to look up code.
If it can be done at all, I'm guessing it takes longer than 5 years for most people, unless they work in one language, with one editor, and in one area (e.g. C#, Visual Studio, file system operations). My company isn't big enough to employ someone that specific.
Don't be downhearted by having to consult documentation all the time man, that's what it's there for. Over time you get used to syntax and things like that but don't sweat it if you can't remember library methods or ways to connect to a DB.
Over time (with experience) you might remember these off the top of your head but in reality there's nothing wrong with taking a quick consultation of the documentation to refresh the memory.
Also remember that technology is ever changing so it's good to keep the mind fresh with new ways of thinking/ways to do things.
The question is not 100% clear. One of the best programmers I know doesn't remember anything and needs to look up printf formatting strings almost daily. On the other hand, if you are having a hard time figuring out how to write that for (int i = 0; i < len; i++) loop after doing this for 6 months -- that doesn't sound right.
The idea that one could memorize every bit of code from even a single language and then type it plainly from memory is pretty far out. The number of pre-defined functions for, say, PHP or Java alone is immense.
That being said, it's important to learn the programming structures and know them the best you can. Structures like foreach, if-then-else, switch, etc. are really the things that need to be internalized thoroughly. Also, conceptual things like object orientation (not just "using" objects like mysqli, but understanding controlling code, client code, bottom-up and top-down architecture) are the real things that make good programmers great. For myself, I know that I don't have the capacity to learn every defined function provided by language writers, so I instead learn whether or not something can be done (and of course still try on occasion to do things that can't be done, lol). If you know that, then it's a matter of Google and books to find the specific mechanisms for how.
Cheers, my friend.
Not an answer per se, but I just wanted to say that as a novice myself, this q&a is one of the most useful things I've read on SO. It seems that yes, experts can probably code the basic stuff from memory, but even they revert to book for complex problems, and for beginners, that's what the book is for.
I feel like I'm just beginning
You should use code snippets if you find yourself writing a certain piece of code repeatedly. It doesn't bother me if I cannot recall some piece of code from memory.
For me, it depends heavily on what I'm writing. For example, I doubt that most people ever quite memorize all the parameters to some Windows functions. I may know that I need to call CreateFile on the next line, but don't know all the parameters in order until they're typed in (with help from Intellisense, and sometimes MSDN).
With something that's doing simple computation, I'm a lot more likely to be limited by my typing speed (but I'm a fairly poor typist, so thinking faster than I can type doesn't take much).
That means it's really a question of how much of the time you need to refer to something to type the next line of code -- at first, it's a pretty small percentage, and over time it grows. I doubt it ever gets to 100% for anybody though. I, for one, don't think I'd want it to -- that would indicate I wasn't working on new and different things...
If you use Eclipse and Java, you may find yourself already there.
Other combinations may be a little slower to a lot slower.
Java has the advantage that it's pretty easy for the environment to build an entire parse tree while you are typing. At any place in your code, typing ctrl-space can give you an entire list of valid options.
Also syntax errors are always underlined.
If you want to go hard-core though: I started in C before the days of decent editors, and it took me about a year before I could type in more than a few lines without getting a compile error.
I don't know about memorization. Repetition is the mother of all learning, and that applies to all aspects of life. Look at the experienced accountant vs. the novice when filing taxes: who looks up stuff more? But what I did discover recently is that I navigate documentation much more quickly and have a sense for going directly to where my question is answered. I've got a sixth sense - I can see the code! Seriously, it all comes with experience. Still, when learning something completely new, there's no shame in looking up how to do certain things. That's what separates humans, right - learning from others. The more you work on something, the better you become.
There is absolutely nothing wrong with having to lookup the documentation every time you want to write some code. I got lucky in that once I use a certain function, I don't forget it very easily. However, most of the time, especially when I'm coding in a language that I haven't used in a while,
I start out by writing a flowchart of the algorithm that I want to code - just the pure logic. The most important reason for doing this is so that I don't lose my train of thought and forget the algorithm that solves my problem in the midst of technical problems like syntax and < what library functions exist? >
Then I look at the documentation and check to see if there are simple function calls that will help me accomplish each task in my algorithm
If such functions do not exist, then I either modify my algorithm to accommodate what the language does provide, or write helper functions to fill in the gaps.
I only start coding NOW, which is not too difficult to do anymore, because I already have all the relevant functions written down. So now it's just a question of translating pure logic into syntactically accurate code.
Proper syntax usually does not elude me, but if that does happen (VERY rare), Google provides very nice code snippets if you ask nicely.
Hope this helps
May the Force be with you
I'm programming Java now for about 5 years, and I never have had any trouble remembering syntax. I can <brag>write almost all java.util.*, java.io.*, java.lang.* and javax.swing.* stuff out of my memory</brag>, but does it help me? Not very much. It doesn't make me a better programmer than someone who can't!
I'm using Netbeans, which greatly helps working with libraries. Also, the documentation, just in the place where you need it. Sometimes, it's quite unnecessary, but sometimes you'd wish the "auto complete" screen would popup faster!
The best thing as a student is to concentrate on what you are doing, not how fast you are doing it. Looking things up isn't bad, as it'll help your so-called "unconscious mind" process what you are really trying to accomplish. Such breaks, e.g. looking up a certain piece of documentation or a syntax reference, may even make you better at programming (no proof for this, though).
Question is quite subjective.
With the great many IDE's available and the "newbie" tutorials on getting started, it won't take you long before you're off writing your own apps.
That said, unless you have a "thirst" for how stuff works kinda attitude all the time towards everything, you won't go far. In this field, you really have to have a passion for what you do to be great.
... It bothers me that I am continually having to look things up... How long did it take you to get to the level where you are now?
For me, and for most of the students I teach, the answer depends on two variables:
How many lines of code have I written?
Do I use the language or library every day?
(Reading other people's code is very helpful for learning a language and learning how to think in a language, but for me at least, it hasn't helped me become a fluent writer of programs in a language. Only writing code does that.) So my first comment is that time should be measured in lines of code written, not hours or years.
(Ray Bradbury once advised aspiring writers of fiction to write a thousand words a day six days a week, and after they've written a million words they might start to know something about their craft. This is good advice for programmers too.)
As for my own experience, across a half dozen languages that I currently know well or once knew well, it's been pretty consistent for me that
After writing 100 lines I am continually looking things up in the manual and don't really know what I am doing.
After writing 1,000 lines I use the manual occasionally and am starting to learn how to think in the language.
After writing 10,000 lines I am about as good as I'm going to get without making special efforts.
After writing 25,000 lines I probably will not need the manual again.
It's also true that
To learn to write 100 lines I had to read 100 to 1,000 lines that someone else wrote.
For the first 1,000 lines I write it is good for me to read 2,000 lines someone else wrote. For the next 1,000 lines it is good for me to read 1,000 lines someone else wrote.
After I've written 5,000 lines I learn the most by reading code written by world experts or by people who designed the language and understand what is there. I no longer have much to learn by reading just any program.
On the other hand, my experience about when I stop having to refer continually to the documentation is much less consistent.
I find it especially hard when two languages are very similar; I will never stop needing the ksh manual to tell me what is different from sh or bash, and I will never stop needing the Haskell manual to tell me what is different from Standard ML (though the need grows less with each additional 1,000 lines of Haskell that I write). I also find it interesting that while I have written over 35,000 lines of Lua code, and I will never need the manual again for a language question, I have to look up libraries and API functions almost every time I write something longer than 500 lines. (I've written a lot of short Lua programs and a couple of long ones, and I don't use the language every day, although I definitely use it at least several days each month.)
As for the unspoken question, when are you personally going to get better?, take some advice from Watts Humphrey: measure your own performance and track it over time. I think if each day you count the number of times you had to stop and look things up, and graph that against number of lines of code written or edited (which you can get from source-code control), you will be pleasantly surprised by objective evidence of improvement. And I think once you have such evidence, you will be able to focus more on continuing to improve, and not so much on where you are now or where you hope to be in a year.
It's true that after some years of programming you'll be able to remember a lot of thing without having to check the "manual". For me this is not an important milestone in your programming life though, the really important moment is when you reach the point where you don't know how to do something... but you're sure that can be done and you know where and how to research about it :-)
You made a very important step by participating on this site. Exchanging ideas and helping each other is an excellent way of learning.
Author Malcolm Gladwell popularized the idea that ten thousand hours is a good benchmark for the amount of practice required to become world class in many fields of endeavour. I think that sort of number applies to programming as well. This isn't quite what you asked, though; being able to code competently certainly requires familiarity with your environment (language constructs, system libraries, third-party libraries and perhaps something of the concepts underpinning them), but there are many soft skills involved which are harder to describe and can only really be acquired through practice.
As others have said, being good at programming is not about typing code from memory; it's about recognising patterns, understanding systems, solving problems. It's about choosing the right tools for the job (languages, libraries, algorithms, whatever) and being able to make proper use of them.
In all the jobs I've had, it's about adaptability and flexibility; you might have to learn a new language or pick up somebody else's poorly documented code tomorrow, and a good programmer will be able to take this in their stride.
I've been coding professionally for nearly 10 years now; there's all sorts of code I use semi-regularly which I look up the options for at least some of the time. There are too many commands with too many options in too many languages for me to remember each and every last one in detail and Google is quite good enough at getting the information I want.
That said - there are some bits of routine code which I use all the time but can count on one hand the number of times I've written - the exact syntax for populating a dataset in .Net for example. One of the skills I've most come to value over this time and which saves me the most time is spotting when some code can be quickly and easily moved into utility libraries. If it's fiddly but routine, consider this approach to save yourself hassle and improve your overall code quality.
In the context of this question ; java, c++, javascript the languages are still evolving. I can't say about other languages.
The language standard/specification changes over time
Libraries are added to supplement the language constructs e.g Boost, Google Collections, Apache Commons, jQuery
Applications will rarely be bound to a single aspect of a language
Across organizations/projects, coding standards change
A project I worked upon recommended against using primitives
When unfamiliar with the constructs used, I put in pseudo-code flagged with //TODO first .. then go in and find the actual API to use.
IMO, the answer to your question is - there is no definite answer.
As a Java programmer the sheer size of the runtime library makes it impossible to remember everything. Swing is big, there is an XSLT engine (which contains TWO languages), the Concurrent support evolves and grows.
The direct access to the Javadoc API from within Eclipse combined with code completion makes it possible to find the information you cannot remember but you know is there, quickly and efficiently.
I have found the javaalmanac.com (which has been reworked into the more convoluted http://www.exampledepot.com/egs/index.html) invaluable in presenting short, concise and above all correct programs doing just one small thing. Strongly recommended.
Most weeks I program in Java, C#, Python, PHP, JavaScript, SQL, Smarty, Django Templating and occasionally C++ and Objective C. I'm a student so this is partially school work and partially my part-time job. Instead of learning syntax I've learned what concepts to look for.
Seeing patterns and concepts is key, once you know the concepts and what to look for the syntax is secondary.
I find that even when I am just being exposed to a language I can accomplish a lot just by looking for 'what ought to be there'
Why should you be able to remember all of this stuff? Personally I embrace the fact that I can't remember all of this stuff and simply try and remember where to find the information that I need. I find that much more useful. This takes the form of blogging, taking notes, keeping large amounts of 'sample' code and reusable libraries and writing about code that I find useful and interesting; oh and lots of books, some of which I hardly ever 'need' and some of which I hardly ever really read but they've been skimmed and I know where they live.
Technologies come and go and there's just no way I could have kept all of the things that I may one day find useful in my head; so I page them out and just keep the index in memory... For example; 9 years ago I was doing some stuff with Java and CORBA and whatever. There's no way that I could drop back into that now without the notes that I wrote up for my website back when I was doing it: http://www.lenholgate.com/blog/2001/02/corba---enumeration.html. Likewise I have code that I use on a daily basis that has been kicking around since 1997 or earlier. I don't remember how to type it in, I have it in a file with tests (if I'm lucky) and docs (if I'm even luckier).
Whilst I realise that most of what I'm talking about is 'big stuff' I also often have to go and look at some of my old code to simply work out how to structure a typedef...
Of course the day to day stuff will come with time and practice; but you need to work out that you have to page some of it out in a form that you can reload later very quickly. Embrace the fact that your memory is never going to be able to hold it all and outsource it :)
As we program, we all develop practices and patterns that we use and rely on. However, over time, as our understanding, maturity, and even technology usage changes, we come to realize that some practices that we once thought were great are not (or no longer apply).
An example of a practice I once used quite often, but have in recent years changed, is the use of the Singleton object pattern.
Through my own experience and long debates with colleagues, I've come to realize that singletons are not always desirable - they can make testing more difficult (by inhibiting techniques like mocking) and can create undesirable coupling between parts of a system. Instead, I now use object factories (typically with a IoC container) that hide the nature and existence of singletons from parts of the system that don't care - or need to know. Instead, they rely on a factory (or service locator) to acquire access to such objects.
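To make that concrete, here is a minimal hypothetical sketch of the change (the class names are invented for illustration and not part of any real system):

interface Configuration {
    String getTemplate();
}

// Before: a classic singleton that collaborators reach for directly.
class ConfigurationManager implements Configuration {
    private static final ConfigurationManager INSTANCE = new ConfigurationManager();
    private ConfigurationManager() {}
    public static ConfigurationManager getInstance() { return INSTANCE; }
    public String getTemplate() { return "<html>...</html>"; }
}

class ReportServiceBefore {
    String render() {
        // Hidden dependency: a test cannot swap in a fake configuration.
        return ConfigurationManager.getInstance().getTemplate();
    }
}

// After: the dependency is supplied by a factory or IoC container, so the
// caller neither knows nor cares whether it happens to be a singleton.
class ReportServiceAfter {
    private final Configuration config;

    ReportServiceAfter(Configuration config) {
        this.config = config;
    }

    String render() {
        return config.getTemplate();
    }
}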
My questions to the community, in the spirit of self-improvement, are:
What programming patterns or practices have you reconsidered recently, and now try to avoid?
What did you decide to replace them with?
//Coming out of university, we were taught to ensure we always had an abundance
//of commenting around our code. But applying that to the real world made it
//clear that over-commenting not only has the potential to confuse/complicate
//things but can make the code hard to follow. Now I spend more time on
//improving the simplicity and readability of the code and inserting fewer yet
//relevant comments, instead of spending that time writing overly-descriptive
//commentaries all throughout the code.
Single return points.
I once preferred a single return point for each method, because with that I could ensure that any cleanup needed by the routine was not overlooked.
Since then, I've moved to much smaller routines - so the likelihood of overlooking cleanup is reduced and in fact the need for cleanup is reduced - and find that early returns reduce the apparent complexity (the nesting level) of the code. Artifacts of the single return point - keeping "result" variables around, keeping flag variables, conditional clauses for not-already-done situations - make the code appear much more complex than it actually is, make it harder to read and maintain. Early exits, and smaller methods, are the way to go.
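A small, hypothetical illustration of the difference:

class ReturnStyles {
    // Single return point: a result variable and extra nesting exist only to
    // funnel everything to one exit.
    static String describeSingleReturn(int age) {
        String result;
        if (age < 0) {
            result = "invalid";
        } else if (age < 18) {
            result = "minor";
        } else {
            result = "adult";
        }
        return result;
    }

    // Early returns: each guard clause disposes of its case immediately and
    // the nesting disappears.
    static String describeEarlyReturn(int age) {
        if (age < 0) return "invalid";
        if (age < 18) return "minor";
        return "adult";
    }
}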
Trying to code things perfectly on the first try.
Trying to create perfect OO model before coding.
Designing everything for flexibility and future improvements.
In one word overengineering.
Hungarian notation (both Forms and Systems).
I used to prefix everything. strSomeString or txtFoo.
Now I use someString and textBoxFoo. It's far more readable and easier for someone new to come along and pick up. As an added bonus, it's trivial to keep it consistent -- camelCase the control and append a useful/descriptive name. Forms Hungarian has the drawback of not always being consistent, and Systems Hungarian doesn't really gain you much. Chunking all your variables together isn't really that useful -- especially with modern IDEs.
The "perfect" architecture
I came up with THE architecture a couple of years ago. Pushed myself technically as far as I could so there were 100% loosely coupled layers, extensive use of delegates, and lightweight objects. It was technical heaven.
And it was crap. The technical purity of the architecture just slowed my dev team down aiming for perfection over results and I almost achieved complete failure.
We now have much simpler less technically perfect architecture and our delivery rate has skyrocketed.
The use of caffeine. It once kept me awake and in a glorious programming mood, where the code flew from my fingers with feverish fluidity. Now it does nothing, and if I don't have it I get a headache.
Commenting out code. I used to think that code was precious and that you can't just delete those beautiful gems that you crafted. I now delete any commented-out code I come across unless there's a TODO or NOTE attached because it's too perilous to leave it in. To wit, I've come across old classes with huge commented-out portions and it really confused me why they were there: were they recently commented out? is this a dev environment change? why does it do this unrelated block?
Seriously consider not commenting out code and just deleting it instead. If you need it, it's still in source control. YAGNI though.
The overuse / abuse of #region directives. It's just a little thing, but in C#, I previously would use #region directives all over the place, to organize my classes. For example, I'd group all class properties together in a region.
Now I look back at old code and mostly just get annoyed by them. I don't think it really makes things clearer most of the time, and sometimes they just plain slow you down.
So I have now changed my mind and feel that well laid out classes are mostly cleaner without region directives.
Waterfall development in general, and in specific, the practice of writing complete and comprehensive functional and design specifications that are somehow expected to be canonical and then expecting an implementation of those to be correct and acceptable. I've seen it replaced with Scrum, and good riddance to it, I say. The simple fact is that the changing nature of customer needs and desires makes any fixed specification effectively useless; the only way to really properly approach the problem is with an iterative approach. Not that Scrum is a silver bullet, of course; I've seen it misused and abused many, many times. But it beats waterfall.
Never crashing.
It seems like such a good idea, doesn't it? Users don't like programs that crash, so let's write programs that don't crash, and users should like the program, right? That's how I started out.
Nowadays, I'm more inclined to think that if it doesn't work, it shouldn't pretend it's working. Fail as soon as you can, with a good error message. If you don't, your program is going to crash even harder just a few instructions later, but with some nondescript null-pointer error that'll take you an hour to debug.
My favorite "don't crash" pattern is this:
public User readUserFromDb(int id) {
    User u = null;
    try {
        Statement stmt = connection.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT * FROM user WHERE id = " + id);
        if (rs.next()) {
            u = new User();
            u.setFirstName(rs.getString("fname"));
            u.setSurname(rs.getString("sname"));
            // etc
        }
    } catch (Exception e) {
        log.info(e.toString());
    }
    if (u == null) {
        u = new User();
        u.setFirstName("error communicating with database");
        u.setSurname("error communicating with database");
        // etc
    }
    u.setId(id);
    return u;
}
Now, instead of asking your users to copy/paste the error message and send it to you, you'll have to dive into the logs trying to find the log entry. (And since they entered an invalid user ID, there'll be no log entry.)
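By contrast, a fail-fast version of the same method (just a sketch, reusing the User class, the connection field and the java.sql types from the example above) surfaces the problem where it happens and gives the caller something meaningful to report:

public User readUserFromDb(int id) throws SQLException {
    // Let real database failures propagate (or wrap them in a domain exception)
    // instead of handing back a fake User that only looks valid.
    try (PreparedStatement stmt =
             connection.prepareStatement("SELECT * FROM user WHERE id = ?")) {
        stmt.setInt(1, id);
        try (ResultSet rs = stmt.executeQuery()) {
            if (!rs.next()) {
                throw new IllegalArgumentException("No user with id " + id);
            }
            User u = new User();
            u.setId(id);
            u.setFirstName(rs.getString("fname"));
            u.setSurname(rs.getString("sname"));
            // etc
            return u;
        }
    }
}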
I thought it made sense to apply design patterns whenever I recognised them.
Little did I know that I was actually copying styles from foreign programming languages, while the language I was working with allowed for far more elegant or easier solutions.
Using multiple (very) different languages opened my eyes and made me realise that I don't have to mis-apply other people's solutions to problems that aren't mine. Now I shudder when I see the factory pattern applied in a language like Ruby.
Obsessive testing. I used to be a rabid proponent of test-first development. For some projects it makes a lot of sense, but I've come to realize that it is not only unfeasible, but rather detrimental to many projects to slavishly adhere to a doctrine of writing unit tests for every single piece of functionality.
Really, slavishly adhering to anything can be detrimental.
This is a small thing, but: Caring about where the braces go (on the same line or next line?), suggested maximum line lengths of code, naming conventions for variables, and other elements of style. I've found that everyone seems to care more about this than I do, so I just go with the flow of whoever I'm working with nowadays.
Edit: The exception to this being, of course, when I'm the one who cares the most (or is the one in a position to set the style for a group). In that case, I do what I want!
(Note that this is not the same as having no consistent style. I think a consistent style in a codebase is very important for readability.)
Perhaps the most important "programming practice" I have since changed my mind about, is the idea that my code is better than everyone else's. This is common for programmers (especially newbies).
Utility libraries. I used to carry around an assembly with a variety of helper methods and classes with the theory that I could use them somewhere else someday.
In reality, I just created a huge namespace with a lot of poorly organized bits of functionality.
Now, I just leave them in the project I created them in. In all probability I'm not going to need it, and if I do, I can always refactor them into something reusable later. Sometimes I will flag them with a //TODO for possible extraction into a common assembly.
Designing more than I coded.
After a while, it turns into analysis paralysis.
The use of a DataSet to perform business logic. This binds the code too tightly to the database; also, the DataSet is usually created from SQL, which makes things even more fragile. If the SQL or the database changes, it tends to trickle through to everything the DataSet touches.
Performing any business logic inside an object constructor. Combined with inheritance and the ability to create overloaded constructors, this tends to make maintenance difficult.
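To illustrate the constructor point, a hypothetical sketch: keep the constructor down to simple assignment, and put the rule somewhere that is easy to test and change.

class Invoice {
    private final double subtotal;
    private final double total;

    // The constructor only stores state: no database calls, no calculations
    // that overloaded constructors or subclasses would have to fight against.
    Invoice(double subtotal, double total) {
        this.subtotal = subtotal;
        this.total = total;
    }

    // The business rule lives in a factory method (or a separate service),
    // where it can be tested and changed without constructor gymnastics.
    static Invoice withTax(double subtotal, double taxRate) {
        return new Invoice(subtotal, subtotal * (1 + taxRate));
    }

    double getSubtotal() { return subtotal; }
    double getTotal() { return total; }
}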
Abbreviating variable/method/table/... Names
I used to do this all of the time, even when working in languages with no enforced limits on lengths of names (well they were probably 255 or something). One of the side-effects were a lot of comments littered throughout the code explaining the (non-standard) abbreviations. And of course, if the names were changed for any reason...
Now I much prefer to call things what they really are, with good descriptive names, using only standard abbreviations. There's no need for explanatory comments, and the code is far more readable and understandable.
Wrapping existing Data Access components, like the Enterprise Library, with a custom layer of helper methods.
It doesn't make anybody's life easier
It's more code that can have bugs in it
A lot of people know how to use the EntLib data access components. No one but the local team knows how to use the in-house data access solution
I first heard about object-oriented programming while reading about Smalltalk in 1984, but I didn't have access to an o-o language until I used the cfront C++ compiler in 1992. I finally got to use Smalltalk in 1995. I had eagerly anticipated o-o technology, and bought into the idea that it would save software development.
Now, I just see o-o as one technique that has some advantages, but it's just one tool in the toolbox. I do most of my work in Python, and I often write standalone functions that are not class members, and I often collect groups of data in tuples or lists where in the past I would have created a class. I still create classes when the data structure is complicated, or I need behavior associated with the data, but I tend to resist it.
I'm actually interested in doing some work in Clojure when I get the time, which doesn't provide o-o facilities, although it can use Java objects if I understand correctly. I'm not ready to say anything like o-o is dead, but personally I'm not the fan I used to be.
In C#, using _notation for private members. I now think it's ugly.
I then changed to this.notation for private members, but found I was inconsistent in using it, so I dropped that too.
I stopped going by the university recommended method of design before implementation. Working in a chaotic and complex system has forced me to change attitude.
Of course I still do code research, especially when I'm about to touch code I've never touched before, but normally I try to focus on as small implementations as possible to get something going first. This is the primary goal. Then gradually refine the logic and let the design just appear by itself. Programming is an iterative process and works very well with an agile approach and with lots of refactoring.
The code will not look at all what you first thought it would look like. Happens every time :)
I used to be big into design-by-contract. This meant putting a lot of error checking at the beginning of all my functions. Contracts are still important, from the perspective of separation of concerns, but rather than try to enforce what my code shouldn't do, I try to use unit tests to verify what it does do.
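A rough JUnit 4 sketch of that shift (the names are invented): the contract still exists, but it is pinned down once by tests rather than re-checked defensively all over the code.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class DivisionTest {

    // The contract: the denominator must not be zero. One check documents it;
    // the tests below verify what the code actually does.
    static int divide(int numerator, int denominator) {
        if (denominator == 0) {
            throw new IllegalArgumentException("denominator must not be zero");
        }
        return numerator / denominator;
    }

    @Test
    public void dividesWholeNumbers() {
        assertEquals(3, divide(9, 3));
    }

    // The test verifies the contract instead of every caller repeating the check.
    @Test(expected = IllegalArgumentException.class)
    public void rejectsZeroDenominator() {
        divide(9, 0);
    }
}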
I used to make a lot of my methods and classes static because it was more concise. When I started writing tests, that practice changed very quickly.
Checked Exceptions
An amazing idea on paper - defines the contract clearly, no room for mistake or forgetting to check for some exception condition. I was sold when I first heard about it.
Of course, it turned out to be such a mess in practice that we now have libraries like Spring JDBC, which lists hiding legacy checked exceptions among its main features.
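The usual escape hatch, and roughly what such libraries do wholesale, is to wrap the checked exception in an unchecked one; a minimal hypothetical sketch:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class Db {
    // Callers no longer have to catch or re-declare SQLException at every
    // layer; the failure still propagates, just as an unchecked exception
    // that carries the original cause.
    public static Connection open(String url) {
        try {
            return DriverManager.getConnection(url);
        } catch (SQLException e) {
            throw new RuntimeException("Could not open connection to " + url, e);
        }
    }
}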
That anything worthwhile was only coded in one particular language. In my case I believed that C was the best language ever and I never had any reason to code anything in any other language... ever.
I have since come to appreciate many different languages and the benefits/functionality they offer. If I want to code something small - quickly - I would use Python. If I want to work on a large project I would code in C++ or C#. If I want to develop a brain tumour I would code in Perl.
When I needed to do some refactoring, I thought it was faster and cleaner to start straightaway and implement the new design, fixing up the connections until they work. Then I realized it's better to do a series of small refactorings to slowly but reliably progress towards the new design.
Perhaps the biggest thing that has changed in my coding practices, as well as in others, is the acceptance of outside classes and libraries downloaded from the internet as the basis for behaviors and functionality in applications. In school at the time I attended college we were encouraged to figure out how to make things better via our own code and rely upon the language to solve our problems. With the advances in all aspects of user interface and service/data consumption this is no longer a realistic notion.
There are certain things which will never change in a language, and having a library that wraps this code in a simpler transaction and in fewer lines of code that I have to write is a blessing. Connecting to a database will always be the same. Selecting an element within the DOM will not change. Sending an email via a server-side script will never change. Having to write this time and again wastes time that I could be using to improve my core logic in the application.
Initializing all class members.
I used to explicitly initialize every class member with something, usually NULL. I have come to realize that this:
normally means that every variable is initialized twice before ever being read
is silly, because most languages automatically initialize member variables to NULL anyway
actually incurs a slight performance hit in most languages
can bloat code on larger projects
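In Java, for example, instance fields already default to null, 0 or false, so the explicit assignments below are pure noise (a hypothetical class for illustration):

class Customer {
    // Redundant: each field is written twice before it is ever read, once by
    // the JVM's default initialization and once here.
    private String name = null;
    private int orderCount = 0;
    private boolean active = false;

    // Equivalent and quieter:
    // private String name;
    // private int orderCount;
    // private boolean active;
}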
Like you, I also have embraced IoC patterns in reducing coupling between various components of my apps. It makes maintenance and parts-swapping much simpler, as long as I can keep each component as independent as possible. I'm also utilizing more object-relational frameworks such as NHibernate to simplify database management chores.
In a nutshell, I'm using "mini" frameworks to aid in building software more quickly and efficiently. These mini-frameworks save lots of time, and if done right can make an application super simple to maintain down the road. Plug 'n Play for the win!
What are your criteria or things that you consider when you are an early adopter of a programming language or technology?
Two of the most common explanations I've heard are:
It should be "fun" (what I've heard from technical people).
It should be capable of solving our problem (what I've heard from business people).
So what's yours?
I've made this change several times over my career spanning various companies, moving from C to Java to Ruby to Haskell for the majority of my software development.
In all cases, I've been looking for more expressive power and better abstractions. This is always driven by business needs: how can I develop better software more cheaply? To me, the challenge of this problem is "fun," so fun rather automatically comes along with it. Justifying the business value to managers can be difficult, however; they often don't have the technical skills to understand why one programming language can be better than another, and are worried about moving to technology that they understand even less than the current one. (I solved this problem by taking over the manager's job as well: I started a company.)
It's hard to say what exactly to look for in a new language. You obviously don't have a detailed grasp of the language, or you would already be using it or know why you're not. Vast experience will bring an instinct that will make certain languages "smell" better than others, but—and this can make it especially hard to convince others to look at a new language—you won't know precisely what features give you big advantages. An example would be pattern matching: it's a feature found in relatively few languages, and though I knew about it, I had no idea when I started in with Haskell that this would be a key contributor to productivity improvement.
While it's negative ("avoid this") advice rather than positive ("do this") advice, one fairly easy rule is to avoid spending a lot of time on languages very similar to ones you already know well. If you already know Ruby, learning Python is not likely to teach you much in the way of big new things; C# and Java would be another example. (Although C# is starting to get a few interesting features that Java doesn't have.)
Looking at what the academic community is doing with a language may be helpful. If it's a fertile area of research for academics, there's almost certainly going to be interesting stuff in there, whereas if it's not it's quite possible that there's nothing interesting there to learn.
My criteria is simple:
wow factor
simple
gets things done
quick
I want it to do something easily that is hard to do with the tools I'm used to. So I moved to Python, and then Ruby, over Java because I could build a program incrementally, add functions easily, and express programs more concisely (esp. with Ruby, where I can pass blocks/Procs and have clean closures, plus the ability to define nice DSLs making use of blocks and yield.)
I took up Erlang because it expresses Actor-based concurrency well; this makes for easier network programs.
I took up Haskell because it fit with a number of formal methods tools I wanted to experiment with.
Open source.
Active developer community
Active user community, with a friendly mailing list or forum.
Some examples and documentation, preferably a tutorial
Desirable features (solves problems).
If it's for my personal fun, I need very little excuse, as I do love learning new things, and the best way of learning is by doing. If it's for an employer, customer, or client, the bar is MUCH higher -- I must be convinced that the "new stuff", even after accounting for ramp-up effects and the costs that come with being at the bleeding edge, will do a substantially better job at delivering value to the client (or customer or employer). It's a matter of professional attitude: my job's to deliver top value to the client -- having fun while so doing is auxiliary and secondary. So, in practice, "new" technologies (including languages) that I introduce in a professional setting will generally be ones I've previously grown comfortable and confident with in my own spare time.
Someone has once said something to the effect of:
"If learning a programming language doesn't change the way you think about programming, it's not worth learning."
That's one metric (out of many) to judge the value of learning new languages (or other technology) by. Using this, one might suggest learning the following languages:
C, because it makes you understand the Von Neumann architecture better than any other language (and it's sort-of random-access Turing Machine like, sorta'...).
LaTeX (as a programming language, not only as a typesetting system) because it makes you learn about string rewriting systems as a model of computation. Here, sed is similar; learn both, because they're also both useful tools :-)
Haskell, because it teaches you about functional programming, lambda calculus (yet another model of computation), lazy evaluation, type inference, algebraic datatypes (done with ease), decidability of type systems (i.e. learn to fear C++)
Scheme `(or (another) ,Lisp) for its macro system, and dynamic typing, and functional programming done somewhat differently.
SmallTalk, to learn Object-orientation (so I hear)
Java, to learn what earning money feels like :D
Forth, because wisdom bestowed forth learned implies.
... that doesn't explain why I learn python or shell scripting, though. I think you should take enlightenment with a grain of salt and a shovelful of pragmatism :)
Should be capable of solving the problem
Should be more adequate to solve the problem than other alternatives
Should be fun
Should have prompt support, either from a community or the company promoting it
A language should be:
Easy to use, to learn and to code in.
Consistent. Many languages have 50 legacy ways of doing things; this increases the learning curve and becomes quite annoying. C#, for me, is one of those languages.
It should provide the most useful solution with the least amount of code. On the other hand sometimes you do need a bit of expressiveness to make sure you're not making a huge mistake.
The right tool for the right job and maybe the right tool for any job
My criteria that the language should have:
1. New ideas - If the language is just another Scheme variant and I already know one, then I don't feel the need to learn this new one. I will learn it if I think I will learn something new.
2. Similar to another language, but better. For example, while Java and C++ have many of the same ideas, Java's automatic garbage collection makes it a better choice in many cases.
Gets the most done with the least amount of effort
Extremely interoperable with different protocols, out of the box
Fast
Has lots of libraries built in for stuff 99% of web developers do (PDF's, emailing, reporting, etc..)
It depends on why I'm learning the new language. If I'm learning it for fun, then it has to meet these criteria:
Is it well supported on my platform? Something that runs only on Linux isn't interesting to a Windows programmer.
Will I learn something new? In other words, does it come up with a new way of doing things?
Does it look fun? I don't want to learn Ada even if it has new ways of doing things.
If I'm learning it for work, the criteria are different:
How mature is it? Has it been proven to work in the real world?
How big is the community?
Will it make my job easier? I.e., is it worth the time investment versus just doing the task with a language I already know?
I have heard many developers refer to code as "legacy". Most of the time it is code that has been written by someone who no longer works on the project. What is it that makes code, legacy code?
Update in response to:
"Something handed down from an ancestor or a predecessor or from the past" http://www.thefreedictionary.com/legacy. Clearly you wanted to know something else. Could you clarify or expand your question? S.Lott
I am looking for the symptoms of legacy code that make it unusable or a nightmare to work with. When is it better to throw it away? It is my opinion that code should be thrown away more often and that reinventing the wheel is a valuable part of development. The academic ideal of not reinventing the wheel is a nice one, but it is not very practical.
On the other hand there is obviously legacy code worth keeping.
By using hardware, software, APIs, languages, technologies or features that are either no longer supported or have been superseded, typically combined with little to no possibility of ever replacing that code, instead using it until it or the system dies.
What is it that makes code, legacy code?
As with a plain legacy, when the author is dead or missing, you as an heir get all or some of his code.
You shed some tears and try to figure out what to do with all this rubbish.
Michael Feathers has an interesting definition in his book Working Effectively with Legacy Code. According to him legacy code is code without automated tests.
It is a very general (and oft-abused) term, but any of the following would be legitimate reasons to call an app legacy:
The code base is based on a language/platform which is entirely unsupported by the manufacturer of the original product (often said manufacturer has gone out of business).
(really 1a) The code base or platform on which it is built is so old that getting qualified or experienced developers for the system is both hard and expensive.
The application supports some aspect of the business which is no longer actively grown and for which alterations are extremely rare, normally to fix it if something entirely unexpected changes around it (the canonical example being the Y2K issue) or if some regulation/external pressure forces it. Since both reasons are pressing and normally unavoidable but no significant development has occurred on the project, it is likely that those people assigned to deal with this will be unfamiliar with the system (and its accumulated behaviours and intricacies). In these cases this would often be reason to increase the perceived and planned-for risk associated with the project.
The system has/or is being replaced with another. As such the system may be used for much less than originally intended, or perhaps only as a means of viewing historical data.
Legacy generally refers to code that is no longer being developed - meaning that if you use it, you have to use it on its original terms - you cannot just edit it to support the way the world looks today. For example, legacy code has to run on hardware that may not exist today - or is no longer supported.
According to Michael Feathers, the author of the excellent Working Effectively with Legacy Code, legacy code is code that has no tests: code where there is no way to know what breaks when it changes.
The main thing that distinguishes legacy code from non-legacy code is tests, or rather a lack of tests. We can get a sense of this with a little thought experiment: how easy would it be to modify your code base if it could bite back, if it could tell you when you made a mistake? It would be pretty easy, wouldn't it? Most of the fear involved in making changes to large code bases is fear of introducing subtle bugs; fear of changing things inadvertently. With tests, you can make things better with impunity. To me, the difference is so critical, it overwhelms any other distinction. With tests, you can make things better. Without them, you just don't know whether things are getting better or worse.
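In that spirit, a common first step with such code is a characterization test: a test that records what the code does today, not what it should do, so that later changes cannot drift unnoticed. A hypothetical JUnit 4 sketch (the pricer class and its rule are invented):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class LegacyPricerTest {

    // Stand-in for some untested legacy class whose rules nobody remembers.
    static class LegacyPricer {
        double priceWithTax(double net) {
            return net * 1.15;
        }
    }

    // A characterization test: it asserts the current behaviour, whatever it
    // is, giving refactoring the safety net the quote above describes.
    @Test
    public void pinsDownCurrentBehaviour() {
        assertEquals(1.15, new LegacyPricer().priceWithTax(1.00), 0.0001);
    }
}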
Nobody is going to read this, but I feel the other answers don't quite get it right:
It has value; if it weren't useful, it would have been thrown away long ago.
It's hard to reason about, because of any of:
Lack of documentation,
Original author cannot be found or has forgotten it (yes, two months later your code can be legacy code too!),
Lack of tests or a type system,
It doesn't follow modern practices (i.e. no context to hold on to).
There is a requirement to change or extend it. If there isn't a requirement to change it, it isn't legacy code, since nobody cares about it. It does its thing and there is nobody around to call it legacy code.
A colleague once told me that legacy code was any code that you hadn't written yourself.
Arguably, it's just a pejorative term for code that we don't like any more for whatever reason (typically because it's not cool or fashionable but it works).
The TDD brigade might suggest that any code without tests is legacy code.
Legacy code is source code that relates to a no-longer supported or manufactured operating system or other computer technology.
http://en.wikipedia.org/wiki/Legacy_code
"Legacy code is source code that relates to a no-longer supported or manufactured "
Any code with support (or documentation) missing. Be it:
inline comments
technical documentation
spoken documentation (the person who wrote it)
unit tests documenting the workings of the code
For me legacy code is code that was written prior to some paradigm shift.
It may still be very much in use but it is in the process of being refactored to bring it into line.
e.g. Old procedural code hanging around in an otherwise OO system.
Code (or anything else, really) becomes "legacy" when it has been replaced by something newer/better, and yet despite this it's still used and kept alive "in the wild".
Preserving legacy code is not so much an academic ideal as it is keeping code that works, no matter how poorly. In many conservative enterprise situations, that would be considered more practical than throwing it away and starting again from scratch. Better the devil you know...
Legacy code is code that is painful/expensive to keep current with changing requirements.
There are two ways that this can happen:
The code is unsuitable for change
The semantics of the code have been swapped out to silicon
1) is the easier of the two to recognize. It is software that has fundamental limits making it unable to keep up with the ecosystem around it. For example, a system built around an O(n^2) algorithm won't scale beyond a certain point and must be rewritten if requirements move in that direction. Another example is code using libraries that are not supported on the latest OS versions.
2) Is harder to recognize, but all code of this kind shares the characteristic that people are afraid to change it. This could be because it was badly written/documented to begin with, because it is untested, or because it is non-trivial and the original authors who understood it left the team.
The ASCII/Unicode chars that comprise living code have semantic meaning, the "why's", "what's" and to some degree the "how's", in the minds of people associated with it. Legacy code is either un-owned or the owners do not have meaning associated with large portions of it. Once this happens (and it could happen the next day with really poorly-written code), to change this code, someone must learn it and understand it. This process is a significant fraction of the time it takes to write it in the first place.
The day you're afraid to refactor your code is the day when your code has become legacy.
I consider code "legacy" if any or all of the following conditions apply:
It was written using a language or methodology that is a generation behind current standards
The code is a complete mess with no planning or design behind it
It is written in outdated languages and in an outdated, non object-oriented style
It is difficult to find developers who know the language because it is so old
Unlike some of the other opinions here, I've seen plenty of modern applications that work decently without unit tests. Unit testing still has not caught on with everyone. Perhaps ten years from now the next generation of programmers will look at our current applications and consider them "legacy" for not containing unit tests, just as I consider non object-oriented applications to be legacy.
If few changes need to be made to a legacy codebase, it's better to simply leave it as-is and go with the flow. If the application needs drastic functionality changes, a GUI overhaul, and/or you can't find anyone who knows the programming language, it's time to throw away and start over. A word of warning, however: rewriting from scratch can be very time-consuming, and it's difficult to know if you've replicated all functionality. You'll probably want to have test cases and unit tests written for the legacy application and the new application.
Quite honestly, legacy code is any code, framework, API, or other software construct that's not "cool" anymore. For example, COBOL is unanimously regarded as legacy while APL is not. Now, one could also make the case that COBOL is considered legacy and APL not because COBOL has about a million times the install base of APL. However, if you say that you need to work on APL code, the reply would not be "oh no, that legacy stuff" but rather "oh my god, I guess you won't be doing anything for the next century". See the difference?
This is a general term thrown around quite often (and quite generically) in the software ecosystem.
Well, I like to think of legacy code as inherited code. This is simply code that was written in the past. In most cases, legacy code does not follow new/current practices and is often considered archaic.
Legacy code is anything written more than a month ago :-)
It's often any code that isn't written in the trendy scripting language du jour, and I'm only half joking.
This is definitely subjective, but I'd like to try to avoid it becoming argumentative. I think it could be an interesting question if people treat it appropriately.
The idea for this question came from the comment thread from my answer to the "What are five things you hate about your favorite language?" question. I contended that classes in C# should be sealed by default - I won't put my reasoning in the question, but I might write a fuller explanation as an answer to this question. I was surprised at the heat of the discussion in the comments (25 comments currently).
So, what contentious opinions do you hold? I'd rather avoid the kind of thing which ends up being pretty religious with relatively little basis (e.g. brace placing) but examples might include things like "unit testing isn't actually terribly helpful" or "public fields are okay really". The important thing (to me, anyway) is that you've got reasons behind your opinions.
Please present your opinion and reasoning - I would encourage people to vote for opinions which are well-argued and interesting, whether or not you happen to agree with them.
Programmers who don't code in their spare time for fun will never become as good as those that do.
I think even the smartest and most talented people will never become truly good programmers unless they treat it as more than a job. Meaning that they do little projects on the side, or just mess with lots of different languages and ideas in their spare time.
(Note: I'm not saying good programmers do nothing else than programming, but they do more than program from 9 to 5)
The only "best practice" you should be using all the time is "Use Your Brain".
Too many people jumping on too many bandwagons and trying to force methods, patterns, frameworks etc onto things that don't warrant them. Just because something is new, or because someone respected has an opinion, doesn't mean it fits all :)
EDIT:
Just to clarify - I don't think people should ignore best practices, valued opinions etc. Just that people shouldn't just blindly jump on something without thinking about WHY this "thing" is so great, IS it applicable to what I'm doing, and WHAT benefits/drawbacks does it bring?
"Googling it" is okay!
Yes, I know it offends some people out there that their years of intense memorization and/or glorious stacks of programming books are starting to fall by the wayside to a resource that anyone can access within seconds, but you shouldn't hold that against people that use it.
Too often I hear people criticized for googling the answers to their problems, and it really makes no sense. First of all, it must be conceded that everyone needs materials to reference. You don't know everything and you will need to look things up. Conceding that, does it really matter where you got the information? Does it matter if you looked it up in a book, looked it up on Google, or heard it from a talking frog that you hallucinated? No. A right answer is a right answer.
What is important is that you understand the material, use it as the means to an end of a successful programming solution, and the client/your employer is happy with the results.
(although if you are getting answers from hallucinatory talking frogs, you should probably get some help all the same)
Most comments in code are in fact a pernicious form of code duplication.
We spend most of our time maintaining code written by others (or ourselves) and poor, incorrect, outdated, misleading comments must be near the top of the list of most annoying artifacts in code.
I think eventually many people just blank them out, especially those flowerbox monstrosities.
Much better to concentrate on making the code readable, refactoring as necessary, and minimising idioms and quirkiness.
On the other hand, many courses teach that comments are very nearly more important than the code itself, leading to the "this next line adds one to invoiceTotal" style of commenting.
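A tiny, made-up Java illustration of the contrast (the class and member names are invented for the example):

public class Invoice {

    private double total;

    // The style of commenting being criticised: the comment duplicates the code
    // and will quietly rot the first time the line changes.
    public void addLineItemBad(double amount) {
        // add the amount to the total
        total = total + amount;
    }

    // Readable code that doesn't need the comment at all.
    public void addLineItem(double amount) {
        total += amount;
    }
}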
XML is highly overrated
I think too many jump onto the XML bandwagon before using their brains...
XML for web stuff is great, as it's designed for it. Otherwise I think some problem definition and design thought should precede any decision to use it.
My 5 cents
Not all programmers are created equal
Quite often managers think that DeveloperA == DeveloperB simply because they have the same level of experience and so on. In actual fact, the performance of one developer can be 10x or even 100x that of another.
It's politically risky to talk about it, but sometimes I feel like pointing out that, even though several team members may appear to be of equal skill, it's not always the case. I have even seen cases where lead developers were 'beyond hope' and junior devs did all the actual work - I made sure they got the credit, though. :)
I fail to understand why people think that Java is absolutely the best "first" programming language to be taught in universities.
For one, I believe that the first programming language should be one that highlights the need to learn control flow and variables, not objects and syntax.
For another, I believe that people who have not had experience in debugging memory leaks in C / C++ cannot fully appreciate what Java brings to the table.
Also the natural progression should be from "how can I do this" to "how can I find the library which does that" and not the other way round.
If you only know one language, no matter how well you know it, you're not a great programmer.
There seems to be an attitude that says once you're really good at C# or Java or whatever other language you started out learning, then that's all you need. I don't believe it: every language I have ever learned has taught me something new about programming that I have been able to bring back into my work with all the others. I think that anyone who restricts themselves to one language will never be as good as they could be.
It also indicates to me a certain lack of inquisitiveness and willingness to experiment that doesn't necessarily tally with the qualities I would expect to find in a really good programmer.
Performance does matter.
Print statements are a valid way to debug code
I believe it is perfectly fine to debug your code by littering it with System.out.println (or whatever print statement works for your language). Often, this can be quicker than stepping through with a debugger, and you can compare the printed output against other runs of the app.
Just make sure to remove the print statements when you go to production (or better, turn them into logging statements).
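A minimal Java sketch of that advice, assuming java.util.logging and an invented OrderProcessor class purely for illustration:

import java.util.logging.Level;
import java.util.logging.Logger;

public class OrderProcessor {

    private static final Logger LOG = Logger.getLogger(OrderProcessor.class.getName());

    public void process(String orderId, double amount) {
        // Quick-and-dirty print debugging: fine while you're investigating...
        System.out.println("processing order " + orderId + " amount=" + amount);

        // ...but before production, turn it into a logging statement that can be
        // switched off or filtered by level without touching the code.
        LOG.log(Level.FINE, "processing order {0} amount={1}", new Object[]{orderId, amount});

        // ... actual processing ...
    }
}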
Your job is to put yourself out of work.
When you're writing software for your employer, any software that you create is to be written in such a way that it can be picked up by any developer and understood with a minimal amount of effort. It is well designed, clearly and consistently written, formatted cleanly, documented where it needs to be, builds daily as expected, checked into the repository, and appropriately versioned.
If you get hit by a bus, laid off, fired, or walk off the job, your employer should be able to replace you on a moment's notice, and the next guy could step into your role, pick up your code and be up and running within a week tops. If he or she can't do that, then you've failed miserably.
Interestingly, I've found that having that goal has made me more valuable to my employers. The more I strive to be disposable, the more valuable I become to them.
1) The Business Apps farce:
I think that the whole "Enterprise" frameworks thing is smoke and mirrors. J2EE, .NET, the majority of the Apache frameworks and most abstractions to manage such things create far more complexity than they solve.
Take any regular Java or .NET ORM, or any supposedly modern MVC framework for either, which does "magic" to solve tedious, simple tasks. You end up writing huge amounts of ugly XML boilerplate that is difficult to validate and slow to write. You have massive APIs, half of which exist just to integrate the work of the other APIs, interfaces that are impossible to recycle, and abstract classes that are needed only to overcome the inflexibility of Java and C#. We simply don't need most of that.
How about all the different application servers with their own darned descriptor syntax, the overly complex database and groupware products?
The point of this is not that complexity == bad; it's that unnecessary complexity == bad. I've worked in massive enterprise installations where some of it was necessary, but even there, in most cases, a few home-grown scripts and a simple web frontend are all that's needed to cover most use cases.
I'd try to replace all of these enterprisey apps with simple web frameworks, open source DBs, and trivial programming constructs.
2) The n-years-of-experience-required:
Unless you need a consultant or a technician to handle a specific issue related to an application, API or framework, then you don't really need someone with 5 years of experience in that application. What you need is a developer/admin who can read documentation, who has domain knowledge in whatever it is you're doing, and who can learn quickly. If you need to develop in some kind of language, a decent developer will pick it up in less than 2 months. If you need an administrator for X web server, in two days he should have read the man pages and newsgroups and be up to speed. Anything less and that person is not worth what he is paid.
3) The common "computer science" degree curriculum:
The majority of computer science and software engineering degrees are bull. If your first programming language is Java or C#, then you're doing something wrong. If you don't get several courses full of algebra and math, it's wrong. If you don't delve into functional programming, it's incomplete. If you can't apply loop invariants to a trivial for loop, you're not worth your salt as a supposed computer scientist. If you come out with experience in x and y languages and object orientation, it's full of s***. A real computer scientist sees a language in terms of the concepts and syntaxes it uses, and sees programming methodologies as one among many, and has such a good understanding of the underlying philosophies of both that picking new languages, design methods, or specification languages should be trivial.
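For the loop-invariant remark above, here is a deliberately trivial Java sketch of what "applying a loop invariant" can look like; the invariant lives in the comments:

public class LoopInvariantDemo {

    // Invariant: before each check of the loop condition,
    //   total == a[0] + a[1] + ... + a[i-1]
    static int sum(int[] a) {
        int total = 0;
        int i = 0;
        while (i < a.length) {
            total += a[i];   // extends the partial sum to include a[i]
            i++;             // re-establishes the invariant for the new i
        }
        // On exit: i == a.length, so total == the sum of all elements of a.
        return total;
    }
}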
Getters and Setters are Highly Overused
I've seen millions of people claiming that public fields are evil, so they make them private and provide getters and setters for all of them. I believe this is almost identical to making the fields public; it's maybe a bit different if you're using threads (but generally that is not the case) or if your accessors have business/presentation logic (something 'strange', at least).
I'm not in favor of public fields, but I am against making a getter/setter (or Property) for every one of them, and then claiming that doing so is encapsulation or information hiding... ha!
UPDATE:
This answer has raised some controversy in its comments, so I'll try to clarify it a bit (I'll leave the original untouched since that is what many people upvoted).
First of all: anyone who uses public fields deserves jail time
Now, creating private fields and then using the IDE to automatically generate getters and setters for every one of them is nearly as bad as using public fields.
Many people think:
private fields + public accessors == encapsulation
I say (automatic or not) generation of getter/setter pair for your fields effectively goes against the so called encapsulation you are trying to achieve.
Lastly, let me quote Uncle Bob on this topic (taken from chapter 6 of "Clean Code"):
"There is a reason that we keep our variables private. We don't want anyone else to depend on them. We want the freedom to change their type or implementation on a whim or an impulse. Why, then, do so many programmers automatically add getters and setters to their objects, exposing their private fields as if they were public?"
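A minimal Java sketch of the distinction being drawn (class and method names invented for the example): the first class merely mirrors a public field, while the second hides the representation behind behaviour.

// What the answer argues against: a private field wrapped in auto-generated
// accessors, which exposes exactly the same knobs a public field would.
class AccountData {
    private double balance;

    public double getBalance() { return balance; }
    public void setBalance(double balance) { this.balance = balance; }
}

// Encapsulation in the sense the answer (and the quote) is after: callers work
// with operations, and the representation of the balance stays free to change.
class Account {
    private double balance;

    public void deposit(double amount) {
        if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
        balance += amount;
    }

    public void withdraw(double amount) {
        if (amount <= 0 || amount > balance) throw new IllegalArgumentException("invalid amount");
        balance -= amount;
    }

    public double balance() { return balance; }
}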
UML diagrams are highly overrated
Of course there are useful diagrams e.g. class diagram for the Composite Pattern, but many UML diagrams have absolutely no value.
Opinion: SQL is code. Treat it as such
That is, just like your C#, Java, or other favorite object/procedure language, develop a formatting style that is readable and maintainable.
I hate when I see sloppy free-formatted SQL code. If you scream when you see both styles of curly braces on a page, why don't you also scream when you see free-formatted SQL, or SQL that obscures or obfuscates the JOIN condition?
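As a small illustration, assuming invented table and column names and embedding the SQL in Java (text blocks, Java 15+), here is the same query free-formatted versus laid out like code:

public class OrderQueries {

    // Free-form SQL that buries the join condition in the WHERE clause.
    static final String SLOPPY =
            "select o.id,c.name,o.total from orders o,customers c where o.customer_id=c.id and o.total>100 order by o.total desc";

    // The same query treated like code: one clause per line, explicit JOIN, aligned keywords.
    static final String READABLE = """
            SELECT o.id,
                   c.name,
                   o.total
              FROM orders o
              JOIN customers c
                ON o.customer_id = c.id
             WHERE o.total > 100
             ORDER BY o.total DESC
            """;
}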
Readability is the most important aspect of your code.
Even more so than correctness. If it's readable, it's easy to fix. It's also easy to optimize, easy to change, easy to understand. And hopefully other developers can learn something from it too.
If you're a developer, you should be able to write code
I did quite a bit of interviewing last year, and for my part of the interview I was supposed to test the way people thought, and how they implemented simple-to-moderate algorithms on a white board. I'd initially started out with questions like:
Given that Pi can be estimated using the function 4 * (1 - 1/3 + 1/5 - 1/7 + ...) with more terms giving greater accuracy, write a function that calculates Pi to an accuracy of 5 decimal places.
It's a problem that should make you think, but shouldn't be out of reach to a seasoned developer (it can be answered in about 10 lines of C#). However, many of our (supposedly pre-screened by the agency) candidates couldn't even begin to answer it, or even explain how they might go about answering it. So after a while I started asking simpler questions like:
Given the area of a circle is given by Pi times the radius squared, write a function to calculate the area of a circle.
Amazingly, more than half the candidates couldn't write this function in any language (I can read most popular languages so I let them use any language of their choice, including pseudo-code). We had "C# developers" who could not write this function in C#.
I was surprised by this. I had always thought that developers should be able to write code. It seems that, nowadays, this is a controversial opinion. Certainly it is amongst interview candidates!
Edit:
There's a lot of discussion in the comments about whether the first question is a good or bad one, and whether you should ask questions as complex as this in an interview. I'm not going to delve into that here (that's a whole new question) other than to say that much of it misses the point of the post.
Yes, I said people couldn't make any headway with this, but the second question is trivial and many people couldn't make any headway with that one either! Anybody who calls themselves a developer should be able to write the answer to the second one in a few seconds without even thinking. And many can't.
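For reference, a rough sketch of the kind of answers being looked for, written in Java rather than the C# mentioned above; the Pi loop stops once the next term of the alternating series falls below the tolerance, since that term bounds the remaining error.

public class InterviewAnswers {

    // Estimate Pi with the series 4 * (1 - 1/3 + 1/5 - 1/7 + ...).
    // The error after summing is bounded by the magnitude of the next term,
    // so stopping when that term drops below the tolerance gives the required accuracy.
    static double calculatePi(double tolerance) {
        double sum = 0.0;
        double sign = 1.0;
        long k = 0;
        while (4.0 / (2 * k + 1) >= tolerance) {
            sum += sign * 4.0 / (2 * k + 1);
            sign = -sign;
            k++;
        }
        return sum;
    }

    // Area of a circle: Pi times the radius squared.
    static double circleArea(double radius) {
        return Math.PI * radius * radius;
    }

    public static void main(String[] args) {
        System.out.println(calculatePi(1e-6));  // ~3.14159...
        System.out.println(circleArea(2.0));    // ~12.566
    }
}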
The use of hungarian notation should be punished with death.
That should be controversial enough ;)
Design patterns are hurting good design more than they're helping it.
IMO software design, especially good software design, is far too varied to be meaningfully captured in patterns, especially in the small number of patterns people can actually remember - and they're far too abstract for people to really remember more than a handful. So they're not helping much.
And on the other hand, far too many people become enamoured with the concept and try to apply patterns everywhere - usually, in the resulting code you can't find the actual design between all the (completely meaningless) Singletons and Abstract Factories.
Less code is better than more!
If the users say "that's it?", and your work remains invisible, it's done right. Glory can be found elsewhere.
PHP sucks ;-)
The proof is in the pudding.
Unit Testing won't help you write good code
The only reason to have Unit tests is to make sure that code that already works doesn't break. Writing tests first, or writing code to the tests is ridiculous. If you write to the tests before the code, you won't even know what the edge cases are. You could have code that passes the tests but still fails in unforeseen circumstances.
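A toy Java illustration of that "passes the tests but fails in unforeseen circumstances" point (method names and values invented): the up-front test is green, yet the code breaks on inputs nobody thought to test.

public class MidpointExample {

    static int midpoint(int low, int high) {
        return (low + high) / 2;            // overflows once low + high exceeds Integer.MAX_VALUE
    }

    static int safeMidpoint(int low, int high) {
        return low + (high - low) / 2;      // no overflow as long as low <= high
    }

    public static void main(String[] args) {
        // The test written before the code: passes.
        if (midpoint(2, 6) != 4) throw new AssertionError("test failed");

        // The unforeseen circumstance: a negative "midpoint" of two large positive ints.
        System.out.println(midpoint(2_000_000_000, 2_000_000_000));     // prints -147483648
        System.out.println(safeMidpoint(2_000_000_000, 2_000_000_000)); // prints 2000000000
    }
}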
And furthermore, good developers will keep coupling low, which will make the addition of new code unlikely to cause problems with existing stuff.
In fact, I'll generalize that even further,
Most "Best Practices" in Software Engineering are there to keep bad programmers from doing too much damage.
They're there to hand-hold bad developers and keep them from making dumbass mistakes. Of course, since most developers are bad, this is a good thing, but good developers should get a pass.
Write small methods. It seems that programmers love to write loooong methods where they do multiple different things.
I think that a method should be created wherever you can name one.
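A small, invented Java example of the "if you can name it, extract it" rule: the long version does three nameable things in one method; the refactored version gives each its own name.

import java.util.List;

// Order and the method names are made up for the example.
record Order(double amount) {}

class ReportService {

    // Before: one loooong method doing everything.
    String buildReportLong(List<Order> orders) {
        double total = 0;
        for (Order o : orders) {
            total += o.amount();
        }
        double average = orders.isEmpty() ? 0 : total / orders.size();
        return "total=" + total + ", average=" + average;
    }

    // After: every step that could be named has become a small method.
    String buildReport(List<Order> orders) {
        return formatReport(totalOf(orders), averageOf(orders));
    }

    private double totalOf(List<Order> orders) {
        double total = 0;
        for (Order o : orders) {
            total += o.amount();
        }
        return total;
    }

    private double averageOf(List<Order> orders) {
        return orders.isEmpty() ? 0 : totalOf(orders) / orders.size();
    }

    private String formatReport(double total, double average) {
        return "total=" + total + ", average=" + average;
    }
}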
It's ok to write garbage code once in a while
Sometimes a quick and dirty piece of garbage code is all that is needed to fulfill a particular task. Patterns, ORMs, SRP, whatever... Throw up a Console or Web App, write some inline SQL (feels good), and blast out the requirement.
Code == Design
I'm no fan of sophisticated UML diagrams and endless code documentation. In a high level language, your code should be readable and understandable as is. Complex documentation and diagrams aren't really any more user friendly.
Here's an article on the topic of Code as Design.
Software development is just a job
Don't get me wrong, I enjoy software development a lot. I've written a blog for the last few years on the subject. I've spent enough time on here to have >5000 reputation points. And I work in a start-up doing typically 60 hour weeks for much less money than I could get as a contractor because the team is fantastic and the work is interesting.
But in the grand scheme of things, it is just a job.
It ranks in importance below many things such as family, my girlfriend, friends, happiness etc., and below other things I'd rather be doing if I had an unlimited supply of cash such as riding motorbikes, sailing yachts, or snowboarding.
I think sometimes a lot of developers forget that developing is just something that allows us to have the more important things in life (and to have them by doing something we enjoy) rather than being the end goal in itself.
I also think there's nothing wrong with having binaries in source control, if there is a good reason for it. If I have an assembly I don't have the source for, and it might not necessarily be in the same place on each dev's machine, then I will usually stick it in a "binaries" directory and reference it in a project using a relative path.
Quite a lot of people seem to think I should be burned at the stake for even mentioning "source control" and "binary" in the same sentence. I even know of places that have strict rules saying you can't add them.
Every developer should be familiar with the basic architecture of modern computers. This also applies to developers who target a virtual machine (maybe even more so, because they have been told time and time again that they don't need to worry themselves with memory management etc.)
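As one hedged illustration of why the machine still matters even on a virtual machine (array size picked arbitrarily): both loops below do identical work, but the row-major traversal walks memory sequentially and is typically several times faster thanks to CPU caches.

public class TraversalOrder {

    public static void main(String[] args) {
        int n = 4_000;
        int[][] matrix = new int[n][n];

        long start = System.nanoTime();
        long sumRowMajor = 0;
        for (int row = 0; row < n; row++) {
            for (int col = 0; col < n; col++) {
                sumRowMajor += matrix[row][col];   // cache-friendly: consecutive elements
            }
        }
        long rowMajorNanos = System.nanoTime() - start;

        start = System.nanoTime();
        long sumColMajor = 0;
        for (int col = 0; col < n; col++) {
            for (int row = 0; row < n; row++) {
                sumColMajor += matrix[row][col];   // cache-hostile: jumps a whole row each step
            }
        }
        long colMajorNanos = System.nanoTime() - start;

        System.out.println("row-major: " + rowMajorNanos / 1_000_000 + " ms (sum=" + sumRowMajor + ")");
        System.out.println("col-major: " + colMajorNanos / 1_000_000 + " ms (sum=" + sumColMajor + ")");
    }
}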
Software Architects/Designers are Overrated
As a developer, I hate the idea of Software Architects. They are basically people who no longer code full time, read magazines and articles, and then tell you how to design software. Only people who actually write software full time for a living should be doing that. I don't care if you were the world's best coder 5 years ago before you became an Architect; your opinion is useless to me.
How's that for controversial?
Edit (to clarify): I think most Software Architects make great Business Analysts (talking with customers, writing requirements, tests, etc), I simply think they have no place in designing software, high level or otherwise.
There is no "one size fits all" approach to development
I'm surprised that this is a controversial opinion, because it seems to me like common sense. However, there are many entries on popular blogs promoting the "one size fits all" approach to development so I think I may actually be in the minority.
Things I've seen being touted as the correct approach for any project - before any information is known about it - are things like the use of Test Driven Development (TDD), Domain Driven Design (DDD), Object-Relational Mapping (ORM), Agile (capital A), Object Orientation (OO), etc. etc. encompassing everything from methodologies to architectures to components. All with nice marketable acronyms, of course.
People even seem to go as far as putting badges on their blogs such as "I'm Test Driven" or similar, as if their strict adherence to a single approach, whatever the details of the project, is actually a good thing.
It isn't.
Choosing the correct methodologies and architectures and components, etc., is something that should be done on a per-project basis, and depends not only on the type of project you're working on and its unique requirements, but also the size and ability of the team you're working with.