The Difference Between Deprecated, Depreciated and Obsolete

There is a lot of confusion about this, and I'd like to know exactly what the difference is between depreciated, deprecated, and obsolete, in a programming context but also in general.
I know I could just look at an online dictionary, and I have, even at many, but they don't all agree, or there are differences in what they say. So I decided to just ask here, considering I also want an answer in a programming context.
If I understand right, deprecated means it shouldn't be used anymore, because it has been replaced by a better alternative or simply been abandoned. Obsolete means it doesn't work anymore, was removed, or no longer works as it should. And depreciated, if I understand right once more, has nothing at all to do with programming and just means something has a lowered value, or was made worse.
Am I right, or am I wrong, and if I am wrong, what exactly do each of these mean?

You are correct.
Deprecated means that it is still available, but only for historical purposes, and it will probably be removed in the next big release. It is recommended that you not use deprecated functions or features, even if they are still present in the current library, for example.
Obsolete means that it is already out of use.
Depreciated means the monetary value of something has decreased over time. E.g., cars typically depreciate in value.
Also, for more precise definitions of the terms in the context of the English language, I recommend https://english.stackexchange.com/.

Records are obsolete, CDs are deprecated, and the music industry is depreciated.

In the context of describing APIs and such, "depreciated" is a misreading, misspelling, and mispronunciation of "deprecated".
I'm thinking people have just seen "depreciated" so often in other contexts, and "deprecated" so rarely, that they don't even register the "i" or lack thereof. It doesn't exactly help that their definitions are similar either.

Obsolete: should not be used any more
Deprecated: should be avoided in new code, and likely to become obsolete in a later version of the API
Depreciated: usually a typo for deprecated (depreciation is where the value of goods goes down over time, e.g. if you buy a new computer its resale value goes down month by month)

With all due respect, this is a slight pet peeve of mine and the selected answer for this is actually wrong.
Granted, language evolves: "google" is now a verb, apparently. Through what's known as "common use", it has earned its way into official dictionaries. However, "google" was a new word representing something heretofore non-existent in our speech.
Common use does not cover blatantly changing the meaning of a word just because we didn't understand its definition in the first place, no matter how many people keep repeating it.
The entire English-speaking computer industry seems to use "deprecate" to mean some feature that is being phased out or no longer relevant. Not bad, just not recommended. Usually, because there is a new and better replacement.
The actual definition of deprecate is to put down, or speak negatively about, or to express disapproval, or make fun of someone or something through degradation.
It comes from Latin de- (against) precari (to pray). To "pray against" to a 21st century person probably conjures up thoughts of warding off evil spirits or something, which is probably where the disconnect occurs with people. In fact, to pray or to pray for something meant to wish good upon, to speak about in a positive way. To pray against would be to speak ill of or to put down or denigrate. See this excerpt from the Oxford English Dictionary.
1. Express disapproval of:
(as adjective deprecating) he sniffed in a deprecating way
2. Another term for depreciate (sense 2):
he deprecates the value of children's television
What people generally mean to convey when using deprecate, in the IT industry anyway, and perhaps others, is that something has lost value. Something has lost relevance. Something has fallen out of favor. Not that it has no value; it is just not as valuable as before (probably due to being replaced by something new). We have two words that deal with this concept in English, and the first is "depreciate". See this excerpt from the Oxford English Dictionary.
1. Diminish in value over a period of time:
the pound is expected to depreciate against the dollar
2. Disparage or belittle (something):
Notice that definition 2 sounds like deprecate. So, ironically, deprecate can mean depreciate in some contexts, just not the one commonly used by IT folk.
Also, just because currency depreciation is a nice common use of the word depreciate, and therefore easy to cite as an example, doesn't mean it's the only context in which the word is relevant. It's just an example. ONE example.
The correct transitive verb for this is "obsolete". You obsolete something because its value has depreciated.
See this excerpt from the Oxford English Dictionary.
Verb - Cause something to be or become obsolete by replacing it with something new.
It bugs me, it just bugs me. I don't know why. Maybe because I see it everywhere. In every computer book I read, every lecture I attend, and on every technical site on the internet, someone invariably drops the d-bomb sooner or later. If this one ends up in the dictionary at some point, I will concede, but conclude that the gatekeepers of the English lexicon have become weak and have lost their way... or at the very least, lost their nerve. Even Wikipedia espouses this misuse, and indeed, defends it. I've already edited the page thrice, and they keep removing my edits.
Something is depreciated until it is obsolete. Deprecate, in the context of IT, makes no sense at all, unless you're putting down someone's performance or work or product or the fact that they still wear parachute pants.
Conclusion: The entire IT industry uses deprecate incorrectly. It may be common use. It may be some huge misunderstanding. But it is still, completely, wrong.

In computer software standards and documentation, the term deprecation is used to indicate discouragement of usage of a particular software feature, usually because it has been superseded by a newer/better version. The deprecated feature still works in the current version of the software, but it may raise error messages or warnings recommending an alternate practice.
The Obsolete attribute marks a program entity as one that is no longer recommended for use. Each use of an entity marked obsolete will generate a warning or an error, since it is no longer supported or no longer exists.
EDIT:
Depreciated: not sure how this relates to programming.
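For illustration, a minimal C# sketch of the conventions described above (method names invented): a plain [Obsolete] attribute produces a compiler warning at every call site, while passing error: true turns each call into a compile error.
using System;

class LegacyApi
{
    [Obsolete("Use NewSave instead.")]               // callers compile, but get warning CS0618
    public static void OldSave() { }

    [Obsolete("Removed; use NewSave.", error: true)] // callers fail to compile
    public static void AncientSave() { }

    public static void NewSave() { }
}

class Demo
{
    static void Main()
    {
        LegacyApi.OldSave();  // warning: 'LegacyApi.OldSave()' is obsolete
        LegacyApi.NewSave();  // fine
    }
}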

I wouldn't say obsolete means it doesn't work anymore. In my mind obsolete just means there are better alternatives. A thing becomes obsolete because of something else. Deprecated means you shouldn't use it, although there might not be any alternatives. A thing becomes deprecated because someone says it is -- it is prescriptive.

"Obsolete" means "has been replaced".
"Depreciated" means "has less value than its original value".
"Deprecated" means to expressly disprove of and was popularised due to misspellings in two technical articles where the authors used deprecated without an "i". One in 1999 and the other in 2002 referenced in the dictionaries as origins.
Prior to that time frame we were reading comments like // depreciated in API documentation including the MS MSDN.
The use of deprecated in the tech industry is therefore completely incorrect and evidence of how a technical writer can produce a bug that can live in a language and someone should finally put the bug to rest.


Definition of the word "patch"

Personally, I use the word "patch" as the software equivalent of a symptomatic treatment, which makes a patch a quick-and-dirty bugfix. However, I'm not sure this is correct, because I often see it used in other senses, for example as a synonym for a program update.
What does the word "patch" mean exactly?
Update: I think terminology matters a lot, because it is a fundamental aspect of documentation and communication, and therefore of software development in general. The problem is that computer lingo is defined rather loosely, and I don't know which dictionary provides the definitive reference. Therefore I thought it was a good idea to ask this here on Stack Overflow.
From Wikipedia:
A patch is a piece of software designed to fix problems with, or update, a computer program or its supporting data. This includes fixing security vulnerabilities and other bugs, and improving the usability or performance. Though meant to fix problems, poorly designed patches can sometimes introduce new problems (see software regressions).
You can find the definitions in the Jargon File: patch. Your personal usage of the term seems to fall under 1, while generally the word "patch" has become a strong synonym for "diff" in recent years.
Apart from that, I'm with S.Lott on this one: what does it matter?
In the open source area, a patch is a means of communicating amongst developers. The patch itself is a machine and human readable piece of text describing changes to source code.
The word patch itself has no connotation of "quick and dirty", or a "fix" in this context. It just is a change to source code.
Search terms for further research: diff, patch
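To make the "patch as diff" sense concrete, here is a hypothetical unified diff (file name and contents invented). This text is itself the patch: it is mailed around, read by humans, and applied by the patch tool.
--- Program.cs  (original)
+++ Program.cs  (patched)
@@ -1,6 +1,6 @@
 using System;
 class Program {
     static void Main() {
-        Console.WriteLine("Helo, world");
+        Console.WriteLine("Hello, world");
     }
 }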

Referencing other people's code

I'm currently working on my Bachelor's dissertation. This involves developing a software product and a 12,000-word write-up, mainly covering research, design and development. Now, where I am quoting other people's written work I am obviously referencing it, but what about code? There have been plenty of times where I've looked for a solution to a problem I was unsure of and found someone who had solved it. Most of the time I took their code, worked to understand what they were doing, and then wrote my own version in my application, so should it be referenced somehow?
What would you guys do, put a comment in the code referencing the original author, add a reference in the write up or my bibliography, or nothing at all?
Where it's a significant or interesting piece of code used, I will probably refer to it in my write-up, but for solutions that don't warrant this, I'm trying to come up with a good approach.
If you were the author of some code I had either used, or been inspired by, what would make you happy that I wasn't plagiarizing you?
To take this a bit further, there are really two different things here. If I go to MSDN to look up how to use a particular part of the .NET framework, is that something that should be referenced, or is it fair use of the framework?
Whereas if I've used an algorithm that someone clearly developed and put a lot of time into, that's something I would definitely reference.
It all depends on context. Many algorithms are so well known that they are generally considered public domain, and as long as you reference a well-known source on the subject you shouldn't have any worries (sorting, searching).
When dealing with specific problems, especially in other people's code, you have to read really carefully. If it's published (book, journal, web, etc.) then you must always reference the original at some point in your dissertation (typically once in the write-up and then a comment in the source).
If it's other people's work, they deserve proper credit. Anything else is plagiarism.
There are two aspects to this:
The citation requirements of your academic institution. You should make sure you comply with this because if you're found to have plagiarised another's work you can be guilty of academic misconduct and you don't want that; and
The ethics of using another's work. Barring "fair use" provisions (meaning there's only so much of someone's work you can reproduce before what you're doing is no longer "fair use") and the like, if you reproduce someone else's code, you should credit it. If you simply take the idea, that's possibly a bit different and is a judgement call. It depends on the significance of that idea and its contribution to your work.
Outside of academic work, I make it a habit to leave a comment in my source code if I reference someone else to solve a particular problem. It's for my benefit as well, I might want to go back and take another look at their code months later and have forgotten where I originally found it. Of course, take a look at the included license when you reference someone's code, it should be pretty clear what you can't do with it.
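For example, the kind of attribution comment I mean, sketched in C# (the URL and the details are hypothetical):
using System;

class StringUtil
{
    // Adapted from a Stack Overflow answer: https://stackoverflow.com/a/1234567
    // (hypothetical link; SO content is CC BY-SA, so keep the attribution).
    // Changes from the original: renamed variables, added a null guard.
    public static string Reverse(string s)
    {
        if (s == null) return null;
        char[] chars = s.ToCharArray();
        Array.Reverse(chars);
        return new string(chars);
    }
}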
There's no shame in borrowing working code if it's the best tool for the job, provided the author's licence conditions allow it - real programmers do it all the time. But for academic purposes there is a problem if you could be perceived as being at all secretive or deceptive about it. To avoid that, I'd reference it both in the code with a big, clear comment, and in the report. Full disclosure means you can't be accused of doing anything wrong.
I have the same issue for my dissertation as well:
I opt for referencing without overdoing it. Adding a reference to the relevant section means that you've searched, found something, thought about using it and accepted it, while having nothing suggests you made it up yourself. This increases the value of your work in an academic context (the more you build on previous work, the better).
Also, your supervisor will be able to trace license and algorithmic issues via the referenced source.
Finally, you do not know where your work will end up. Suppose that three years from now someone tries to use it and runs into a patent violation. How can you help them? With a reference.

What makes code legacy?

I have heard many developers refer to code as "legacy". Most of the time it is code that has been written by someone who no longer works on the project. What is it that makes code, legacy code?
Update in response to:
"Something handed down from an ancestor or a predecessor or from the past" http://www.thefreedictionary.com/legacy. Clearly you wanted to know something else. Could you clarify or expand your question? S.Lott
I am looking for the symptoms of legacy code that make it unusable or a nightmare to work with. When is it better to throw it away? It is my opinion that code should be thrown away more often, and that reinventing the wheel is a valuable part of development. The academic ideal of not reinventing the wheel is a nice one, but it is not very practical.
On the other hand there is obviously legacy code worth keeping.
By using hardware, software, APIs, languages, technologies or features that are either no longer supported or have been superseded, typically combined with little to no possibility of ever replacing that code; instead it is used until it or the system dies.
What is it that makes code, legacy code?
As with a plain legacy, when the author is dead or missing, you as an heir get all or some of his code.
You shed some tears and try to figure out what to do with all this rubbish.
Michael Feathers has an interesting definition in his book Working Effectively with Legacy Code. According to him legacy code is code without automated tests.
It is a very general (and oft-abused) term, but any of the following would be legitimate reasons to call an app legacy:
The code base is based on a language/platform which is entirely unsupported by the manufacturer of the original product (often said manufacturer has gone out of business).
(really 1a) The code base or platform on which it is built is so old that getting qualified or experienced developers for the system is both hard and expensive.
The application supports some aspect of the business which is no longer actively grown and for which alterations are extremely rare, normally to fix it if something entirely unexpected changes around it (the canonical example being the Y2K issue) or if some regulation/external pressure forces it. Since both reasons are pressing and normally unavoidable, but no significant development has occurred on the project, it is likely that those people assigned to deal with this will be unfamiliar with the system (and its accumulated behaviours and intricacies). In these cases this would often be reason to increase the perceived and planned-for risk associated with the project.
The system has been, or is being, replaced with another. As such the system may be used for much less than originally intended, or perhaps only as a means of viewing historical data.
Legacy generally refers to code that is no longer being developed - meaning that if you use it, you have to use it on its original terms - you cannot just edit it to support the way the world looks today. For example, legacy code has to run on hardware that may not exist today - or is no longer supported.
According to Michael Feathers, the author of the excellent Working Effectively with Legacy Code, legacy code is code that has no tests, so there is no way to know what breaks when it changes.
The main thing that distinguishes legacy code from non-legacy code is tests, or rather a lack of tests. We can get a sense of this with a little thought experiment: how easy would it be to modify your code base if it could bite back, if it could tell you when you made a mistake? It would be pretty easy, wouldn't it? Most of the fear involved in making changes to large code bases is fear of introducing subtle bugs; fear of changing things inadvertently. With tests, you can make things better with impunity. To me, the difference is so critical, it overwhelms any other distinction. With tests, you can make things better. Without them, you just don't know whether things are getting better or worse.
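In that spirit, a minimal characterization test, sketched in C# around an invented legacy routine: before changing the code, pin down what it currently does, so that it can "bite back" when you make a mistake.
using System;

class CharacterizationTests
{
    // Hypothetical legacy routine whose original intent is undocumented.
    static decimal LegacyDiscount(decimal total) =>
        total > 100m ? total * 0.9m : total;

    static void Main()
    {
        // Record the current behaviour, including the boundary case,
        // before any refactoring starts.
        AssertEqual(180m, LegacyDiscount(200m));
        AssertEqual(100m, LegacyDiscount(100m)); // exactly 100 gets no discount
        Console.WriteLine("Characterization tests passed.");
    }

    static void AssertEqual(decimal expected, decimal actual)
    {
        if (expected != actual)
            throw new Exception($"Expected {expected} but got {actual}");
    }
}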
Nobody is gonna read this, but I feel the other answers don't get it quite right:
It has value, if it wasn't useful it would've been thrown away long ago
It's hard to reason about, because of any of:
Lack of documentation,
The original author cannot be found or has forgotten (yes, two months later your code can be legacy code too!),
Lack of tests or a type system,
It doesn't follow modern practices (i.e., no context to hold on to).
There is a requirement to change or extend it. If there isn't a requirement to change it, it isn't legacy code, since nobody cares about it. It does its thing and there is nobody around to call it legacy code.
A colleague once told me that legacy code was any code that you hadn't written yourself.
Arguably, it's just a pejorative term for code that we don't like any more for whatever reason (typically because it's not cool or fashionable but it works).
The TDD brigade might suggest that any code without tests is legacy code.
Legacy code is source code that relates to a no-longer supported or manufactured operating system or other computer technology.
http://en.wikipedia.org/wiki/Legacy_code
"Legacy code is source code that relates to a no-longer supported or manufactured "
Any code with support (or documentation) missing. Be it:
inline comments
technical documentation
spoken documentation (the person who wrote it)
unit tests documenting the workings of the code
For me legacy code is code that was written prior to some paradigm shift.
It may still be very much in use but it is in the process of being refactored to bring it into line.
e.g. Old procedural code hanging around in an otherwise OO system.
Code (or anything else, really) becomes "legacy" when it has been replaced by something newer/better, and yet despite this it's still used and kept alive "in the wild".
Preserving legacy code is not so much an academic ideal as it is keeping code that works, no matter how poorly. In many conservative enterprise situations, that would be considered more practical than throwing it away and starting again from scratch. Better the devil you know...
Legacy code is code that is painful/expensive to keep current with changing requirements.
There are two ways that this can happen:
The code is unsuitable for change
The semantics of the code have been swapped out to silicon
1) is the easier of the two to recognize. It is software that has fundamental limits making it unable to keep up with the ecosystem around it. For example, a system built around an O(n^2) algorithm won't scale beyond a certain point and must be re-written if requirements move in that direction. Another example is code using libraries that are not supported on the latest OS versions.
2) is harder to recognize, but all code of this kind shares the characteristic that people are afraid to change it. This could be because it was badly written/documented to begin with, because it is untested, or because it is non-trivial and the original authors who understood it left the team.
The ASCII/Unicode chars that comprise living code have semantic meaning, the "why's", "what's" and to some degree the "how's", in the minds of people associated with it. Legacy code is either un-owned or the owners do not have meaning associated with large portions of it. Once this happens (and it could happen the next day with really poorly-written code), to change this code, someone must learn it and understand it. This process is a significant fraction of the time it takes to write it in the first place.
The day you're afraid to refactor your code is the day when your code has become legacy.
I consider code "legacy" if any or all of the following conditions apply:
It was written using a language or methodology that is a generation behind current standards
The code is a complete mess with no planning or design behind it
It is written in outdated languages and in an outdated, non-object-oriented style
It is difficult to find developers who know the language because it is so old
Unlike some of the other opinions here, I've seen plenty of modern applications that work decently without unit tests. Unit testing still has not caught on with everyone. Perhaps ten years from now the next generation of programmers will look at our current applications and consider them "legacy" for not containing unit tests, just as I consider non object-oriented applications to be legacy.
If few changes need to be made to a legacy codebase, it's better to simply leave it as-is and go with the flow. If the application needs drastic functionality changes, a GUI overhaul, and/or you can't find anyone who knows the programming language, it's time to throw away and start over. A word of warning, however: rewriting from scratch can be very time-consuming, and it's difficult to know if you've replicated all functionality. You'll probably want to have test cases and unit tests written for the legacy application and the new application.
Quite honestly, legacy code is any code, framework, API, or other software construct that's not "cool" anymore. For example, COBOL is unanimously regarded as legacy while APL is not. Now, one can also make the case that COBOL is considered legacy and APL not because COBOL has about a million times the install base of APL. However, if you say that you need to work on APL code, the reply would not be "oh no, that legacy stuff" but rather "oh my god, guess you won't be doing anything for the next century". See the difference?
This is a general term thrown around quite often (and quite generically) in the software ecosystem.
Well, I like to think of legacy code as inherited code. This is simply code that was written in the past. In most cases, legacy code does not follow new/current practices and is often considered archaic.
Legacy code is anything written more than a month ago :-)
It's often any code that isn't written in the trendy scripting language du jour, and I'm only half joking.

Are comments necessary for a programming language?

After some stupid musings about Klingon languages that came from this post, I began a silly hobby project: creating a Klingon programming language that compiles to Lua byte-code. During the initial language design phase I looked up information about Klingon programmers, and found this Klingon programming rule:
A TRUE Klingon Warrior does not comment his code!
So I decided my language would not support commenting, as any good Klingon would never use them.
Now, many of the Klingon ways don't seem reasonable to us Human programmers. However, while dabbling with the design and implementation of my hobby language, I came to realize that this Klingon rule about commenting is indeed very reasonable, if not great.
Removing the ability to comment from a programming language meant I HAVE to write literate code, no exceptions.
So it got me wondering: are there any languages out there that don't support comments?
Are there any really good arguments for not removing commenting from a language?
Edit: Any good examples of comments required?
P.S. My hobby language above is partially silly anyway, so don't focus on my implementation so much as on the concept of requiring comments in general.
Do not comment WHAT you are doing, but WHY you are doing it.
The WHAT is taken care of by clean, readable and simple code with a proper choice of variable names to support it. Comments show a higher-level structure to the code that can't be (or can only with difficulty be) shown by the code itself.
I am not sure I agree with the "Have" in the statement "Removing the ability to comment from a programming language meant I HAVE to write literate code, no exceptions", since it is not as if all code is documented. My guess is that most people would write unreadable code.
More to the point, I personally do not believe in the reality of the self-explanatory program or API in the practical world.
My experience from manually analyzing the documentation of entire APIs for my dissertation suggests that all too often you would have to carry more information than you could convey in the signature alone. If you eliminate interface comments from your language, what are the alternatives? No documentation is not an option. External documentation is less likely to be read.
As for internal documentation, I can see your point in wanting to reduce documentation to push people to write better code. However, comments serve many collaboration and coordination purposes and are meant to raise awareness of things. By banishing these details to external locations, you are reducing the chances that they come to a future reader's awareness, unless your tooling is great.
Ugh, not being able to quickly comment out a line (or lines) during testing sounds annoying to me, especially when scripting.
In general, comments are a wart that indicates poor design, especially long rambling comments where it's clear the developer didn't have a clue what the heck they were doing and tried to make up for it by writing a comment.
Places where comments are useful:
Leaving a ticket number next to a fix so future programmers can understand business requirements
Explaining a particularly tricky hack
Commentary on business logic for a piece of code
Terse descriptions in API docs so a third-party can use your API
In all circumstances programmers should endeavor to write code that is descriptive and NOT write comments that describe poorly written code. That being said I think there are plenty of valid reasons that languages should and must support comments.
Your code has two distinct audiences:
The compiler
Human beings like us
If you choose to remove comments altogether, the assumption you are taking is that you will be catering only to the compiler, and to nothing else.
Of course you, being Klingon, may not need comments because you are not human. Perhaps you could clearly demonstrate to us your ability by speaking in IL instead?
You don't need a single assertion in your code because, in release mode, they're all gone. But since C++ didn't have assertions built in, someone wrote the assert macro to make up for it.
Of course you don't need comments, either, for more or less the same reason. But if you design a language without comments, people will start doing things like:
HelperFunctionDoesNothing("This is a comment! Blah Blah Blah...");
I'm curious. How do you stop someone from declaring a static string containing a comment and then ignoring the variable for the rest of the func/method/procedure/battle/whatever?
var useless_comment = "Can we destroy our enemies?"
if (phasers on full) return Qapla'
Languages need comments. At least 95% of comments can be replaced by clearer code but there are still assumptions you need to document and you absolutely need to document if there's some external problem you are working around.
I never write a comment without first considering if I can change the code to eliminate the need for it but sometimes you can't.
While all source code is copyrighted by default, it is often nice to:
remind the person reading the source code that it is subject to copyright
tell people what the licensing terms are for that source code file
tell them whether or not they are looking at a protected trade secret
Unfortunately, without comments, it is difficult to do this.
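For example, a typical header comment (holder, project and licence all hypothetical):
// Copyright (c) 2009 Example Corp. All rights reserved.
// This file is part of ExampleLib, released under the MIT License;
// see the LICENSE file for the full terms. It contains no trade secrets.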
Am I the only one who comments out a couple of lines of code for various purposes?
It's going to be harder than you think to make a language where comments are impossible.
if (false) {
print("This is a comment. Chew on that, Klingons!")
}
While it's true that humans need to be able to comment code, it is not absolutely necessary that the language directly support commenting: for most languages, it would be trivial to write a script that deletes one-line comments (for example, all lines beginning with '#' or some other character) and then runs the compiler.
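That script really is trivial; a rough C# sketch, under the stated assumption that comments are whole lines starting with '#' (file handling invented):
using System;
using System.IO;
using System.Linq;

class StripComments
{
    static void Main(string[] args)
    {
        // Drop every line whose first non-blank character is '#',
        // then hand the stripped file to the real compiler.
        var code = File.ReadLines(args[0])
                       .Where(line => !line.TrimStart().StartsWith("#"));
        File.WriteAllLines(args[0] + ".stripped", code);
        Console.WriteLine($"Wrote {args[0]}.stripped; now run the compiler on it.");
    }
}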
Actually, though, I am surprised and disappointed to learn that even my favorite esoteric programming languages support comments: Brainf**k and Whitespace. These languages are meant to be hard to read, so it seems like they shouldn't support commenting. (As opposed to my other favorite esoteric language: LOLCode, which is meant to be self-documenting, in lolcats-speech)
I would dissent from the other answerers on this point: I say, be true to your vision of a Klingon programming language, and do not support comments!
A point against comments is that they tend to often fall out of date with the code. Any time you add a redundancy, you're risking this sort of inconsistency.
There's actually some interesting research I've seen in which a group used NLP to analyze locking comments in a large system, compared them to the results of static analysis, and was able to fix a few bugs that way.
Isn't literate programming as much comment as it is code? Certainly, much of what I've seen of literate programming has as much explanation as code, if not more comment.
You might think that developers writing in your language will make an extra effort to write clear code but the onus will actually be on you to design a language that's so expressive that it doesn't need to be commented. Hell, not even English is like that (we still parenthesize!). If your language isn't so designed it may very well be as usable as Brainfuck and enjoy the popularity and respect of Brainfuck.
Should I add links, or are links considered comment-like?
Besides, people will find ways to add comments if they need to, by hijacking strings and misusing variable names (that do nothing other than stand in for comments). Have you read Gödel, Escher, Bach?
It would be a bad idea to remove the commenting facility altogether. Surely developers must learn to write code with minimal comments, i.e. to write self-documenting code, but there are a lot of cases where one has to explain why something is being done the way it is. Consider the following cases:
A new developer might start maintaining the code after the original dev has left the project
A change in specification or market requirement leads to something that is counter-intuitive
Copyright notice, especially if open source (some open source libs require you to do this)
It is also my experience that new programmers tend to comment more and as they develop expertise their code tends to become self documenting and concise. In general comments should be about WHY and not HOW or WHAT.
NO -- there is not a single programming language out there that requires comments.
The language is for the computer. The comments are for the humans. You can write a program with 0% comments. It'll execute, rightly or wrongly. You can't write a program with 100% comments. It'll either not compile -- no main(), etc. -- or, for scripting languages, do exactly nothing.
And, besides, real programmers don't comment their code. Just like Klingons.
While I agree with Uri's responses, I too have made a language with no comments. (ichbins.) The language was to be as simple as possible while still being able to express its own compiler cleanly; since you can do that without comments, they got jettisoned.
I'm working off and on on a revision that does support commentary, but a bit differently: literate-programming style with code nested in text instead of comments embedded in code. It might also get examples/test-cases later as a first-class language feature.
Good luck with the Klingon hacking. :-)
I can't tell you how thankful I am for Javadoc - which is really simple to set up within comments. So that's at least one sense in which comments are useful.
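The same mechanism exists in C# as XML doc comments; a small sketch (method invented) of the kind of comment that documentation tools can turn into reference pages:
using System;

class Geometry
{
    /// <summary>Computes the area of a circle.</summary>
    /// <param name="radius">The circle's radius.</param>
    /// <returns>Pi times the radius squared.</returns>
    public static double CircleArea(double radius) => Math.PI * radius * radius;

    static void Main() => Console.WriteLine(CircleArea(2.0)); // 12.566...
}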
No, of course a language doesn't have to have commenting. But a (useful) program does have to have comments... I don't agree with your idea that literate code lacks comments. Some very good code is easily comprehensible with comments, but only with difficulty without.
I think comments are required in many situations.
For instance, think of algorithmic code. Suppose there is a function written in C which solves the Traveling Salesperson Problem; there is a wide range of techniques that can be used to deal with this problem, and such code is usually cryptic by nature.
Without explicitly describing the parameters and the algorithm used, in comments, it is almost impossible to reuse this piece of code.
Can we live without comments in code? Sure, but that won't make life easier.
Comments are useful because they reassure the person reading your code - probably the "future you" - that you've thought about her welfare.
I think the question becomes: how self-contained would a language without comments be? If, for example, it compiles down to DLLs that get used within other code, then how does one know anything beyond the function signature in terms of what it requires, changes and returns? I wouldn't want function names dozens of characters long trying to express what may be very easily done with comments above the function, which can be used as documentation within something like the Object Browser in Visual Studio, for example.
Of course!!
The main reason is novice developers. Not everyone knows how to write literate code; actually, there are millions out there who don't understand a NullPointerException when they see one.
We all start at some point.
But if you're targeting "expert" developers only, why bother with the language in the first place? You should be using butterflies!!! That's what real developers use!
Comments are a must. Make them harder to write if you wish (like requiring a #//##/ sequence to create a comment, or something like that), but don't leave them out.
:)
I agree with you that nicely written code does not need any comments, as "code is the only good documentation available to the programmer". However, this is a very ideal condition; not everyone writes good code all the time.
So, to make poorly written code workable in the future, comments are required.
I once wrote a VB app (a silly board game inspired by Monopoly) without any comments. But I did that just to piss off my teacher, who had told us comments were for "whatever we found relevant, so we could remember it later".
Perfect code needs zero comments. It should be simple, and understandable by complete novices.
Any code needs comments; I try to explain the reason for and workings of every function I write in one or two lines.
Code that explains itself only exists in a perfect world; there is always some weird hack, or a reason to do something quick-and-dirty instead of the proper way.
The best thing to remember is to comment WHY code does what it does; good code explains WHAT it does 99% of the time.
Write something simple, like a piece of code that can solve a Sudoku puzzle (three reasonably simple while loops), and try reading it three months later. You will immediately find something that isn't exactly clear.
Code is written once, but read many times over the course of its lifetime; thus it pays to optimize for readability. Clear and consistent naming of everything from constants to classes is necessary, but may or may not be sufficient to achieve this objective. If not, fill in the gaps with comments, and maintain them as you would the code.

What's your most controversial programming opinion?

This is definitely subjective, but I'd like to try to avoid it becoming argumentative. I think it could be an interesting question if people treat it appropriately.
The idea for this question came from the comment thread from my answer to the "What are five things you hate about your favorite language?" question. I contended that classes in C# should be sealed by default - I won't put my reasoning in the question, but I might write a fuller explanation as an answer to this question. I was surprised at the heat of the discussion in the comments (25 comments currently).
So, what contentious opinions do you hold? I'd rather avoid the kind of thing which ends up being pretty religious with relatively little basis (e.g. brace placing) but examples might include things like "unit testing isn't actually terribly helpful" or "public fields are okay really". The important thing (to me, anyway) is that you've got reasons behind your opinions.
Please present your opinion and reasoning - I would encourage people to vote for opinions which are well-argued and interesting, whether or not you happen to agree with them.
Programmers who don't code in their spare time for fun will never become as good as those that do.
I think even the smartest and most talented people will never become truly good programmers unless they treat it as more than a job. Meaning that they do little projects on the side, or just mess with lots of different languages and ideas in their spare time.
(Note: I'm not saying good programmers do nothing else than programming, but they do more than program from 9 to 5)
The only "best practice" you should be using all the time is "Use Your Brain".
Too many people jumping on too many bandwagons and trying to force methods, patterns, frameworks etc onto things that don't warrant them. Just because something is new, or because someone respected has an opinion, doesn't mean it fits all :)
EDIT:
Just to clarify - I don't think people should ignore best practices, valued opinions etc. Just that people shouldn't just blindly jump on something without thinking about WHY this "thing" is so great, IS it applicable to what I'm doing, and WHAT benefits/drawbacks does it bring?
"Googling it" is okay!
Yes, I know it offends some people out there that their years of intense memorization and/or glorious stacks of programming books are starting to fall by the wayside to a resource that anyone can access within seconds, but you shouldn't hold that against people that use it.
Too often I hear googling answers to problems being the target of criticism, and it really makes no sense. First of all, it must be conceded that everyone needs materials to reference. You don't know everything and you will need to look things up. Conceding that, does it really matter where you got the information? Does it matter if you looked it up in a book, looked it up on Google, or heard it from a talking frog that you hallucinated? No. A right answer is a right answer.
What is important is that you understand the material, use it as the means to an end of a successful programming solution, and the client/your employer is happy with the results.
(although if you are getting answers from hallucinatory talking frogs, you should probably get some help all the same)
Most comments in code are in fact a pernicious form of code duplication.
We spend most of our time maintaining code written by others (or ourselves) and poor, incorrect, outdated, misleading comments must be near the top of the list of most annoying artifacts in code.
I think eventually many people just blank them out, especially those flowerbox monstrosities.
Much better to concentrate on making the code readable, refactoring as necessary, and minimising idioms and quirkiness.
On the other hand, many courses teach that comments are very nearly more important than the code itself, leading to the "this next line adds one to invoiceTotal" style of commenting.
XML is highly overrated
I think too many people jump onto the XML bandwagon before using their brains...
XML for web stuff is great, as it's designed for that. Otherwise I think some problem definition and design thought should precede any decision to use it.
My 5 cents
Not all programmers are created equal
Quite often managers think that DeveloperA == DeveloperB simply because they have same level of experience and so on. In actual fact, the performance of one developer can be 10x or even 100x that of another.
It's politically risky to talk about it, but sometimes I feel like pointing out that, even though several team members may appear to be of equal skill, it's not always the case. I have even seen cases where lead developers were 'beyond hope' and junior devs did all the actual work - I made sure they got the credit, though. :)
I fail to understand why people think that Java is absolutely the best "first" programming language to be taught in universities.
For one, I believe that a first programming language should be one that highlights the need to learn control flow and variables, not objects and syntax.
For another, I believe that people who have not had experience in debugging memory leaks in C / C++ cannot fully appreciate what Java brings to the table.
Also the natural progression should be from "how can I do this" to "how can I find the library which does that" and not the other way round.
If you only know one language, no matter how well you know it, you're not a great programmer.
There seems to be an attitude that says once you're really good at C# or Java or whatever other language you started out learning, then that's all you need. I don't believe it - every language I have ever learned has taught me something new about programming that I have been able to bring back into my work with all the others. I think that anyone who restricts themselves to one language will never be as good as they could be.
It also indicates to me a certain lack of inquisitiveness and willingness to experiment that doesn't necessarily tally with the qualities I would expect to find in a really good programmer.
Performance does matter.
Print statements are a valid way to debug code
I believe it is perfectly fine to debug your code by littering it with System.out.println (or whatever print statement works for your language). Often, this can be quicker than stepping through a debugger, and you can compare printed outputs against other runs of the app.
Just make sure to remove the print statements when you go to production (or better, turn them into logging statements)
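One way to do that in C#, as a hedged sketch: System.Diagnostics.Debug.WriteLine is marked [Conditional("DEBUG")], so these calls are compiled out of release builds instead of having to be deleted by hand.
using System;
using System.Diagnostics;

class PrintDebugging
{
    static int ParseAge(string input)
    {
        Debug.WriteLine($"ParseAge called with '{input}'"); // vanishes in Release
        return int.Parse(input.Trim());
    }

    static void Main() => Console.WriteLine(ParseAge(" 42 ")); // prints 42
}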
Your job is to put yourself out of work.
When you're writing software for your employer, any software that you create is to be written in such a way that it can be picked up by any developer and understood with a minimal amount of effort. It is well designed, clearly and consistently written, formatted cleanly, documented where it needs to be, builds daily as expected, checked into the repository, and appropriately versioned.
If you get hit by a bus, laid off, fired, or walk off the job, your employer should be able to replace you on a moment's notice, and the next guy could step into your role, pick up your code and be up and running within a week tops. If he or she can't do that, then you've failed miserably.
Interestingly, I've found that having that goal has made me more valuable to my employers. The more I strive to be disposable, the more valuable I become to them.
1) The Business Apps farce:
I think that the whole "Enterprise" frameworks thing is smoke and mirrors. J2EE, .NET, the majority of the Apache frameworks and most abstractions to manage such things create far more complexity than they solve.
Take any regular Java or .NET ORM, or any supposedly modern MVC framework for either, which does "magic" to solve tedious, simple tasks. You end up writing huge amounts of ugly XML boilerplate that is difficult to validate and write quickly. You have massive APIs, half of which exist just to integrate the work of the other APIs, interfaces that are impossible to recycle, and abstract classes that are needed only to overcome the inflexibility of Java and C#. We simply don't need most of that.
How about all the different application servers with their own darned descriptor syntax, the overly complex database and groupware products?
The point of this is not that complexity==bad; it's that unnecessary complexity==bad. I've worked in massive enterprise installations where some of it was necessary, but even there, in most cases a few home-grown scripts and a simple web frontend are all that's needed to solve most use cases.
I'd try to replace all of these enterprisey apps with simple web frameworks, open source DBs, and trivial programming constructs.
2) The n-years-of-experience-required:
Unless you need a consultant or a technician to handle a specific issue related to an application, API or framework, then you don't really need someone with 5 years of experience in that application. What you need is a developer/admin who can read documentation, who has domain knowledge in whatever it is you're doing, and who can learn quickly. If you need to develop in some kind of language, a decent developer will pick it up in less than 2 months. If you need an administrator for X web server, in two days he should have read the man pages and newsgroups and be up to speed. Anything less and that person is not worth what he is paid.
3) The common "computer science" degree curriculum:
The majority of computer science and software engineering degrees are bull. If your first programming language is Java or C#, then you're doing something wrong. If you don't get several courses full of algebra and math, it's wrong. If you don't delve into functional programming, it's incomplete. If you can't apply loop invariants to a trivial for loop, you're not worth your salt as a supposed computer scientist. If you come out with experience in x and y languages and object orientation, it's full of s***. A real computer scientist sees a language in terms of the concepts and syntaxes it uses, and sees programming methodologies as one among many, and has such a good understanding of the underlying philosophies of both that picking new languages, design methods, or specification languages should be trivial.
Getters and Setters are Highly Overused
I've seen millions of people claiming that public fields are evil, so they make them private and provide getters and setters for all of them. I believe this is almost identical to making the fields public; maybe it's a bit different if you're using threads (but generally that is not the case) or if your accessors have business/presentation logic (something 'strange', at least).
I'm not in favor of public fields, but I am against making a getter/setter (or Property) for every one of them, and then claiming that doing so is encapsulation or information hiding... ha!
UPDATE:
This answer has raised some controversy in its comments, so I'll try to clarify it a bit (I'll leave the original untouched since that is what many people upvoted).
First of all: anyone who uses public fields deserves jail time
Now, creating private fields and then using the IDE to automatically generate getters and setters for every one of them is nearly as bad as using public fields.
Many people think:
private fields + public accessors == encapsulation
I say (automatic or not) generation of getter/setter pair for your fields effectively goes against the so called encapsulation you are trying to achieve.
Lastly, let me quote Uncle Bob in this topic (taken from chapter 6 of "Clean Code"):
There is a reason that we keep our variables private. We don't want anyone else to depend on them. We want the freedom to change their type or implementation on a whim or an impulse. Why, then, do so many programmers automatically add getters and setters to their objects, exposing their private fields as if they were public?
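To make the contrast concrete, here is a small C# sketch (types invented): the first class is a public field with extra keystrokes; the second actually hides its representation behind behaviour.
using System;

class ExposedAccount
{
    public decimal Balance { get; set; } // getter/setter pair: no real hiding
}

class Account
{
    private decimal balance; // the representation stays private

    public void Deposit(decimal amount)
    {
        if (amount <= 0m) throw new ArgumentOutOfRangeException(nameof(amount));
        balance += amount;
    }

    public bool CanCover(decimal amount) => balance >= amount;
}

class AccountDemo
{
    static void Main()
    {
        var account = new Account();
        account.Deposit(50m);
        Console.WriteLine(account.CanCover(20m)); // True
    }
}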
UML diagrams are highly overrated
Of course there are useful diagrams e.g. class diagram for the Composite Pattern, but many UML diagrams have absolutely no value.
Opinion: SQL is code. Treat it as such
That is, just like your C#, Java, or other favorite object/procedure language, develop a formatting style that is readable and maintainable.
I hate when I see sloppy free-formatted SQL code. If you scream when you see both styles of curly braces on a page, why oh why don't you scream when you see free-formatted SQL, or SQL that obscures or obfuscates the JOIN condition?
Readability is the most important aspect of your code.
Even more so than correctness. If it's readable, it's easy to fix. It's also easy to optimize, easy to change, easy to understand. And hopefully other developers can learn something from it too.
If you're a developer, you should be able to write code
I did quite a bit of interviewing last year, and for my part of the interview I was supposed to test the way people thought, and how they implemented simple-to-moderate algorithms on a white board. I'd initially started out with questions like:
Given that Pi can be estimated using the function 4 * (1 - 1/3 + 1/5 - 1/7 + ...) with more terms giving greater accuracy, write a function that calculates Pi to an accuracy of 5 decimal places.
It's a problem that should make you think, but shouldn't be out of reach to a seasoned developer (it can be answered in about 10 lines of C#). However, many of our (supposedly pre-screened by the agency) candidates couldn't even begin to answer it, or even explain how they might go about answering it. So after a while I started asking simpler questions like:
Given the area of a circle is given by Pi times the radius squared, write a function to calculate the area of a circle.
Amazingly, more than half the candidates couldn't write this function in any language (I can read most popular languages so I let them use any language of their choice, including pseudo-code). We had "C# developers" who could not write this function in C#.
I was surprised by this. I had always thought that developers should be able to write code. It seems that, nowadays, this is a controversial opinion. Certainly it is amongst interview candidates!
Edit:
There's a lot of discussion in the comments about whether the first question is a good or bad one, and whether you should ask questions as complex as this in an interview. I'm not going to delve into this here (that's a whole new question) apart from to say you're largely missing the point of the post.
Yes, I said people couldn't make any headway with this, but the second question is trivial and many people couldn't make any headway with that one either! Anybody who calls themselves a developer should be able to write the answer to the second one in a few seconds without even thinking. And many can't.
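For reference, here is one way the first question can be answered in roughly ten lines of C# (a sketch; it stops using the alternating-series bound, which says the error is at most the first omitted term):
using System;

class PiEstimate
{
    static double EstimatePi()
    {
        double sum = 0.0, sign = 1.0;
        long denom = 1;
        // Stop when the first omitted term (4/denom) can no longer
        // affect the fifth decimal place.
        while (4.0 / denom > 5e-6)
        {
            sum += sign / denom;
            sign = -sign;
            denom += 2;
        }
        return 4.0 * sum;
    }

    static void Main() => Console.WriteLine(EstimatePi().ToString("F5")); // 3.14159
}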
The use of hungarian notation should be punished with death.
That should be controversial enough ;)
Design patterns are hurting good design more than they're helping it.
IMO software design, especially good software design is far too varied to be meaningfully captured in patterns, especially in the small number of patterns people can actually remember - and they're far too abstract for people to really remember more than a handful. So they're not helping much.
And on the other hand, far too many people become enamoured with the concept and try to apply patterns everywhere - usually, in the resulting code you can't find the actual design between all the (completely meaningless) Singletons and Abstract Factories.
Less code is better than more!
If the users say "that's it?", and your work remains invisible, it's done right. Glory can be found elsewhere.
PHP sucks ;-)
The proof is in the pudding.
Unit Testing won't help you write good code
The only reason to have Unit tests is to make sure that code that already works doesn't break. Writing tests first, or writing code to the tests is ridiculous. If you write to the tests before the code, you won't even know what the edge cases are. You could have code that passes the tests but still fails in unforeseen circumstances.
And furthermore, good developers will keep coupling low, which will make the addition of new code unlikely to cause problems with existing stuff.
In fact, I'll generalize that even further,
Most "Best Practices" in Software Engineering are there to keep bad programmers from doing too much damage.
They're there to hand-hold bad developers and keep them from making dumbass mistakes. Of course, since most developers are bad, this is a good thing, but good developers should get a pass.
Write small methods. It seems that programmers love to write loooong methods where they do multiple different things.
I think that a method should be created wherever you can name one.
It's ok to write garbage code once in a while
Sometimes a quick and dirty piece of garbage code is all that is needed to fulfill a particular task. Patterns, ORMs, SRP, whatever... Throw up a Console or Web App, write some inline sql ( feels good ), and blast out the requirement.
Code == Design
I'm no fan of sophisticated UML diagrams and endless code documentation. In a high level language, your code should be readable and understandable as is. Complex documentation and diagrams aren't really any more user friendly.
Here's an article on the topic of Code as Design.
Software development is just a job
Don't get me wrong, I enjoy software development a lot. I've written a blog for the last few years on the subject. I've spent enough time on here to have >5000 reputation points. And I work in a start-up doing typically 60 hour weeks for much less money than I could get as a contractor because the team is fantastic and the work is interesting.
But in the grand scheme of things, it is just a job.
It ranks in importance below many things such as family, my girlfriend, friends, happiness etc., and below other things I'd rather be doing if I had an unlimited supply of cash such as riding motorbikes, sailing yachts, or snowboarding.
I think sometimes a lot of developers forget that developing is just something that allows us to have the more important things in life (and to have them by doing something we enjoy) rather than being the end goal in itself.
I also think there's nothing wrong with having binaries in source control.. if there is a good reason for it. If I have an assembly I don't have the source for, and might not necessarily be in the same place on each devs machine, then I will usually stick it in a "binaries" directory and reference it in a project using a relative path.
Quite a lot of people seem to think I should be burned at the stake for even mentioning "source control" and "binary" in the same sentence. I even know of places that have strict rules saying you can't add them.
Every developer should be familiar with the basic architecture of modern computers. This also applies to developers who target a virtual machine (maybe even more so, because they have been told time and time again that they don't need to worry themselves with memory management etc.)
Software Architects/Designers are Overrated
As a developer, I hate the idea of Software Architects. They are basically people who no longer code full time, read magazines and articles, and then tell you how to design software. Only people who actually write software full time for a living should be doing that. I don't care if you were the world's best coder 5 years ago before you became an Architect; your opinion is useless to me.
How's that for controversial?
Edit (to clarify): I think most Software Architects make great Business Analysts (talking with customers, writing requirements, tests, etc), I simply think they have no place in designing software, high level or otherwise.
There is no "one size fits all" approach to development
I'm surprised that this is a controversial opinion, because it seems to me like common sense. However, there are many entries on popular blogs promoting the "one size fits all" approach to development so I think I may actually be in the minority.
Things I've seen being touted as the correct approach for any project - before any information is known about it - are things like the use of Test Driven Development (TDD), Domain Driven Design (DDD), Object-Relational Mapping (ORM), Agile (capital A), Object Orientation (OO), etc. etc. encompassing everything from methodologies to architectures to components. All with nice marketable acronyms, of course.
People even seem to go as far as putting badges on their blogs such as "I'm Test Driven" or similar, as if their strict adherence to a single approach, whatever the details of the project, is actually a good thing.
It isn't.
Choosing the correct methodologies and architectures and components, etc., is something that should be done on a per-project basis, and depends not only on the type of project you're working on and its unique requirements, but also the size and ability of the team you're working with.