I'm not a native English speaker, and I'm not very good at English. I'm self-taught. I have never worked with others on a common codebase, I don't have any friends who program, and I don't work with other programmers (at least nobody who cares about these things).
I guess this might explain some of my problems finding good, unambiguous class names. I have tried to find some sort of "programmer's dictionary" containing words often used in code and their meanings. When reading others' code I have to look up words quite often, and since many people use abbreviations this poses an additional challenge.
My very limited vocabulary "forces" me to use bad class names like xxManager, xxProvider, xxWhatever. It's usually less problematic choosing variable and method names.
Other non-native English speakers out here: How have you managed to cope with this? Have you studied English so well it's not a problem? Or have you read so much code that naming comes naturally? Or discussed a lot with English speakers? Have you found any good websites, articles or other publications? As I've never read anything about programming in my own language, I often have even more trouble trying to find the words in my own language...
PS: All the other posts I've found were about mixing one's native tongue and English... And I understand this might be a bit off topic and might be closed.
Edit: Some resources from the answers and other stuff I use:
Jargon / The New Hacker's Dictionary
Common design patterns
Google translate
Dictionary
The Jargon file will help with the more obscure references people will give in the industry.
http://catb.org/jargon/html/go01.html
Other than that, finding good names for your variables/classes/etc. is hard. Oftentimes it's harder than actually solving the problem. Here's a good resource for some common design pattern names people like to use: http://en.wikipedia.org/wiki/Design_pattern_%28computer_science%29
Examples:
AbcFactory
XyzBridge
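For instance, the "Factory" suffix alone tells a reader what a class is for, so you need less vocabulary of your own. A minimal sketch in Java (all the names here are made up for illustration):

interface Connection { void send(String message); }

class ConnectionFactory {
    // The pattern name does the explaining: this class's only job
    // is to construct Connection objects, hiding the concrete type.
    static Connection create() {
        return message -> System.out.println("sending: " + message);
    }
}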
Could be an unorthodox suggestion, but I would recommend studying English more deeply (I am also a non-native speaker).
Expose yourself to as much English as possible! Watch movies, read English fiction, listen to technical podcasts.
Mind you, if you really want to deepen your knowledge of English, you're probably not going to learn a lot watching "Transformers". On the other hand, diving into Ulysses probably is not a good strategy either.
If you're feeling adventurous, you could always get a subscription to the New Yorker magazine. It'll do things to you - yes this is flamebaiting. :P
Other non-native English speakers out here:
How have you managed to cope with this?
Good naming in code matters. Using English is preferred, but if you don't know English very well the result can be counterproductive.
I had a friend who just guessed at what the correct English spelling would be, and the result was horrible, e.g.:
String employiiNeim; // employeeName
int eich; // age
The problem with English is that it is not pronounced as written (French has this minor... ehrm, characteristic too). Other languages like Spanish, German, and Dutch are mostly written and pronounced letter for letter.
This becomes particularly relevant when what you are coding is business rules or business models. In that case it is much better to use your native language:
String nombreEmpleado;
int edad;
Much better, especially when you work with others.
Have you studied English so well it's not a problem?
Yep, there is no other way, and it takes a lot of practice.
You can study English the same way you study programming languages, though. You can have a teacher, attend a class, and study an hour a day. Or (what I did) you can just grab something that interests you and try to understand it. For instance, take a small document describing something you care about, read blogs or content here at Stack Overflow, translate a song you like, etc.
All of these are forms of study. There is no other way; you won't wake up one day and say "...I know kung fu". I mean, say "...I know English".
Or have you read so much code that naming comes naturally?
It also helps, but if you don't understand what the code means, you... well, you won't make any progress.
You'll learn the programming language, and that will help you understand English a bit better, but it won't teach you the language itself. That's because when we program we learn the programming language, not the human language.
Or discussed a lot with English speakers?
Eerhh... nope. If you have that chance go ahead; it will improve your listening and speaking, but not necessarily your writing.
The most effective way to improve your English vocabulary and grammar is by READING (reading in your native language also improves your own language, btw).
So, I would say, read as much as you can. Use your native language while you gain more confidence, and keep studying.
The English will come with time.
If you can't find the "Programmer's Dictionary" you're looking for, start one. Post a new question: "What entries are missing from this Dictionary for English-as-a-Second-Language-Programmers?" and seed it with 10 or 20 words/definitions you've already discovered. Once posters have suggested enough additions, move it to a wiki somewhere and keep accepting contributions. You might end up creating a valuable resource.
Documenting your code with excellent prose like your question above will go a long way!
If you stick to the common design patterns endemic to the language, platform, and architecture you're working with, other engineers should understand your nomenclature fairly easily.
If you are worried about it in terms of naming your own objects, just think of what your native word is for what you want to do, then get an English translation dictionary and use the English version.
How about using your native language?
Of course (for me as an Austrian, for example) some letters may not be allowed - but who cares whether the class name contains Mörder or Moerder (murderer) :)
Or (as I do) use a dictionary like dict.cc or something else.
What I do is think about what the class does: it manages a game session (for example), so it becomes GameSessionManager.
Abbreviations are (at least for me) a problem - but from what I've seen in other code, even native speakers use different abbreviations.
And whether the class is called GameSessionMgr or GameSessionMngr doesn't make a difference.
You are not writing books or some kind of "English poem" where spelling, grammar and so on count.
You write code - and if you follow your own special rules consistently, you and others will (after some time) be able to understand your code and class names.
It will come with time and experience. Above all, attempt to (as Mike A says) document things until the code becomes clearer, and try to be consistent.
This is an issue that I run into as well, even as a native English speaker. As a programmer, I often find that I need a descriptive word for a class, variable, function, etc. I often find myself asking a friend or coworker what verbiage they would use by explaining my idea, carefully excluding any words I myself have considered as possible names, so as not to inhibit their creativity.
It seems to me that the English Language & Usage site proposal over at Area51 is a good place to ask such questions as "What would you call a class (or thing) that does this, this and that, and has properties x, y, and z?"
Related
What is the key difference between natural languages (such as English and French) and programming languages like C++ and Perl?
I am familiar with the ambiguity problem, but couldn't it be solved with an interactive compiler, or by using a subset of the natural language with a strict grammar while still retaining the essence of the language?
Another issue is context. But lawyers have ways of dealing with that issue. (This question is not about reducing programming complexity; it's simply about the concrete reasons and roadblocks to using natural languages for instructing computers.)
Are there any other significant problems besides these two? Or do these two have greater consequences than I suggested above? Are the interactive solution and lawyers' language technically infeasible for programming?
This is an extremely interesting question and in short, yes, there are some very good reasons why we don't use English to write programs.
It's been said that the greatest gift computer science has given us is not the ability to talk to computers, but the formal languages it invented for describing algorithms: now we have even better tools for communicating those ideas to other people, even when no computer is involved. Indeed, the best software engineers see their job primarily as writing software that is readable to other people, so as to make maintenance and the addition of new features as easy as possible. This is not possible in a language as big and as free-form as any natural, spoken language.
Ambiguity
One reason is ambiguity. Have you ever looked at a menu in a restaurant and seen that with your burger you can get "coleslaw and fries or salad"? What does this mean? Can I get both coleslaw and fries, or is the other option a salad alone? Or do I always get coleslaw and have to choose between the fries and a salad? English is full of these things.
I used to teach a class on this and an example I liked to use to explain ambiguity was as follows. I asked the students to write a one paragraph story ending with the sentence "Tom asked Chris if he could help him". About half the time the stories written indicated that the student interpreted the sentence as Tom asking for assistance from Chris. The other half of the time people thought Tom was offering to lend Chris a hand.
If you think about it, there are a lot of people who do write programs in English. They're called product managers, and the compiler they use is software engineers. The problem here is that a software engineer has to inject a lot of his own understanding of the problem to figure out what the description really means. And trust me, there is a lot of back and forth: even on very simple business requirements I must clarify ambiguities.
Context
I would not agree that lawyers have solved the context problem. We have continual, ongoing arguments in courts, in some cases among the most educated people in the country, about the meanings of various laws. Sometimes this involves arguing about the context in which the laws were written long ago. Sometimes it involves applying them to a new, previously non-existent context like the Internet. The fact that we have thousands of lawyers working on disambiguating these issues is proof that it cannot be handled by a simple computer program like a compiler. It's just too hard a problem.
Conciseness
Another issue is simply the ability to be concise. Mathematics long ago invented notations for many different concepts, because text is just easier to read when there's a special syntax that is concise and has a well-defined meaning. A mathematician knows what it means when I say "f(x) = 3x + 1". It means the same thing as "There is a function called f and it has one argument. The value of applying f to a number is the number that is one more than three times the number given." But the former is a lot easier to read once you've learned the syntax. The same is true for programming languages: they are specialized for describing computations.
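The code version is just as concise as the mathematical notation, and just as unambiguous; for example, in Java:

class Conciseness {
    // f(x) = 3x + 1, in one line, with a single fixed meaning.
    static int f(int x) { return 3 * x + 1; }
}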
Implementation
Creators of programming languages deliberately create very small languages. These are, in fact, subsets of English with some extra syntax. Understanding all of English, in all of its free-form ways and with its ever-growing vocabulary, is a job for Natural Language Processing (NLP), and a very hard one. If you want to be able to assign unambiguous meaning to a program and have the program's behavior never change, you need a well-defined syntax and semantics.
The take-away point here is that English is a very big, very flexible language with no formal specification. Programming languages need to have a well-defined syntax and semantics in order for algorithms to have unambiguous and unchanging meaning. Someone could, in fact, write a formal syntax for a subset of English and give it unambiguous meaning. But this would be a huge, huge job.
It's been done
Check out BabelBuster. The idea was to convert C to and from a very small, very rigorous subset of English, such that one could write a program in C and then convert it to English. During the DeCSS DVD decryption arguments, the MPAA was trying to get programs that could decrypt their DVDs declared illegal. BabelBuster fought back with a very interesting idea: create a way to convert English, which is protected under freedom of speech, into working C code, and thus make the point that C code is also just a language, which should be protected as such. Therefore one should be able to publish code that cracks DeCSS. It's an interesting piece of work relevant to your question regardless of which side you're on.
The problem with BabelBuster is that you need to write your program in a very, very limited subset of English. But it is possible to do this.
Conclusion
English, like all natural languages, allows us to describe a computation or algorithm, but the language is verbose, offers many ways to say the same thing, depends on the context of the speaker, and is not formally specified. If your goal is to describe computations, you should take English, choose a minimal workable subset in which you can say everything you need, and formally specify what each word in that subset means. Then create a few special notations to make the common things concise, just as mathematics did. If you do this, you'll end up with a typical programming language, or something very like one.
There are three principal reasons.
First, as Gabe says - people have figured out through trial and error that programming in things that are close to English sentences only forces programmers to type more useless cruft. (And yes, COBOL was explicitly designed to read more "naturally".)
To a programmer,
windows++
is more readable than
You should now increment the number of windows by one.
For example, Tetris is a rather easy game to code. I would be terribly surprised if you managed to write an English explanation detailed enough for a computer (remember, computers are dumb, so you have to spell it all out) in fewer pages than a short novel.
The second reason is that the range of things a computer knows how to do is rather small, so the number of language constructs that are needed for that is also limited. In contrast, natural languages need to be able to express the entirety of human experience, which does require many language constructs to pull off. For example, "According to his wife, John would have caught the fish yesterday if it hadn't rained" is not expressible in C - and does not need to be.
And third is, indeed, ambiguity, as you yourself note. There are a lot of places where a software error is simply not permissible. People produce enough bugs in unambiguous languages; allowing ambiguity would be a disaster waiting to happen. And on the same subject, we are still unable to parse human language sufficiently well: state-of-the-art parsers still have unacceptably high error rates.
It is possible to automatically translate structured English into code, as long as a restricted subset of the English language is used.
As a proof of concept, I have developed a programming language called EngScript, which translates English sentences into Python source code.
Arithmetic operations can be written in plain English:
print{3 to the power of 2}
print{3 raised to the power of 2}
#Both of these statements print "9".
print{3 plus (the sum of 1 and 2)}
#This prints "6".
Variables can be initialized in plain English, too:
let x be (x plus 1)
if (x is not equal to 7) :
    print x
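EngScript's own internals aren't shown above, but the basic trick of compiling a rigid English subset can be sketched with simple pattern rewriting. A toy version in Java (the class and the rules below are my own invention, not EngScript's):

class TinyEnglishToPython {
    // Rewrites a few fixed English phrases into Python expressions.
    // A real translator needs an actual grammar; this is only a toy.
    static String translate(String s) {
        s = s.replaceAll("(\\w+) raised to the power of (\\w+)", "$1 ** $2");
        s = s.replaceAll("(\\w+) to the power of (\\w+)", "$1 ** $2");
        s = s.replaceAll("the sum of (\\w+) and (\\w+)", "($1 + $2)");
        s = s.replaceAll("(\\w+) is not equal to (\\w+)", "$1 != $2");
        return s;
    }

    public static void main(String[] args) {
        System.out.println(translate("3 raised to the power of 2")); // 3 ** 2
        System.out.println(translate("the sum of 1 and 2"));         // (1 + 2)
        System.out.println(translate("x is not equal to 7"));        // x != 7
    }
}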
I'm trying to find a good high-level explanation of how statistical machine translation works. That is, supposing I have a corpus of non-aligned English, French and German texts, how could I use that to translate any sentence from one language to another? It's not that I'm looking to build a Google Translate myself, but I'd like to understand how it works in more detail.
I've searched Google but come across nothing good: it either quickly requires advanced mathematics knowledge to understand or is way too generalized. Wikipedia's article on SMT seems to be both, so it doesn't really help much. I'm skeptical that the area is so complex that it's simply impossible to understand without all the mathematics.
Can anyone give, or point to, a general step-by-step explanation of how such a system works, targeted towards programmers (so code examples are fine) but without needing a mathematics degree to understand? A book like that would be great too.
Edit: A perfect example of what I'm looking for would be an SMT equivalent to Peter Norvig's great article on spelling correction. That gives a good idea of what it's involved in writing a spell checker, without going into detailed maths on Levenshtein/soundex/smoothing algorithms etc...
Here is a nice video lecture (in 2 parts):
http://videolectures.net/aerfaiss08_koehn_pbfs/
For in-depth details, I highly advise this book:
http://www.amazon.com/Statistical-Machine-Translation-Philipp-Koehn/dp/0521874157
Both are from the guy who created the most widely used MT system in research. It covers all the fundamental stuff, is very well explained, and is accurate. This is probably one of the de facto standard books that any researcher beginning in this field should read.
The Atlantic Online had a very straightforward nontechnical description of statistical machine translation back in December 1998:
Lost in Translation by Stephen Budiansky
I've read nontechnical stuff on statistical MT before, but I always wondered "yeah, but how does the statistical stuff know which words map to which, when word orders vary and supposedly no dictionary and no grammar are used?" Well, this article actually does answer that, and it's simple and straightforward. I was quite surprised.
A Peter Norvig talk from Google Developer Day 2007, Theorizing from Data: Avoiding the Capital Mistake, contains some accessible high-level explanation of the principles of statistical machine translation (starting from about 21:20).
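To make the "which words map to which" intuition concrete, here's a toy sketch of my own (assumptions: the corpus is sentence-aligned, which is what real SMT systems are trained on, and raw co-occurrence counting stands in for the EM iterations of something like IBM Model 1):

import java.util.*;

class ToyWordAlignment {
    public static void main(String[] args) {
        // Across many aligned sentence pairs, a word and its translation
        // co-occur far more consistently than chance would predict.
        String[][] pairs = {
            {"the house", "la maison"},
            {"the car", "la voiture"},
            {"the blue house", "la maison bleue"},
        };
        Map<String, Map<String, Integer>> cooc = new HashMap<>();
        for (String[] p : pairs)
            for (String e : p[0].split(" "))
                for (String f : p[1].split(" "))
                    cooc.computeIfAbsent(e, k -> new HashMap<>())
                        .merge(f, 1, Integer::sum);
        // "house" ties between "la" and "maison" (2 each), but "la" also
        // co-occurs with every other English word; iterating over the
        // counts (as EM does) pushes "la" toward "the" and leaves
        // "maison" for "house", with no dictionary and no grammar.
        System.out.println(cooc.get("house"));
    }
}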
We all hear that math at least helps a little with programming. My question, though: do English or other natural-language skills help with programming? I know they have to help with technical documentation, but what about actual programming? Are certain constructs in a programming language also there in natural languages? Does knowing how to write a 20-page research paper help with writing a 20k-LOC programming project?
Dijkstra went so far as to say: "Besides a mathematical inclination, an exceptionally good mastery of one's native tongue is the most vital asset of a competent programmer."
Edit: yes, I'm reasonably certain he was talking about the programming part of the job. Here's a fuller quote:
The problems of business administration in general and database management in particular are much too difficult for people who think in IBMerese, compounded by sloppy English.
About the use of language: it is impossible to sharpen a pencil with a blunt axe. It is equally vain to try to do it with ten blunt axes instead.
Besides a mathematical inclination, an exceptionally good mastery of one's native tongue is the most vital asset of a competent programmer.
From EWD498.
I certainly can't speak for Dijkstra, but I think it's impossible to cleanly separate the part where you're doing actual programming from the part where you're interacting with people. Just for example, even when you're working alone, it's crucial that you're able to understand (clearly and unambiguously) notes you wrote down about what to do, the nature of a bug, etc. A good command of English is necessary even when nobody else is involved at all (and, of course, that's unusual except on trivial tasks).
I don't know about causality, but the skill set required to write well overlaps quite a bit with those required for programming: knowing how to plan, being able to keep a myriad of details consistent, being able to make things clear for a future reader, knowing how to organize your thoughts and the resultant product. That isn't to say that a successful author would make a good programmer, but a programmer with good language skills and the same logic/math/deductive skills is probably a better programmer than one with poor language skills -- at least the code has a greater chance of being understandable.
Yes. Strong natural language skills help you to organize your thoughts in a coherent way that can easily be understood by others. That can help improve your code in everything from naming variables, methods, and classes to expressing the contexts of objects in your model. Practices such as pair programming require you to be able to communicate well with your partner in order to write good code. Techniques such as Domain-Driven Design emphasize using the domain language of the business in your code; natural language skills facilitate that. And there is a strong drive in the development industry toward more natural-language-like tools, e.g. many of the newer testing tools like rspec and gherkin are moving toward more natural-language-like syntax. One of the things many people like about dynamic languages like Ruby and Python is that the code tends to read more like a natural language.
Let me state what should be obvious: every healthy person above 12 knows at least one natural language. Moreover, every healthy person above 12 is able to generate and parse natural language, a complex and rich medium, and to express and understand an extremely large set of ideas. In general, people are not limited in their ability to discuss issues by their language, but by the kinds of things they have experienced and learned.
Having said that, there are several language-related skills that you might have thought about.
Writing style. You mentioned this specifically. Written language is different from spoken language, and far less intuitive; this is one reason people have to be coached in writing throughout their years in the education system.
Coding doesn't really involve much writing. I mean, there are comments, but they can be rather laconic. Of course, the work of a programmer usually involves at least some writing of documents, and writing ability does make a difference there.
Analytical skills. Analytical skills are a complicated (not to say fuzzy) concept. They aren't really about language, but insofar as they are taught and tested at all, it's in the context of writing essays.
Analytical skills are obviously very important in programming. I am not sure that these are exactly the same skills required to write a good essay about Euthanasia or whatever, but as was previously suggested, they may be related.
Foreign language. For people whose native language isn't English, a certain command of English may be needed. Not in the coding itself (knowing what "while" means in English isn't really critical to understanding what it does in Java), but because much training and support material is available mainly in English (did anyone mention Stack Overflow?). The English requirement may differ depending on the country you are in and the company you work for, though.
Communication Skills. Ahhm. I was never sure what this means exactly. Maybe it's a cultural thing. I do suspect it's less about knowing a language and more about knowing people.
So to sum up: Dijkstra is a venerable computer scientist, but I am not sure he knew that much about language.
Programming isn't just about writing code. On any programming project of any size there will be the need for:
initial project proposal documents
design and architectural documents
programmer's manual
user's manual
training materials
communication with third party suppliers
etc.
On every big project I've worked on, I'd guess I spent at least 50% of my time on English-language documents. So yes, an ability to explain and express yourself well is extremely important. Does it lead to writing better code? Once again, I would say yes: the need to provide clear documentation spills over into the need to write better code, interfaces et al.
After some stupid musings about Klingon languages that came from this post, I began a silly hobby project: a Klingon programming language that compiles to Lua byte-code. During the initial language-design phase I looked up information about Klingon programmers, and found this Klingon programming rule:
A TRUE Klingon Warrior does not comment his code!
So I decided my language would not support commenting, as any good Klingon would never use them.
Now many of the Klingon ways don't seem reasonable to us Human programmers. However, while dabbling with the design and implementation of my hobby language, I came to realize that this Klingon rule about commenting is actually very reasonable, if not great.
Removing the ability to comment from a programming language meant I HAVE to write literate code, no exceptions.
So it got me wondering: are there any languages out there that don't support comments?
Are there any really good arguments for not removing commenting from a language?
Edit: Any good examples of comments required?
P.S. My hobby language above is partially silly anyway, so don't focus too much on my implementation; focus on the general concept of comments being required.
Do not comment WHAT you are doing, but WHY you are doing it.
The WHAT is taken care of by clean, readable and simple code with a proper choice of variable names to support it. Comments show a higher-level structure that the code itself can't (or can only with difficulty) express.
I am not sure I agree with the "HAVE" in the statement "Removing the ability to comment from a programming language meant I HAVE to write literate code, no exceptions", since it is not as if all code would then be documented. My guess is that most people would simply write unreadable code.
More to the point, I personally do not believe in the reality of the self-explanatory program or API in the practical world.
My experience from manually analyzing the documentation of entire APIs for my dissertation suggests that all too often you have to convey more information than you can fit in the signature alone. If you eliminate interface comments from your language, what are the alternatives? No documentation is not an option, and external documentation is less likely to be read.
As for internal documentation, I can see your point in wanting to reduce documentation to push people to write better code. However, comments serve many collaboration and coordination purposes and are meant to raise awareness of things. By banishing these details to external locations, you reduce the chances that they come to a future reader's attention, unless your tooling is great.
Ugh, not being able to quickly comment out a line (or lines) during testing sounds annoying to me, especially when scripting.
In general, comments are a wart that indicates poor design, especially long rambling comments where it's clear the developer didn't have a clue what the heck they were doing and tried to make up for it by writing a comment.
Places where comments are useful:
Leaving a ticket number next to a fix so future programmers can understand business requirements
Explaining a particularly tricky hack
Commentary on business logic for a piece of code
Terse descriptions in API docs so a third-party can use your API
In all circumstances programmers should endeavor to write code that is descriptive, and NOT write comments that describe poorly written code. That being said, I think there are plenty of valid reasons why languages should and must support comments.
Your code has two distinct audiences:
The compiler
Human beings like us
If you choose to remove comments altogether, the assumption you are taking is that you will be catering only to the compiler, and to nothing else.
Of course you, being Klingon, may not need comments because you are not human. Perhaps you could clearly demonstrate to us your ability by speaking in IL instead?
You don't need a single assertion in your code because, in release mode, they're all gone. But when C++ didn't have assertions built-in, someone wrote the assert macro to replace it.
Of course you don't need comments, either, for more or less the same reason. But if you design a language without comments, people will start doing things like:
HelperFunctionDoesNothing("This is a comment! Blah Blah Blah...");
I'm curious. How do you stop someone from declaring a static string containing a comment and then ignoring the variable for the rest of the func/method/procedure/battle/whatever?
var useless_comment = "Can we destroy our enemies?"
if (phasers on full) return Qapla'
Languages need comments. At least 95% of comments can be replaced by clearer code, but there are still assumptions you need to document, and you absolutely need to document it when you're working around some external problem.
I never write a comment without first considering if I can change the code to eliminate the need for it but sometimes you can't.
While all source code is copyrighted by default, it is often nice to:
remind the person reading the source code that it is subject to copyright
tell people what the licensing terms are for that source code file
tell them whether or not they are looking at a protected trade secret
Unfortunately, without comments, it is difficult to do this.
Am I the only one who comments out a couple of lines of code for various purposes?
It's going to be harder than you think to make a language where comments are impossible.
if (false) {
print("This is a comment. Chew on that, Klingons!")
}
While it's true that humans need to be able to comment code, it is not absolutely necessary that the language directly support commenting: for most languages, it would be trivial to write a script that deletes one-line comments (for example, all lines beginning with '#' or some other character) and then runs the compiler.
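A minimal sketch of such a preprocessor in Java (assumptions: comments are whole lines whose first non-blank character is '#', and invoking the real compiler is left out):

import java.io.IOException;
import java.nio.file.*;
import java.util.stream.Collectors;

class StripComments {
    public static void main(String[] args) throws IOException {
        // Drop whole-line comments and write the result next to the input;
        // a build script would then hand the stripped file to the compiler.
        String stripped = Files.lines(Paths.get(args[0]))
            .filter(line -> !line.trim().startsWith("#"))
            .collect(Collectors.joining("\n"));
        Files.writeString(Paths.get(args[0] + ".stripped"), stripped);
    }
}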
Actually, though, I am surprised and disappointed to learn that even my favorite esoteric programming languages support comments: Brainf**k and Whitespace. These languages are meant to be hard to read, so it seems like they shouldn't support commenting. (As opposed to my other favorite esoteric language: LOLCode, which is meant to be self-documenting, in lolcats-speech)
I would dissent from the other answerers on this point: I say, be true to your vision of a Klingon programming language, and do not support comments!
A point against comments is that they tend to often fall out of date with the code. Any time you add a redundancy, you're risking this sort of inconsistency.
There's actually some interesting research I've seen where a group used NLP to analyze locking comments in a large system, compared them to the results of static analysis, and was able to fix a few bugs that way.
Isn't literate programming as much comment as it is code? Certainly, much of what I've seen of literate programming has as much explanation as code, if not more.
You might think that developers writing in your language will make an extra effort to write clear code, but the onus will actually be on you to design a language so expressive that it doesn't need to be commented. Hell, not even English is like that (we still parenthesize!). If your language isn't so designed, it may very well be as usable as Brainfuck, and enjoy the popularity and respect of Brainfuck.
Should I add links, or are links considered comment-like?
Besides, people will find ways to add comments if they need to, by hijacking strings and misusing variable names (ones that do nothing other than stand in for comments). Have you read Gödel, Escher, Bach?
It would be a bad idea to remove the commenting facility altogether. Surely developers must learn to write code with minimal comments, i.e. to write self-documenting code, but there are a lot of cases where one has to explain why something is being done the way it is. Consider the following cases:
a new developer might start maintaining the code after the original dev has left or moved off the project
a change in specification or market requirements leads to something counterintuitive
a copyright notice, especially if open source (some open-source libs require you to include one)
It is also my experience that new programmers tend to comment more, and as they develop expertise their code tends to become self-documenting and concise. In general comments should be about WHY and not HOW or WHAT.
NO -- there is not a single programming language out there that requires comments.
The language is for the computer. The comments are for the humans. You can write a program with 0% comments. It'll execute, rightly or wrongly. You can't write a program with 100% comments. It'll either not compile -- no main(), etc. -- or, for scripting languages, do exactly nothing.
And, besides, real programmers don't comment their code. Just like Klingons.
While I agree with Uri's responses, I too have made a language with no comments. (ichbins.) The language was to be as simple as possible while still being able to express its own compiler cleanly; since you can do that without comments, they got jettisoned.
I'm working off and on on a revision that does support commentary, but a bit differently: literate-programming style with code nested in text instead of comments embedded in code. It might also get examples/test-cases later as a first-class language feature.
Good luck with the Klingon hacking. :-)
I can't tell you how thankful I am for Javadoc - which is really simple to set up within comments. So that's at least one sense in which comments are useful.
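For anyone who hasn't used it: the comment below is all the javadoc tool needs to generate browsable HTML documentation (the method itself is invented for the example):

class Armory {
    /**
     * Returns the number of photon torpedoes still available.
     *
     * @param loaded torpedoes currently loaded
     * @param fired  torpedoes already fired
     * @return the remaining count, never negative
     */
    static int remainingTorpedoes(int loaded, int fired) {
        return Math.max(0, loaded - fired);
    }
}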
No, of course a language doesn't have to have commenting. But a (useful) program does have to have comments... I don't agree with your idea that literate code lacks comments. Some very good code is easily comprehensible with comments, but only with difficulty without.
I think comments are required in many situations.
For instance, think of algorithmic code. Suppose there is a function written in C that solves the Traveling Salesperson Problem; there is a wide range of techniques that can be used to deal with this problem, and such code is usually cryptic by nature.
Without explicitly describing the parameters and the algorithm used, via comments, it is almost impossible to reuse this piece of code.
Can we live without comments in code? Sure, but that won't make life easier.
Comments are useful because they reassure the person reading your code - probably the "future you" - that you've thought about her welfare.
I think the question becomes: how self-contained would the language without comments be? If, for example, it compiles down to DLLs that get used within other code, then how does one know anything beyond the function signature in terms of what it requires, changes, and returns? I wouldn't want function names dozens of characters long trying to express what could very easily be done with comments above the function, which can double as documentation in something like the Object Browser in Visual Studio.
Of course!!
The main reason is novice developers. Not everyone knows how to write literate code. Actually, there are millions out there who don't understand a NullPointerException when they see one.
We all start at some point.
But if you're targeting "expert" developers only, why bother with the language in the first place? You should be using butterflies!!! That's what real developers use!
Comments are a must. Make them harder to type if you wish (like requiring a #//##/ sequence to open a comment or something like that), but don't leave them out.
:)
I agree with you that nicely written code does not need any comments, since "code is the only documentation available to the programmer". However, this is a very ideal condition; not everyone writes good code all the time.
So comments are required to make poorly written code workable in the future.
I once wrote a VB app (a silly board game inspired by Monopoly) without any comments. But I did that just to piss off my teacher, who had told us comments were for "whatever we found relevant, so we could remember it later".
Perfect code needs zero comments. It should be simple, and understandable by complete novices.
Any real code needs comments; I try to explain the reason for, and the workings of, every function I write in 1 or 2 lines.
Code that explains itself only exists in a perfect world; there is always some weird hack, or a reason to do something quick-n-dirty instead of the proper way.
The best thing to remember is to comment WHY code does what it does; good code explains WHAT it does 99% of the time.
Write something simple, like a piece of code that can solve a Sudoku puzzle (3 reasonably simple while loops), and try reading it 3 months later. You will immediately find something that isn't exactly clear.
Code is written once, but read many times over the course of its lifetime; thus it pays to optimize for readability. Clear and consistent naming of everything from constants to classes is necessary, but may or may not be sufficient to achieve this objective. If not, fill in the gaps with comments, and maintain them as you would the code.
This is definitely subjective, but I'd like to try to avoid it becoming argumentative. I think it could be an interesting question if people treat it appropriately.
The idea for this question came from the comment thread on my answer to the "What are five things you hate about your favorite language?" question. I contended that classes in C# should be sealed by default; I won't put my reasoning in the question, but I might write a fuller explanation as an answer to this question. I was surprised at the heat of the discussion in the comments (25 comments currently).
So, what contentious opinions do you hold? I'd rather avoid the kind of thing which ends up being pretty religious with relatively little basis (e.g. brace placing) but examples might include things like "unit testing isn't actually terribly helpful" or "public fields are okay really". The important thing (to me, anyway) is that you've got reasons behind your opinions.
Please present your opinion and reasoning - I would encourage people to vote for opinions which are well-argued and interesting, whether or not you happen to agree with them.
Programmers who don't code in their spare time for fun will never become as good as those that do.
I think even the smartest and most talented people will never become truly good programmers unless they treat it as more than a job. Meaning that they do little projects on the side, or just mess with lots of different languages and ideas in their spare time.
(Note: I'm not saying good programmers do nothing else than programming, but they do more than program from 9 to 5)
The only "best practice" you should be using all the time is "Use Your Brain".
Too many people jump on too many bandwagons and try to force methods, patterns, frameworks, etc. onto things that don't warrant them. Just because something is new, or because someone respected has an opinion, doesn't mean it fits all :)
EDIT:
Just to clarify - I don't think people should ignore best practices, valued opinions, etc. Just that people shouldn't blindly jump on something without thinking about WHY this "thing" is so great, whether it is applicable to what they're doing, and WHAT benefits/drawbacks it brings.
"Googling it" is okay!
Yes, I know it offends some people out there that their years of intense memorization and/or glorious stacks of programming books are starting to fall by the wayside to a resource that anyone can access within seconds, but you shouldn't hold that against people who use it.
Too often I hear the googling of answers to problems criticized, and it really makes no sense. First of all, it must be conceded that everyone needs materials to reference. You don't know everything and you will need to look things up. Conceding that, does it really matter where you got the information? Does it matter whether you looked it up in a book, looked it up on Google, or heard it from a talking frog that you hallucinated? No. A right answer is a right answer.
What is important is that you understand the material, use it as the means to an end of a successful programming solution, and the client/your employer is happy with the results.
(although if you are getting answers from hallucinatory talking frogs, you should probably get some help all the same)
Most comments in code are in fact a pernicious form of code duplication.
We spend most of our time maintaining code written by others (or ourselves), and poor, incorrect, outdated, misleading comments must be near the top of the list of most annoying artifacts in code.
I think eventually many people just blank them out, especially those flowerbox monstrosities.
Much better to concentrate on making the code readable, refactoring as necessary, and minimising idioms and quirkiness.
On the other hand, many courses teach that comments are very nearly more important than the code itself, leading to the "this next line adds one to invoiceTotal" style of commenting.
XML is highly overrated
I think too many jump onto the XML bandwagon before using their brains...
XML for web stuff is great, as it's designed for it. Otherwise I think some problem definition and design thought should precede any decision to use it.
My 5 cents
Not all programmers are created equal
Quite often managers think that DeveloperA == DeveloperB simply because they have the same level of experience and so on. In actual fact, the performance of one developer can be 10x or even 100x that of another.
It's politically risky to talk about it, but sometimes I feel like pointing out that, even though several team members may appear to be of equal skill, it's not always the case. I have even seen cases where lead developers were 'beyond hope' and junior devs did all the actual work - I made sure they got the credit, though. :)
I fail to understand why people think that Java is absolutely the best "first" programming language to be taught in universities.
For one, I believe that the first programming language should be one that highlights the need to learn control flow and variables, not objects and syntax.
For another, I believe that people who have not had experience in debugging memory leaks in C / C++ cannot fully appreciate what Java brings to the table.
Also, the natural progression should be from "how can I do this" to "how can I find the library which does that", and not the other way round.
If you only know one language, no matter how well you know it, you're not a great programmer.
There seems to be an attitude that says once you're really good at C# or Java or whatever other language you started out learning then that's all you need. I don't believe it- every language I have ever learned has taught me something new about programming that I have been able to bring back into my work with all the others. I think that anyone who restricts themselves to one language will never be as good as they could be.
It also indicates to me a certain lack of inquisitiveness and willingness to experiment that doesn't necessarily tally with the qualities I would expect to find in a really good programmer.
Performance does matter.
Print statements are a valid way to debug code
I believe it is perfectly fine to debug your code by littering it with System.out.println (or whatever print statement works for your language). Often this can be quicker than using a debugger, and you can compare printed outputs against other runs of the app.
Just make sure to remove the print statements when you go to production (or better, turn them into logging statements)
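A sketch of both styles side by side (Java, with java.util.logging standing in for whatever logging framework you prefer; the names are invented):

import java.util.logging.Logger;

class CheckoutDebugging {
    private static final Logger log = Logger.getLogger("checkout");

    static double applyDiscount(double total, double rate) {
        // Quick-and-dirty: compare this output across runs, delete later.
        System.out.println("applyDiscount: total=" + total + " rate=" + rate);
        double result = total * (1 - rate);
        // Production-friendly: silenced by log level instead of deleted.
        log.fine(() -> "discounted result=" + result);
        return result;
    }
}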
Your job is to put yourself out of work.
When you're writing software for your employer, any software that you create is to be written in such a way that it can be picked up by any developer and understood with a minimal amount of effort. It is well designed, clearly and consistently written, formatted cleanly, documented where it needs to be, builds daily as expected, checked into the repository, and appropriately versioned.
If you get hit by a bus, laid off, fired, or walk off the job, your employer should be able to replace you on a moment's notice, and the next guy could step into your role, pick up your code and be up and running within a week tops. If he or she can't do that, then you've failed miserably.
Interestingly, I've found that having that goal has made me more valuable to my employers. The more I strive to be disposable, the more valuable I become to them.
1) The Business Apps farce:
I think that the whole "Enterprise" frameworks thing is smoke and mirrors. J2EE, .NET, the majority of the Apache frameworks and most abstractions to manage such things create far more complexity than they solve.
Take any regular Java or .NET ORM, or any supposedly modern MVC framework for either that does "magic" to solve tedious, simple tasks. You end up writing huge amounts of ugly XML boilerplate that is difficult to validate and slow to write. You have massive APIs, half of which exist just to integrate the work of the other APIs, interfaces that are impossible to reuse, and abstract classes that are needed only to overcome the inflexibility of Java and C#. We simply don't need most of that.
How about all the different application servers with their own darned descriptor syntaxes, and the overly complex database and groupware products?
The point of this is not that complexity == bad; it's that unnecessary complexity == bad. I've worked in massive enterprise installations where some of it was necessary, but even there, in most cases a few home-grown scripts and a simple web frontend are all that's needed to solve most use cases.
I'd try to replace all of these enterprisey apps with simple web frameworks, open source DBs, and trivial programming constructs.
2) The n-years-of-experience-required:
Unless you need a consultant or a technician to handle a specific issue related to an application, API or framework, you don't really need someone with 5 years of experience in that application. What you need is a developer/admin who can read documentation, who has domain knowledge in whatever it is you're doing, and who can learn quickly. If you need development in some language, a decent developer will pick it up in less than 2 months. If you need an administrator for X web server, in two days he should have read the man pages and newsgroups and be up to speed. Anything less and that person is not worth what he is paid.
3) The common "computer science" degree curriculum:
The majority of computer science and software engineering degrees are bull. If your first programming language is Java or C#, then you're doing something wrong. If you don't get several courses full of algebra and math, it's wrong. If you don't delve into functional programming, it's incomplete. If you can't apply loop invariants to a trivial for loop, you're not worth your salt as a supposed computer scientist. If you come out with experience in x and y languages and object orientation, it's full of s***. A real computer scientist sees a language in terms of the concepts and syntaxes it uses, sees programming methodologies as one among many, and has such a good understanding of the underlying philosophies of both that picking up new languages, design methods, or specification languages should be trivial.
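For anyone who hasn't met the term: a loop invariant for a trivial for loop looks like this (my example, in Java):

class Invariants {
    // Invariant: at the top of each iteration, sum == a[0] + ... + a[i-1].
    // It holds vacuously when i == 0, each iteration preserves it, and at
    // exit (i == a.length) it says sum is the total of the whole array,
    // which is exactly the correctness argument for the loop.
    static int total(int[] a) {
        int sum = 0;
        for (int i = 0; i < a.length; i++) {
            sum += a[i];
        }
        return sum;
    }
}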
Getters and Setters are Highly Overused
I've seen millions of people claiming that public fields are evil, so they make them private and provide getters and setters for all of them. I believe this is almost identical to making the fields public; it's perhaps a bit different if you're using threads (but that's generally not the case) or if your accessors have business/presentation logic (which is 'strange' at the least).
I'm not in favor of public fields, but I am against making a getter/setter (or Property) for every one of them and then claiming that doing so is encapsulation or information hiding... ha!
UPDATE:
This answer has raised some controversy in its comments, so I'll try to clarify it a bit (I'll leave the original untouched since that is what many people upvoted).
First of all: anyone who uses public fields deserves jail time
Now, creating private fields and then using the IDE to automatically generate getters and setters for every one of them is nearly as bad as using public fields.
Many people think:
private fields + public accessors == encapsulation
I say that (automatic or not) generation of a getter/setter pair for every field effectively defeats the so-called encapsulation you are trying to achieve.
Lastly, let me quote Uncle Bob on this topic (taken from chapter 6 of "Clean Code"):
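To illustrate with an example of my own (not from the original post):

// Effectively a public field: the representation still leaks out,
// and every caller can still mutate it, one method call away.
class Point {
    private double x;
    public double getX() { return x; }
    public void setX(double x) { this.x = x; }
}

// Actual encapsulation: the interface talks about behavior, and the
// representation (it could switch to storing a diameter tomorrow)
// stays free to change.
class Circle {
    private double radius;
    Circle(double radius) { this.radius = radius; }
    double area() { return Math.PI * radius * radius; }
    Circle scaledBy(double factor) { return new Circle(radius * factor); }
}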
There is a reason that we keep our variables private. We don't want anyone else to depend on them. We want the freedom to change their type or implementation on a whim or an impulse. Why, then, do so many programmers automatically add getters and setters to their objects, exposing their private fields as if they were public?
UML diagrams are highly overrated
Of course there are useful diagrams, e.g. a class diagram for the Composite pattern, but many UML diagrams have absolutely no value.
Opinion: SQL is code. Treat it as such
That is, just like your C#, Java, or other favorite object/procedural language, develop a formatting style that is readable and maintainable.
I hate it when I see sloppy, free-formatted SQL code. If you scream when you see both styles of curly braces on one page, why don't you scream when you see free-formatted SQL, or SQL that obscures or obfuscates the JOIN condition?
Readability is the most important aspect of your code.
Even more so than correctness. If it's readable, it's easy to fix. It's also easy to optimize, easy to change, easy to understand. And hopefully other developers can learn something from it too.
If you're a developer, you should be able to write code
I did quite a bit of interviewing last year, and for my part of the interview I was supposed to test the way people thought, and how they implemented simple-to-moderate algorithms on a whiteboard. I initially started out with questions like:
Given that Pi can be estimated using the function 4 * (1 - 1/3 + 1/5 - 1/7 + ...) with more terms giving greater accuracy, write a function that calculates Pi to an accuracy of 5 decimal places.
It's a problem that should make you think, but shouldn't be out of reach to a seasoned developer (it can be answered in about 10 lines of C#). However, many of our (supposedly pre-screened by the agency) candidates couldn't even begin to answer it, or even explain how they might go about answering it. So after a while I started asking simpler questions like:
Given the area of a circle is given by Pi times the radius squared, write a function to calculate the area of a circle.
Amazingly, more than half the candidates couldn't write this function in any language (I can read most popular languages so I let them use any language of their choice, including pseudo-code). We had "C# developers" who could not write this function in C#.
I was surprised by this. I had always thought that developers should be able to write code. It seems that, nowadays, this is a controversial opinion. Certainly it is amongst interview candidates!
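For reference, roughly what was expected, sketched here in Java rather than the C# mentioned (the stopping rule, iterating until the next term can no longer disturb the fifth decimal place, is one reasonable choice, not the only one):

class InterviewAnswers {
    // Question 1: Pi via 4 * (1 - 1/3 + 1/5 - 1/7 + ...).
    static double estimatePi() {
        double sum = 0, sign = 1;
        for (long k = 0; ; k++, sign = -sign) {
            double term = sign / (2 * k + 1);
            // For an alternating series the truncation error is bounded
            // by the first omitted term, so 1e-7 guarantees 5 decimals.
            if (Math.abs(term) < 1e-7) return 4 * sum;
            sum += term;
        }
    }

    // Question 2: area = Pi * r^2.
    static double circleArea(double radius) {
        return Math.PI * radius * radius;
    }
}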
Edit:
There's a lot of discussion in the comments about whether the first question is a good or bad one, and whether you should ask questions as complex as this in an interview. I'm not going to delve into this here (that's a whole new question), except to say you're largely missing the point of the post.
Yes, I said people couldn't make any headway with the first question, but the second one is trivial and many people couldn't make any headway with that one either! Anybody who calls themselves a developer should be able to write the answer to the second one in a few seconds without even thinking. And many can't.
The use of Hungarian notation should be punished with death.
That should be controversial enough ;)
Design patterns are hurting good design more than they're helping it.
IMO software design, especially good software design, is far too varied to be meaningfully captured in patterns, especially in the small number of patterns people can actually remember, and those are far too abstract for people to really remember more than a handful. So they're not helping much.
And on the other hand, far too many people become enamoured with the concept and try to apply patterns everywhere. Usually, in the resulting code you can't find the actual design among all the (completely meaningless) Singletons and Abstract Factories.
Less code is better than more!
If the users say "that's it?", and your work remains invisible, it's done right. Glory can be found elsewhere.
PHP sucks ;-)
The proof is in the pudding.
Unit Testing won't help you write good code
The only reason to have unit tests is to make sure that code that already works doesn't break. Writing tests first, or writing code to the tests, is ridiculous. If you write the tests before the code, you won't even know what the edge cases are. You could have code that passes the tests but still fails in unforeseen circumstances.
And furthermore, good developers will keep coupling low, which will make the addition of new code unlikely to cause problems with existing stuff.
In fact, I'll generalize that even further,
Most "Best Practices" in Software Engineering are there to keep bad programmers from doing too much damage.
They're there to hand-hold bad developers and keep them from making dumbass mistakes. Of course, since most developers are bad, this is a good thing, but good developers should get a pass.
Write small methods. It seems that programmers love to write loooong methods where they do multiple different things.
I think that a method should be created wherever you can name one.
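A before/after sketch of what that rule produces (everything here is invented for illustration):

class InvoicePrinter {
    // One long print() becomes three methods, each named for the one
    // thing it does; the top-level method now reads like an outline.
    void print(Invoice invoice) {
        printHeader(invoice);
        printLineItems(invoice);
        printTotal(invoice);
    }

    private void printHeader(Invoice i) { System.out.println(i.customer); }
    private void printLineItems(Invoice i) { i.lines.forEach(System.out::println); }
    private void printTotal(Invoice i) { System.out.println("Total: " + i.total); }
}

class Invoice {
    String customer;
    java.util.List<String> lines;
    double total;
}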
It's ok to write garbage code once in a while
Sometimes a quick and dirty piece of garbage code is all that is needed to fulfill a particular task. Patterns, ORMs, SRP, whatever... Throw up a console or web app, write some inline SQL (feels good), and blast out the requirement.
Code == Design
I'm no fan of sophisticated UML diagrams and endless code documentation. In a high-level language, your code should be readable and understandable as is. Complex documentation and diagrams aren't really any more user-friendly.
Here's an article on the topic of Code as Design.
Software development is just a job
Don't get me wrong, I enjoy software development a lot. I've written a blog for the last few years on the subject. I've spent enough time on here to have >5000 reputation points. And I work in a start-up doing typically 60 hour weeks for much less money than I could get as a contractor because the team is fantastic and the work is interesting.
But in the grand scheme of things, it is just a job.
It ranks in importance below many things such as family, my girlfriend, friends, happiness etc., and below other things I'd rather be doing if I had an unlimited supply of cash such as riding motorbikes, sailing yachts, or snowboarding.
I think sometimes a lot of developers forget that developing is just something that allows us to have the more important things in life (and to have them by doing something we enjoy), rather than being the end goal in itself.
I also think there's nothing wrong with having binaries in source control... if there is a good reason for it. If I have an assembly I don't have the source for, and it might not necessarily be in the same place on each dev's machine, then I will usually stick it in a "binaries" directory and reference it in a project using a relative path.
Quite a lot of people seem to think I should be burned at the stake for even mentioning "source control" and "binary" in the same sentence. I even know of places that have strict rules saying you can't add them.
Every developer should be familiar with the basic architecture of modern computers. This also applies to developers who target a virtual machine (maybe even more so, because they have been told time and time again that they don't need to worry themselves with memory management etc.)
Software Architects/Designers are Overrated
As a developer, I hate the idea of Software Architects. They are basically people who no longer code full time, read magazines and articles, and then tell you how to design software. Only people who actually write software full time for a living should be doing that. I don't care if you were the world's best coder 5 years ago before you became an Architect; your opinion is useless to me.
How's that for controversial?
Edit (to clarify): I think most Software Architects make great Business Analysts (talking with customers, writing requirements, tests, etc), I simply think they have no place in designing software, high level or otherwise.
There is no "one size fits all" approach to development
I'm surprised that this is a controversial opinion, because it seems to me like common sense. However, there are many entries on popular blogs promoting the "one size fits all" approach to development so I think I may actually be in the minority.
Things I've seen being touted as the correct approach for any project - before any information is known about it - are things like the use of Test Driven Development (TDD), Domain Driven Design (DDD), Object-Relational Mapping (ORM), Agile (capital A), Object Orientation (OO), etc. etc. encompassing everything from methodologies to architectures to components. All with nice marketable acronyms, of course.
People even seem to go as far as putting badges on their blogs, such as "I'm Test Driven" or similar, as if their strict adherence to a single approach, whatever the details of the project, is actually a good thing.
It isn't.
Choosing the correct methodologies and architectures and components, etc., is something that should be done on a per-project basis, and depends not only on the type of project you're working on and its unique requirements, but also the size and ability of the team you're working with.