I have heard Foo a lot, what does it mean? [duplicate] - terminology

This question already has answers here:
What does 'foo' really mean? [closed]
(9 answers)
Closed 8 years ago.
Call me a noob, 'cause I really am, but what does 'foo' mean? I have seen it a lot. but I don't know what it means. Could someone clarify? All help is appreciated.

There is an American military expression, FUBAR, which means "... beyond all recognition". You can figure out what the FU means for yourself. Programmers use the two syllables foo and bar quite often for throw-away filenames, test programs, etc.

foo is used as a place-holder name, usually in example code to signify that the object being named, or the choice of name, is not part of the crux of the example. foo is often followed by bar, baz, and even bundy, if more than one such name is needed. Wikipedia calls these names Metasyntactic Variables. Python programmers supposedly use spam, eggs, ham, instead of foo, etc.
There are good uses of foo on SO.
I have also seen foo used when the programmer can't think of a meaningful name (as a substitute for tmp, say), but I consider that to be a misuse of foo.
Edit: Will answered this in another question linked here.
I took this from this question: What does 'foo' really mean?
Or you could use the internet slang term in which foo means fool.


Adding user defined functions to a simple calculator YACC [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I've been searching all over the internet for a comprehensible example to how you can define and call a function in a simple calculator interpreter. Maybe I've found the answer but since I'm not familiar with YACC I couldn't see it.
So the question is, how do you set up a symbol table for user defined functions and how do you store/call these functions in a calculator interpreter?
I'm basically looking to achieve something like this:
def sum(a,b) { a + b }
sum(5,5)
result:
10
Any pointers or examples would be appreciated
That's definitely diving into the concepts required to interpret (or compile) a programming language, which makes it difficult to provide an answer in a format suitable for StackOverflow. Here's a quick outline:
You need a symbol table which can hold both functions and variables. Or two symbol tables. In the first case, the mapped value will be some kind of variant type, such as a discriminated union; you might need that anyway if you have more than one type of variable. In the second case, you can use a specific type for the mapped value of function names. I'd go for the first option, because it allows functions to be first-class objects.
You need some kind of type which represents the "value" of a function definition. The obvious type is the Abstract Syntax Tree (AST) of an expression (or a program), and doing that will generally simplify your code so I'd highly recommend it. That means that the calculator/parser will not actually evaluate 5+5 (even if that is the literal input) or a+b, but rather will return an AST to whoever called the parser. Consequently, you will need:
A function which can evaluate an AST. That's usually straightforward to write, since it's just a depth-first tree walk. But now you need to worry about variable scope, because when you do evaluate the body of your function sum, you probably want to set the values of the parameters only locally (see the sketch below).
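To make the outline concrete, here is a minimal sketch in Python rather than in C/yacc (every node shape, name, and helper below is invented for illustration, not taken from any particular grammar): a single symbol table env holds both variables and functions as first-class values, a Func value stores the parameter list and body AST, and a depth-first evaluator binds parameters in a local scope:

class Func:
    def __init__(self, params, body):
        self.params = params   # list of parameter names
        self.body = body       # AST of the function body

def evaluate(node, env):
    kind = node[0]
    if kind == "num":                    # ("num", 5)
        return node[1]
    if kind == "var":                    # ("var", "a")
        return env[node[1]]
    if kind == "add":                    # ("add", lhs, rhs)
        return evaluate(node[1], env) + evaluate(node[2], env)
    if kind == "def":                    # ("def", "sum", ["a", "b"], body)
        env[node[1]] = Func(node[2], node[3])   # functions are values too
        return None
    if kind == "call":                   # ("call", "sum", [arg, ...])
        fn = env[node[1]]
        local = dict(env)                # fresh scope layered over globals
        for name, arg in zip(fn.params, node[2]):
            local[name] = evaluate(arg, env)    # bind parameters locally
        return evaluate(fn.body, local)
    raise ValueError("unknown node kind: %r" % kind)

env = {}
# def sum(a,b) { a + b }
evaluate(("def", "sum", ["a", "b"],
          ("add", ("var", "a"), ("var", "b"))), env)
# sum(5,5)  ->  prints 10
print(evaluate(("call", "sum", [("num", 5), ("num", 5)]), env))

Copying env into local is the simplest way to get locally bound parameters over a global scope; a real interpreter would chain scope frames instead of copying.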
If you manage all that, you will have gone several steps beyond the usual "let's build a calculator with flex and bison" project, and I definitely encourage you to do so. You may want to take a look at the classic text Structure and Interpretation of Computer Programs (Abelson & Sussman, 1996; often referred to simply as SICP).

What is pass-reference-by-value?

I came across this yesterday and was wondering what the relationship of pass-by-value, pass-by-reference and pass-reference-by-value is. How do they all behave? Which programming language uses which?
Additionally:
I have also occasionally heard other terms like pass-by-pointer, pass-by-copy, pass-by-object...how do they all fit in?
It's a made-up term that refers to something that is actually pass-by-value, but where people always get confused because the only values in the language are references (pointers to things). Many modern languages work this way, including Java (for non-primitives), JavaScript, Python, Ruby, Scheme, Smalltalk, etc.
Beginners always ask questions on StackOverflow about why it's not pass-by-reference, because they don't understand what pass-by-reference really is. So to satisfy them some people make up a new term that they also don't understand, but makes it seem like they learned something, and which sounds vaguely like both pass-by-value and pass-by-reference.
Use of terminology varies widely between different language communities. For example, the Java community almost always describes this semantics as pass-by-value, while the Python and Ruby communities rarely describe it as pass-by-value, even though they are semantically identical.
Let's start with the basics. You probably already know this or at least think you do. Surprisingly many people are actually misusing these terms quite a bit (which certainly doesn't help with the confusion).
What is pass-by-value and pass-by-reference?
I am not going to tell you the answer just yet; instead I will explain it using examples. You will see why later.
Imagine you are trying to learn a programming language and a friend tells you that he knows a great book on the topic. You have two options:
borrow the book from him or
buy your own copy.
Case 1: If you borrow his book and you make notes in the margins (and add some bookmarks, references, drawings, ...) your friend can later benefit from those notes and bookmarks as well. Sharing is caring. The world is great!
Case 2: But if, on the other hand, your friend is a bit of a neat freak or holds his books very dear he might not appreciate your notes and even less the occasional coffee stain or how you fold the corners for bookmarking. You should really get your own book and everybody can happily treat his book the way he likes. The world is great!
These two cases are - you guessed it - just like pass-by-reference and pass-by-value.
The difference is whether the caller of the function can see changes that the callee makes.
Let's look at a code example.
Say you have the following code snippet:
function largePrint(text):
    text = makeUpperCase(text)
    print(text)

myText = "Hello"
largePrint(myText)
print(myText)
If you run it in a pass-by-value language, the output will be
HELLO
Hello
while in a pass-by-reference language the output will be
HELLO
HELLO
If you pass by reference, the variable "myText" gets changed by the function. If you just pass the value of the variable, the function cannot change it.
If you need another example, this one using websites got quite a few upvotes.
Okay, now that we have the basics down let's move on to...
What is pass-reference-by-value?
Let's use our book example again. Assume that you have decided to borrow the book from your friend although you know how much he loves his books. Let us also assume that you then - being the total klutz that you are - spilled a gallon of coffee over it. Oh, my...
Not to worry, your brain starts up and realizes that you can just buy your friend a new book. It was in such good condition he won't even notice!
What does that story have to do with pass-reference-by-value?
Well, imagine on the other hand that your friend loves his books so much that he doesn't want to lend them to you. He does, however, agree to bring the book over and show it to you. The problem is, you can still spill a gallon of coffee over it, but now you can't buy him a new book anymore. He would notice!
While this may sound dreadful (and probably has you sweating as if you had drunk that gallon of coffee yourself), it is actually quite common and a perfectly good way to share books, er, call functions. It is sometimes called pass-reference-by-value.
How does the code example fare?
In a pass-reference-by-value language, the output of our snippet is
HELLO
Hello
just like in the pass-by-value case. But if we change the code a little bit:
function largePrint(text):
    text.toUpperCase()
    print(text)

myText = "Hello"
largePrint(myText)
print(myText)
The output becomes:
HELLO
HELLO
just like in the pass-by-reference case. The reason is the same as in the book example: we can change the variable (or book) by calling text.toUpperCase() (or by spilling coffee on it). BUT we cannot make the variable reference a different object anymore (text = makeUpperCase(text) has no effect outside the function); in other words, we cannot use a different book.
So summarizing we get:
pass-by-value:
You get your own book, your own copy of the variable and whatever you do with it the original owner will never know.
pass-by-reference:
You use the same book as your friend or the same variable as the caller. If you change the variable it changes outside your function.
pass-reference-by-value:
You use the same book as your friend but he doesn't let it out of his sight. You can change the book but not EXchange it. In variable terms that means that you cannot change what the variable references but you can change that which the variable references.
So now when I show you the following Python code:
def appendSeven(list):
    list.append(7)

def replaceWithNewList(list):
    list = ["a", "b", "c"]

firstList = [1, 2, 3, 4, 5, 6]
print(firstList)
appendSeven(firstList)
print(firstList)

secondList = ["x", "y", "z"]
print(secondList)
replaceWithNewList(secondList)
print(secondList)
and tell you that the output is this
[1, 2, 3, 4, 5, 6]
[1, 2, 3, 4, 5, 6, 7]
['x', 'y', 'z']
['x', 'y', 'z']
what do you say?
Indeed, you are correct! Python uses pass-reference-by-value! (And so do Java and Ruby and Scheme and...)
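If you want to watch the mechanics yourself, here is a quick check of my own using Python's built-in id(), which returns an object's identity:

def rebind(lst):
    print(id(lst))     # same identity as the caller's object: the reference was copied
    lst = ["new"]      # rebinding only changes the local name...
    print(id(lst))     # ...so now it is a different object

original = [1, 2, 3]
print(id(original))    # matches the first print inside rebind()
rebind(original)
print(original)        # still [1, 2, 3]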
Conclusion:
Was this extremely difficult? No.
Then why are so many people confused by it? (I'm not going to link to all the other questions about "How do I pass a variable by reference in Python" or "Is Java pass-by-reference?" and so on...)
Well, I see several reasons:
Saying pass-reference-by-value is really long (and it behaves almost the same as pass-by-reference anyway, so get off my back and stop splitting hairs!)
There are nasty special cases that complicate the issue
like primitive types (which are often pass-by-value in pass-reference-by-value languages)
immutable types (which cannot be changed so passing them by-reference doesn't get you anywhere)
Some people just like to use different names:
pass-by-copy for pass-by-value because if you paid attention, you need your own book, so a copy has to be made. (But that is an implementation detail, other tricks like copy-on-write might be used to only sometimes make a copy.)
pass-by-reference for pass-reference-by-value because, after all, you did get a reference and that means you can change the object that the caller had (even if you cannot EXchange it)
pass-by-reference for pass-reference-by-value because almost no language actually has pass-by-reference (notable exceptions that I know of: C++ and C#, but you have to use special syntax)
pass-by-sharing for pass-reference-by-value (remember our book example? that name is actually way better and everybody should use it!)
pass-by-object for pass-reference-by-value (because allegedly Barbara Liskov first named and used it, so that's its birth name)
pass-by-pointer for pass-by-reference because C doesn't have pass-by-reference and that is how it allows you to accomplish the same thing.
If there is anything you think I should add, please let me know. If I made a mistake, PLEASE LET ME KNOW!
For those of you who still haven't got enough: http://en.wikipedia.org/wiki/Evaluation_strategy

Acronyms in CamelCase [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
Locked. This question and its answers are locked because the question is off-topic but has historical significance. It is not currently accepting new answers or interactions.
I have a question about camelCase. Suppose you have this acronym: Unesco = United Nations Educational, Scientific and Cultural Organization.
You should write: unitedNationsEducationalScientificAndCulturalOrganization
But what if you need to write the acronym? Something like:
getUnescoProperties();
Is it right to write it this way? getUnescoProperties() OR getUNESCOProperties();
There are legitimate criticisms of the Microsoft advice from the accepted answer.
Inconsistent treatment of acronyms/initialisms depending on number of characters:
playerID vs playerId vs playerIdentifier.
The question of whether two-letter acronyms should still be capitalized if they appear at the start of the identifier:
USTaxes vs usTaxes
Difficulty in distinguishing multiple acronyms:
i.e. USID vs usId (or parseDBMXML in Wikipedia's example).
So I'll post this answer as an alternative to the accepted answer: all acronyms should be treated consistently; acronyms should be treated like any other word. Quoting Wikipedia:
...some programmers prefer to treat abbreviations as if they were lower case words...
So re: OP's question, I agree with the accepted answer; this is correct: getUnescoProperties()
But I think I'd reach a different conclusion in these examples:
US Taxes → usTaxes
Player ID → playerId
So vote for this answer if you think two-letter acronyms should be treated like other acronyms.
Camel Case is a convention, not a specification. So I guess popular opinion rules.
(EDIT: Removing this suggestion that votes should decide this issue; as @Brian David says, Stack Overflow is not a "popularity contest", and this question was closed as "opinion based")
Even though many prefer to treat acronyms like any other word, the more common practice may be to put acronyms in all-caps (even though it leads to "abominations")
See "EDXML" in this XML schema
See "SFAS158" in this XBRL schema
Other Resources:
Note some people distinguish between abbreviation and acronyms
Note Microsoft guidelines distinguish between two-character acronyms, and "acronyms more than two characters long"
Note some people recommend to avoid abbreviations / acronyms altogether
Note some people recommend to avoid camelCase / PascalCase altogether
Note some people distinguish between "consistency" as "rules that seem internally inconsistent" (i.e. treating two-character acronyms different than three-character acronyms); some people define "consistency" as "applying the same rule consistently" (even if the rule is internally inconsistent)
Framework Design Guidelines
Microsoft Guidelines
Some guidelines Microsoft has written about camelCase are:
When using acronyms, use Pascal case or camel case for acronyms more than two characters long. For example, use HtmlButton or htmlButton. However, you should capitalize acronyms that consist of only two characters, such as System.IO instead of System.Io.
Do not use abbreviations in identifiers or parameter names. If you must use abbreviations, use camel case for abbreviations that consist of more than two characters, even if this contradicts the standard abbreviation of the word.
Summing up:
When you use an abbreviation or acronym that is two characters long, put them all in caps;
When the acronym is longer than two chars, use a capital for the first character.
So, in your specific case, getUnescoProperties() is correct.
To convert to CamelCase, there is also Google's (nearly) deterministic Camel case algorithm:
Beginning with the prose form of the name:
1. Convert the phrase to plain ASCII and remove any apostrophes. For example, "Müller's algorithm" might become "Muellers algorithm".
2. Divide this result into words, splitting on spaces and any remaining punctuation (typically hyphens). Recommended: if any word already has a conventional camel case appearance in common usage, split this into its constituent parts (e.g., "AdWords" becomes "ad words"). Note that a word such as "iOS" is not really in camel case per se; it defies any convention, so this recommendation does not apply.
3. Now lowercase everything (including acronyms), then uppercase only the first character of: each word, to yield upper camel case, or each word except the first, to yield lower camel case.
4. Finally, join all the words into a single identifier.
Note that the casing of the original words is almost entirely disregarded.
In the following example, "XML HTTP request" is correctly transformed to XmlHttpRequest; XMLHTTPRequest is incorrect.
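Here is a rough Python sketch of that algorithm (my own simplified reading: the ASCII folding below just drops accents instead of transliterating "ü" to "ue", and the recommended splitting of conventionally camel-cased words is skipped):

import re
import unicodedata

def camel_case(phrase, upper_first=False):
    # 1. Fold to plain ASCII (simplified: accents are dropped) and remove apostrophes.
    text = unicodedata.normalize("NFKD", phrase)
    text = text.encode("ascii", "ignore").decode("ascii").replace("'", "")
    # 2. Split into words on spaces and any remaining punctuation.
    words = [w for w in re.split(r"[^A-Za-z0-9]+", text) if w]
    # 3. Lowercase everything (including acronyms), then uppercase first letters.
    words = [w.lower().capitalize() for w in words]
    if not upper_first and words:
        words[0] = words[0].lower()
    # 4. Join all the words into a single identifier.
    return "".join(words)

print(camel_case("XML HTTP request", upper_first=True))  # XmlHttpRequest
print(camel_case("Unesco properties"))                   # unescoProperties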
getUnescoProperties() should be the best solution...
When possible, just follow pure camelCase; when you have acronyms, keep them upper case when possible, otherwise go camelCase.
Generally in OO programming, variables should start with a lower case letter (lowerCamelCase) and classes should start with an upper case letter (UpperCamelCase).
When in doubt just go pure camelCase ;)
parseXML is fine, parseXml is also camelCase
XMLHTTPRequest should be XmlHttpRequest or xmlHttpRequest; there is no way to go with consecutive upper-case acronyms, as it is definitely not clear for all test cases.
e.g.
how do you read the word HTTPSSLRequest: HTTP + SSL, or HTTPS + SL (which doesn't mean anything, but still)? In such cases, follow the camel case convention and go for httpSslRequest or httpsSlRequest; maybe it is no longer as nice, but it is definitely clearer.
There is the Airbnb JavaScript Style Guide on GitHub, with a lot of stars (~57.5k at this moment), which says this about acronyms:
Acronyms and initialisms should always be all capitalized, or all
lowercased.
Why? Names are for readability, not to appease a computer algorithm.
// bad
import SmsContainer from './containers/SmsContainer';
// bad
const HttpRequests = [
// ...
];
// good
import SMSContainer from './containers/SMSContainer';
// good
const HTTPRequests = [
// ...
];
// also good
const httpRequests = [
// ...
];
// best
import TextMessageContainer from './containers/TextMessageContainer';
// best
const requests = [
// ...
];
In addition to what @valex has said, I want to recap a couple of things from the given answers to this question.
I think the general answer is: it depends on the programming language that you are using.
C Sharp
Microsoft has written some guidelines where it seems that HtmlButton is the right way to name a class in these cases.
Javascript
JavaScript has some global identifiers containing acronyms, and it uses upper case for them (though, funnily, not always consistently). Here are some examples:
encodeURIComponent
XMLHttpRequest
toJSON
toISOString
Currently I am using the following rules:
Capital case for acronyms: XMLHTTPRequest, xmlHTTPRequest, requestIPAddress.
Camel case for abbreviations: ID[entifier], Exe[cutable], App[lication].
ID is an exception, sorry but true.
When I see a capital letter I assume an acronym, i.e. a separate word for each letter. Abbreviations do not have separate words for each letter, so I use camel case.
XMLHTTPRequest is ambiguous, but it is a rare case and not all that ambiguous, so it's OK; rules and logic are more important than beauty.
The JavaScript Airbnb style guide talks a bit about this. Basically:
// bad
const HttpRequests = [ req ];
// good
const httpRequests = [ req ];
// also good
const HTTPRequests = [ req ];
Because I typically read a leading capital letter as a class, I tend to avoid that. At the end of the day, it's all preference.
Disclaimer: English is not my mother tongue. But I've thought about this problem for a long time, especially when using Node (camelCase style) to work with a database, where table field names should be snake_cased. This is my thought:
There are 2 kinds of 'acronyms' for a programmer:
in natural language, UNESCO
in a computer programming language, for example, tmc for textMessageContainer, which usually appears as a local variable.
In the programming world, all acronyms from natural language should be treated as words. The reasons are:
when we program, we should name a variable either in acronym style or in non-acronym style. So, if we name a function getUNESCOProperties, it means UNESCO is an acronym (otherwise it shouldn't be all uppercase letters), but evidently get and properties are not acronyms. So we should name this function
either gunescop or getUnitedNationsEducationalScientificAndCulturalOrganizationProperties, both of which are unacceptable.
natural language is evolving continuously, and today's acronyms will become words tomorrow, but programs should be independent of this trend and stand forever.
By the way, in the most-voted answer, IO is an acronym in the computer-language sense (it stands for InputOutput), but I don't like the name, since I think such an acronym should only be used to name a local variable, not a top-level class/function, so InputOutput should be used instead of IO.
There is also another camelcase convention that tries to favor readability for acronyms by using either uppercase (HTML), or lowercase (html), but avoiding both (Html).
So in your case you could write getUNESCOProperties. You could also write unescoProperties for a variable, or UNESCOProperties for a class (the convention for classes is to start with uppercase).
This rule gets tricky if you want to put together two acronyms, for example for a class named XML HTTP Request. It would start with uppercase, but since XMLHTTPRequest would not be easy to read (is it XMLH TTP Request?), and XMLhttpRequest would break the camelcase convention (is it XM Lhttp Request?), the best option would be to mix case: XMLHttpRequest, which is actually what the W3C used. However, using this sort of naming is discouraged. For this example, HTTPRequest would be a better name.
Since the official English word for identification/identity seems to be ID, although it is not an acronym, you could apply the same rules there.
This convention seems to be pretty popular out there, but it's just a convention and there is no right or wrong. Just try to stick to a convention and make sure your names are readable.
UNESCO is a special case, as it is usually (in English) read as a word and not an acronym - like UEFA, RADA, BAFTA, and unlike BBC, HTML, SSL.

Homework - Differences in accessing values

So I had to write a program in Pascal (a bubble sort, it was pretty simple) and at the end my professor asked a question about our code. He had us write two separate print procedures. The first, printArray, took in an array of integers as its parameter, whereas printArray2 took in a type called arrayType, which is defined as such:
TYPE
    arrayType = ARRAY[1..20] OF INTEGER;
I'm kind of rambling now, but his question was "What was the difference in how the values are accessed when using the different print procedures?"
Just wondering if someone could maybe give me a hint. My original thought was it had something to do with how the memory locations are accessed, but I don't really know how to word it correctly.
Well, hopefully one of you fine people can help me out.
I assume your teacher has introduced you to the concepts of pass by value and pass by reference. I believe your teacher is trying to get you to think about how those concepts apply to a primitive array declaration vs. declaring your own arrayType. That should at least give you a hint on your homework assignment.
This depends a bit on the Pascal dialect+compiler, but I assume it's the difference between a typed array and an open array, the latter of which has a different index range (0..number_of_elements-1) than the former (1..number_of_elements).

Why shouldn't I use "Hungarian Notation"?

Locked. This question and its answers are locked because the question is off-topic but has historical significance. It is not currently accepting new answers or interactions.
I know what Hungarian refers to - giving information about a variable, parameter, or type as a prefix to its name. Everyone seems to be rabidly against it, even though in some cases it seems to be a good idea. If I feel that useful information is being imparted, why shouldn't I put it right there where it's available?
See also: Do people use the Hungarian naming conventions in the real world?
vUsing adjHungarian nnotation vmakes nreading ncode adjdifficult.
Most people use Hungarian notation the wrong way and get wrong results.
Read this excellent article by Joel Spolsky: Making Wrong Code Look Wrong.
In short, Hungarian Notation where you prefix your variable names with their type (string) (Systems Hungarian) is bad because it's useless.
Hungarian Notation as it was intended by its author where you prefix the variable name with its kind (using Joel's example: safe string or unsafe string), so called Apps Hungarian has its uses and is still valuable.
Joel is wrong, and here is why.
That "application" information he's talking about should be encoded in the type system. You should not depend on flipping variable names to make sure you don't pass unsafe data to functions requiring safe data. You should make it a type error, so that it is impossible to do so. Any unsafe data should have a type that is marked unsafe, so that it simply cannot be passed to a safe function. To convert from unsafe to safe should require processing with some kind of a sanitize function.
A lot of the things that Joel talks of as "kinds" are not kinds; they are, in fact, types.
What most languages lack, however, is a type system that's expressive enough to enforce these kinds of distinctions. For example, if C had a kind of "strong typedef" (where the typedef name had all the operations of the base type, but was not convertible to it) then a lot of these problems would go away. For example, if you could say strong typedef std::string unsafe_string; to introduce a new type unsafe_string that could not be converted to a std::string (and so could participate in overload resolution etc. etc.) then we would not need silly prefixes.
So, the central claim that Hungarian is for things that are not types is wrong. It's being used for type information. Richer type information than the traditional C type information, certainly; it's type information that encodes some kind of semantic detail to indicate the purpose of the objects. But it's still type information, and the proper solution has always been to encode it into the type system. Encoding it into the type system is far and away the best way to obtain proper validation and enforcement of the rules. Variable names simply do not cut the mustard.
In other words, the aim should not be "make wrong code look wrong to the developer". It should be "make wrong code look wrong to the compiler".
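As a minimal sketch of what "make it a type error" can look like in practice (the names UnsafeString and SafeString and the html.escape sanitizer are my own choices, not Joel's, and Python's typing.NewType stands in for the hypothetical "strong typedef"):

from typing import NewType
import html

UnsafeString = NewType("UnsafeString", str)  # raw, user-supplied data
SafeString = NewType("SafeString", str)      # HTML-escaped, safe to emit

def sanitize(s: UnsafeString) -> SafeString:
    # The only sanctioned way to turn unsafe data into safe data.
    return SafeString(html.escape(s))

def write_html(s: SafeString) -> None:
    print(s)

user_input = UnsafeString("<script>alert('pwned')</script>")
write_html(sanitize(user_input))   # fine
# write_html(user_input)           # a static checker such as mypy rejects this

At runtime NewType is a no-op, so here the enforcement comes from a static checker rather than the compiler, but the principle is the same: wrong code becomes a type error instead of a naming convention.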
I think it massively clutters up the source code.
It also doesn't gain you much in a strongly typed language. If you do any form of type mismatch tomfoolery, the compiler will tell you about it.
Hungarian notation only makes sense in languages without user-defined types. In a modern functional or OO-language, you would encode information about the "kind" of value into the datatype or class rather than into the variable name.
Several answers reference Joel's article. Note however that his example is in VBScript, which didn't support user-defined classes (for a long time at least). In a language with user-defined types you would solve the same problem by creating an HtmlEncodedString type and then letting the Write method accept only that. In a statically typed language the compiler will catch any encoding errors; in a dynamically typed one you would get a runtime exception - but in any case you are protected against writing unencoded strings. Hungarian notation just turns the programmer into a human type-checker, which is the kind of job that is typically better handled by software.
Joel distinguishes between "systems hungarian" and "apps hungarian", where "systems hungarian" encodes the built-in types like int, float and so on, and "apps hungarian" encodes "kinds", which is higher-level meta-info about a variable beyond the machine type. In an OO or modern functional language you can create user-defined types, so there is no distinction between type and "kind" in this sense - both can be represented by the type system - and "apps" hungarian is just as redundant as "systems" hungarian.
So to answer your question: Systems hungarian would only be useful in an unsafe, weakly typed language where e.g. assigning a float value to an int variable will crash the system. Hungarian notation was specifically invented in the sixties for use in BCPL, a pretty low-level language which didn't do any type checking at all. I don't think any language in general use today has this problem, but the notation lived on as a kind of cargo cult programming.
Apps hungarian will make sense if you are working with a language without user-defined types, like legacy VBScript or early versions of VB. Perhaps also early versions of Perl and PHP. Again, using it in a modern language is pure cargo cult.
In any other language, hungarian is just ugly, redundant and fragile. It repeats information already known from the type system, and you should not repeat yourself. Use a descriptive name for the variable that describes the intent of this specific instance of the type. Use the type system to encode invariants and meta info about "kinds" or "classes" of variables - i.e. types.
The general point of Joel's article - to make wrong code look wrong - is a very good principle. However, an even better protection against bugs is - when at all possible - to have wrong code detected automatically by the compiler.
I always use Hungarian notation for all my projects. I find it really helpful when I'm dealing with 100s of different identifier names.
For example, when I call a function requiring a string I can type 's' and hit control-space and my IDE will show me exactly the variable names prefixed with 's' .
Another advantage, when I prefix u for unsigned and i for signed ints, I immediately see where I am mixing signed and unsigned in potentially dangerous ways.
I cannot remember the number of times when in a huge 75000 line codebase, bugs were caused (by me and others too) due to naming local variables the same as existing member variables of that class. Since then, I always prefix members with 'm_'
It's a question of taste and experience. Don't knock it until you've tried it.
You're forgetting the number one reason to include this information. It has nothing to do with you, the programmer. It has everything to do with the person coming down the road 2 or 3 years after you leave the company who has to read that stuff.
Yes, an IDE will quickly identify types for you. However, when you're reading through some long batches of 'business rules' code, it's nice to not have to pause on each variable to find out what type it is. When I see things like strUserID, intProduct or guiProductID, it makes for much easier 'ramp up' time.
I agree that MS went way too far with some of their naming conventions - I categorize that in the "too much of a good thing" pile.
Naming conventions are good things, provided you stick to them. I've gone through enough old code that had me constantly going back to look at the definitions for so many similarly-named variables that I push "camel casing" (as it was called at a previous job). Right now I'm on a job that has many thousands of lines of completely uncommented classic ASP code with VBScript, and it's a nightmare trying to figure things out.
Tacking on cryptic characters at the beginning of each variable name is unnecessary and shows that the variable name by itself isn't descriptive enough. Most languages require the variable type at declaration anyway, so that information is already available.
There's also the situation where, during maintenance, a variable type needs to change. Example: if a variable declared as "uint_16 u16foo" needs to become a 64-bit unsigned, one of two things will happen:
You'll go through and change each variable name (making sure not to hose any unrelated variables with the same name), or
Just change the type and not change the name, which will only cause confusion.
Joel Spolsky wrote a good blog post about this.
http://www.joelonsoftware.com/articles/Wrong.html
Basically it comes down to not making your code harder to read when a decent IDE will tell you what type the variable is if you can't remember. Also, if you make your code compartmentalized enough, you don't have to remember what a variable was declared as three pages up.
Isn't scope more important than type these days? E.g.
l for local
a for argument
m for member
g for global
etc.
With modern techniques for refactoring old code, search-and-replace of a symbol because you changed its type is tedious. The compiler will catch type changes, but it often will not catch incorrect use of scope; sensible naming conventions help here.
There is no reason why you should not make correct use of Hungarian notation. Its unpopularity is due to a long-running backlash against the misuse of Hungarian notation, especially in the Windows APIs.
In the bad old days, before anything resembling an IDE existed for DOS (odds are you didn't have enough free memory to run the compiler under Windows, so your development was done in DOS), you didn't get any help from hovering your mouse over a variable name. (Assuming you had a mouse.) What you did have to deal with were event callback functions in which everything was passed to you as either a 16-bit int (WORD) or 32-bit int (LONG WORD). You then had to cast those parameters to the appropriate types for the given event type. In effect, much of the API was virtually type-less.
The result, an API with parameter names like these:
LRESULT CALLBACK WindowProc(HWND hwnd,
UINT uMsg,
WPARAM wParam,
LPARAM lParam);
Note that the names wParam and lParam, although pretty awful, aren't really any worse than naming them param1 and param2.
To make matters worse, Windows 3.0/3.1 had two types of pointers, near and far. So, for example, the return value from the memory management function LocalLock was a PVOID, but the return value from GlobalLock was an LPVOID (with the 'L' for long). That awful notation then got extended so that a long pointer to a string was prefixed lp, to distinguish it from a string that had simply been malloc'd.
It's no surprise that there was a backlash against this sort of thing.
Hungarian Notation can be useful in languages without compile-time type checking, as it allows the developer to quickly remind herself of how a particular variable is used. It does nothing for performance or behavior. It is supposed to improve code readability and is mostly a matter of taste and coding style. For this very reason it is criticized by many developers -- not everybody has the same wiring in the brain.
For compile-time type-checked languages it is mostly useless -- scrolling up a few lines should reveal the declaration and thus the type. If you use global variables or your code block spans much more than one screen, you have grave design and reusability issues. Thus one of the criticisms is that Hungarian Notation allows developers to have a bad design and easily get away with it. This is probably one of the reasons for the hatred.
On the other hand, there can be cases where even compile-time type-checked languages would benefit from Hungarian Notation -- void pointers or HANDLEs in the Win32 API. These obfuscate the actual data type, and there might be merit in using Hungarian Notation there. Yet, if one can know the type of the data at build time, why not use the appropriate data type?
In general, there are no hard reasons not to use Hungarian Notation. It is a matter of likes, policies, and coding style.
As a Python programmer, Hungarian Notation falls apart pretty fast. In Python, I don't care if something is a string - I care if it can act like a string (i.e. if it has a __str__() method which returns a string).
For example, let's say we have foo as an integer, 12
foo = 12
Hungarian notation tells us that we should call it iFoo or something, to denote that it's an integer, so that later on we know what it is. Except in Python, that doesn't work; or rather, it doesn't make sense. In Python, I decide what type I want when I use it. Do I want a string? Well, if I do something like this:
print "The current value of foo is %s" % foo
Note the %s - string. foo isn't a string, but the % operator will call foo.__str__() and use the result (assuming it exists). foo is still an integer, but we treat it as a string if we want a string. If we want a float, then we treat it as a float. In dynamically typed languages like Python, Hungarian Notation is pointless, because it doesn't matter what type something is until you use it, and if you need a specific type, then just make sure to cast it to that type (e.g. float(foo)) when you use it.
Note that dynamic languages like PHP don't have this benefit - PHP tries to do 'the right thing' in the background based on an obscure set of rules that almost no one has memorized, which often results in catastrophic messes unexpectedly. In this case, some sort of naming mechanism, like $files_count or $file_name, can be handy.
In my view, Hungarian Notation is like leeches. Maybe in the past they were useful, or at least they seemed useful, but nowadays it's just a lot of extra typing for not a lot of benefit.
The IDE should impart that useful information. Hungarian might have made some sort (not a whole lot, but some sort) of sense when IDEs were much less advanced.
Apps Hungarian is Greek to me--in a good way
As an engineer, not a programmer, I immediately took to Joel's article on the merits of Apps Hungarian: "Making Wrong Code Look Wrong". I like Apps Hungarian because it mimics how engineering, science, and mathematics represent equations and formulas using sub- and super-scripted symbols (like Greek letters, mathematical operators, etc.). Take a particular example of Newton's Law of Universal Gravity: first in standard mathematical notation, and then in Apps Hungarian pseudo-code:
frcGravityEarthMars = G * massEarth * massMars / norm(posEarth - posMars)
In the mathematical notation, the most prominent symbols are those representing the kind of information stored in the variable: force, mass, position vector, etc. The subscripts play second fiddle to clarify: position of what? This is exactly what Apps Hungarian is doing; it's telling you the kind of thing stored in the variable first and then getting into specifics--about the closest code can get to mathematical notation.
Clearly strong typing can resolve the safe vs. unsafe string example from Joel's essay, but you wouldn't define separate types for position and velocity vectors; both are double arrays of size three, and anything you're likely to do to one might apply to the other. Furthermore, it makes perfect sense to concatenate position and velocity (to make a state vector) or take their dot product, but probably not to add them. How would typing allow the first two and prohibit the third, and how would such a system extend to every possible operation you might want to protect? Unless you were willing to encode all of math and physics in your typing system.
On top of all that, lots of engineering is done in weakly typed high-level languages like Matlab, or old ones like Fortran 77 or Ada.
So if you have a fancy language and IDE and Apps Hungarian doesn't help you, then forget it - lots of folks apparently have. But for me, a worse-than-novice programmer working in weakly or dynamically typed languages, I can write better code faster with Apps Hungarian than without.
It's incredibly redundant and useless in most modern IDEs, where they do a good job of making the type apparent.
Plus -- to me -- it's just annoying to see intI, strUserName, etc. :)
If I feel that useful information is being imparted, why shouldn't I put it right there where it's available?
Then who cares what anybody else thinks? If you find it useful, then use the notation.
In my experience, it is bad because:
1 - you break all the code if you need to change the type of a variable (e.g. if you need to extend a 32-bit integer to a 64-bit integer);
2 - it is useless information, as the type is either already in the declaration or you use a dynamic language where the actual type should not be so important in the first place.
Moreover, with a language accepting generic programming (i.e. functions where the type of some variables is not determined when you write the function) or with a dynamic typing system (i.e. when the type is not even determined at compile time), how would you name your variables? And most modern languages support one or the other, even if in a restricted form.
In Joel Spolsky's Making Wrong Code Look Wrong he explains that what everybody thinks of as Hungarian Notation (which he calls Systems Hungarian) is not what it was really intended to be (what he calls Apps Hungarian). Scroll down to the I'm Hungary heading to see this discussion.
Basically, Systems Hungarian is worthless. It just tells you the same thing your compiler and/or IDE will tell you.
Apps Hungarian tells you what the variable is supposed to mean, and can actually be useful.
I've always thought that a prefix or two in the right place wouldn't hurt. I think if I can impart something useful, like "Hey this is an interface, don't count on specific behaviour" right there, as in IEnumerable, I oughtta do it. Comment can clutter things up much more than just a one or two character symbol.
It's a useful convention for naming controls on a form (btnOK, txtLastName etc.), if the list of controls shows up in an alphabetized pull-down list in your IDE.
I tend to use Hungarian Notation with ASP.NET server controls only, otherwise I find it too hard to work out what controls are what on the form.
Take this code snippet:
<asp:Label ID="lblFirstName" runat="server" Text="First Name" />
<asp:TextBox ID="txtFirstName" runat="server" />
<asp:RequiredFieldValidator ID="rfvFirstName" runat="server" ... />
If someone can show a better way of having that set of control names without Hungarian I'd be tempted to move to it.
Joel's article is great, but it seems to omit one major point:
Hungarian makes a particular 'idea' (kind + identifier name) unique, or near-unique, across the codebase - even a very large codebase. That's huge for code maintenance. It means you can use good ol' single-line text search (grep, findstr, 'find in all files') to find EVERY mention of that 'idea'.
Why is that important when we have IDEs that know how to read code? Because they're not very good at it yet. This is hard to see in a small codebase, but obvious in a large one - when the 'idea' might be mentioned in comments, XML files, Perl scripts, and also in places outside source control (documents, wikis, bug databases).
You do have to be a little careful even here - e.g. token-pasting in C/C++ macros can hide mentions of the identifier. Such cases can be dealt with using coding conventions, and anyway they tend to affect only a minority of the identifiers in the codebase.
P.S. To the point about using the type system vs. Hungarian - it's best to use both.
You only need wrong code to look wrong if the compiler won't catch it for you. There are plenty of cases where it is infeasible to make the compiler catch it. But where it's feasible - yes, please do that instead!
When considering feasibility, though, do consider the negative effects of splitting up types. e.g. in C#, wrapping 'int' with a non-built-in type has huge consequences. So it makes sense in some situations, but not in all of them.
Debunking the benefits of Hungarian Notation
It provides a way of distinguishing variables.
If the type is all that distinguishes the one value from another, then it can only be for the conversion of one type to another. If you have the same value that is being converted between types, chances are you should be doing this in a function dedicated to conversion. (I have seen hungarianed VB6 leftovers use strings on all of their method parameters simply because they could not figure out how to deserialize a JSON object, or properly comprehend how to declare or use nullable types.) If you have two variables distinguished only by the Hungarian prefix, and they are not a conversion from one to the other, then you need to elaborate on your intention with them.
It makes the code more readable.
I have found that Hungarian notation makes people lazy with their variable names. They have something to distinguish it by, and they feel no need to elaborate to its purpose. This is what you will typically find in Hungarian notated code vs. modern: sSQL vs. groupSelectSql (or usually no sSQL at all because they are supposed to be using the ORM that was put in by earlier developers.), sValue vs. formCollectionValue (or usually no sValue either, because they happen to be in MVC and should be using its model binding features), sType vs. publishSource, etc.
It can't be readability. I see more sTemp1, sTemp2... sTempN from any given hungarianed VB6 leftover than everybody else combined.
It prevents errors.
This would be by virtue of number 2, which is false.
In the words of the master:
http://www.joelonsoftware.com/articles/Wrong.html
An interesting read, as usual.
Extracts:
"Somebody, somewhere, read Simonyi’s paper, where he used the word “type,” and thought he meant type, like class, like in a type system, like the type checking that the compiler does. He did not. He explained very carefully exactly what he meant by the word “type,” but it didn’t help. The damage was done."
"But there’s still a tremendous amount of value to Apps Hungarian, in that it increases collocation in code, which makes the code easier to read, write, debug, and maintain, and, most importantly, it makes wrong code look wrong."
Make sure you have some time before reading Joel On Software. :)
Several reasons:
Any modern IDE will give you the variable type by simply hovering your mouse over the variable.
Most type names are way too long (think HttpClientRequestProvider) to be reasonably used as prefixes.
The type information does not carry the right information, it is just paraphrasing the variable declaration, instead of outlining the purpose of the variable (think myInteger vs. pageSize).
I don't think everyone is rabidly against it. In languages without static types, it's pretty useful. I definitely prefer it when it's used to give information that is not already in the type. Like in C, char * szName says that the variable will refer to a null terminated string -- that's not implicit in char* -- of course, a typedef would also help.
Joel had a great article on using hungarian to tell if a variable was HTML encoded or not:
http://www.joelonsoftware.com/articles/Wrong.html
Anyway, I tend to dislike Hungarian when it's used to impart information I already know.
Of course when 99% of programmers agree on something, there is something wrong. The reason they agree here is because most of them have never used Hungarian notation correctly.
For a detailed argument, I refer you to a blog post I have made on the subject.
http://codingthriller.blogspot.com/2007/11/rediscovering-hungarian-notation.html
I started coding pretty much around the time Hungarian notation was invented, and the first time I was forced to use it on a project I hated it.
After a while I realised that when it was done properly it did actually help and these days I love it.
But like all things good, it has to be learnt and understood and to do it properly takes time.
Hungarian notation was abused, particularly by Microsoft, leading to prefixes longer than the variable name, and showing itself to be quite rigid, particularly when you change types (the infamous lParam/wParam, of different type/size in Win16 but identical in Win32).
Thus, both due to this abuse, and its use by M$, it was put down as useless.
At my work, we code in Java, but the founder came from the MFC world, so we use a similar code style (aligned braces - I like this! - capitalized method names - I am used to that - prefixes like m_ for class members (fields), s_ for static members, etc.).
And they said all variables should have a prefix showing their type (e.g. a BufferedReader is named brData). This turned out to be a bad idea, as types can change while names don't follow, and coders are not consistent in the use of these prefixes (I even see aBuffer, theProxy, etc.!).
Personally, I chose a few prefixes that I find useful, the most important being b to prefix boolean variables, as they are the only ones where I allow syntax like if (bVar) (no use of autocasting some values to true or false).
When I coded in C, I used a prefix for variables allocated with malloc, as a reminder it should be freed later. Etc.
So, basically, I don't reject this notation as a whole, but I took what seemed fitting for my needs.
And of course, when contributing to some project (work, open source), I just use the conventions in place!