Reasons and history for choice of common comment signs - language-agnostic

Most programming languages use // or # for a single-line comment (see wiki). It seems that # is especially used in interpreted languages. According to this question, the reason seems to be that one of the early shells (the Bourne shell) used # as a comment character, and the shebang (#!) later made further use of it.
Is there a logical reason to choose # as a comment sign (e.g. does # symbolize crossing something out)? And why do we use // as a comment sign in many compiled languages (especially in C, as it seems to be one of the earliest compiled languages with that symbol)? Are there logical reasons for that? Why not use # instead of //, or // instead of #?

Is there a logical reason why to choose # as a comment sign [in early shells]?
The Bourne shell tokenizer is quite simple. To add comment line support, a single character identifier was the simplest, and logical, choice.
The set of single characters you can choose from, if you wish to be compatible with both EBCDIC and ASCII (the two major character sets used at that time), is quite small:
! (logical not in bc)
#
% (modulo in bc)
@
^ (power in bc)
~ (used in paths)
Now, I've listed the ones used in bc, the calculator used in the same time period, not because they were a reason, but because you should understand the context of the Bourne shell developers and users. The bc notation did not arrive out of thin air; the prevailing preferences influenced the choice, because the developers wanted the syntax to be intuitive, at least for themselves. The above bc notes are therefore useful in showing what kind of associations contemporary developers had with specific characters. I don't claim that bc necessarily had an impact on the Bourne shell -- though I do believe it did: one of the reasons for developing the Bourne shell was to make using and automating tools like bc easier.
Effectively, only # and @ were "unused" characters available in both ASCII and EBCDIC; and it appears "hash" won over "at".
And why do we use // as a comment sign in many compiled languages?
The // comment style is from BCPL. Many of the BCPL tokens and operators were already multiple characters long, and I suspect that at the time the developers considered it better (for interoperability) to reuse an already-used character for the comment-line token, rather than introduce a completely new character.
I suspect that the // comment style has a historical background in margin notes; a double vertical line used to separate the actual content from notes or explanations is a clear visual separator even to those not familiar with the practice.
Why not use # instead of //, or [vice versa]?
In both of the cases above, there is clear logic. However, that does not mean that these were the only logical choices available. These are just the ones that made the most sense to the developers at the time when the choice was made -- and I've tried to shed some light on the possible reasons, the context for the choices, above.
If these kinds of questions interest you, I recommend you find old math and science (physics in particular) books, and perhaps even reproductions of old notes. Best tools are intuitive, you see; and to find what was intuitive to someone, you need to find out the context they worked in. I am absolutely certain you can find interesting "reasons" -- things that made certain choices logical and intuitive to them, while to us they may seem odd -- by finding out the habits of the early developers and their colleagues and mentors.

Related

ECMAScript 2017, 5 Notational Conventions: What are productions, terminal and nonterminal symbols?

Can someone explain to me what a context free grammar is? After looking at the Wikipedia entry and then the Wikipedia entry on formal grammar, I am left utterly and totally befuddled. Would someone be so kind as to explain what these things are?
I am wondering this because I wish to investigate parsing, and also on the side, the limitation of a regex engine.
I'm not sure if these terms are directly programming related, or if they are related more to linguistics in general. If that is the case, I apologize, perhaps this could be moved if so?
A context free grammar is a grammar which satisfies certain properties. In computer science, grammars describe languages; specifically, they describe formal languages.
A formal language is just a set (mathematical term for a collection of objects) of strings (sequences of symbols... very similar to the programming usage of the word "string"). A simple example of a formal language is the set of all binary strings of length three, {000, 001, 010, 011, 100, 101, 110, 111}.
Grammars work by defining transformations you can make to construct a string in the language described by a grammar. Grammars will say how to transform a start symbol (usually S) into some string of symbols. A grammar for the language given before is:
S -> BBB
B -> 0
B -> 1
The way to interpret this is to say that S can be replaced by BBB, and B can be replaced by 0, and B can be replaced by 1. So to construct the string 010 we can do S -> BBB -> 0BB -> 01B -> 010.
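As a sketch of how one might mechanize those replacements (the Python dict-of-lists encoding and the function name below are my own illustration, not part of the grammar formalism), a few lines that enumerate the language of the grammar above:

grammar = {"S": ["BBB"], "B": ["0", "1"]}

def derive(string="S"):
    # Find the first non-terminal and try every rule for it; if none is
    # left, the string consists only of terminals and is in the language.
    for i, symbol in enumerate(string):
        if symbol in grammar:
            for replacement in grammar[symbol]:
                yield from derive(string[:i] + replacement + string[i + 1:])
            return
    yield string

print(sorted(derive()))  # ['000', '001', '010', '011', '100', '101', '110', '111']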
A context-free grammar is simply a grammar where the thing that you're replacing (left of the arrow) is a single "non-terminal" symbol. A non-terminal symbol is any symbol you use in the grammar that can't appear in your final strings. In the grammar above, "S" and "B" are non-terminal symbols, and "0" and "1" are "terminal" symbols. Grammars like
S -> AB
AB -> 1
A -> AA
B -> 0
are not context free, since they contain rules like AB -> 1 that have more than one symbol on the left.
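A hedged sketch of that definition in code (the rule encoding, with uppercase letters standing for non-terminals, is an assumption of this sketch, not a standard notation):

def is_context_free(rules):
    # Context-free: every left-hand side is exactly one non-terminal symbol.
    return all(len(lhs) == 1 and lhs.isupper() for lhs in rules)

print(is_context_free({"S": ["BBB"], "B": ["0", "1"]}))                      # True
print(is_context_free({"S": ["AB"], "AB": ["1"], "A": ["AA"], "B": ["0"]}))  # False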
Language theory is related to the theory of computation, which is the more philosophical side of computer science: deciding which programs are possible, which will ever be possible to write, and which types of problems it is impossible to write an algorithm to solve.
A regular expression is a way of describing a regular language. A regular language is a language which can be decided by a deterministic finite automaton.
You should read the article on Finite State Machines: http://en.wikipedia.org/wiki/Finite_state_machine
And Regular languages:
http://en.wikipedia.org/wiki/Regular_language
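For concreteness, here is a small Python sketch of what "decided by a deterministic finite automaton" looks like: a transition table and a current state, nothing more. The example language (binary strings with an even number of 1s) is my own choice, not taken from the post above.

TRANSITIONS = {("even", "0"): "even", ("even", "1"): "odd",
               ("odd",  "0"): "odd",  ("odd",  "1"): "even"}

def accepts(string):
    state = "even"                      # start state
    for symbol in string:
        state = TRANSITIONS[(state, symbol)]
    return state == "even"              # the single accepting state

print(accepts("1010"))  # True  (two 1s)
print(accepts("0111"))  # False (three 1s)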
All Regular Languages are Context Free Languages, but there are Context Free Languages that are not regular. A Context Free Language is the set of all strings accepted by a Context Free Grammar or a Pushdown Automaton, which is a Finite State Machine with a single stack: http://en.wikipedia.org/wiki/Pushdown_automaton#PDA_and_Context-free_Languages
There are more complicated languages that require a Turing Machine (any possible program you can write on your computer) to decide whether a string is in the language or not.
Language theory is also very related to the P vs. NP problem, and some other interesting stuff.
My third-year Introduction to Computer Science textbook was pretty good at explaining this stuff: Introduction to the Theory of Computation, by Michael Sipser. But it cost me like $160 to buy new and it's not very big. Maybe you can find a used copy, or find a copy at a library; it might help you.
EDIT:
The limitations of Regular Expressions and higher language classes have been researched a ton over the past 50 years or so. You might be interested in the pumping lemma for regular languages. It is a means of proving that a certain language is not regular:
http://en.wikipedia.org/wiki/Pumping_lemma_for_regular_languages
If a language isn't regular it may be Context Free, which means it could be described by a Context Free Grammar, or it may be in an even higher language class. You could prove it's not Context Free with the pumping lemma for Context Free languages, which is similar to the one for regular languages.
A language can even be undecidable, which means even a Turing machine (any program your computer can run) can't be programmed to decide whether a string should be accepted as part of the language or rejected.
I think the part you're most interested in is the Finite State Machines (both deterministic and non-deterministic), to see what languages a Regular Expression can decide, and the pumping lemma to prove which languages are not regular.
Basically a language isn't regular if it needs some sort of memory or ability to count. The language of matching parentheses is not regular, for example, because the machine needs to remember whether it has opened a parenthesis to know whether it has to close one.
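To see why, here is a sketch of the "memory" involved: a recognizer for matching parentheses needs an unbounded counter, which a finite automaton by definition does not have (the code is my own illustration, not from the answer above).

def balanced(s):
    depth = 0
    for c in s:
        if c == "(":
            depth += 1                  # remember one more open parenthesis
        elif c == ")":
            depth -= 1
            if depth < 0:               # a ')' with nothing to match
                return False
    return depth == 0                   # everything opened was closed

print(balanced("(()())"))  # True
print(balanced("(()"))     # False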
The language of all strings using the letters a and b that contain at least three b's is a regular language: abababa
The language of all strings using the letters a and b that contain more b's than a's is not regular.
Also you should note that all finite languages are regular. For example:
The language of all strings less than 50 characters long using the letters a and b that contain more b's than a's is regular; since it is finite, we know it could be described as (b|abb|bab|bba|aabbb|ababb|...) etc. until all the possible combinations are listed.

What is an Abstract Syntax Tree/Is it needed?

I've been interested in compiler/interpreter design/implementation for as long as I've been programming (only 5 years now) and it's always seemed like the "magic" behind the scenes that nobody really talks about (I know of at least 2 forums for operating system development, but I don't know of any community for compiler/interpreter/language development). Anyways, recently I've decided to start working on my own, in hopes of expanding my knowledge of programming as a whole (and hey, it's pretty fun :). So, based on the limited amount of reading material I have, and Wikipedia, I've developed this concept of the components of a compiler/interpreter:
Source code -> Lexical Analysis -> Abstract Syntax Tree -> Syntactic Analysis -> Semantic Analysis -> Code Generation -> Executable Code.
(I know there's more to code generation and executable code, but I haven't gotten that far yet :)
And with that knowledge, I've created a very basic lexer (in Java) to take input from a source file, and output the tokens into another file. A sample input/output would look like this:
Input:
int a := 2
if(a = 3) then
print "Yay!"
endif
Output (from lexer):
INTEGER
A
ASSIGN
2
IF
L_PAR
A
COMP
3
R_PAR
THEN
PRINT
YAY!
ENDIF
Personally, I think it would be really easy to go from there to syntactic/semantic analysis, and possibly even code generation, which leads me to question: Why use an AST, when it seems that my lexer is doing just as good a job? However, 100% of my sources I use to research this topic all seem adamant that this is a necessary part of any compiler/interpreter. Am I missing the point of what an AST really is (a tree that shows the logical flow of a program)?
TL;DR: Currently en route to developing a compiler; finished the lexer; seems to me like the output would make for easy syntactic analysis/semantic analysis, rather than doing an AST. So why use one? Am I missing the point of one?
Thanks!
First off, one thing about your list of components does not make sense. Building an AST is (pretty much) the syntactic analysis, so syntactic analysis either shouldn't be in there as a separate step, or should at least come before the AST.
What you got there is a lexer. All it gives you are individual tokens. In any case, you will need an actual parser, because regular languages aren't any fun to program in. You can't even (properly) nest expressions. Heck, you can't even handle operator precedence. A token stream doesn't give you:
An idea where statements and expressions start and end.
An idea how statements are grouped into blocks.
An idea which part of the expression has which precedence, associativity, etc.
A clear, uncluttered view of the actual structure of the program.
A structure which can be passed through a myriad of transformations, without every single pass knowing and having code to accommodate the fact that the condition in an if is enclosed by parentheses.
... more generally, any kind of comprehension above the level of a single token.
Suppose you have two passes in your compiler which optimize certain kinds of operators applied to certain arguments (say, constant folding and algebraic simplifications like x - x -> 0). If you hand them tokens for the expression x - x * 1, these passes are cluttered with figuring out that the x * 1 part comes first. And they have to know that, lest the transformation be incorrect (consider 1 + 2 * 3).
These things are tricky enough to get right as it is, so you don't want to be pestered by parsing problems as well. That's why you solve the parsing problem first, in a separate parsing step. Then you can, say, replace a function call with its definition, without worrying about adding parentheses so the meaning remains the same. You save time, you separate concerns, you avoid repetition, you enable simpler code in many other places, etc.
A parser figures all that out, and builds an AST which consequently holds all that information. Without any further data on the nodes, the shape of the AST alone gives you no. 1, 2, 3, and much more, for free. None of the bazillion passes that follow have to worry about it anymore.
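To make that concrete, here is a minimal sketch in Python (entirely my own, not the poster's code) of a recursive-descent parser that turns tokens for x - x * 1 into an AST with precedence already resolved, plus a trivial x * 1 -> x pass that no longer has to think about parsing at all:

import re

def tokenize(src):
    return re.findall(r"\d+|[A-Za-z_]\w*|[-+*/()]", src)

def parse(tokens):
    # expr := term (('+'|'-') term)*    term := factor (('*'|'/') factor)*
    pos = 0
    def peek():
        return tokens[pos] if pos < len(tokens) else None
    def take():
        nonlocal pos
        pos += 1
        return tokens[pos - 1]
    def factor():
        tok = take()
        if tok == "(":
            node = expr()
            take()                        # consume ')'
            return node
        return ("num", int(tok)) if tok.isdigit() else ("var", tok)
    def term():
        node = factor()
        while peek() in ("*", "/"):
            node = (take(), node, factor())
        return node
    def expr():
        node = term()
        while peek() in ("+", "-"):
            node = (take(), node, term())
        return node
    return expr()

def simplify(node):
    # An example pass: fold x * 1 -> x, without caring how it was spelled.
    if isinstance(node, tuple) and node[0] in "+-*/":
        op, left, right = node[0], simplify(node[1]), simplify(node[2])
        if op == "*" and right == ("num", 1):
            return left
        return (op, left, right)
    return node

ast = parse(tokenize("x - x * 1"))
print(ast)            # ('-', ('var', 'x'), ('*', ('var', 'x'), ('num', 1)))
print(simplify(ast))  # ('-', ('var', 'x'), ('var', 'x'))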
That's not to say you always have to have an AST. For sufficiently simple languages, you can do a single-pass compiler. Instead of generating an AST or some other intermediate representation during parsing, you emit code as you go. However, this becomes harder for less simple languages and you can't reasonably do a lot of stuff (such as 70% of all optimizations and diagnostics -- and yes I just made that number up). Generally, I wouldn't advise you to do this. There are good reasons single-pass compilers are mostly dead. Even languages which permit them (e.g. C) are nowadays implemented with multiple passes and ASTs. It's a simple way to get started, but will severely limit you (and the language, if you design it) later.
You've got the AST at the wrong point in your flow diagram. Typically, the output of the lexer is a series of tokens (as you have in your output), and these are fed to the parser/syntactic analyzer, which generates the AST. So the output of your lexer is different from an AST because they are used at different points in the compilation process and fulfill different purposes.
The next logical question is: What, then, is an AST? Well, the purpose of parsing/syntactic analysis is to turn the series of tokens generated by the lexer into an AST (or parse tree). The AST is an intermediate representation that captures the relationship between syntactical elements in a way that is easier to work with programmatically. One way of thinking about this is that a text program is a one dimensional construct, and can only represent ideas as a sequence of elements, while the AST is freed from this constraint, and can represent the underlying relationships between those elements in 2 dimensions (as typically drawn), or any higher dimension space if you so choose to think about it that way.
For instance, a binary operator has two operands, let's call them A and B. In code, this may be spelled 'A * B' (assuming an infix operator - another advantage of an AST is to hide such distinctions that may be important syntactically, but not semantically), but for the compiler to "understand" this expression, it must read 5 characters sequentially, and this logic can quickly become cumbersome, given the many possibilities in even a small language. In an AST representation, however, we have a "binary operator" node whose value is '*', and that node has two children, values 'A' and 'B'.
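As a tiny sketch of that node (the class and field names are illustrative only, not from the post):

class BinaryOp:
    def __init__(self, op, left, right):
        self.op = op          # the operator itself, e.g. '*'
        self.left = left      # first operand
        self.right = right    # second operand

# 'A * B' as a tree: one operator node with its two operands as children.
expr = BinaryOp("*", "A", "B")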
As your compiler project progresses, I think you will begin to see the advantages of this representation.

Do any general purpose languages support n + 2 = 3 and beyond?

Do any general purpose languages support, for example:
n + 2 = 3;
To ensure, possibly among other things, that n will now read as 1, or in other cases as a somewhat, but not entirely, uncertain value.
Beyond this, are there any that can support this concept for algorithmic stuff in general -- for example a mixture of strings and numbers with concepts such as concatenate, substring, numerical bitwise rotate, etc. -- not because someone hard-coded it into the language, but because the language uses its knowledge of how things work (your C++-style classes, your classless scripting-language-like objects, the functions that exist, etc.) to rearrange things, as is common in algebra?
I guess only Prolog can do that kind of stuff (counting only well known programming languages).
Certainly: Algol 60 purports to support this particular case, if I remember rightly (not sure... it's been a while :). However, it only handles the simple linear case, which isn't useful since it is easy enough to subtract the constant from both sides in your head.
However, many modern languages pose much harder problems to the compiler in terms of their type systems. Many allow posing typing problems which have a solution but which the compiler cannot solve; this is particularly true of compilers that do type inference.
Haskell had so-called "n-plus-k" patterns, where for example you could write the factorial function as:
fac 0 = 1
fac (n+1) = (n+1) * fac n
This is now viewed as A Bad Idea (some reasons here), and was removed from the language specification (deprecated in Haskell98 and removed in Haskell2010). But! There is a more sophisticated, more general form being worked on for future versions of Haskell:
View Patterns -- see the section "N+K Patterns"
General purpose languages are called general purpose for a reason. You don't solve math problems with them.
None of the GP languages I know of allow expressions on the left side of an assignment. Erlang has pattern matching, but that's an entirely different thing.
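For contrast, a sketch of how the n + 2 = 3 example typically ends up being handled in a general-purpose language: not as an assignment with an expression on the left, but as an equation handed to a symbolic-math library. The use of the third-party SymPy package here is an assumption of this sketch, not a language feature.

# Solving n + 2 = 3 for n with SymPy (pip install sympy), rather than
# assigning to the expression itself.
from sympy import Eq, solve, symbols

n = symbols("n")
print(solve(Eq(n + 2, 3), n))  # [1]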

How forgiving should form inputs be?

I went to my bank website the other day and entered my account number with a trailing space. An error message popped up that said, "Account number must consist of numeric values only." I thought to myself, "Seriously?! You couldn't have just stripped the space for me?". If I were any less of a computer geek, I might even have thought, "What? There are only numbers in there!" (not being able to see the space).
The Calculator that comes with Ubuntu on the other hand merrily accepts spaces and commas, but oddly doesn't like trailing dots (without any ensuing digits).
So, that begs the question. Exactly how forgiving should web forms be? I don't think trimming whitespace is too much to ask, but what about other integer fields?
Should they allow +/- signs?
How many spaces should be allowed between the sign and the number?
What about commas for thousands separators?
What about other parts of the world where dots are used instead?
What if they're in between every 4 digits instead of every 3?
What about hexadecimal and octal representations?
Scientific notation?
What if I accidentally hit the quote button when I'm trying to hit enter, should that be stripped too?
It would be very easy for me to strip out all non-digit characters, and that would be extremely forgiving, but what if the user made an actual mistake that affects the input and should have been caught, but now I've just stripped it out?
What about things like phone numbers (which have a huge variety of formats), postal codes, zip codes, credit card numbers, usernames, emails, URLs (should I assume http? What about .com while I'm at it?)?
Where do you draw the line?
For something as important as banking, I don't mind it complaining about my input, especially if the other option is mistakenly transferring a bucketload of money into some stranger's account instead of my wife's (because of a missing or incorrect digit for example).
A classic example is one of my banks which disallows monetary values unless they have ".99" at the end (where 9 can be any digit of course). The vast majority of things I do are for exact dollar amounts and it's a tad annoying to have to always enter 500.00 instead of just 500.
But I'll be happier about that the first time I avoid accidentally paying somebody $5072 instead of $50.72 just because I forgot the decimal point. Actually, that's pretty unlikely since it also asks for confirmation and I'm pretty anal in controlling my money :-)
Having said that, the general rule I try to follow is "be liberal in what you accept, be strict in what you produce".
This allows other software using my output to expect a limited range of possibilities (making their lives easier). But it makes my software more useful if it can handle simple mistakes.
You draw the line at the point where the computer is guessing at what the correct input should be.
For example, a license key input box I wrote once accepts spaces and dashes and both upper and lower case, even though internally the keys were without said spaces, dashes and were all upper case. I could do that, since I knew that none of the keys actually had spaces or dashes.
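A sketch of that kind of normalization (the key format here is made up for illustration):

def normalize_key(raw):
    # Spaces, dashes and letter case carry no meaning in the keys described
    # above, so stripping them cannot change which key the user meant.
    return raw.replace(" ", "").replace("-", "").upper()

print(normalize_key("ab12-cd34 ef56"))  # AB12CD34EF56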
Your example about URLs is another good one. I've noticed that with modern browsers (I'm using Chrome), when something like 'flowers' is typed into the address bar, the browser knows it should search for it, since it's not a valid URL. If instead I type 'st', it auto-corrects (or auto-suggests) 'stackoverflow.com', since it's a bookmark.
A well-written input system will complain when it would otherwise be forced to guess what the correct input should be.
Numeric input:
Stripping non-digits seems reasonable to me, but the problem is conflicting decimal notation. Some regions expect , (comma) to denote the decimal separator, while others use . (period). Unless the input would likely be in other bases, I would only assume base 10. If it's reasonable to assume non-base 10 input (base-16 for color input, for example), I would go with standard conventions for denoting the bases: leading 0 means base 8, leading 0x means base 16.
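A sketch of those base conventions in code; decimal and thousands separators are deliberately left out, since guessing between ',' and '.' is exactly the problem described above (the function name and exact behaviour are my own illustration):

def parse_integer(text):
    s = text.strip()
    if s.lower().startswith("0x"):
        return int(s, 16)               # leading 0x -> base 16
    if len(s) > 1 and s.startswith("0"):
        return int(s, 8)                # leading 0 -> base 8
    return int(s, 10)

print(parse_integer(" 0x1F "))  # 31
print(parse_integer("0123"))    # 83
print(parse_integer("42"))      # 42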
String input:
This gets a lot more complicated. It mostly depends on what the input is actually meant to represent. A username should exclude characters that will cause trouble, but the meaning of 'cause trouble' will vary depending on the use of the application and the system itself. URLs have a concrete definition of what qualifies, but that definition is rather broad. Fortunately, many languages come with tools to discern URLs, without you having to code your own parsing (whether the language does it perfectly or not is another question).
In the end, it's really a case-by-case basis. I do like paxadiablo's general rule, though: Accept as much as you can, output only what you must.
It totally depends on how the data is going to be used.
If the input is a monetary amount, for a transaction for example, then the inputted variable should be normalised to a set of standards for sure.
If it's simply a case of a phone number, then it is unlikely the stored data will provide any functional sort of use so you can be more forgiving.
There is nothing wrong with enforcing a correct format to make the displayed data look nicer, but you have to balance user irritation against micro-benefits.
Once you start collecting data you can scan through it and see what sort of patterns emerge, and you can automatically strip off the input formatting.
Where do you draw the line?
When the consequences of accepting "invalid" data outweigh the irritation of not accepting it.
Should they allow +/- signs?
If negative values are valid, then of course they should.
If not, then don't just silently strip minus signs, as it totally changes the meaning of the data. Stripping pluses is less of a problem.
What if [thousands separators are] in between every 4 digits instead of every 3?
In countries that use three-digit grouping, "1,0000" can be assumed to be a typo. But is it a typo for "10,000" or for "1,000"? I wouldn't dare guess, as a wrong guess could cost the user $9,000.
What about hexadecimal and octal representations?
Unless you're running the search feature for unicode.org, I can't imagine why anyone would use hexadecimal in a web form.
And "01234" is almost certainly intended to be 1234 instead of 668.
What about things like...credit card numbers
Please allow spaces or hyphens in credit card numbers. It's really annoying when I have to type an undelimited 16-digit number.
I think you're overreacting a little bit. If there's anything in the field that shouldn't be there, strip it. Otherwise, try to force the input into whatever format you want, and if it doesn't fit, reject it.
I would say "Accept anything but process only valid data".
Expect your users to behave like computer noobs. Validate the input data using regular expressions and other validators.
Search for standard regular expressions for urls, emails and stuff.
Throw in a regular expression like "/(?:([a-zA-Z0-9][\s,]+))([a-zA-Z0-9]+)$/" for comma- or space-separated values. With minor tweaking, this expression will work for any number of comma-separated values.
The one that irritates me as a user is credit card numbers: conventionally these appear as groups of 4 digits with spaces separating them, but the odd web form will only accept a single string of digits with no spaces, and gives no indication that this is the format it's seeking. Similarly telephone numbers: humans often use spaces to improve clarity, but web forms sometimes accept the spaces and sometimes don't.

Is it possible to construct a turing-complete language in which every string is a correct program?

Is it possible to construct a turing-complete language in which every string is a correct program?
Any examples? Even better, any real-world examples?
Clarification: by "correct" I mean "compiles", although "runs without error" and "runs without error, and finishes in finite time" would be interesting questions too :)
By string I mean any sequence of bytes, although a restriction to a set of characters will do.
Yes (assuming by correct you mean compiles, not does something useful). Take brainfuck and map multiple letters to the eight commands.
Edit... oh and redefine an unmatched [ or ] to print "meh. nitpickers" to the screen.
One PhD please ;)
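A sketch of that mapping in Python (the byte-to-command table and the bracket handling below are my own choices, not a standard encoding; in particular, I neutralize unmatched brackets rather than printing anything):

BF_COMMANDS = "><+-.,[]"

def to_brainfuck(source):
    # Every byte maps onto one of the eight commands, so no input is rejected.
    program = [BF_COMMANDS[b % 8] for b in source]
    # Redefine unmatched brackets so they can never make a program invalid:
    depth = 0
    fixed = []
    for c in program:
        if c == "]" and depth == 0:
            continue                     # unmatched ']' is simply ignored
        if c == "[":
            depth += 1
        elif c == "]":
            depth -= 1
        fixed.append(c)
    fixed.extend("]" * depth)            # close any brackets left open
    return "".join(fixed)

print(to_brainfuck(b"any bytes at all are a program now"))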
This is a compiler for a C-like language expressed in BNF as
<program> ::= <character> | <character> <program>
#!/bin/bash
# full_language.sh
gcc "$1"
if [ $? != 0 ]
then
    echo -e "#!/bin/bash\necho 'hi'" > a.out
    chmod +x a.out
fi
We can build this up out of any Turing-complete language. Take C, for example. If the input is a correct C program, then do what it intended. Otherwise, print "Hello, world!". Or just do nothing.
That makes a Turing-complete language where every string is a correct program.
Existence proof: perl.
No, because your definition of 'correct' would leave no room for 'incorrect', since 'correct' would include all computable numbers and non-halting programs. To answer in the affirmative would make the question meaningless, as 'correct' loses its definition.
Combinatory logic comes very near to the requirement you pose. Every string (over the {K, S, #} alphabet) can be extended to a program. Thus, although your requirement is not entirely fulfilled, its straightforward weakening to a prefix property is satisfied by combinatory logic.
Although these programs are syntactically correct, they do not necessarily halt. That is not necessarily a problem: combinatory logic was originally developed for investigating theoretical questions, not as a practical programming language (although it can be used as such). Are non-halting combinatory logic "programs" interesting? Do they have at least a theoretical relevance? Of course some of them do! For example, Omega is a non-halting combinatory logic term, but it is the subject of articles and book chapters; it has theoretical interest, thus we can say it is meaningful.
Summary: if we regard combinatory logic over the alphabet {K, S, #}, then we can say that every possible string over this alphabet can be extended (as a prefix) to a syntactically correct combinatory logic program. Some of these won't halt, but even those that don't halt can be theoretically interesting, thus "meaningful" (e.g. Omega).
The answer TokenMacGuy provided is better than mine, because it approaches the problem from a broader view, and also because Jot is inspired by combinatory logic; thus TokenMacGuy's answer supersedes mine.
If by "correct" you mean syntactically, then certainly, yes.
http://en.wikipedia.org/wiki/One_instruction_set_computer
http://en.wikipedia.org/wiki/Whitespace_(programming_language)
etc
Turing-complete and "finishes in finite time" are not possible.
Excerpt from wikipedia: http://en.wikipedia.org/wiki/Turing_completeness
"One important result from computability theory is that it is impossible in general to determine whether a program written in a Turing-complete language will continue executing forever or will stop within a finite period of time (see halting problem)."
What you are describing is essentially similar to a mapping from Gödel numbers to the original programs. Briefly, the idea is that every program should be reducible to a unique integer, and you could use that to draw conclusions about the program, such as with some sort of oracle. One such mapping is the Jot language, which has only two operators, 1 and 0, and the first operator must be a 1.