Meaning of foo, bar, baz, etc - terminology

I've always wanted to know what the foo, bar, baz, etc... names mean.
Several times I've found these terms in scientific articles. Furthermore, this terminology seems to serve as a de facto standard for examples elsewhere.
Does anyone know where these terms come from, and how to use them properly?

While this question borders on being outside the scope of Stack Overflow, I highly recommend the Wikipedia article on the matter:
The History of Foobar
To give a quick preview:
The word foo originated as a nonsense word from the 1930s, the military term FUBAR emerged in the 1940s... The term foo fighter was used by Allied aircraft pilots in World War II to describe various UFOs or mysterious aerial phenomena. The first known use of the terms in print in a programming context appears in a 1965 edition of MIT's Tech Engineering News.

The terms foobar (/ˈfuːbɑːr/), foo, bar, baz, and others are used as metasyntactic variables and placeholder names in computer programming or computer-related documentation. They have been used to name entities such as variables, functions, and commands whose exact identity is unimportant and serve only to demonstrate a concept.
Foobar Wikipedia article
TL;DR:
Foo, bar, and other terms are used as placeholder names to demonstrate a concept. In real life, these values are usually substituted with more meaningful names that fit the context of your code.

Related

ECMAScript 2017, 5 Notational Conventions: What are productions, terminal and nonterminal symbols? [duplicate]

Can someone explain to me what a context free grammar is? After looking at the Wikipedia entry and then the Wikipedia entry on formal grammar, I am left utterly and totally befuddled. Would someone be so kind as to explain what these things are?
I am wondering this because I wish to investigate parsing, and also on the side, the limitation of a regex engine.
I'm not sure if these terms are directly programming related, or if they are related more to linguistics in general. If that is the case, I apologize, perhaps this could be moved if so?
A context free grammar is a grammar which satisfies certain properties. In computer science, grammars describe languages; specifically, they describe formal languages.
A formal language is just a set (mathematical term for a collection of objects) of strings (sequences of symbols... very similar to the programming usage of the word "string"). A simple example of a formal language is the set of all binary strings of length three, {000, 001, 010, 011, 100, 101, 110, 111}.
Grammars work by defining transformations you can make to construct a string in the language described by a grammar. Grammars will say how to transform a start symbol (usually S) into some string of symbols. A grammar for the language given before is:
S -> BBB
B -> 0
B -> 1
The way to interpret this is to say that S can be replaced by BBB, and B can be replaced by 0, and B can be replaced by 1. So to construct the string 010 we can do S -> BBB -> 0BB -> 01B -> 010.
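To make that concrete, here is a small Python sketch (my own illustration, not part of the original answer) that enumerates the language generated by that grammar by expanding each B independently:

from itertools import product

def language_of_BBB():
    # S -> BBB, and each B -> 0 | 1, so the language is every
    # combination of three symbols drawn from {0, 1}.
    return {"".join(bits) for bits in product("01", repeat=3)}

print(sorted(language_of_BBB()))
# ['000', '001', '010', '011', '100', '101', '110', '111']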
A context-free grammar is simply a grammar where the thing that you're replacing (left of the arrow) is a single "non-terminal" symbol. A non-terminal symbol is any symbol you use in the grammar that can't appear in your final strings. In the grammar above, "S" and "B" are non-terminal symbols, and "0" and "1" are "terminal" symbols. Grammars like
S -> AB
AB -> 1
A -> AA
B -> 0
are not context free, since they contain rules like AB -> 1 that have more than one non-terminal symbol on the left.
Language theory is related to the theory of computation, the more philosophical side of computer science, which is about deciding which programs are possible, or which will ever be possible to write, and what types of problems are impossible to write an algorithm to solve.
A regular expression is a way of describing a regular language. A regular language is a language which can be decided by a deterministic finite automaton.
You should read the article on Finite State Machines: http://en.wikipedia.org/wiki/Finite_state_machine
And Regular languages:
http://en.wikipedia.org/wiki/Regular_language
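If it helps, here is a hedged table-driven sketch in Python (my illustration, not from the answer) of a deterministic finite automaton deciding a regular language, namely binary strings with an even number of 1s:

# A classic two-state DFA: accept binary strings with an even number of 1s.
TRANSITIONS = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd",  "0"): "odd",  ("odd",  "1"): "even",
}
ACCEPTING = {"even"}

def dfa_accepts(s):
    state = "even"                      # start state
    for symbol in s:
        state = TRANSITIONS[(state, symbol)]
    return state in ACCEPTING

print(dfa_accepts("1010"))   # True  (two 1s)
print(dfa_accepts("111"))    # False (three 1s)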
All Regular Languages are Context Free Languages, but there are Context Free Languages that are not regular. A Context Free Language is the set of all strings accepted by a Context Free Grammar or a Pushdown Automaton, which is a Finite State Machine with a single stack: http://en.wikipedia.org/wiki/Pushdown_automaton#PDA_and_Context-free_Languages
There are more complicated languages that require a Turing Machine (Any possible program you can write on your computer) to decide if a string is in the language or not.
Language theory is also very related to the P vs. NP problem, and some other interesting stuff.
My third-year Introduction to Computer Science textbook was pretty good at explaining this stuff: Introduction to the Theory of Computation, by Michael Sipser. But it cost me like $160 to buy new, and it's not very big. Maybe you can find a used copy, or a copy at a library; it might help you.
EDIT:
The limitations of Regular Expressions and higher language classes have been researched a ton over the past 50 years or so. You might be interested in the pumping lemma for regular languages. It is a means of proving that a certain language is not regular:
http://en.wikipedia.org/wiki/Pumping_lemma_for_regular_languages
If a language isn't regular it may be Context Free, which means it could be described by a Context Free Grammar, or it may be in an even higher language class. You could prove it's not Context Free with the pumping lemma for Context Free languages, which is similar to the one for regular languages.
A language can even be undecidable, which means even a Turing machine (any program your computer can run) can't be programmed to decide if a string should be accepted as in the language or rejected.
I think the part you're most interested in is the Finite State Machines (both deterministic and non-deterministic) to see what languages a Regular Expression can decide, and the pumping lemma to prove which languages are not regular.
Basically a language isn't regular if it needs some sort of memory or ability to count. The language of matching parenthesis is not regular for example because the machine needs to remember if it has opened a parenthesis to know if it has to close one.
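As a small illustrative sketch (my own, not from the answer), here is the kind of counter a matching-parentheses check needs; a finite automaton has only finitely many states, so it cannot track an unbounded nesting depth, but a pushdown automaton's stack (or, here, a counter) can:

def balanced(s):
    depth = 0                  # plays the role of the PDA's stack
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:      # a ')' with no matching '('
                return False
    return depth == 0

print(balanced("(()())"))  # True
print(balanced("(()"))     # False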
The language of all strings using the letters a and b that contain at least three b's is a regular language; for example, abababa is in it.
The language of all strings using the letters a and b that contain more b's than a's is not regular.
Also, you should note that all finite languages are regular, for example:
The language of all strings less than 50 characters long using the letters a and b that contain more b's than a's is regular; since it is finite, we know it could be described as (b|abb|bab|bba|aabbb|ababb|...) etc. until all the possible combinations are listed.

Reasons and history for choice of common comment signs

Most of the programming languages use // or # for a single line comment (see wiki). It seems that # is especially used for interpreted languages. According to this question, the reason for that seems to be that one of the early shells (the Bourne shell) used '#' as a comment character and made further use of it (the shebang).
Is there a logical reason to choose # as a comment sign (e.g. does it symbolize crossing out with #)? And why do we use // as a comment sign in many compiled languages (especially in C, as it seems to be one of the earliest compiled languages with that symbol)? Are there logical reasons for that? Why not use # instead of //, or // instead of #?
Is there a logical reason why to choose # as a comment sign [in early shells]?
The Bourne shell tokenizer is quite simple. To add comment line support, a single character identifier was the simplest, and logical, choice.
The set of single characters you can choose from, if you wish to be compatible with both EBCDIC and ASCII (the two major character sets used at that time), is quite small:
! (logical not in bc)
#
% (modulo in bc)
@
^ (power in bc)
~ (used in paths)
Now, I've listed the ones used in bc, the calculator used in the same time period, not because they were a reason, but because you should understand the context of the Bourne shell developers and users. The bc notation did not arrive out of thin air; the prevailing preferences influenced the choice, because the developers wanted the syntax to be intuitive, at least for themselves. The above bc notes are therefore useful in showing what kind of associations contemporary developers had with specific characters. I don't intend to imply that bc necessarily had an impact on Bourne shell -- but I do believe it did; that one of the reasons for developing the Bourne shell was to make using and automating tools like bc easier.
Effectively, only @ and # were "unused" characters available in both ASCII and EBCDIC; and it appears "hash" won over "at".
And why do we use // as a comment sign in many compiled languages?
The // comment style is from BCPL. Many of the BCPL tokens and operators were already multiple characters long, and I suspect that at the time the developers considered it better (for interoperability) to reuse an already used character for the comment line token, rather than introduce a completely new character.
I suspect that the // comment style has a historical background in margin notes; a double vertical line used to separate the actual content from notes or explanations being a clear visual separator to even those not familiar with the practice.
Why not use # instead of //, or [vice versa]?
In both of the cases above, there is clear logic. However, that does not mean that these were the only logical choices available. These are just the ones that made the most sense to the developers at the time when the choice was made -- and I've tried to shed some light on the possible reasons, the context for the choices, above.
If these kinds of questions interest you, I recommend you find old math and science (physics in particular) books, and perhaps even reproductions of old notes. Best tools are intuitive, you see; and to find what was intuitive to someone, you need to find out the context they worked in. I am absolutely certain you can find interesting "reasons" -- things that made certain choices logical and intuitive to them, while to us they may seem odd -- by finding out the habits of the early developers and their colleagues and mentors.

What is an Abstract Syntax Tree/Is it needed?

I've been interested in compiler/interpreter design/implementation for as long as I've been programming (only 5 years now) and it's always seemed like the "magic" behind the scenes that nobody really talks about (I know of at least 2 forums for operating system development, but I don't know of any community for compiler/interpreter/language development). Anyways, recently I've decided to start working on my own, in hopes to expand my knowledge of programming as a whole (and hey, it's pretty fun :). So, based off the limited amount of reading material I have, and Wikipedia, I've developed this concept of the components for a compiler/interpreter:
Source code -> Lexical Analysis -> Abstract Syntax Tree -> Syntactic Analysis -> Semantic Analysis -> Code Generation -> Executable Code.
(I know there's more to code generation and executable code, but I haven't gotten that far yet :)
And with that knowledge, I've created a very basic lexer (in Java) to take input from a source file, and output the tokens into another file. A sample input/output would look like this:
Input:
int a := 2
if(a = 3) then
print "Yay!"
endif
Output (from lexer):
INTEGER
A
ASSIGN
2
IF
L_PAR
A
COMP
3
R_PAR
THEN
PRINT
YAY!
ENDIF
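(For comparison only, here is a rough Python sketch of a lexer producing a similar token stream; the token names and regular expressions below are my assumptions about the toy language, not the asker's actual Java code.)

import re

# Hypothetical token rules for the toy language above (my guesses).
TOKEN_SPEC = [
    ("INTEGER", r"\bint\b"),
    ("IF",      r"\bif\b"),
    ("THEN",    r"\bthen\b"),
    ("PRINT",   r"\bprint\b"),
    ("ENDIF",   r"\bendif\b"),
    ("ASSIGN",  r":="),
    ("COMP",    r"="),
    ("L_PAR",   r"\("),
    ("R_PAR",   r"\)"),
    ("STRING",  r"\"[^\"]*\""),
    ("NUMBER",  r"\d+"),
    ("IDENT",   r"[A-Za-z_]\w*"),
    ("SKIP",    r"\s+"),
]
MASTER = re.compile("|".join("(?P<%s>%s)" % pair for pair in TOKEN_SPEC))

def lex(source):
    for match in MASTER.finditer(source):
        if match.lastgroup != "SKIP":
            yield match.lastgroup, match.group()

for kind, text in lex('int a := 2\nif(a = 3) then\nprint "Yay!"\nendif'):
    print(kind, text)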
Personally, I think it would be really easy to go from there to syntactic/semantic analysis, and possibly even code generation, which leads me to question: Why use an AST, when it seems that my lexer is doing just as good a job? However, 100% of my sources I use to research this topic all seem adamant that this is a necessary part of any compiler/interpreter. Am I missing the point of what an AST really is (a tree that shows the logical flow of a program)?
TL;DR: Currently in route to develop a compiler, finished the lexer, seems to me like the output would make for easy syntactic analysis/semantic analysis, rather than doing an AST. So why use one? Am I missing the point of one?
Thanks!
First off, one thing about your list of components does not make sense. Building an AST is (pretty much) the syntactic analysis, so syntactic analysis either shouldn't be a separate item in your list, or it should at least come before the AST.
What you got there is a lexer. All it gives you are individual tokens. In any case, you will need an actual parser, because regular languages aren't any fun to program in. You can't even (properly) nest expressions. Heck, you can't even handle operator precedence. A token stream doesn't give you:
An idea where statements and expressions start and end.
An idea how statements are grouped into blocks.
An idea which part of the expression has which precedence, associativity, etc.
A clear, uncluttered view at the actual structure of the program.
A structure which can be passed through a myriad of transformations, without every single pass needing code to accommodate the fact that the condition in an if is enclosed by parentheses.
... more generally, any kind of comprehension above the level of a single token.
Suppose you have two passes in your compiler which optimize certain kinds of operators applied to certain arguments (say, constant folding and algebraic simplifications like x - x -> 0). If you hand them tokens for the expression x - x * 1, these passes are cluttered with figuring out that the x * 1 part comes first. And they have to know that, lest the transformation be incorrect (consider 1 + 2 * 3).
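Here is a hedged Python sketch of that situation (the node classes and the pass are my own illustration, not the answerer's code): once the parser has produced a tree, the x * 1 and x - x rules become a straightforward recursive walk, with precedence already resolved.

# Hypothetical AST nodes, purely for illustration.
class Num:
    def __init__(self, value): self.value = value
class Var:
    def __init__(self, name): self.name = name
class BinOp:
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right

def simplify(node):
    if isinstance(node, BinOp):
        left, right = simplify(node.left), simplify(node.right)
        # Constant folding when both operands are literals.
        if isinstance(left, Num) and isinstance(right, Num):
            if node.op == '+': return Num(left.value + right.value)
            if node.op == '-': return Num(left.value - right.value)
            if node.op == '*': return Num(left.value * right.value)
        # Algebraic simplifications on the already-structured tree.
        if node.op == '*' and isinstance(right, Num) and right.value == 1:
            return left                    # x * 1  ->  x
        if (node.op == '-' and isinstance(left, Var)
                and isinstance(right, Var) and left.name == right.name):
            return Num(0)                  # x - x  ->  0
        return BinOp(node.op, left, right)
    return node

# x - x * 1: the parser already decided that * binds tighter than -.
tree = BinOp('-', Var('x'), BinOp('*', Var('x'), Num(1)))
print(simplify(tree).value)                # 0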
These things are tricky enough to get right as it is, so you don't want to be pestered by parsing problems as well. That's why you solve the parsing problem first, in a separate parsing step. Then you can, say, replace a function call with its definition, without worrying about adding parenthesis so the meaning remains the same. You save time, you separate concerns, you avoid repetition, you enable simpler code in many other places, etc.
A parser figures all that out, and builds an AST which consequently holds all that information. Without any further data on the nodes, the shape of the AST alone gives you no. 1, 2, 3, and much more, for free. None of the bazillion passes that follow have to worry about it anymore.
That's not to say you always have to have an AST. For sufficiently simple languages, you can do a single-pass compiler. Instead of generating an AST or some other intermediate representation during parsing, you emit code as you go. However, this becomes harder for less simple languages and you can't reasonably do a lot of stuff (such as 70% of all optimizations and diagnostics -- and yes I just made that number up). Generally, I wouldn't advise you to do this. There are good reasons single-pass compilers are mostly dead. Even languages which permit them (e.g. C) are nowadays implemented with multiple passes and ASTs. It's a simple way to get started, but will severely limit you (and the language, if you design it) later.
You've got the AST at the wrong point in your flow diagram. Typically, the output of the lexer is a series of tokens (as you have in your output), and these are fed to the parser/syntactic analyzer, which generates the AST. So the output of your lexer is different from an AST because they are used at different points in the compilation process and fulfill different purposes.
The next logical question is: What, then, is an AST? Well, the purpose of parsing/syntactic analysis is to turn the series of tokens generated by the lexer into an AST (or parse tree). The AST is an intermediate representation that captures the relationship between syntactical elements in a way that is easier to work with programmatically. One way of thinking about this is that a text program is a one dimensional construct, and can only represent ideas as a sequence of elements, while the AST is freed from this constraint, and can represent the underlying relationships between those elements in 2 dimensions (as typically drawn), or any higher dimension space if you so choose to think about it that way.
For instance, a binary operator has two operands, let's call them A and B. In code, this may be spelled 'A * B' (assuming an infix operator - another advantage of an AST is to hide such distinctions that may be important syntactically, but not semantically), but for the compiler to "understand" this expression, it must read 5 characters sequentially, and this logic can quickly become cumbersome, given the many possibilities in even a small language. In an AST representation, however, we have a "binary operator" node whose value is '*', and that node has two children, values 'A' and 'B'.
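As a tiny hedged illustration of that contrast (mine, not the answerer's), compare the flat token stream with a tree-shaped representation of the same expression:

# The same expression, one-dimensional vs. structured (illustrative only).
tokens = ["A", "*", "B"]                              # just a sequence
ast = ("binary_op", "*", ("var", "A"), ("var", "B"))  # operator node with two children

print(tokens)
print(ast)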
As your compiler project progresses, I think you will begin to see the advantages of this representation.

Do any general purpose languages support n + 2 = 3 and beyond?

Do any general purpose languages support, for example:
n + 2 = 3;
To ensure, possibly among other things, that 'n' will now read as 1, or in other cases as a somewhat, but not entirely, uncertain value.
Beyond this, are there any that can support this concept for algorithmic stuff in general, for example a mixture of strings and numbers with concepts such as concatenate, substring, numerical bitwise rotate, etc... not because someone hard coded it into the language, but because the language understands about using its knowledge of how things work (your C++ style classes, your classless scripting language like objects, functions that exist, etc...) and uses this knowledge to rearrange things, as is common in algebra.
I guess only Prolog can do that kind of stuff (counting only well known programming languages).
Certainly: Algol 60 purports to support this particular case, if I remember rightly (not sure... it's been a while :). However, it only handles the simple linear case, which isn't useful since it is easy enough to subtract the constant from both sides in your head.
However, many modern languages pose much harder problems to the compiler in terms of their type systems. Many allow posing typing issues which have a solution but which the compiler cannot solve; this is particularly true of compilers that do type inference.
Haskell had so-called "n-plus-k" patterns, where for example you could write the factorial function as:
fac 0 = 1
fac (n+1) = (n+1) * fac n
This is now viewed as A Bad Idea (some reasons here), and was removed from the language specification (deprecated in Haskell98 and removed in Haskell2010). But! There is a more sophisticated, more general form being worked on for future versions of Haskell:
View Patterns -- see the section "N+K Patterns"
General purpose languages are called general purpose for a reason. You don't solve math problems with them.
None of the GP languages I know of allow expressions on the left side of an assignment. Erlang has pattern matching, but that's an entirely different thing.

Purity vs Referential transparency

The terms do appear to be defined differently, but I've always thought of one implying the other; I can't think of any case when an expression is referentially transparent but not pure, or vice-versa.
Wikipedia maintains separate articles for these concepts and says:
From Referential transparency:
If all functions involved in the expression are pure functions, then the expression is referentially transparent. Also, some impure functions can be included in the expression if their values are discarded and their side effects are insignificant.
From Pure expressions:
Pure functions are required to construct pure expressions. [...] Pure expressions are often referred to as being referentially transparent.
I find these statements confusing. If the side effects from a so-called "impure function" are insignificant enough to allow not performing them (i.e. replace a call to such a function with its value) without materially changing the program, it's the same as if it were pure in the first place, isn't it?
Is there a simpler way to understand the differences between a pure expression and a referentially transparent one, if any? If there is a difference, an example expression that clearly demonstrates it would be appreciated.
If I gather in one place any three theorists of my acquaintance, at least two of them disagree on the meaning of the term "referential transparency." And when I was a young student, a mentor of mine gave me a paper explaining that even if you consider only the professional literature, the phrase "referentially transparent" is used to mean at least three different things. (Unfortunately that paper is somewhere in a box of reprints that have yet to be scanned. I searched Google Scholar for it but I had no success.)
I cannot inform you, but I can advise you to give up: Because even the tiny cadre of pointy-headed language theorists can't agree on what it means, the term "referentially transparent" is not useful. So don't use it.
P.S. On any topic to do with the semantics of programming languages, Wikipedia is unreliable. I have given up trying to fix it; the Wikipedian process seems to value change and popular voting over stability and accuracy.
All pure functions are necessarily referentially transparent. Since, by definition, they cannot access anything other than what they are passed, their result must be fully determined by their arguments.
However, it is possible to have referentially transparent functions which are not pure. I can write a function which is given an int i, then generates a random number r, subtracts r from itself and places it in s, then returns i - s. Clearly this function is impure, because it is generating random numbers. However, it is referentially transparent. In this case, the example is silly and contrived. However, in, e.g., Haskell, the id function is of type a -> a whereas my stupidId function would be of type a -> IO a, indicating that it makes use of side effects. When a programmer can guarantee through means of an external proof that their function is actually referentially transparent, then they can use unsafePerformIO to strip the IO back away from the type.
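A rough Python analogue of that contrived function (my own sketch; the answer's version is Haskell) might look like this: it performs an impure action, drawing a random number, yet its result depends only on its argument, so any call can be replaced by its value.

import random

def stupid_id(i):
    r = random.random()   # impure: consumes randomness
    s = r - r             # ...but the effect cancels out: s is always 0.0
    return i - s          # so the result is fully determined by i

print(stupid_id(42))      # always 42.0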
I'm somewhat unsure of the answer I give here, but surely somebody will point us in some direction. :-)
"Purity" is generally considered to mean "lack of side-effects". An expression is said to be pure if its evaluation lacks side-effects. What's a side-effect then? In a purely functional language, side-effect is anything that doesn't go by the simple beta-rule (the rule that to evaluate function application is the same as to substitute actual parameter for all free occurrences of the formal parameter).
For example, in a functional language with linear (or uniqueness; the distinction shouldn't bother us at this moment) types, some (controlled) mutation is allowed.
So I guess we have sorted out what "purity" and "side-effects" might be.
Referential transparency (according to the Wikipedia article you cited) means that a variable can be replaced by the expression it denotes (abbreviates, stands for) without changing the meaning of the program at hand (btw, this is also a hard question to tackle, and I won't attempt to do so here). So, "purity" and "referential transparency" are indeed different things: "purity" is a property of some expression that roughly means "doesn't produce side-effects when executed", whereas "referential transparency" is a property relating a variable and the expression it stands for, and means "the variable can be replaced with what it denotes".
Hopefully this helps.
These slides from one ACCU2015 talk have a great summary on the topic of referential transparency.
From one of the slides:
A language is referentially transparent if (a) every subexpression can be replaced by any other that's equal to it in value and (b) all occurrences of an expression within a given context yield the same value.
You can have, for instance, a function that logs its computation to the program's standard output (so it won't be a pure function), but you can replace calls to this function with calls to a similar function that doesn't log its computation. Therefore, this function has the referential transparency property. But... the above definition is about languages, not expressions, as the slides emphasize.
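A minimal Python sketch of that point (my illustration, not from the slides): the logging version is impure, but replacing it with the silent version never changes the value an expression computes, only the observable output.

def add_logged(a, b):
    print("adding", a, b)   # observable effect: writes to standard output
    return a + b

def add(a, b):
    return a + b            # same value for the same arguments, no logging

# Both calls yield 5; only the logging differs.
x = add_logged(2, 3)
y = add(2, 3)
print(x == y)               # True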
[...] it's the same as if it were pure in the first place, isn't it?
From the definitions we have, no, it is not.
Is there a simpler way to understand the differences between a pure expression and a referentially transparent one, if any?
Try the slides I mentioned above.
The nice thing about standards is that there are so many of them to choose from.
Andrew S. Tanenbaum.
...along with definitions of referential transparency:
from page 176 of Functional programming with Miranda by Ian Holyer:
8.1 Values and Behaviours
The most important property of the semantics of a pure functional language is that the declarative and operational views of the language coincide exactly, in the following way:
Every expression denotes a value, and there are values corresponding to all possible program behaviours. The behaviour produced by an expression in any context is completely determined by its value, and vice versa.
This principle, which is usually rather opaquely called referential transparency, can also be pictured in the following way:
and from Nondeterminism with Referential Transparency in Functional Programming Languages by F. Warren Burton:
[...] the property that an expression always has the same value in the same environment [...]
...for various other definitions, see Referential Transparency, Definiteness and Unfoldability by Harald Søndergaard and Peter Sestoft.
Instead, we'll begin with the concept of "purity". For the three of you who didn't know it already, the computer or device you're reading this on is a solid-state Turing machine, a model of computing intrinsically connected with effects. So every program, functional or otherwise, needs to use those effects To Get Things Done™.
What does this mean for purity? At the assembly-language level, which is the domain of the CPU, all programs are impure. If you're writing a program in assembly language, you're the one who is micro-managing the interplay between all those effects - and it's really tedious!
Most of the time, you're just instructing the CPU to move data around in the computer's memory, which only changes the contents of individual memory locations - nothing to see there! It's only when your instructions direct the CPU to e.g. write to video memory, that you observe a visible change (text appearing on the screen).
For our purposes here, we'll split effects into two coarse categories:
those involving I/O devices like screens, speakers, printers, VR-headsets, keyboards, mice, etc; commonly known as observable effects.
and the rest, which only ever change the contents of memory.
In this situation, purity just means the absence of those observable effects, the ones which cause a visible change to the environment of the running program, maybe even its host computer. It is definitely not the absence of all effects, otherwise we would have to replace our solid-state Turing machines!
Now for the question of life, the Universe and everything: what exactly is meant by the term "referential transparency"? Instead of herding cats trying to bring theorists into agreement, let's just try to find the original meaning given to the term. Fortunately for us, the term frequently appears in the context of I/O in Haskell - we only need a relevant article... here's one: from the first page of Owen Stephens' Approaches to Functional I/O:
Referential transparency refers to the ability to replace a sub-expression with one of equal value, without changing the value of the outer expression. Originating from Quine the term was introduced to Computer Science by Strachey.
Following the references:
From page 9 of 39 in Christopher Strachey's Fundamental Concepts in Programming Languages:
One of the most useful properties of expressions is that called by Quine referential transparency. In essence this means that if we wish to find the value of an expression which contains a sub-expression, the only thing we need to know about the sub-expression is its value. Any other features of the sub-expression, such as its internal structure, the number and nature of its components, the order in which they are evaluated or the colour of the ink in which they are written, are irrelevant to the value of the main expression.
From page 163 of 314 in Willard Van Ormond Quine's Word and Object:
[...] Quotation, which thus interrupts the referential force of a term, may be said to fail of referential transparency2. [...] I call a mode of containment Φ referentially transparent if, whenever an occurrence of a singular term t is purely referential in a term or sentence ψ(t), it is purely referential also in the containing term or sentence Φ(ψ(t)).
with the footnote:
2 The term is from Whitehead and Russell, 2d ed., vol. 1, p. 665.
Following that reference:
From page 709 of 719 in Principia Mathematica by Alfred North Whitehead and Bertrand Russell:
When an assertion occurs, it is made by means of a particular fact, which is an instance of the proposition asserted. But this particular fact is, so to speak, "transparent"; nothing is said about it, but by means of it something is said about something else. It is the "transparent" quality which belongs to propositions as they occur in truth-functions.
Let's try to bring all that together:
Whitehead and Russell introduce the term "transparent";
Quine then defines the qualified term "referential transparency";
Strachey then adapts Quine's definition in defining the basics of programming languages.
So it's a choice between Quine's original and Strachey's adapted definition. You can try translating Quine's definition for yourself if you like - everyone who's ever contested the definition of "purely functional" might even enjoy the chance to debate something different, like what "mode of containment" and "purely referential" really mean... have fun! The rest of us will just accept that Strachey's definition is a little vague ("In essence [...]") and continue on:
One useful property of expressions is referential transparency. In essence this means that if we wish to find the value of an expression which contains a sub-expression, the only thing we need to know about the sub-expression is its value. Any other features of the sub-expression, such as its internal structure, the number and nature of its components, the order in which they are evaluated or the colour of the ink in which they are written, are irrelevant to the value of the main expression.
(emphasis by me.)
Regarding that description ("that if we wish to find the value of [...]"), a similar, but more concise statement is given by Peter Landin in The Next 700 Programming Languages:
the thing an expression denotes, i.e., its "value", depends only on the values of its sub-expressions, not on other properties of them.
Thus:
One useful property of expressions is referential transparency. In essence this means the thing an expression denotes, i.e., its "value", depends only on the values of its sub-expressions, not on other properties of them.
Strachey provides some examples:
(page 12 of 39)
We tend to assume automatically that the symbol x in an expression such as 3x² + 2x + 17 stands for the same thing (or has the same value) on each occasion it occurs. This is the most important consequence of referential transparency and it is only in virtue of this property that we can use the where-clauses or λ-expressions described in the last section.
(and on page 16)
When the function is used (or called or applied) we write f[ε] where ε can be an expression. If we are using a referentially transparent language all we require to know about the expression ε in order to evaluate f[ε] is its value.
So referential transparency, by Strachey's original definition, implies purity - in the absence of an order of evaluation, observable and other effects are practically useless...
I'll quote John Mitchell's Concepts in Programming Languages. He defines a pure functional language as one that has to pass the declarative language test, i.e. one that is free from side effects:
"Within the scope of specific declarations of x1,...,xn, all occurrences of an expression e containing only variables x1,...,xn have the same value."
In linguistics, a name or noun phrase is considered referentially transparent if it may be replaced with another noun phrase with the same referent without changing the meaning of the sentence containing it.
This holds in the first case below, but in the second case it gets too weird.
Case 1:
"I saw Walter get into his new car."
And if Walter owns a Centro then we could replace that in the given sentence as:
"I saw Walter get into his Centro."
Contrary to the first:
Case 2:
"He was called William Rufus because of his red beard."
Rufus means somewhat red, and the reference was to William II of England.
"He was called William II because of his red beard." looks too awkward.
The traditional way to say it is: a language is referentially transparent if we may replace one expression with another of equal value anywhere in the program without changing the meaning of the program.
So, referential transparency is a property of pure functional language.
And if your program is free from side effects then this property will hold.
So "give it up" is awesome advice, but "get it on" might also look good in this context.
Pure functions are those that return the same value on every call, and do not have side effects.
Referential transparency means that you can replace a bound variable with its value and still receive the same output.
Both pure and referentially transparent:
def f1(x):
    t1 = 3 * x
    t2 = 6
    return t1 + t2
Why is this pure?
Because it is a function of only the input x and has no side-effects.
Why is this referentially transparent?
You could replace t1 and t2 in f1 with their respective right hand sides in the return statement, as follows
def f2(x):
    return 3 * x + 6
and f2 will still always return the same result as f1 in every case.
Pure, but not referentially transparent:
Let's modify f1 as follows:
def f3(x):
    t1 = 3 * x
    t2 = 6
    x = 10
    return t1 + t2
Let us try the same trick again by replacing t1 and t2 with their right hand sides, and see if it is an equivalent definition of f3.
def f4(x):
    x = 10
    return 3 * x + 6
We can easily observe that f3 and f4 are not equivalent on replacing variables with their right hand sides / values. f3(1) would return 9 and f4(1) would return 36.
Referentially transparent, but not pure:
Simply modifying f1 to receive a non-local value of x, as follows:
def f5():
    global x
    t1 = 3 * x
    t2 = 6
    return t1 + t2
Performing the same replacement exercise from before shows that f5 is still referentially transparent. However, it is not pure because it is not a function of only the arguments passed to it.
Observing carefully, the reason we lose referential transparency moving from f3 to f4 is that x is modified. In the general case, making a variable final (or, for those familiar with Scala, using vals instead of vars) and using immutable objects can help keep a function referentially transparent. This makes them more like variables in the algebraic or mathematical sense, thus lending themselves better to formal verification.