From Wolfram MathWorld: http://mathworld.wolfram.com/Function.html
"While this notation is deprecated by professional mathematicians, it is the more familiar one for most nonprofessionals. Therefore, unless indicated otherwise by context, the notation is taken in this work to be a shorthand for the more rigorous ."
Referring to f(x) being deprecated in favor of f:x->f(x).
I thought this was interesting because I've been familiar with:
function name(arg)
In all my years of middle school through high school, I have never seen functions written with any other notation. What is the benefit of using f:x->f(x) instead of f(x)? And if f(x) really is deprecated, why do programming languages continue to use a similar syntax?
You're taking the quote out of context. The page says "However, especially in more introductory texts, the notation f(x) is commonly used to refer to the function f itself (as opposed to the value of the function evaluated at x). In this context, the argument x is considered to be a dummy variable whose presence indicates that the function f takes a single argument (as opposed to f(x,y), etc.)" and then says that that usage is what's deprecated.
In most programming languages f(x) refers to the function f evaluated with the argument x and writing f(x) when x is not defined is an error. So they don't use f(x) in its deprecated sense.
To refer to the function f itself, you'd use just f, or lambda x: f(x), or something similar, depending on the programming language.
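For example, in Haskell (a minimal sketch; square is just an illustrative name):

square :: Int -> Int
square x = x * x        -- square is the function itself; square 3 applies it to 3

squares :: [Int]
squares = map square [1, 2, 3]   -- passing the function itself: [1, 4, 9]

anon :: Int -> Int
anon = \x -> x * x      -- the same function written as an anonymous lambda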
(This question is a follow-up to the one above, asked while studying Haskell.)
I used to find the distinction between "variable" and "value" confusing, so I read the Wikipedia page on lambda calculus as well as the answer above, and came up with the interpretations below.
May I confirm whether these are correct? I just want to double-check, because these concepts are quite basic but essential to functional programming. Any advice is welcome.
Premises from wiki:
Lambda Calculus syntax
exp → ID
    | (exp)
    | λ ID.exp    // abstraction
    | exp exp     // application
(Notation: "<=>" means "is equivalent to".)
Interpretations:
"value": it is the actual data or instructions stored in computer.
"variable": it is a way locating the data, a value-replacing reference , but not itself the set of data or instruction stored in computer.
"abstraction" <=> "function" ∈ syntactic form. (https://stackoverflow.com/a/25329157/3701346)
"application": it takes an input of "abstraction", and an input of "lambda expression", results in an "lambda expression".
"abstraction" is called "abstraction" because in usual function definition, we abbreviate the (commonly longer) function body into a much shorter form, i.e. a function identifier followed by a list of formal parameters. (Though lambda abstractions are anonymous functions, other functions usually do have name.)
"variable" <=> "symbol" <=> "reference"
a "variable" is associated with a "value" via a process called "binding".
"constant" ∈ "variable"
"literal" ∈ "value"
"formal parameter" ∈ "variable"
"actual parameter"(argument) ∈ "value"
A "variable" can have a "value" of "data"
=> e.g. variable "a" has a value of 3
A "variable"can also have a "value" of "a set of instructions"
=> e.g. an operator "+" is a variable
"value": it is the actual data or instructions stored in computer.
You're trying to think of it very concretely in terms of the machine, which I'm afraid may confuse you. It's better to think of it in terms of math: a value is just a thing that never changes, like the number 42, the letter 'H', or the sequence of letters that constitutes "Hello world".
Another way to think of it is in terms of mental models. We invent mental models in order to reason indirectly about the world; by reasoning about the mental models, we make predictions about things in the real world. We write computer programs to help us work with these mental models reliably and in large volumes.
Values are then things in the mental model. The bits and bytes are just encodings of the model into the computer's architecture.
"variable": it is a way locating the data, a value-replacing reference , but not itself the set of data or instruction stored in computer.
A variable is just a name that stands for a value in a certain scope of the program. Every time a variable is evaluated, its value needs to be looked up in an environment. There are several implementations of this concept in computer terms:
A stack frame in an eager language is an implementation of an environment for looking up the values of local variables on each invocation of a routine.
A linker provides environments for looking up global-scope names when a program is compiled or loaded into memory.
"abstraction" <=> "function" ∈ syntactic form.
Abstraction and function are not equivalent. In the lambda calculus, "abstraction" is a type of syntactic expression, but a function is a value.
One analogy that's not too shabby is names and descriptions vs. things. Names and descriptions are part of language, while things are part of the world. You could say that the meaning of a name or description is the thing that it names or describes.
Languages contain both simple names for things (e.g., 12 is a name for the number twelve) and more complex descriptions of things (5 + 7 is a description of the number twelve). A lambda abstraction is a description of a function; e.g., the expression \x -> x + 7 is a description of the function that adds seven to its argument.
The trick is that when descriptions get very complex, it's not easy to figure out what thing they're describing. If I give you 12345 + 67890, you need to do some amount of work to figure out what number I just described. Computers are machines that do this work way faster and more reliably than we can do it.
"application": it takes an input of "abstraction", and an input of "lambda expression", results in an "lambda expression".
An application is just an expression with two subexpressions, which describes a value by this means:
The first subexpression stands for a function.
The second subexpression stands for some value.
The application as a whole stands for the value that results from applying the function in (1) to the value from (2).
In formal semantics (and don't be scared of that word) we often use the double brackets ⟦∙⟧ to stand for "the meaning of"; e.g. ⟦dog⟧ = "the meaning of dog." Using that notation:
⟦e1 e2⟧ = ⟦e1⟧(⟦e2⟧)
where e1 and e2 are any two expressions or terms (any variable, abstraction or application).
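That equation translates almost directly into an evaluator for the grammar quoted at the top. Here is a minimal sketch in Haskell (Exp, Value, and eval are my own illustrative names, not anything from the question):

data Exp = Var String      -- ID
         | Lam String Exp  -- abstraction: λ ID . exp
         | App Exp Exp     -- application: exp exp

data Value = Closure String Exp Env
type Env = [(String, Value)]

eval :: Env -> Exp -> Value
eval env (Var x)     = maybe (error ("unbound: " ++ x)) id (lookup x env)
eval env (Lam x e)   = Closure x e env   -- an abstraction denotes a function value
eval env (App e1 e2) =                   -- ⟦e1 e2⟧ = ⟦e1⟧(⟦e2⟧)
  case eval env e1 of
    Closure x body cenv -> eval ((x, eval env e2) : cenv) body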
"abstraction" is called "abstraction" because in usual function definition, we abbreviate the (commonly longer) function body into a much shorter form, i.e. a function identifier followed by a list of formal parameters. (Though lambda abstractions are anonymous functions, other functions usually do have name.)
To tell you the truth, I've never stopped to think whether the term "abstraction" is a good term for this or why it was picked. Generally, with math, it doesn't pay to ask questions like that unless the terms have been very badly picked and mislead people.
"constant" ∈ "variable"
"literal" ∈ "value"
The lambda calculus, in and of itself, doesn't have the concept of "constant" or "literal." But one way to define these would be:
A literal is an expression that, because of the rules of the language, always has the same value no matter where it occurs.
A constant, in a purely functional language, is a variable at the topmost scope of a program. Every (non-shadowed) use of that variable will always have the same value in the program.
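In Haskell terms, for instance (a small illustrative sketch; limit is a made-up name):

-- 42 is a literal: it denotes the same value wherever it occurs.
-- limit is a "constant" in the sense above: a top-level variable, so every
-- non-shadowed use of it denotes the same value.
limit :: Int
limit = 42

scaled :: Int -> Int
scaled w = w * limit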
"formal parameter" ∈ "variable"
"actual parameter"(argument) ∈ "value"
A formal parameter is one kind of use of a variable. In any expression of the form λv.e (where v is a variable and e is an expression), v is the formal parameter.
An argument is any expression (not value!) that occurs as the second subexpression of an application.
A "variable" can have a "value" of "data" => e.g. variable "a" has a value of 3
All expressions have values, not just variables. For example, 5 + 7 is an application, and it has the value of twelve.
A "variable"can also have a "value" of "a set of instructions" => e.g. an operator "+" is a variable
The value of + is a function—it's the function that adds its arguments. The set of instructions is an implementation of that function.
Think of a function as an abstract table that says, for each combination of argument values, what the result is. The way the instructions come in is this:
For a lot of functions we cannot literally implement them as a table. In the case of addition it's because the table would be infinitely large.
Even for functions where we can enumerate the cases, we want to implement them much more briefly and efficiently.
But the way you check whether a function implementation is correct is, in some sense, to check that in every case it does the same thing the "infinite table" would do. Two sets of instructions that both check out in this way are really two different implementations of the same function.
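A small Haskell sketch of this idea (addA and addB are hypothetical names, and the finite spot check below is an illustration, not a proof):

-- Two different "sets of instructions" for the same abstract function.
addA :: Int -> Int -> Int
addA x y = x + y               -- use the built-in addition

addB :: Int -> Int -> Int      -- step up or down one unit at a time
addB x 0 = x
addB x y
  | y > 0     = addB (x + 1) (y - 1)
  | otherwise = addB (x - 1) (y + 1)

-- We can't compare the instructions directly, but we can check that both
-- agree with the same abstract "table" on a finite set of inputs:
agreeOnSamples :: Bool
agreeOnSamples = and [ addA x y == addB x y | x <- [-3 .. 3], y <- [-3 .. 3] ]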
The word "abstraction" is used because we can't "look inside" a function and see what's going on for the most part so it's "abstract" (contrast with "concrete"). Application is the process of applying a function to an argument. This means that its body is run, but with the thing that's being applied to it replacing the argument name (avoiding any capture). Hopefully this example will explain better than I can (in Haskell syntax. \ represents lambda):
(\x -> x + x) 5 <=> 5 + 5
Here we are applying the lambda expression on the left to the value 5 on the right. We get 5 + 5 as our result (which then may be further reduced to 10).
A "reference" might refer to something somewhat different in the context of Haskell (IORefs and STRefs), but, internally, all bindings ("variables") in Haskell have a layer of indirection like references in other languages (actually, they have even more indirection than that in a way because of the non-strict evaluation).
This mostly looks okay except for the reference issue I mentioned above.
In Haskell, there isn't really a distinction between a variable and a constant.
A "literal" usually is specifically a constructor for a value. For example, 20 constructs the the number 20, but a function application (\x -> 2 * x) 10 wouldn't be considered a literal for 20 because it has an extra step before you get the value.
Right, not all variables are parameters. A parameter is something that is passed to a function. The xs in the lambda expressions above are examples of parameters. A non-example would be something like let a = 15 in a * a: here a is a "variable" but not a parameter. Actually, I would call a a "binding" in this case, because it can never change or take on a different value (vary).
The formal parameter vs actual parameter part looks about right.
That looks okay.
I would say that a variable can be a function instead. Usually, in functional programming, we typically think in terms of functions and function applications instead of lists of instructions.
I'd also like to point out that you might get into trouble by thinking of functions as just syntactic forms. You can create new functions by applying certain kinds of higher-order functions, without using one of the syntactic forms that construct a function directly. A simple example of this is function composition, (.) in Haskell:
(f . g) x = f (g x) -- Definition of (.)
(* 10) . (+ 1) <=> \x -> ((* 10) ((+ 1) x)) <=> \x -> 10 * (x + 1)
Writing it as (* 10) . (+ 1) doesn't directly use the lambda syntax or the function definition syntax to create the new function.
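A quick usage sketch (step is just an illustrative name):

step :: Int -> Int
step = (* 10) . (+ 1)    -- a new function value, with no lambda in sight

-- step 4  ==  (4 + 1) * 10  ==  50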
I have learned (from an SML book) that functions in SML always take just one argument: a tuple. A function that takes multiple arguments is just a function that takes one tuple as its argument, implemented with a tuple binding in the function binding. I understand this point.
But after this, the book says something that I don't understand:
this point makes the SML language a flexible and elegant design, and you can do something useful that you cannot do in Java.
Why does this design make the language flexible? What is the text referring to that SML can do but Java cannot?
Using tuples instead of multiple arguments adds flexibility in the sense that higher-order functions can work with functions of any "arity". For example to create the list [f x, f y, f z], you can use the higher-order function map like this:
map f [x, y, z]
That's easy enough - you can do that in any language. But now let's consider the case where f actually needs two arguments. If f were a true binary function (supposing SML had such functions), we'd need a different version of map that can work with binary functions instead of unary functions (and if we wanted to use 3-ary functions, we'd need a version for those as well). However, using tuples we can just write it like this:
map f [(x,a), (y,b), (z,c)]
This will create the list [f (x,a), f (y,b), f (z,c)].
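The same trick in Haskell, for comparison (a sketch; pairSum is a made-up name, and SML's map behaves analogously):

pairSum :: (Int, Int) -> Int   -- a "binary" function that takes one tuple
pairSum (x, y) = x + y

sums :: [Int]
sums = map pairSum [(1, 2), (3, 4), (5, 6)]   -- [3, 7, 11]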
PS: It's not really true that all functions that need multiple arguments take tuples in SML. Often functions use currying, not tuples, to represent multiple arguments, but I suppose your book hasn't gotten to currying yet. Curried functions can't be used in the same way as described above, so they're not as general in that sense.
Actually, I don't think you really understand this at all.
First of all, functions in SML don't take a tuple as argument; they can take anything as argument. It is just sometimes convenient to use tuples as a means of passing multiple arguments. For example, a function may take a record as argument, an integer, a string, or even another function. One could also say that a function can take "no arguments", in the sense that it may take unit as its argument.
If I understand your statement about functions that take "multiple arguments" correctly, you are talking about currying. For example:
fun add x y = x + y
In SML, currying is implemented as a derived form (syntactic sugar). See this answer for an elaboration on how this actually works. In summary, there are only anonymous functions in SML; however, we can bind them to names so that they may be referred to/used later.
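Haskell exposes the same derived form, which may make the desugaring easier to see (an illustrative sketch in Haskell, not SML syntax):

-- These definitions denote the same function:
add1 :: Int -> Int -> Int
add1 x y = x + y            -- sugared "two-argument" definition

add2 :: Int -> Int -> Int
add2 = \x -> \y -> x + y    -- desugared: nested one-argument functions

-- Partial application falls out for free:
increment :: Int -> Int
increment = add1 1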
Behold, ramblings about to start.
Before talking about the flexibility of anything, I think it would be in order to state how I think of it. I quite like this definition of the flexibility of programming languages: "[...] the unexpectedly many ways in which utterings in the language can be used"
In the case of SML, a small and simple core language has been chosen. This makes implementing compilers and interpreters easy. The flexibility comes in the form that many features of the SML language have been implemented in terms of these core language features, relying on anonymous functions, pattern matching, and the fact that SML has higher-order functions.
Examples of this are currying, case expressions, record selectors, if-then-else expressions, and expression sequences.
I would say that this makes the SML core language very flexible and frankly quite elegant.
I'm not quite sure where the author was going regarding what SML can do that Java can't (in this context). However, I'm quite sure the author might be a bit biased, as you can do anything in Java as well. It might just take immense amounts of coding :)
Possible Duplicate:
What is the difference between a ‘function’ and a ‘procedure’?
I searched online for an answer to this question, and the answer I got was that a function can return a value, modify a value, etc., but a subroutine cannot. But I am not satisfied with this explanation and it seems to me that the difference ought to be more than just a matter of terminology.
So I am looking for a more conceptual answer to the question.
A function mirrors the mathematical definition of a function, which is a mapping from one or more inputs to a value.¹
A subroutine is a general-purpose term for any chunk of code that has a definite entry point and exit point.
However, the precise meaning of these terms will vary from context to context.
¹ Obviously, this is not the formal mathematical definition of a function.
A generic definition of function in programming languages is a piece of code that accepts zero or more input values and returns zero or one output value.
The most common definition of a subroutine is a function that does not return anything and normally does not accept anything. It is just a piece of code with a name.
Actually, in most languages, functions and subroutines do not differ in the way you declare them. So a subroutine may be called a function, but a function may not necessarily be called a subroutine.
Also, there are people who consider functions and subroutines the same thing with a different name.
Subroutine - Wikipedia
It's worth noting, as an addendum to Oli's answer, that in the mathematical sense a function must be "well-defined", which is to say its output is uniquely determined by its inputs, while this often isn't the case in programming languages.
Languages that do make this guarantee (and also guarantee that their functions cause no side effects) are called pure functional languages, an example of which is Haskell. They have the advantage (among others) of their functions being provably correct in their behaviour, which is generally not possible if functions rely on external state and/or have side effects.
A function must return some value and must not change a global variable or a variable declared outside of the function's body. Under this restriction, a function can only mimic its mathematical counterpart (the thing which maps a mathematical object to another mathematical object).
A subroutine doesn't return anything and is usually impure, as it has to change some global state or variable; otherwise there would be no point in calling it. There is no mathematical parallel for a subroutine.
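Haskell makes this split visible in the types (an illustrative sketch; double and bumpCounter are made-up names):

import Data.IORef

-- A function in the mathematical sense: the result depends only on the input.
double :: Int -> Int
double x = 2 * x

-- A "subroutine": it returns no useful value and exists only for its effect.
bumpCounter :: IORef Int -> IO ()
bumpCounter ref = modifyIORef ref (+ 1)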
In "R programming for those coming from other languages", John Cook says that
R uses lexical scoping while S-PLUS uses static scope. The difference can be subtle, particularly when using closures.
I found this odd because I have always thought lexical scoping and static scoping were synonymous.
Are there distinct attributes to lexical and static scoping, or is this a distinction that changes from community to community and person to person? If so, what are the general camps, and how do I tell them apart so I can better understand someone's meaning when they use these words?
Wikipedia (and I) agree with you that the terms "lexical scope" and "static scope" are synonymous. This Lua discussion tries to make a distinction, but notes that people don't agree as to what that distinction is. :-)
It appears to me that the attempted distinction has to do with accessing names in a different function-activation-record ("stack block", if you will) than the most-current-execution record, which mainly (only?) occurs in nested functions:
function f:
    var x
    function h:
        var y
        use(y)  -- obviously, accesses y in current activation of h
        use(x)  -- the question is, which x does this access?
With lexical scope, the answer is "the activation of f that called the activation of h" and with dynamic scope it means "the most recent activation that has any variable named x" (which might not be f). On the other hand, if the language forbids the use of x at all, there's no question about "which x is this" since the answer is "error". :-) It looks as though some people use "static scope" to refer to this third case.
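Mirroring the pseudocode above in Haskell, which is lexically scoped, only the first answer is possible (a sketch; f and h play the same roles as in the pseudocode):

f :: Int -> Int
f x = h 10
  where
    -- Lexical scope guarantees this x is the one bound by the enclosing f,
    -- no matter who calls h or what other x's exist at run time.
    h y = y + x

-- f 1  ==  11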
The official R documentation also addresses the differences in scoping between R and S-PLUS:
http://cran.r-project.org/doc/manuals/R-intro.html#Scope
The example given from the link can be simplified like this:
cube <- function(n) {
    sq <- function() n*n
    n*sq()
}
The results from S-Plus and R are different:
## first evaluation in S
S> cube(2)
Error in sq(): Object "n" not found
Dumped
S> n <- 3
S> cube(2)
[1] 18
## then the same function evaluated in R
R> cube(2)
[1] 8
I personally think the way R treats variables is more natural.
Suppose I have the following clojure functions:
(defn a [x] (* x x))
(def b (fn [x] (* x x)))
(def c (eval (read-string "(defn d [x] (* x x))")))
Is there a way to test for the equality of the function expression - some equivalent of
(eqls a b)
returns true?
It depends on precisely what you mean by "equality of the function expression".
These functions are going to end up as bytecode, so I could for example dump the bytecode corresponding to each function to a byte[] and then compare the two bytecode arrays.
However, there are many different ways of writing semantically equivalent methods, that wouldn't have the same representation in bytecode.
In general, it's impossible to tell what a piece of code does without running it. So it's impossible to tell whether two bits of code are equivalent without running both of them, on all possible inputs.
This is at least as bad, computationally speaking, as the halting problem, and possibly worse.
The halting problem is undecidable as it is, so the general-case answer here is definitely no (and not just for Clojure but for every programming language).
I agree with the answers above that Clojure has no built-in ability to determine the equivalence of two functions, and that it has been proven that you cannot test programs functionally (also known as black-box testing) to determine equality, due to the halting problem (unless the input set is finite and defined).
I would like to point out that it is possible to algebraically determine the equivalence of two functions, even if they have different forms (different bytecode).
The method for proving equivalence algebraically was developed in the 1930s by Alonzo Church and is known as beta reduction in the lambda calculus. This method is certainly applicable to the simple forms in your question (which would also yield the same bytecode) and also to more complex forms that would yield different bytecode.
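For intuition, beta reduction is just substitution, and two syntactically different terms can reduce to the same result. A sketch in Haskell-style lambda notation (not a Clojure API; the trace is written out in comments):

-- Each step replaces the bound variable with the argument:
--   (\g -> \x -> g (g x)) (\y -> y + 1) 0
--     => (\x -> (\y -> y + 1) ((\y -> y + 1) x)) 0
--     => (\y -> y + 1) ((\y -> y + 1) 0)
--     => (\y -> y + 1) 1
--     => 2

reduceCheck :: Bool
reduceCheck = (\g -> \x -> g (g x)) (\y -> y + 1) (0 :: Int) == 2   -- True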
I cannot add to the excellent answers given by others, but would like to offer another viewpoint that helped me. If you are, for example, testing that the correct function is returned from your own function, then instead of comparing the function object you might get away with just returning the function's name as a 'symbol.
I know this probably is not what the author asked for, but for simple cases it might do.