Passing a function to another function in ActionScript 3
OK, so it is possible to pass a function to another function. This is obviously very powerful, but a more important question is: when would it make sense to do so, given that there is a performance overhead whenever you call another function?
If you have much ActionScript experience, you probably use one example of this all the time without even noticing.
The addEventListener method of the EventDispatcher class requires that a function be passed to it when it is called:
addEventListener(type:String, listener:Function, useCapture:Boolean = false, priority:int = 0, useWeakReference:Boolean = false):void
http://livedocs.adobe.com/flex/3/langref/flash/events/EventDispatcher.html
Passing functions around is used a hell of a lot for callbacks. There are numerous other uses, but this highlights one of the simpler scenarios.
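In general, the callback pattern looks something like this (a minimal Python sketch with made-up names, just to show the shape):
def fetch_data(url, on_done):
    # Pretend this does some slow work, then hands the result
    # to whatever function the caller passed in.
    result = "payload from " + url
    on_done(result)

fetch_data("http://example.com", lambda payload: print(payload))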
The performance overhead is no worse than calling a virtual method in any contemporary OO language.
It makes sense to pass procedures to other procedures when it makes your code smaller. Less code has fewer bugs and is easier to maintain. Here's an example: two functions that respectively sum a list of numbers and multiply a list of numbers.
(define sum
  (lambda (ls)
    (if (null? ls)
        0
        (+ (car ls) (sum (cdr ls))))))

(define product
  (lambda (ls)
    (if (null? ls)
        1
        (* (car ls) (product (cdr ls))))))
They're identical except for the operators (+ and *) and the corresponding identity values (0 and 1). We've unfortunately duplicated a lot of code.
We can reduce complexity by abstracting the operator and the identity. The rewritten code looks like this.
(define fold
  (lambda (proc id)
    (lambda (ls)
      (if (null? ls)
          id
          (proc (car ls) ((fold proc id) (cdr ls)))))))
(define sum (fold + 0))
(define product (fold * 1))
It's easier now to see the essential difference between sum and product. Also, improvements to the core code only have to be made in one place. Procedural abstraction is a fabulous tool, and it depends on being able to pass procedures to other procedures.
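For comparison, here is a rough Python sketch of the same abstraction using functools.reduce:
from functools import reduce

def fold(proc, identity):
    # Return a function that folds a list with the given operator and identity.
    return lambda ls: reduce(proc, ls, identity)

sum_ = fold(lambda a, b: a + b, 0)
product = fold(lambda a, b: a * b, 1)

print(sum_([1, 2, 3, 4]))     # 10
print(product([1, 2, 3, 4]))  # 24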
A function that takes a function as its argument is called a higher-order function. Google has a lot of information on these.
Examples of higher-order functions:
function compose(f, g) {
  return function(x) {
    return f(g(x));
  };
}

function map(f, xs) {
  var ys = [];
  for (var i = 0; i < xs.length; ++i)
    ys.push(f(xs[i]));
  return ys;
}
With that, you can transform an array with two functions in a row (defining toUpperCase and reverse as standalone functions first, since JavaScript only provides them as string methods):
var toUpperCase = function(s) { return s.toUpperCase(); };
var reverse = function(s) { return s.split("").reverse().join(""); };

var a = ["one", "two", "three"];
var b = map(compose(toUpperCase, reverse), a);
// b is now ["ENO", "OWT", "EERHT"]
One example is a JavaScript AJAX call:
namespace.class.method(param1, param2, callback, onErr);
The method runs asynchronously on the server, and once it completes it runs the callback function that was passed in:
function callback(result) {
  $('#myDiv').html(result);
}
There are a host of other examples; event handling is another obvious one.
Another reason to pass a function to a function is when you want the receiving function to be flexible in the work that it does. For instance, I had a recursive function that would process a directory tree; on each directory it would call the supplied function and pass it the current directory. This way I could use the same structure to scan a directory, copy a directory, or delete a directory, and the "work" function only had to be complicated enough to process one directory, not a whole tree. This is mostly a procedural-programming technique; with OO there are preferred ways to do the same thing: inheritance, delegates, etc.
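A minimal Python sketch of that idea (the names here are hypothetical, just to illustrate the shape):
import os

def walk_tree(path, work):
    # Apply the supplied function to this directory...
    work(path)
    # ...then recurse into each subdirectory.
    for name in os.listdir(path):
        child = os.path.join(path, name)
        if os.path.isdir(child):
            walk_tree(child, work)

# The same traversal can scan, copy, or delete, depending on what you pass in:
walk_tree("/tmp/example", lambda d: print("scanning", d))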
Another very common example is sorting, where you pass a predicate that says how to sort, e.g.
(sort > list-to-sort)
Here > is the comparison function applied while sorting. This is a very simple example using greater-than, so the list must be numeric, but the predicate could be anything, e.g.
(sort (lambda(a b) (> (string-length a) (string-length b))) list-to-sort)
Here a closure is passed that does a greater-than comparison on string lengths, so it assumes the list contains strings.
These kinds of things just suck in languages without closures or HOFs, because of all the object/interface/type nonsense required to achieve the same result.
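For comparison, the same string-length sort in Python, where sorted simply takes a key function:
words = ["fig", "banana", "pear"]
# Sort by string length, longest first, by passing a function as the key:
print(sorted(words, key=lambda s: len(s), reverse=True))  # ['banana', 'pear', 'fig']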
Related
Are two Common Lisp function objects with the same symbol designator always eq? For example, this comparison seems to work:
(defun foo (fn)
(let ((ht (make-hash-table)))
(eq (symbol-function (hash-table-test ht)) fn)))
FOO
* (foo #'eql)
T
* (foo #'equal)
NIL
But this may rely on implementations not making latent copies of functions, presumably for reasons of efficiency. Since hash-table-test returns a symbol designator, the other (possibly better) eq alternative would be to derive the symbol from the function object? Is one approach better than the other?
Are two Common Lisp function objects with the same symbol designator always eq?
In Common Lisp a function is a piece of code, be it compiled or not. From the Glossary:
function n. 1. an object representing code, which can be called with zero or more arguments, and which produces zero or more values. 2. an object of type function.
A function designator, on the other hand, can be a symbol:
function designator n. a designator for a function; that is, an object that denotes a function and that is one of: a symbol (denoting the function named by that symbol in the global environment), or a function (denoting itself).
So a symbol which is a function designator is something that, when evaluated in a certain context, or with a certain syntax like #'symbol or (function symbol), produces a function. The comparison of two function designators is therefore a comparison of the functions that they denote:
CL-USER> (eql #'car #'cdr)
NIL
CL-USER> (eql #'car (symbol-function 'car))
T
But note that this equality test is just a comparison of the identity of the functional objects (the pieces of code), like in:
CL-USER> (eq #'car #'car)
T
CL-USER> (let ((a (lambda (x) (1+ x))))
           (eq a a))
T
but not of the actual bytes that represent them (the code!):
CL-USER> (let ((a (lambda (x) (car x))))
           (eq a #'car))
NIL
CL-USER> (defun f (x) (1+ x))
F
CL-USER> (defun g (x) (1+ x))
G
CL-USER> (equalp (function f) (function g))
NIL
CL-USER> (equalp (lambda (x) (1+ x)) (lambda (x) (1+ x)))
NIL
Note that, in all these cases, the two functions compared have not only the same "meaning", but in most cases the same "source code"; they are compiled in the same way and behave identically on the same input data. This is because a function is mathematically a (possibly infinite) set of (input, output) pairs, and one cannot compare infinite objects.
But this may rely on implementations not making latent copies of functions, presumably for reasons of efficiency.
There is no way for the user to copy a function (nor does the system have any reason to copy a piece of code!), so any function is eq only to itself, in the same way that a pointer is equal only to itself.
Since hash-table-test returns a symbol designator, the other (possibly better) eq alternative would be to derive the symbol from the function object? Is one approach better than the other?
(I suppose you intend function designator, instead of symbol designator)
Actually, hash-table-test normally returns the function designator as a symbol, as the specification says:
test---a function designator. For the four standardized hash table test functions (see make-hash-table), the test value returned is always a symbol. If an implementation permits additional tests, it is implementation-dependent whether such tests are returned as function objects or function names.
So:
CL-USER> (type-of (hash-table-test (make-hash-table)))
SYMBOL
CL-USER> (eq 'eql (hash-table-test (make-hash-table)))
T
CL-USER> (eq #'eql (hash-table-test (make-hash-table)))
NIL
Note that in the last case we are comparing a function (the value of #'eql) with a symbol (what is returned by hash-table-test) and obviously this comparison returns a false value.
In conclusion:
It is not very reasonable to compare functions, unless you want to know whether two functions are in effect the same object in memory (for instance, whether the two things are the same compiled code).
It is always important to distinguish functions from their designators, such as symbols (function names) or lists like (LAMBDA parameters body), and to decide what we actually want to compare.
#'eql is equivalent to (function eql). Unless there's a lexical function binding of eql, this is defined to return the global function definition of the symbol eql. That's also what (symbol-function 'eql) is defined to return.
So for any globally defined function f that isn't shadowed by a lexical definition,
(eq #'f (symbol-function 'f))
should always be true.
I am having trouble writing a tail-recursive power function in Scheme. I want to write the function using a helper function. I know that I need a parameter to hold an accumulated value, but I am stuck after that. My code is as follows.
(define (pow-tr a b)
(define (pow-tr-h result)
(if (= b 0)
result
pow-tr a (- b 1))(* result a)) pow-tr-h 1)
I edited my code, and now it works. It is as follows:
(define (pow-tr2 a b)
  (define (pow-tr2-h a b result)
    (if (= 0 b)
        result
        (pow-tr2-h a (- b 1) (* result a))))
  (pow-tr2-h a b 1))
Can someone explain to me why the helper function should have the same parameters as the main function? I am having a hard time seeing why this is necessary.
It's not correct to state that "the helper function should have the same parameters as the main function". You only need to pass the parameters that are going to change in each iteration - in the example, the exponent and the accumulated result. For instance, this will work fine without passing the base as a parameter:
(define (pow-tr2 a b)
  (define (pow-tr2-h b result)
    (if (= b 0)
        result
        (pow-tr2-h (- b 1) (* result a))))
  (pow-tr2-h b 1))
It works because the inner, helper procedure can "see" the a parameter defined in the outer, main procedure. And because the base is never going to change, we don't have to pass it around. To read more about this, take a look at the section titled "Internal definitions and block structure" in the wonderful SICP book.
Now that you're using helper procedures, it'd be a good idea to start using named let, a very handy syntax for writing helpers without explicitly coding an inner procedure. The above code is equivalent to this:
(define (pow-tr2 a b)
  (let pow-tr2-h [(b b) (result 1)]
    (if (= b 0)
        result
        (pow-tr2-h (- b 1) (* result a)))))
Even though it has the same name, it's not the same parameter. If you dig into what the interpreter is doing, you'll see "a" defined twice: once for the local scope, while the interpreter still remembers the "a" in the outer scope. When the interpreter invokes a function, it binds the values of the arguments to the formal parameters.
The reason that you pass the values through, rather than mutating state like you would likely do in an Algol-family language, is that by not mutating state you can use the substitution model to reason about the behaviour of procedures. The same procedure called at any time with the same arguments will yield the same result as when it is called from anywhere else with those arguments.
In a purely functional style values never change; instead you keep calling the function with new values. The compiler should be able to turn this into a tight loop that updates the values in place on the stack (tail-call elimination). This way you can worry about the correctness of the algorithm rather than acting as a human compiler, which, truth be told, is a very inefficient machine-task pairing.
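Roughly speaking, the tail-recursive version above is equivalent to the loop below (a Python sketch, since Python itself does not eliminate tail calls):
def pow_tr(a, b):
    # What the tail call effectively compiles down to: rebind the
    # "parameters" on each iteration instead of making a new call.
    result = 1
    while b != 0:
        b, result = b - 1, result * a
    return result

print(pow_tr(2, 10))  # 1024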
(define (power a b)
  (if (zero? b)
      1
      (* a (power a (- b 1)))))

(display (power 3.5 3))
In Lisp, a symbol can be bound to both a value and a function at the same time.
For example,
Symbol f bound to a function
(defun f (x)
  (* 2 x))
Symbol f bound to a value
(setq f 10)
So I can write something like this:
(f f)
=> 20
What is the benefit of such a feature?
The symbol can have both a function and a value. The function can be retrieved with SYMBOL-FUNCTION and the value with SYMBOL-VALUE.
This is not the complete view. Common Lisp has (at least) two namespaces, one for functions and one for variables. Global symbols participate in this. But for local functions the symbols are not involved.
So what are the advantages:
no name clashes between identifiers for functions and variables.
Scheme: (define (foo lst) (list lst))
CL: (defun foo (list) (list list))
no runtime checks whether something is really a function
Scheme: (define (foo) (bar))
CL: (defun foo () (bar))
In Scheme it is not clear what BAR is. It could be a number and that would lead to a runtime error when calling FOO.
In CL BAR is either a function or undefined. It can never be anything else. It can for example never be a number. It is not possible to bind a function name to a number, thus this case never needs to be checked at runtime.
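For contrast, here is a quick Python illustration of the single-namespace behaviour described above (Python is like Scheme in this respect):
# 'list' names the built-in constructor:
print(list((1, 2, 3)))  # [1, 2, 3]

# Rebinding the same name to a value shadows the function:
list = [1, 2, 3]
# list((1, 2, 3))  # would now fail: 'list' object is not callable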
It's useful for everyday tasks, but the main reason is macros; you'll understand why once you study them.
Imagine a simple (made up) language where functions look like:
function f(a, b) = c + 42
where c = a * b
(Say it's a subset of Lisp that includes 'defun' and 'let'.)
Also imagine that it includes immutable objects that look like:
struct s(a, b, c = a * b)
Again analogizing to Lisp (this time a superset), say a struct definition like that would generate functions for:
make-s(a, b)
s-a(s)
s-b(s)
s-c(s)
Now, given the simple set up, it seems clear that there is a lot of similarity between what happens behind the scenes when you either call 'f' or 'make-s'. Once 'a' and 'b' are supplied at call/instantiate time, there is enough information to compute 'c'.
You could think of instantiating a struct as being like calling a function and then storing the resulting symbolic environment for later use, when the generated accessor functions are called. Or you could think of evaluating a function as being like creating a hidden struct and then using it as the symbolic environment with which to evaluate the final result expression.
Is my toy model so oversimplified that it's useless? Or is it actually a helpful way to think about how real languages work? Are there any real languages/implementations that someone without a CS background but with an interest in programming languages (i.e. me) should learn more about in order to explore this concept?
Thanks.
EDIT: Thanks for the answers so far. To elaborate a little, I guess what I'm wondering is if there are any real languages where it's the case that people learning the language are told e.g. "you should think of objects as being essentially closures". Or if there are any real language implementations where it's the case that instantiating an object and calling a function actually share some common (non-trivial, i.e. not just library calls) code or data structures.
Does the analogy I'm making, which I know others have made before, go any deeper than mere analogy in any real situations?
You can't get much purer than lambda calculus: http://en.wikipedia.org/wiki/Lambda_calculus. Lambda calculus is in fact so pure, it only has functions!
A standard way of implementing a pair in lambda calculus is like so:
pair = fn a: fn b: fn x: x a b
first = fn a: fn b: a
second = fn a: fn b: b
So pair a b, what you might call a "struct", is actually a function (fn x: x a b). But it's a special type of function called a closure. A closure is essentially a function (fn x: x a b) plus values for all of the "free" variables (in this case, a and b).
So yes, instantiating a "struct" is like calling a function, but more importantly, the actual "struct" itself is like a special type of function (a closure).
If you think about how you would implement a lambda calculus interpreter, you can see the symmetry from the other side: you could implement a closure as an expression plus a struct containing the values of all the free variables.
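To make that concrete, here is the same pair encoding sketched in Python, where the returned closure plays the role of the "struct":
def pair(a, b):
    # The "struct" is just a closure capturing a and b.
    return lambda selector: selector(a, b)

def first(p):
    return p(lambda a, b: a)

def second(p):
    return p(lambda a, b: b)

p = pair(1, "two")
print(first(p), second(p))  # 1 two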
Sorry if this is all obvious and you just wanted some real world example...
Both f and make-s are functions, but the resemblance doesn't go much further. Applying f calls the function and executes its code; applying make-s creates a structure.
In most language implementations and models, make-s is a different kind of object from f: f is a closure, whereas make-s is a constructor (in the sense used in functional languages and logic, which is close to the object-oriented meaning).
If you like to think in an object-oriented way, both f and make-s have an apply method, but they have completely different implementations of this method.
If you like to think in terms of the underlying logic, f and make-s have types built on the same type constructor (the function type constructor), but they are constructed in different ways and have different destruction rules (function application vs. constructor application).
If you'd like to understand that last paragraph, I recommend Types and Programming Languages by Benjamin C. Pierce. Structures are discussed in §11.8.
Is my toy model so oversimplified that it's useless?
Essentially, yes. Your simplified model basically boils down to saying that each of these operations involves performing a computation and putting the result somewhere. But that is so general, it covers anything that a computer does. If you didn't perform a computation, you wouldn't be doing anything useful. If you didn't put the result somewhere, you would have done work for nothing as you have no way to get the result. So anything useful you do with a computer, from adding two registers together, to fetching a web page, could be modeled as performing a computation and putting the result somewhere that it can be accessed later.
There is a relationship between objects and closures. http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/msg03277.html
The following creates what some might call a function, and others might call an object:
Taken from SICP ( http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-21.html )
(define (make-account balance)
  (define (withdraw amount)
    (if (>= balance amount)
        (begin (set! balance (- balance amount))
               balance)
        "Insufficient funds"))
  (define (deposit amount)
    (set! balance (+ balance amount))
    balance)
  (define (dispatch m)
    (cond ((eq? m 'withdraw) withdraw)
          ((eq? m 'deposit) deposit)
          (else (error "Unknown request -- MAKE-ACCOUNT"
                       m))))
  dispatch)
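A rough Python rendering of the same idea, to show that the "object" here is just a closure over balance (the names below are only for illustration):
def make_account(balance):
    def withdraw(amount):
        nonlocal balance
        if balance >= amount:
            balance -= amount
            return balance
        return "Insufficient funds"
    def deposit(amount):
        nonlocal balance
        balance += amount
        return balance
    def dispatch(m):
        if m == "withdraw":
            return withdraw
        if m == "deposit":
            return deposit
        raise ValueError("Unknown request -- make_account: " + m)
    return dispatch

acc = make_account(100)
print(acc("withdraw")(30))  # 70
print(acc("deposit")(10))   # 80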
Not having used them all that much, I'm not quite sure about all the ways lambda definitions can be used (other than map/collect/do/lightweight local function syntax). For anyone interested in posting some examples:
provide explanations to help readers understand how lambda-definitions are being used;
preferred languages for the examples: Python, Smalltalk, Haskell.
You can make a functional data structure out of lambdas. Here is a simple one - a functional list (Python), supporting add and contains methods:
empty = lambda x: None

def add(lst, item):
    return lambda x: x == item or lst(x)

def contains(lst, item):
    return lst(item) or False
I just coded this quickly for fun - notice that you're not allowed to add any falsy values as is. It also is not tail-recursive, as a good functional structure should be. Exercises for the reader!
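A quick usage sketch of the structure above:
lst = add(add(empty, "a"), "b")
print(contains(lst, "a"))  # True
print(contains(lst, "z"))  # False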
You can use them for control flow. For example, in Smalltalk, the "ifTrue:ifFalse:" method is a method on Boolean objects, with a different implementation on each of True and False classes. The expression
someBoolean ifTrue: [self doSomething] ifFalse: [self doSomethingElse]
uses two closures---blocks, in [square brackets] in Smalltalk syntax---one for the true branch, and one for the false branch. The implementation of "ifTrue:ifFalse:" for instances of class True is
ifTrue: block1 ifFalse: block2
^ block1 value
and for class False:
ifTrue: block1 ifFalse: block2
^ block2 value
Closures, here, are used to delay evaluation so that a decision about control flow can be taken, without any specialised syntax at all (besides the syntax for blocks).
Haskell is a little different, with its lazy evaluation model effectively automatically producing the effect of closures in many cases, but in Scheme you end up using lambdas for control flow a lot. For example, here is a utility to retrieve a value from an association-list, supplying an optionally-computed default in the case where the value is not present:
(define (assq/default key lst default-thunk)
  (cond
    ((null? lst) (default-thunk)) ;; actually invoke the default-value-producer
    ((eq? (caar lst) key) (car lst))
    (else (assq/default key (cdr lst) default-thunk))))
It would be called like this:
(assq/default 'mykey my-alist (lambda () (+ 3 4 5)))
The key here is the use of the lambda to delay computation of the default value until it's actually known to be required.
See also continuation-passing-style, which takes this to an extreme. Javascript, for instance, relies on continuation-passing-style and closures to perform all of its blocking operations (like sleeping, I/O, etc).
ETA: Where I've said closures above, I mean lexically scoped closures. It's the lexical scope that's key, often.
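A tiny continuation-passing-style sketch in Python, to make the idea mentioned above concrete (the names are made up for illustration):
def add_cps(a, b, k):
    # Instead of returning a value, hand it to the continuation k.
    k(a + b)

def square_cps(x, k):
    k(x * x)

# Compute (2 + 3)^2 by chaining continuations:
add_cps(2, 3, lambda s: square_cps(s, print))  # prints 25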
You can use a lambda to create a Y combinator, that is, a function that takes another function and returns a recursive version of it. Here is an example:
def Y(le):
    def _anon(cc):
        return le(lambda x: cc(cc)(x))
    return _anon(_anon)
This is a thought bludgeon that deserves some more explanation, but rather than regurgitate it here check out this blog entry (above sample comes from there too).
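A quick usage sketch, defining factorial without a named recursive function:
fact = Y(lambda f: lambda n: 1 if n == 0 else n * f(n - 1))
print(fact(5))  # 120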
It's C#, but I personally get a kick out of this article every time I read it:
Building Data out of Thin Air - an implementation of Lisp's cons, car, and cdr functions in C#. It shows how to build a simple stack data structure entirely out of lambda functions.
It isn't really quite the same concept as in Haskell etc., but in C# the lambda construct (optionally) has the ability to compile to an object model representing the code (expression trees) rather than to the code itself (this is itself one of the cornerstones of LINQ).
This in turn can lead to some very expressive meta-programming opportunities, for example (where the lambda here is expressing "given a service, what do you want to do with it?"):
var client = new Client<ISomeService>();
string captured = "to show a closure";
var result = client.Invoke(
    svc => svc.SomeMethodDefinedOnTheService(123, captured)
);
(assuming a suitable Invoke signature)
There are lots of uses for this type of thing, but I've used it to build an RPC stack that doesn't require any runtime code generation - it simply parses the expression-tree, figures out what the caller intended, translates it to RPC, invokes it, gathers the response, etc (discussed more here).
An example in Haskell to compute the derivative of a single-variable function using a numerical approximation:
deriv f = \x -> (f (x + d) - f x) / d
  where
    d = 0.00001

f x = x ^ 2
f' = deriv f -- roughly equal to f' x = 2 * x