What fun can be had with lambda-definitions?

Not having used them all that much, I'm not quite sure about all the ways
lambda-definitions can be used (other than map/collect/do/lightweight local function syntax). For anyone interested in posting some examples:
provide explanations to help readers understand how lambda-definitions are being used;
preferred languages for the examples: Python, Smalltalk, Haskell.

You can make a functional data structure out of lambdas. Here is a simple one - a functional list (Python), supporting add and contains methods:
empty = lambda x: None   # the empty list: every membership test misses

def add(lst, item):
    # "adding" returns a new closure that matches item or defers to the old list
    return lambda x: x == item or lst(x)

def contains(lst, item):
    return lst(item) or False
I just coded this quickly for fun - notice that, as written, you're not allowed to add any falsy values. It also isn't tail-recursive, as a good functional structure should be. Exercises for the reader!
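A quick usage sketch, assuming the definitions above:

lst = add(add(empty, 1), 2)
print(contains(lst, 1))  # True
print(contains(lst, 3))  # False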

You can use them for control flow. For example, in Smalltalk, "ifTrue:ifFalse:" is a method on Boolean objects, with a different implementation on each of the True and False classes. The expression
someBoolean ifTrue: [self doSomething] ifFalse: [self doSomethingElse]
uses two closures---blocks, in [square brackets] in Smalltalk syntax---one for the true branch, and one for the false branch. The implementation of "ifTrue:ifFalse:" for instances of class True is
ifTrue: block1 ifFalse: block2
    ^ block1 value
and for class False:
ifTrue: block1 ifFalse: block2
    ^ block2 value
Closures, here, are used to delay evaluation so that a decision about control flow can be taken, without any specialised syntax at all (besides the syntax for blocks).
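The same trick works in any language with closures. Here is a minimal Python sketch (hypothetical names, purely for illustration) of booleans that dispatch to one of two thunks instead of relying on built-in conditional syntax:

class TrueClass:
    def if_true_if_false(self, then_thunk, else_thunk):
        # the "true" boolean always evaluates the first block
        return then_thunk()

class FalseClass:
    def if_true_if_false(self, then_thunk, else_thunk):
        # the "false" boolean always evaluates the second block
        return else_thunk()

t, f = TrueClass(), FalseClass()
print(t.if_true_if_false(lambda: "doSomething", lambda: "doSomethingElse"))
# -> doSomething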
Haskell is a little different, with its lazy evaluation model effectively automatically producing the effect of closures in many cases, but in Scheme you end up using lambdas for control flow a lot. For example, here is a utility to retrieve a value from an association-list, supplying an optionally-computed default in the case where the value is not present:
(define (assq/default key lst default-thunk)
  (cond
    ((null? lst) (default-thunk)) ;; actually invoke the default-value-producer
    ((eq? (caar lst) key) (car lst))
    (else (assq/default key (cdr lst) default-thunk))))
It would be called like this:
(assq/default 'mykey my-alist (lambda () (+ 3 4 5)))
The key here is the use of the lambda to delay computation of the default value until it's actually known to be required.
See also continuation-passing style, which takes this to an extreme. JavaScript, for instance, relies on continuation-passing style and closures to perform all of its potentially blocking operations (sleeping, I/O, etc.) without actually blocking.
ETA: Where I've said closures above, I mean lexically scoped closures. It's the lexical scope that's key, often.
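To make the continuation-passing idea concrete, here is a tiny Python sketch (hypothetical function names): instead of returning, each step hands its result to the next closure.

def add_cps(a, b, k):
    k(a + b)       # pass the sum to the continuation k

def square_cps(x, k):
    k(x * x)       # pass the square to the continuation k

# compute (2 + 3) squared by chaining continuations
add_cps(2, 3, lambda s: square_cps(s, print))  # prints 25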

You can use a lambda to create a Y combinator - that is, a function that takes another function and returns a recursive form of it. Here is an example:
def Y(le):
    def _anon(cc):
        return le(lambda x: cc(cc)(x))
    return _anon(_anon)
This is a thought bludgeon that deserves some more explanation, but rather than regurgitate it here, check out this blog entry (the above sample comes from there too).
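For a quick taste, here is a small usage sketch with the Y above: factorial defined without ever naming itself.

fact = Y(lambda f: lambda n: 1 if n == 0 else n * f(n - 1))
print(fact(5))  # 120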

It's C#, but I personally get a kick out of this article every time I read it:
Building Data out of Thin Air - an implementation of Lisp's cons, car, and cdr functions in C#. It shows how to build a simple stack data structure entirely out of lambda functions.
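The same construction is easy to sketch in Python, where a cons cell is just a closure remembering two values (illustrative names, not the article's code):

def cons(a, b):
    return lambda pick_first: a if pick_first else b

def car(cell):
    return cell(True)

def cdr(cell):
    return cell(False)

stack = cons(1, cons(2, None))   # a tiny stack built from closures
print(car(stack))        # 1
print(car(cdr(stack)))   # 2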

It isn't really quite the same concept as in Haskell etc., but in C#, the lambda construct has (optionally) the ability to compile to an object model representing the code (expression trees) rather than to the code itself (this is one of the cornerstones of LINQ).
This in turn can lead to some very expressive meta-programming opportunities, for example (where the lambda here is expressing "given a service, what do you want to do with it?"):
var client = new Client<ISomeService>();
string captured = "to show a closure";
var result = client.Invoke(
    svc => svc.SomeMethodDefinedOnTheService(123, captured)
);
(assuming a suitable Invoke signature)
There are lots of uses for this type of thing, but I've used it to build an RPC stack that doesn't require any runtime code generation - it simply parses the expression-tree, figures out what the caller intended, translates it to RPC, invokes it, gathers the response, etc (discussed more here).
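You can fake a little of this flavour in Python without expression trees by handing the lambda a recorder object instead of a real service (an entirely hypothetical sketch, not the C# mechanism):

class Recording:
    def __getattr__(self, name):
        # any method call on the recorder just captures its name and arguments
        def method(*args):
            return ("call", name, args)
        return method

def invoke(expr_fn):
    # instead of executing against a live service, record what the caller intended
    _, name, args = expr_fn(Recording())
    return "would RPC: %s%r" % (name, args)

print(invoke(lambda svc: svc.SomeMethod(123, "captured")))
# -> would RPC: SomeMethod(123, 'captured')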

An example in Haskell to compute the derivative of a single variabled function using a numerical approximation:
deriv f = \x -> (f (x + d) - f x) / d
  where
    d = 0.00001

f x = x ^ 2
f' = deriv f -- roughly equal to f' x = 2 * x

Related

Clojure, can macros do something that couldn't be done with a function

I'm learning Clojure macros, and wonder why we can't use just functions for metaprogramming.
As far as I know, the difference between a macro and a function is that the arguments of a macro are not evaluated but passed as data structures and symbols as they are, whereas the return value is evaluated (in the place where the macro is called). A macro works as a proxy between reader and evaluator, transforming the form in an arbitrary way before the evaluation takes place. Internally, macros may use all the language features, including functions, special forms, literals, recursion, other macros, etc.
Functions are the opposite. Arguments are evaluated before the call, but the return value is not evaluated after the function returns. The mirroring nature of macros and functions makes me wonder: couldn't we just as well use functions as macros by quoting their arguments (the form), transforming the form, evaluating it inside the function, and finally returning its value? Wouldn't this logically produce the same outcome? Of course this would be inconvenient, but theoretically, is there an equivalent function for every possible macro?
Here is a simple infix macro:
(defmacro infix
  "translate infix notation to clojure form"
  [form]
  (list (second form) (first form) (last form)))

(infix (6 + 6)) ;-> 12
Here is the same logic using a function:
(defn infix-fn
  "infix using a function"
  [form]
  ((eval (second form)) (eval (first form)) (eval (last form))))

(infix-fn '(6 + 6)) ;-> 12
Now, is this perception generalizable to all situations, or are there corner cases where a macro couldn't be replaced by a function? In the end, are macros just syntactic sugar over a function call?
It would help if I read the question before answering it.
Your infix function doesn't work except with literals:
(let [m 3, n 22] (infix-fn '(m + n)))
CompilerException java.lang.RuntimeException:
Unable to resolve symbol: m in this context ...
This is the consequence of what @jkinski noted: by the time eval acts, m is gone.
Can macros do what functions cannot?
Yes. But if you can do it with a function, you generally should.
Macros are good for
deferred evaluation;
capturing forms;
re-organizing syntax;
none of which a function can do.
Deferred Evaluation
Consider (from Programming Clojure by Halloway & Bedra)
(defmacro unless [test then]
  (list 'if (list 'not test) then))
... a partial clone of if-not. Let's use it to define
(defn safe-div [num denom]
  (unless (zero? denom) (/ num denom)))
... which prevents division by zero, returning nil:
(safe-div 10 0)
=> nil
If we tried to define it as a function:
(defn unless [test then]
  (if (not test) then))
... then
(safe-div 10 0)
ArithmeticException Divide by zero ...
The division is evaluated when it is passed as the then argument to unless, before the body of unless has a chance to ignore it.
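In a strict language without macros, the usual workaround is to pass a thunk, deferring evaluation by hand. A minimal sketch in Python (used here only for illustration, names are mine):

def unless(test, then_thunk):
    # the "then" branch is wrapped in a lambda by the caller,
    # so it is only evaluated when the test permits
    if not test:
        return then_thunk()

def safe_div(num, denom):
    return unless(denom == 0, lambda: num / denom)

print(safe_div(10, 0))  # None; no division-by-zero error
print(safe_div(10, 2))  # 5.0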
Capturing Forms and Re-organizing Syntax
Suppose Clojure had no case form. Here is a rough-and-ready substitute:
(defmacro my-case [expr & stuff]
  (let [thunk   (fn [form] `(fn [] ~form))
        pairs   (partition 2 stuff)
        default (if (-> stuff count odd?)
                  (-> stuff last thunk)
                  '(constantly nil))
        [ks vs] (apply map list pairs)
        the-map (zipmap ks (map thunk vs))]
    (list (list the-map expr default))))
This
picks apart the keys (ks) and corresponding expressions (vs),
wraps the latter as parameterless fn forms,
constructs a map from the former to the latter,
returns a form that calls the function returned by looking up the map.
The details are unimportant. The point is it can be done.
When Guido van Rossum proposed adding a case statement to Python, the committee turned him down. So Python has no case statement. If Rich didn't want a case statement, but I did, I can have one.
Just for fun, let's use macros to contrive a passable clone of the if form. This is no doubt a cliche in functional programming circles, but took me by surprise. I had thought of if as an irreducible primitive of lazy evaluation.
An easy way is to piggy-back on the my-case macro:
(defmacro if-like
  ([test then] `(if-like ~test ~then nil))
  ([test then else]
   `(my-case ~test
      false ~else
      nil   ~else
      ~then)))
This is prolix and slow; it consumes stack and loses recur, which gets buried in the closures. However ...
(defn fact [n]
  (if-like (pos? n)
    (* (fact (dec n)) n)
    1))

(map fact (range 10))
=> (1 1 2 6 24 120 720 5040 40320 362880)
... it works, more or less.
Please, dear reader, point out any errors in my code.

What are some interesting uses of higher-order functions?

I'm currently doing a Functional Programming course and I'm quite amused by the concept of higher-order functions and functions as first class citizens. However, I can't yet think of many practically useful, conceptually amazing, or just plain interesting higher-order functions. (Besides the typical and rather dull map, filter, etc functions).
Do you know examples of such interesting functions?
Maybe functions that return functions, functions that return lists of functions (?), etc.
I'd appreciate examples in Haskell, which is the language I'm currently learning :)
Well, have you noticed that Haskell has no syntax for loops? No while or do or for. That's because these are all just higher-order functions:
map :: (a -> b) -> [a] -> [b]
foldr :: (a -> b -> b) -> b -> [a] -> b
filter :: (a -> Bool) -> [a] -> [a]
unfoldr :: (b -> Maybe (a, b)) -> b -> [a]
iterate :: (a -> a) -> a -> [a]
Higher-order functions replace the need for baked in syntax in the language for control structures, meaning pretty much every Haskell program uses these functions -- making them quite useful!
They are the first step towards good abstraction because we can now plug custom behavior into a general purpose skeleton function.
In particular, monads are only possible because we can chain together and manipulate functions to create programs.
The fact is, life is pretty boring when it is first-order. Programming only gets interesting once you have higher-order.
Many techniques used in OO programming are workarounds for the lack of higher order functions.
This includes a number of the design patterns that are ubiquitous in OO programming. For example, the visitor pattern is a rather complicated way to implement a fold. The workaround is to create a class with methods and pass an instance of that class in as an argument, as a substitute for passing in a function.
The strategy pattern is another example of a scheme that often passes objects as arguments as a substitute for what is actually intended, functions.
Similarly dependency injection often involves some clunky scheme to pass a proxy for functions when it would often be better to simply pass in the functions directly as arguments.
So my answer would be that higher-order functions are often used to perform the same kinds of tasks that OO programmers perform, but directly, and with a lot less boilerplate.
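To make that concrete, here is a small Python sketch (names invented for illustration) of the strategy pattern collapsing into a plain function argument:

def checkout(total, discount_strategy):
    # the "strategy" is just a function from total to discount
    return total - discount_strategy(total)

half_off = lambda total: total * 0.5
no_discount = lambda total: 0

print(checkout(100, half_off))     # 50.0
print(checkout(100, no_discount))  # 100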
I really started to feel the power when I learned a function can be part of a data structure. Here is a "consumer monad" (technobabble: free monad over (i ->)).
data Coro i a
    = Return a
    | Consume (i -> Coro i a)
So a Coro can either instantly yield a value, or be another Coro depending on some input. For example, this is a Coro Int Int:
Consume $ \x -> Consume $ \y -> Consume $ \z -> Return (x+y+z)
This consumes three integer inputs and returns their sum. You could also have it behave differently according to the inputs:
sumStream :: Coro Int Int
sumStream = Consume (go 0)
  where
    go accum 0 = Return accum
    go accum n = Consume (\x -> go (accum + x) (n - 1))
This consumes an Int and then consumes that many more Ints before yielding their sum. This can be thought of as a function that takes arbitrarily many arguments, constructed without any language magic, just higher order functions.
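If Haskell isn't familiar, here is a rough Python transliteration of the same idea (the tags and helper names are my own invention), with a small driver that feeds the inputs in:

def ret(a):
    return ("return", a)

def consume(f):
    # f maps one input to the next computation
    return ("consume", f)

def sum_stream():
    def go(accum, n):
        if n == 0:
            return ret(accum)
        return consume(lambda x: go(accum + x, n - 1))
    return consume(lambda n: go(0, n))

def feed(coro, inputs):
    # drive the computation with a list of inputs
    for x in inputs:
        tag, payload = coro
        if tag == "return":
            break
        coro = payload(x)
    return coro

print(feed(sum_stream(), [3, 10, 20, 30]))  # ('return', 60)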
Functions in data structures are a very powerful tool that was not part of my vocabulary before I started doing Haskell.
Check out the paper 'Even Higher-Order Functions for Parsing or Why Would Anyone Ever Want To Use a Sixth-Order Function?' by Chris Okasaki. It's written using ML, but the ideas apply equally to Haskell.
Joel Spolsky wrote a famous essay demonstrating how Map-Reduce works using Javascript's higher order functions. A must-read for anyone asking this question.
Higher-order functions are also required for currying, which Haskell uses everywhere. Essentially, a function taking two arguments is equivalent to a function taking one argument and returning another function taking one argument. When you see a type signature like this in Haskell:
f :: A -> B -> C
...the (->) can be read as right-associative, showing that this is in fact a higher-order function returning a function of type B -> C:
f :: A -> (B -> C)
A non-curried function of two arguments would instead have a type like this:
f' :: (A, B) -> C
So any time you use partial application in Haskell, you're working with higher-order functions.
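Python doesn't curry by default, but the transformation itself is easy to sketch (the helper name is hypothetical):

def curry2(f):
    # turn a two-argument function into a chain of one-argument functions
    return lambda a: lambda b: f(a, b)

add = lambda a, b: a + b
add3 = curry2(add)(3)   # partial application, as in Haskell
print(add3(4))          # 7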
Martín Escardó provides an interesting example of a higher-order function:
equal :: ((Integer -> Bool) -> Int) -> ((Integer -> Bool) -> Int) -> Bool
Given two functionals f, g :: (Integer -> Bool) -> Int, then equal f g decides if f and g are (extensionally) equal or not, even though f and g don't have a finite domain. In fact, the codomain, Int, can be replaced by any type with a decidable equality.
The code Escardó gives is written in Haskell, but the same algorithm should work in any functional language.
You can use the same techniques that Escardó describes to compute definite integrals of any continuous function to arbitrary precision.
One interesting and slightly crazy thing you can do is simulate an object-oriented system using a function and storing data in the function's scope (i.e. in a closure). It's higher-order in the sense that the object generator function is a function which returns the object (another function).
My Haskell is rather rusty so I can't easily give you a Haskell example, but here's a simplified Clojure example which hopefully conveys the concept:
(defn make-object [initial-value]
  (let [data (atom {:value initial-value})]
    (fn [op & args]
      (case op
        :set (swap! data assoc :value (first args))
        :get (:value @data)))))
Usage:
(def a (make-object 10))
(a :get)
=> 10
(a :set 40)
(a :get)
=> 40
Same principle would work in Haskell (except that you'd probably need to change the set operation to return a new function since Haskell is purely functional)
I'm a particular fan of higher-order memoization:
memo :: HasTrie t => (t -> a) -> (t -> a)
(Given any function, return a memoized version of that function. Limited by the fact that the arguments of the function must be able to be encoded into a trie.)
This is from http://hackage.haskell.org/package/MemoTrie
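For comparison, here is a minimal hand-rolled memoizer in Python (the standard library's functools.lru_cache does this for real; the sketch just shows the higher-order shape):

def memo(f):
    cache = {}
    def memoized(x):
        if x not in cache:
            cache[x] = f(x)   # compute once, remember forever
        return cache[x]
    return memoized

@memo
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(80))  # returns quickly thanks to the cache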
There are several examples here: http://www.haskell.org/haskellwiki/Higher_order_function
I would also recommend this book: http://www.cs.nott.ac.uk/~gmh/book.html which is a great introduction to all of Haskell and covers higher order functions.
Higher-order functions often take an accumulator, so they can be used when forming a list of elements that conform to a given rule from a larger list.
Here's a small paraphrased code snippet:
rays :: ChessPieceType -> [[(Int, Int)]]
rays Bishop = do
    dx <- [1, -1]
    dy <- [1, -1]
    return $ iterate (addPos (dx, dy)) (dx, dy)
... -- Other piece types
-- takeUntilIncluding is an inclusive version of takeUntil
takeUntilIncluding :: (a -> Bool) -> [a] -> [a]

possibleMoves board piece = do
    relRay <- rays (pieceType piece)
    let ray = map (addPos src) relRay
    takeUntilIncluding (not . isNothing . pieceAt board)
                       (takeWhile notBlocked ray)
  where
    notBlocked pos =
        inBoard pos &&
        all isOtherSide (pieceAt board pos)
    isOtherSide = (/= pieceSide piece) . pieceSide
This uses several "higher order" functions:
iterate :: (a -> a) -> a -> [a]
takeUntilIncluding -- not a standard function
takeWhile :: (a -> Bool) -> [a] -> [a]
all :: (a -> Bool) -> [a] -> Bool
map :: (a -> b) -> [a] -> [b]
(.) :: (b -> c) -> (a -> b) -> a -> c
(>>=) :: Monad m => m a -> (a -> m b) -> m b
(.) is the . operator, and (>>=) is the do-notation "line break operator".
When programming in Haskell you just use them. It's when you don't have higher-order functions that you realize just how incredibly useful they were.
Here's a pattern that I haven't seen anyone else mention yet that really surprised me the first time I learned about it. Consider a statistics package where you have a list of samples as your input and you want to calculate a bunch of different statistics on them (there are also plenty of other ways to motivate this). The bottom line is that you have a list of functions that you want to run. How do you run them all?
statFuncs :: [ [Double] -> Double ]
statFuncs = [minimum, maximum, mean, median, mode, stddev]
runWith funcs samples = map ($samples) funcs
There's all kinds of higher order goodness going on here, some of which has been mentioned in other answers. But I want to point out the '$' function. When I first saw this use of '$', I was blown away. Before that I hadn't considered it to be very useful other than as a convenient replacement for parentheses...but this was almost magical...
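The same "list of functions" pattern in Python, leaning on the statistics module for the sake of a runnable sketch:

import statistics

stat_funcs = [min, max, statistics.mean, statistics.median, statistics.pstdev]

def run_with(funcs, samples):
    # apply every function in the list to the same input
    return [f(samples) for f in funcs]

print(run_with(stat_funcs, [1.0, 2.0, 2.0, 5.0]))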
One thing that's kind of fun, if not particularly practical, is Church Numerals. It's a way of representing integers using nothing but functions. Crazy, I know. <shamelessPlug>Here's an implementation in JavaScript that I made. It might be easier to understand than a Lisp/Haskell implementation. (But probably not, to be honest. JavaScript wasn't really meant for this kind of thing.)</shamelessPlug>
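The core encoding is short enough to sketch in Python, for the curious (a numeral n is "apply f n times"):

zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def to_int(n):
    # recover an ordinary integer by counting applications
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
print(to_int(three))  # 3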
It’s been mentioned that Javascript supports certain higher-order functions, including in an essay from Joel Spolsky. Mark Jason Dominus wrote an entire book called Higher-Order Perl; the book’s source is available for free download in a variety of fine formats, including PDF.
Ever since at least Perl 3, Perl has supported functionality more reminiscent of Lisp than of C, but it wasn’t until Perl 5 that full support for closures and all that follows from that was available. And one of the first Perl 6 implementations was written in Haskell, which has had a lot of influence on how that language’s design has progressed.
Examples of functional programming approaches in Perl show up in everyday programming, especially with map and grep:
@ARGV = map { /\.gz$/ ? "gzip -dc < $_ |" : $_ } @ARGV;
@unempty = grep { defined && length } @many;
Since sort also admits a closure, the map/sort/map pattern is super common:
@txtfiles = map  { $_->[1] }
            sort {
                $b->[0] <=> $a->[0]
                        ||
                lc $a->[1] cmp lc $b->[1]
                        ||
                $b->[1] cmp $a->[1]
            }
            map  { [ -s => $_ ] }
            grep { -f && -T }
            glob("/etc/*");
or
@sorted_lines = map { $_->[0] }
                sort {
                    $a->[4] <=> $b->[4]
                        ||
                    $a->[-1] cmp $b->[-1]
                        ||
                    $a->[3] <=> $b->[3]
                        ||
                    ...
                }
                map { [$_ => reverse split /:/] } @lines;
The reduce function (from List::Util) makes list hackery easy without looping:
use List::Util qw(reduce);

$sum = reduce { $a + $b } @numbers;
$max = reduce { $a > $b ? $a : $b } $MININT, @numbers;
There’s a lot more than this, but this is just a taste. Closures make it easy to create function generators and to write your own higher-order functions, not just use the builtins. In fact, one of the more common exception models,
try {
    something();
} catch {
    oh_drat();
};
is not a built-in. It is, however, almost trivially defined with try being a function that takes two arguments: a closure in the first arg and a function that takes a closure in the second one.
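The same move is easy to sketch in Python (hypothetical names): exception handling as an ordinary higher-order function rather than dedicated syntax.

def attempt(body, handler):
    # run the body thunk; hand any exception to the handler
    try:
        return body()
    except Exception as exc:
        return handler(exc)

print(attempt(lambda: 1 / 0, lambda exc: "caught: %s" % exc))
# -> caught: division by zero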
Perl 5 doesn’t have currying built in, although there is a module for that. Perl 6, though, has currying and first-class continuations built right into it, plus a lot more.

Is there a relationship between calling a function and instantiating an object in pure functional languages?

Imagine a simple (made up) language where functions look like:
function f(a, b) = c + 42
    where c = a * b
(Say it's a subset of Lisp that includes 'defun' and 'let'.)
Also imagine that it includes immutable objects that look like:
struct s(a, b, c = a * b)
Again analogizing to Lisp (this time a superset), say a struct definition like that would generate functions for:
make-s(a, b)
s-a(s)
s-b(s)
s-c(s)
Now, given the simple set up, it seems clear that there is a lot of similarity between what happens behind the scenes when you either call 'f' or 'make-s'. Once 'a' and 'b' are supplied at call/instantiate time, there is enough information to compute 'c'.
You could think of instantiating a struct as being like calling a function and then storing the resulting symbolic environment for later use, when the generated accessor functions are called. Or you could think of evaluating a function as being like creating a hidden struct and then using it as the symbolic environment with which to evaluate the final result expression.
Is my toy model so oversimplified that it's useless? Or is it actually a helpful way to think about how real languages work? Are there any real languages/implementations that someone without a CS background but with an interest in programming languages (i.e. me) should learn more about in order to explore this concept?
Thanks.
EDIT: Thanks for the answers so far. To elaborate a little, I guess what I'm wondering is if there are any real languages where it's the case that people learning the language are told e.g. "you should think of objects as being essentially closures". Or if there are any real language implementations where it's the case that instantiating an object and calling a function actually share some common (non-trivial, i.e. not just library calls) code or data structures.
Does the analogy I'm making, which I know others have made before, go any deeper than mere analogy in any real situations?
You can't get much purer than lambda calculus: http://en.wikipedia.org/wiki/Lambda_calculus. Lambda calculus is in fact so pure, it only has functions!
A standard way of implementing a pair in lambda calculus is like so:
pair = fn a: fn b: fn x: x a b
first = fn a: fn b: a
second = fn a: fn b: b
So pair a b, what you might call a "struct", is actually a function (fn x: x a b). But it's a special type of function called a closure. A closure is essentially a function (fn x: x a b) plus values for all of the "free" variables (in this case, a and b).
So yes, instantiating a "struct" is like calling a function, but more importantly, the actual "struct" itself is like a special type of function (a closure).
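Transliterated to Python, where the "struct" really is a closure:

pair = lambda a: lambda b: lambda x: x(a)(b)
first = lambda a: lambda b: a
second = lambda a: lambda b: b

p = pair(1)(2)    # "instantiate the struct"
print(p(first))   # 1
print(p(second))  # 2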
If you think about how you would implement a lambda calculus interpreter, you can see the symmetry from the other side: you could implement a closure as an expression plus a struct containing the values of all the free variables.
Sorry if this is all obvious and you just wanted some real world example...
Both f and make-s are functions, but the resemblance doesn't go much further. Applying f calls the function and executes its code; applying make-s creates a structure.
In most language implementations and formal models, make-s is a different kind of object from f: f is a closure, whereas make-s is a constructor (in the sense used in functional languages and logic, which is close to the object-oriented meaning).
If you like to think in an object-oriented way, both f and make-s have an apply method, but they have completely different implementations of this method.
If you like to think in terms of the underlying logic, f and make-s have types built on the same type constructor (the function type constructor), but they are constructed in different ways and have different destruction rules (function application vs. constructor application).
If you'd like to understand that last paragraph, I recommend Types and Programming Languages by Benjamin C. Pierce. Structures are discussed in §11.8.
Is my toy model so oversimplified that it's useless?
Essentially, yes. Your simplified model basically boils down to saying that each of these operations involves performing a computation and putting the result somewhere. But that is so general, it covers anything that a computer does. If you didn't perform a computation, you wouldn't be doing anything useful. If you didn't put the result somewhere, you would have done work for nothing as you have no way to get the result. So anything useful you do with a computer, from adding two registers together, to fetching a web page, could be modeled as performing a computation and putting the result somewhere that it can be accessed later.
There is a relationship between objects and closures. http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/msg03277.html
The following creates what some might call a function, and others might call an object:
Taken from SICP ( http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-21.html )
(define (make-account balance)
  (define (withdraw amount)
    (if (>= balance amount)
        (begin (set! balance (- balance amount))
               balance)
        "Insufficient funds"))
  (define (deposit amount)
    (set! balance (+ balance amount))
    balance)
  (define (dispatch m)
    (cond ((eq? m 'withdraw) withdraw)
          ((eq? m 'deposit) deposit)
          (else (error "Unknown request -- MAKE-ACCOUNT"
                       m))))
  dispatch)
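Here is the same closure-as-object idea rendered in Python (a direct transliteration, not SICP's code):

def make_account(balance):
    def withdraw(amount):
        nonlocal balance
        if balance >= amount:
            balance -= amount
            return balance
        return "Insufficient funds"

    def deposit(amount):
        nonlocal balance
        balance += amount
        return balance

    def dispatch(m):
        if m == "withdraw":
            return withdraw
        if m == "deposit":
            return deposit
        raise ValueError("Unknown request -- MAKE-ACCOUNT: " + m)

    return dispatch

acc = make_account(100)
print(acc("withdraw")(30))  # 70
print(acc("deposit")(5))    # 75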

What is this symbolic code transformation called?

I often come across this kind of code transformation (or even mathematical transformation). (Python example, but it applies to any language.)
I've got a function
def f(x):
    return x
I use it into another one.
def g(x):
    return f(x) * f(x)

print(g(2))
leads to 4
But I want to remove the functional dependency, and I change the function g into
def g(f):
    return f * f

print(g(f(2)))
leads to 4 too
What do you call this kind of transformation, locally turning a function into a scalar?
I'm not sure there is a specific term for it.
In general terms for functional programming there usually isn't a distinction made between passing scalar arguments and passing functions as arguments.
In the first example I could still call g(f(2)) and it should calculate f(f(2))*f(f(2)), which (since f(x) is the identity transformation) will also result in 4 as the answer.

When would it make sense to pass a function to a function?

Ok, so it is possible to pass a function to another function.
Passing a function to another function in Actionscript 3
This is obviously very powerful, but a more important question is: when would it make sense to do so, given that there are performance overheads whenever you call another function?
If you have much ActionScript knowledge you probably use one example of this all the time without even noticing.
The addEventListener method of the EventDispatcher class actually requires that a function be passed to it when it's called:
addEventListener(type:String, listener:Function,
                 useCapture:Boolean = false, priority:int = 0,
                 useWeakReference:Boolean = false):void
http://livedocs.adobe.com/flex/3/langref/flash/events/EventDispatcher.html
Passing functions around is used a hell of a lot for callbacks. There are numerous other uses but this highlights one of the more simple scenarios.
The performance overhead is no worse than calling a virtual method in any contemporary OO language.
It makes sense to pass procedures to other procedures when it makes your code smaller. Less code has fewer bugs and is easier to maintain. Here's an example. These are two functions that respectively sum a list of numbers and multiply a list of numbers.
(define sum
  (lambda (ls)
    (if (null? ls)
        0
        (+ (car ls) (sum (cdr ls))))))

(define product
  (lambda (ls)
    (if (null? ls)
        1
        (* (car ls) (product (cdr ls))))))
They're identical except for the operators (+ and *) and the corresponding identity values (0 and 1). We've unfortunately duplicated a lot of code.
We can reduce complexity by abstracting the operator and the identity. The rewritten code looks like this.
(define fold
  (lambda (proc id)
    (lambda (ls)
      (if (null? ls)
          id
          (proc (car ls) ((fold proc id) (cdr ls)))))))
(define sum (fold + 0))
(define product (fold * 1))
It's easier now to see the essential difference between sum and product. Also, improvements to the core code only have to be made in one place. Procedural abstraction is a fabulous tool, and it depends on being able to pass procedures to other procedures.
A function that takes a function as its argument is called a higher-order function. Google has a lot of information on these.
Examples of higher-order functions:
function compose(f, g) {
    return function(x) {
        return f(g(x));
    };
}

function map(f, xs) {
    var ys = [];
    for (var i = 0; i < xs.length; ++i)
        ys.push(f(xs[i]));
    return ys;
}
With that, you can transform an array with two functions in a row:
var a = ["one", "two", "three"];
var b = map(compose(toUpperCase, reverse), a);
// b is now ["ENO", "OWT", "EERHT"]
One example is a JavaScript AJAX call:
namespace.class.method(parm1, parm2, callback, onErr);
The method will run asynchronously on the server, and once it is complete it will run the callback method that was passed in:
function callback(result) {
    $('#myDiv').html(result);
}
There are a host of other examples, just look at event handling as an example.
Another reason to pass a function to a function is if you want the receiving function to be flexible in the work that it does. For instance, I had a recursive function that would process a directory tree; on each directory it would call the supplied function and pass it the current directory. This way I could use the same structure to scan a directory, copy a directory, or delete a directory, and the "work" function only had to be complicated enough to process one directory, not a whole tree. This is mostly procedural programming; with OO there are preferred ways to do this: inheritance, delegates, etc.
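Something like this Python sketch (entirely hypothetical names): the traversal is fixed, the per-directory "work" is whatever function you pass in.

import os

def walk_dirs(root, work):
    work(root)                       # process this directory
    for entry in sorted(os.listdir(root)):
        path = os.path.join(root, entry)
        if os.path.isdir(path):
            walk_dirs(path, work)    # recurse into subdirectories

# scanning, copying, and deleting are just different `work` functions:
walk_dirs(".", print)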
Another very common example is sorting, where you pass a predicate saying how to sort, e.g.
(sort > list-to-sort)
Here > is the function to apply whilst sorting. This is a very simple example using greater-than, so your list must be numeric, but the predicate could be anything, e.g.
(sort (lambda(a b) (> (string-length a) (string-length b))) list-to-sort)
Here a closure is passed that does a greater-than comparison on string lengths, so it assumes the list contains strings.
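Python exposes the same idea through key functions; the ordering logic is just a function argument:

words = ["pear", "fig", "banana"]
print(sorted(words, key=len))  # ['fig', 'pear', 'banana']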
These types of things just suck in languages without closures or HOFs because of all the object/interface/type nonsense required to achieve the same thing.