I am trying to understand what lambda expressions are useful for in Haskell and how to actually use them.
I don't really understand the advantage of using a lambda expression over the conventional way of defining functions.
For example, I usually do the following:
let add x y = x+y
and I can simply call
add 5 6
and get the result of 11
I know I can also do the following:
let add = \x->(\y-> x+y)
and get the same result.
But like I mentioned before, I don't understand the purpose of using lambda expression.
Also, I typed the following code (a nameless function?) at the GHCi prompt and it gave me an error message:
let \x -> (\y->x+y)
parse error (possibly incorrect indentation or mismatched brackets)
Thank you in advance!
Many Haskell functions are "higher-order functions", i.e., they expect other functions as parameters. Often, the functions we want to pass to such a higher-order function are used only once in the program, at that particular point. It's simply more convenient, then, to use a lambda expression than to define a new local function for that purpose.
Here's an example that filters all even numbers that are greater than ten from a given list:
ghci> filter (\ x -> even x && x > 10) [1..20]
[12,14,16,18,20]
Here's another example that traverses a list and for every element x computes the term x^2 + x:
ghci> map (\ x -> x^2 + x) [1..10]
[2,6,12,20,30,42,56,72,90,110]
Can someone please help me with the code below?
applyTwice :: (a -> a) -> a -> a
applyTwice f x = f (f x)
I do not understand how the above works. If we had something like (+3) 10, surely it would produce 13? How is that f (f x)? Basically, I do not understand currying when it comes to looking at higher-order functions.
What I'm not understanding is this: if we had a function of the form a -> a -> a, it would take an input a and produce a function which expects another input a to produce an output. So if we had add 5 3, then add 5 would produce a function which expects the input 3 to produce a final output of 8. My question is how that works here. We take a function in as an input, so does partial function application work here like it did in add x y, or am I completely overcomplicating everything?
That's not currying, that's partial application.
> :t (+)
(+) :: Num a => a -> a -> a
> :t (+) 3
(+) 3 :: Num a => a -> a
The partial application (+) 3 indeed produces a function (+3)(*) which awaits another numerical input to produce its result. And it does so whether it is applied once or twice.
Your example is expanded as
applyTwice (+3) 10 = (+3) ((+3) 10)
= (+3) (10+3)
= (10+3)+3
That's all there is to it.
(*) (actually, it's (3 +), but that's the same as (+ 3) anyway).
As chepner clarifies in the comments (quoted with minimal copy editing),
partial application is an illusion created by the fact that functions only take one argument, and the combination of the right associativity of (->) and the left associativity of function application. (+) 3 isn't really a partial application. It's just [a regular] application of (+) to an argument 3.
So seen from the point of view of other, more traditional languages, we refer to this as a distinction between currying and partial application.
But seen from the Haskell perspective it is all indeed about currying, i.e. applying a function to its arguments one at a time, until fully saturated as indicated by its type (i.e. a->a->a value applied to an a value becomes an a->a value, and that then becomes an a value when applied to an a value in its turn).
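You can watch that saturation happen one argument at a time in GHCi (assuming the applyTwice definition from the question is in scope):
ghci> :t applyTwice
applyTwice :: (a -> a) -> a -> a
ghci> :t applyTwice (+3)
applyTwice (+3) :: Num a => a -> a
ghci> applyTwice (+3) 10
16
Each application consumes one arrow from the type, exactly as described above.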
Sorry for the basic question, but it's quite hard to find much discussion of Maxima specifics.
I'm trying to learn some Maxima and wanted to use something like
x:2
x+=2
which as far as I can tell doesn't exist in Maxima. Then I discovered that I can define my own operators as infix operators, so I tried doing
infix("+=");
"+=" (a,b):= a:(a+b);
However, this doesn't work: if I first set x:1 and then call x+=2, the function returns 3, but if I check the value of x I see it hasn't changed.
Is there a way to achieve what I was trying to do in Maxima? Could anyone explain why the definition I gave fails?
Thanks!
The problem with your implementation is that there is both too much and too little evaluation -- the += function doesn't see the symbol x, so it doesn't know which variable to assign the result to, and the left-hand side of an assignment isn't evaluated, so += thinks it is assigning to a, not x.
Here's one way to get the right amount of evaluation. ::= defines a macro, which is just a function which quotes its arguments, and for which the return value is evaluated again. buildq is a substitution function which quotes the expression into which you are substituting. So the combination of ::= and buildq here is to construct the x: x + 2 expression and then evaluate it.
(%i1) infix ("+=") $
(%i2) "+="(a, b) ::= buildq ([a, b], a: a + b) $
(%i3) x: 100 $
(%i4) macroexpand (x += 1);
(%o4) x : x + 1
(%i5) x += 1;
(%o5) 101
(%i6) x;
(%o6) 101
(%i7) x += 1;
(%o7) 102
(%i8) x;
(%o8) 102
So it is certainly possible to do this, if you want to. But may I suggest that maybe you don't need it? Modifying a variable makes it harder to keep track, mentally, of what is going on. A programming policy such as one-time assignment can make it easier for the programmer to understand the program. This is part of a general approach called functional programming; perhaps you can take a look at that. Maxima has various features which make it possible to use functional programming, although you are not required to use them.
While studying functional programming, the concept of partially applied functions comes up a lot. In Haskell, something like the built-in function take is considered to be partially applied.
I am still unclear as to what exactly it means for a function to be partially applied, or what the use/implication of it is.
The classic example is
add :: Int -> Int -> Int
add x y = x + y
The add function takes two arguments and adds them. We can now implement
increment = add 1
by partially applying add, which now waits for the other argument.
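Calling increment then supplies the missing argument:
ghci> increment 41
42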
A function by itself can't be "partially applied" or not. It's a meaningless concept.
When you say that a function is "partially applied", you refer to how the function is called (aka "applied"). If the function is called with all its parameters, then it is said to be "fully applied". If some of the parameters are missing, then the function is said to be "partially applied".
For example:
-- The function `take` is fully applied here:
oneTwoThree = take 3 [1,2,3,4,5]
-- The function `take` is partially applied here:
take10 = take 10 -- see? it's missing one last argument
-- The function `take10` is fully applied here:
oneToTen = take10 [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,42]
The result of a partial function application is another function - a function that still "expects" to get its missing arguments - like the take10 in the example above, which still "expects" to receive the list.
Of course, it all gets a bit more complicated once you get into the higher-order functions - i.e. functions that take other functions as arguments or return other functions as result. Consider this function:
mkTake n = take (n+5)
The function mkTake has only one parameter, but it returns another function as result. Now, consider this:
x = mkTake 10
y = mkTake 10 [1,2,3]
On the first line, the function mkTake is, obviously, "fully applied", because it is given one argument, which is exactly how many arguments it expects. But the second line is also valid: since mkTake 10 returns a function, I can then call this function with another parameter. So what does that make mkTake? "Overly applied", I guess?
Then consider the fact that (barring compiler optimizations) all functions are, mathematically speaking, functions of exactly one argument. How could this be? When you declare a function take n l = ..., what you're "conceptually" saying is take = \n -> \l -> ... - that is, take is a function that takes argument n and returns another function that takes argument l and returns some result.
So the bottom line is that the concept of "partial application" isn't really that strictly defined, it's just a handy shorthand to refer to functions that "ought to" (as in common sense) take N arguments, but are instead given M < N arguments.
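To make this concrete, here is how I'd expect the mkTake example to behave in GHCi (a quick sketch):
ghci> :t mkTake
mkTake :: Int -> [a] -> [a]
ghci> :t mkTake 10
mkTake 10 :: [a] -> [a]
ghci> mkTake 10 [1,2,3]
[1,2,3]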
Strictly speaking, partial application refers to a situation where you supply fewer arguments than expected to a function, and get back a new function created on the fly that expects the remaining arguments.
Also strictly speaking, this does not apply in Haskell, because every function takes exactly one argument. There is no partial application, because you either apply the function to its argument or you don't.
However, Haskell provides syntactic sugar for defining functions, which includes a way of emulating multi-argument functions. In Haskell, then, "partial application" refers to supplying fewer than the full number of arguments needed to obtain a value that cannot be further applied to another argument. Using everyone's favorite add example,
add :: Int -> Int -> Int
add x y = x + y
the type indicates that add takes one argument of type Int and returns a function of type Int -> Int. The -> type constructor is right-associative, so it helps to explicitly parenthesize it to emphasize the one-argument nature of a Haskell function: Int -> (Int -> Int).
When calling such a "multi-argument" function, we take advantage of the fact the function application is left-associative, so we can write add 3 5 instead of (add 3) 5. The call add 3 is thought of as partial application, because we could further apply the result to another argument.
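Both spellings behave identically in GHCi (assuming the add definition above is in scope):
ghci> (add 3) 5
8
ghci> add 3 5
8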
I mentioned the syntactic sugar Haskell provides to ease defining complex higher-order functions. There is one fundamental way to define a function: using a lambda expression. For example, to define a function that adds 3 to its argument, we write
\x -> x + 3
For ease of reference, we can assign a name to this function:
add = \x -> x + 3
To further ease the definition, Haskell lets us write in its place
add x = x + 3
hiding the explicit lambda expression.
For higher-order, "multi-argument" functions, we would write a function to add two values as
\x -> \y -> x + y
To hide the currying, we can write instead
\x y -> x + y
Combined with the syntactic sugar of replacing a lambda expression with a parameterized name, all of the following are equivalent definitions for the explicitly typed function add :: Num a => a -> a -> a:
add = \x -> \y -> x + y -- or \x y -> x + y as noted above
add x = \y -> x + y
add x y = x + y
This is best understood by example. Here's the type of the addition operator:
(+) :: Num a => a -> a -> a
What does it mean? As you know, it takes two numbers and outputs another one. Right? Well, sort of.
You see, a -> a -> a actually means this: a -> (a -> a). Whoa, looks weird! This means that (+) is a function that takes one argument and outputs a function(!!) that also takes one argument and outputs a value.
What this means is you can supply that one value to (+) to get another, partially applied, function:
(+ 5) :: Num a => a -> a
This function takes one argument and adds five to it. Now, call that new function with some argument:
Prelude> (+5) 3
8
Here you go! Partial function application at work!
Note the type of (+ 5) 3:
(+5) 3 :: Num a => a
Look what we end up with:
(+) :: Num a => a -> a -> a
(+ 5) :: Num a => a -> a
(+5) 3 :: Num a => a
Do you see how the number of a's in the type signature decreases by one every time you add a new argument?
Function with one argument that outputs a function with one argument that in turn, outputs a value
Function with one argument that outputs a value
The value itself
"...something like the built-in function take is considered to be partially applied" - I don't quite understand what this means. The function take itself is not partially applied. A function is partially applied when it is the result of supplying between 1 and (N - 1) arguments to a function that takes N arguments. So, take 5 is a partially applied function, but take and take 5 [1..10] are not.
What are the uses of the id function in Haskell?
It's useful as an argument to higher-order functions (functions which take functions as arguments), where you want some particular value left unchanged.
Example 1: Leave a value alone if it is in a Just, otherwise, return a default of 7.
Prelude Data.Maybe> :t maybe
maybe :: b -> (a -> b) -> Maybe a -> b
Prelude Data.Maybe> maybe 7 id (Just 2)
2
Example 2: building up a function via a fold:
Prelude Data.Maybe> :t foldr (.) id [(+2), (*7)]
:: (Num a) => a -> a
Prelude Data.Maybe> let f = foldr (.) id [(+2), (*7)]
Prelude Data.Maybe> f 7
51
We built a new function f by folding a list of functions together with (.), using id as the base case.
Example 3: the base case for functions as monoids (simplified).
instance Monoid (a -> a) where
    mempty = id
    f `mappend` g = (f . g)
Similar to our example with fold, functions can be treated as concatenable values, with id serving for the empty case, and (.) as append.
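In the current standard library this idea is packaged (to avoid clashing with other function instances) as the Endo newtype from Data.Monoid; a sketch of the earlier fold using it:
import Data.Monoid (Endo(..))

f :: Int -> Int
f = appEndo (foldMap Endo [(+2), (*7)])
-- f 7 == 51, matching the foldr (.) id example above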
Example 4: a trivial hash function.
Data.HashTable> h <- new (==) id :: IO (HashTable Data.Int.Int32 Int)
Data.HashTable> insert h 7 2
Data.HashTable> Data.HashTable.lookup h 7
Just 2
Hashtables require a hashing function. But what if your key is already hashed? Then pass the id function, to fill in as your hashing method, with zero performance overhead.
If you manipulate numbers, particularly with addition and multiplication, you'll have noticed the usefulness of 0 and 1. Likewise, if you manipulate lists, the empty list turns out to be quite handy. Similarly, if you manipulate functions (very common in functional programming), you'll come to notice the same sort of usefulness of id.
In functional languages, functions are first-class values that you can pass as parameters. So one of the most common uses of id comes up when you pass a function as a parameter to another function to tell it what to do. One of the choices of what to do is likely to be "just leave it alone": in that case, you pass id as the parameter.
Suppose you're searching for some kind of solution to a puzzle where you make a move at each turn. You start with a candidate position pos. At each stage there is a list of possible transformations you could make to pos (e.g. sliding a piece in the puzzle). In a functional language it's natural to represent transformations as functions, so now you can make a list of moves using a list of functions. If "doing nothing" is a legal move in this puzzle, then you would represent that with id. If you didn't do that, then you'd need to handle "doing nothing" as a special case that works differently from "doing something". By using id you can handle all cases uniformly in a single list.
This is probably the reason why almost all uses of id exist. To handle "doing nothing" uniformly with "doing something".
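A minimal sketch of that idea in Haskell (the Pos type and the particular moves are invented for illustration):
type Pos = Int

-- Each move is a function from position to position;
-- id is the legal "do nothing" move, handled like any other.
moves :: [Pos -> Pos]
moves = [id, (+ 1), subtract 1, (* 2)]

-- Every position reachable in one turn, with no special cases.
next :: Pos -> [Pos]
next pos = map ($ pos) moves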
For a different sort of answer:
I'll often do this when chaining multiple functions via composition:
foo = id
    . bar
    . baz
    . etc
over
foo = bar
    . baz
    . etc
It keeps things easier to edit. One can do similar things with other 'zero' elements, such as
foo x = return x
    >>= bar
    >>= baz

foos = []
    ++ bars
    ++ bazs
Since we are finding nice applications of id: here, have a palindrome :)
import Control.Applicative
pal :: [a] -> [a]
pal = (++) <$> id <*> reverse
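In action:
ghci> pal "abc"
"abccba"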
Imagine you are a computer, i.e. you can execute a sequence of steps. If I want you to stay in your current state, but I always have to give you an instruction (I cannot just go quiet and let time pass), what instruction do I give you? id is the function created for that: it returns the argument unchanged (in the case of the computer, the argument would be its state), and it gives that operation a name. The need for it appears only once you have higher-order functions, when you operate with functions without considering what's inside them; that forces you to represent even the "do nothing" implementation symbolically. Analogously, 0, seen as a quantity of something, is a symbol for the absence of quantity. In fact, in algebra both 0 and id are considered the neutral elements of the operations + and ∘ (function composition) respectively, or more formally:
for all x of type number:
0 + x = x
x + 0 = x
for all f of type function:
id ∘ f = f
f ∘ id = f
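These laws are easy to spot-check in GHCi:
ghci> (id . succ) 4 == succ 4
True
ghci> (succ . id) 4 == succ 4
True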
I can also help improve your golf score. Instead of using
($)
you can save a single character by using id.
e.g.
zipWith id [(+1), succ] [2,3,4]
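which applies each function to the corresponding argument, giving (the trailing 4 is dropped because the function list is shorter):
[3,4]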
An interesting, more than useful result.
Whenever you need to have a function somewhere but want it to do more than just hold its place (holding its place is all 'undefined' would do, for example).
It's also useful, as (soon-to-be) Dr. Stewart mentioned above, for when you need to pass a function as an argument to another function:
join = (>>= id)
or as the result of a function:
let f = id in f 10
(presumably, you will edit the above function later to do something more "interesting"... ;)
As others have mentioned, id is a wonderful place-holder for when you need a function somewhere.
Not having used them all that much, I'm not quite sure about all the ways lambda-definitions can be used (other than map/collect/do/lightweight local function syntax). For anyone interested in posting some examples:
provide explanations to help readers understand how lambda-definitions are being used;
preferred languages for the examples: Python, Smalltalk, Haskell.
You can make a functional data structure out of lambdas. Here is a simple one - a functional list (Python), supporting add and contains methods:
empty = lambda x : None

def add(lst, item) :
    return lambda x : x == item or lst(x)

def contains(lst, item) :
    return lst(item) or False
I just coded this quickly for fun - notice that you're not allowed to add any falsy values as is. It also is not tail-recursive, as a good functional structure should be. Exercises for the reader!
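Since the question lists Haskell as a preferred language, here is a rough sketch of the same structure there (the names are my own); returning Bool sidesteps the falsy-value caveat above:
-- A "list" is just a membership predicate built from lambdas.
type FnList a = a -> Bool

empty :: FnList a
empty = \_ -> False

add :: Eq a => FnList a -> a -> FnList a
add lst item = \x -> x == item || lst x

contains :: FnList a -> a -> Bool
contains lst item = lst item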
You can use them for control flow. For example, in Smalltalk, the "ifTrue:ifFalse:" method is a method on Boolean objects, with a different implementation on each of True and False classes. The expression
someBoolean ifTrue: [self doSomething] ifFalse: [self doSomethingElse]
uses two closures---blocks, in [square brackets] in Smalltalk syntax---one for the true branch, and one for the false branch. The implementation of "ifTrue:ifFalse:" for instances of class True is
ifTrue: block1 ifFalse: block2
^ block1 value
and for class False:
ifTrue: block1 ifFalse: block2
^ block2 value
Closures, here, are used to delay evaluation so that a decision about control flow can be taken, without any specialised syntax at all (besides the syntax for blocks).
Haskell is a little different, with its lazy evaluation model effectively automatically producing the effect of closures in many cases, but in Scheme you end up using lambdas for control flow a lot. For example, here is a utility to retrieve a value from an association-list, supplying an optionally-computed default in the case where the value is not present:
(define (assq/default key lst default-thunk)
(cond
((null? lst) (default-thunk)) ;; actually invoke the default-value-producer
((eq? (caar lst) key) (car lst))
(else (assq/default key (cdr lst) default-thunk))))
It would be called like this:
(assq/default 'mykey my-alist (lambda () (+ 3 4 5)))
The key here is the use of the lambda to delay computation of the default value until it's actually known to be required.
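For contrast, in Haskell laziness plays the role of the explicit thunk; a sketch (the function name is my own):
import Data.Maybe (fromMaybe)

-- The default expression is only evaluated if the lookup fails;
-- no explicit lambda is needed.
lookupWithDefault :: Eq k => k -> [(k, v)] -> v -> v
lookupWithDefault key alist def = fromMaybe def (lookup key alist)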
See also continuation-passing style, which takes this to an extreme. JavaScript, for instance, relies on continuation-passing style and closures to perform all of its blocking operations (like sleeping, I/O, etc.).
ETA: Where I've said closures above, I mean lexically scoped closures. It's the lexical scope that's key, often.
You can use a lambda to create a Y combinator, that is, a function that takes another function and returns a recursive form of it. Here is an example:
def Y(le):
    def _anon(cc):
        # cc is applied to itself; the inner lambda delays cc(cc)
        # until needed, which a strict language like Python requires
        return le(lambda x: cc(cc)(x))
    return _anon(_anon)
This is a thought bludgeon that deserves some more explanation, but rather than regurgitate it here, check out this blog entry (the above sample comes from there too).
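In Haskell the same idea is available directly as fix from Data.Function, where laziness removes the need for the self-application trick; for instance:
import Data.Function (fix)

-- Factorial without explicit recursion: fix feeds the
-- function its own fixed point as the rec argument.
factorial :: Integer -> Integer
factorial = fix (\rec n -> if n == 0 then 1 else n * rec (n - 1))
-- factorial 5 == 120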
It's C#, but I personally get a kick out of this article every time I read it:
Building Data out of Thin Air - an implementation of Lisp's cons, car, and cdr functions in C#. It shows how to build a simple stack data structure entirely out of lambda functions.
It isn't really quite the same concept as in Haskell etc., but in C#, the lambda construct has (optionally) the ability to compile to an object model representing the code (expression trees) rather than the code itself (this is itself one of the cornerstones of LINQ).
This in turn can lead to some very expressive meta-programming opportunities, for example (where the lambda here is expressing "given a service, what do you want to do with it?"):
var client = new Client<ISomeService>();
string captured = "to show a closure";
var result = client.Invoke(
svc => svc.SomeMethodDefinedOnTheService(123, captured)
);
(assuming a suitable Invoke signature)
There are lots of uses for this type of thing, but I've used it to build an RPC stack that doesn't require any runtime code generation - it simply parses the expression-tree, figures out what the caller intended, translates it to RPC, invokes it, gathers the response, etc (discussed more here).
An example in Haskell to compute the derivative of a single-variable function using a numerical approximation:
deriv f = \x -> (f (x + d) - f x) / d
  where
    d = 0.00001

f x = x ^ 2
f' = deriv f -- roughly equal to f' x = 2 * x
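A quick check (floating-point output is approximate):
ghci> f' 3 -- prints a value very close to 6.00001, i.e. 2*3 + d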