Controlling a function's arity

I was reading the answer to this question: Haskell: difference between . (dot) and $ (dollar sign). And the reply struck me as odd... What does he mean, + has no input? And then I tried:
((+) 1)
((+) 1 1)
((+) 1 1 1)
Whoops... sad news. But I'm sure I've seen functions that take a seemingly arbitrary (or at least very large) number of arguments, too many to believe that someone defined them as a->b->c->...->z. There must be some way to handle it! What I'm looking for is something like &rest or &optional in CL.

Sure, you can define a variadic addition function, with some typeclass hackery:¹
{-# LANGUAGE TypeFamilies #-}

class Add r where
    add' :: (Integer -> Integer) -> r

instance Add Integer where
    add' k = k 0

instance (n ~ Integer, Add r) => Add (n -> r) where
    add' k m = add' (\n -> k (m + n))

add :: (Add r) => r
add = add' id
And so:
GHCi> add 1 2 :: Integer
3
GHCi> add 1 2 3 :: Integer
6
The same trick is used by the standard Text.Printf module. It's generally avoided for two reasons: one, the types it gives you can be awkward to work with, and you often have to specify an explicit type signature for your use; two, it's really a hack, and should be used rarely, if at all. printf has to take any number of arguments and be polymorphic, so it can't simply take a list, but for addition, you could just use sum.
¹ The language extension isn't strictly necessary here, but it makes the usage easier (without it, you'd have to explicitly specify the type of each argument in the examples I gave, e.g. add (1 :: Integer) (2 :: Integer) :: Integer).
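As an aside, you can see the same machinery at work in the standard library. A minimal printf sketch (Text.Printf is standard; the explicit Int annotations play the same role as the result-type annotations above):
import Text.Printf (printf)

-- printf is variadic via the same typeclass trick: the instance
-- selected depends on how many arguments follow the format string.
main :: IO ()
main = printf "%d + %d = %d\n" (1 :: Int) (2 :: Int) (3 :: Int)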

It's a syntactical trade-off: you can't (in general) have both variable arity functions and nice Haskell-style syntax for function application at the same time. This is because it would make many expressions ambiguous.
Suppose you have a function foo that allows an arity of 1 or 2, and consider the following expression:
foo a b
Should the 1- or 2-argument version of foo be used here? There is no way for the compiler to know: it could be the 2-argument version, but it could equally be the result of the 1-argument version applied to b.
Hence language designers need to make a choice.
Haskell opts for nice function application syntax (no parentheses required).
Lisp opts for variable-arity functions (and adds parentheses to remove the ambiguity).
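To make the Haskell side concrete, here's a small sketch (foo and its definition are made up for illustration): since application is always binary, foo a b can only parse as (foo a) b, so arity never needs to be guessed.
foo :: Int -> Int -> Int
foo x y = x * 10 + y

demo1, demo2 :: Int
demo1 = foo 1 2    -- parses as (foo 1) 2, giving 12
demo2 = (foo 1) 2  -- the explicit parse; identical result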

Partially applied functions

While studying functional programming, the concept of partially applied functions comes up a lot. In Haskell, something like the built-in function take is considered to be partially applied.
I am still unclear as to what it exactly means for a function to be partially applied or the use/implication of it.
The classic example is:
add :: Int -> Int -> Int
add x y = x + y
The add function takes two arguments and adds them; we can now implement
increment = add 1
by partially applying add, which now waits for the other argument.
A function by itself can't be "partially applied" or not. It's a meaningless concept.
When you say that a function is "partially applied", you refer to how the function is called (aka "applied"). If the function is called with all its parameters, then it is said to be "fully applied". If some of the parameters are missing, then the function is said to be "partially applied".
For example:
-- The function `take` is fully applied here:
oneTwoThree = take 3 [1,2,3,4,5]
-- The function `take` is partially applied here:
take10 = take 10 -- see? it's missing one last argument
-- The function `take10` is fully applied here:
oneToTen = take10 [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,42]
The result of a partial function application is another function - a function that still "expects" to get its missing arguments - like the take10 in the example above, which still "expects" to receive the list.
Of course, it all gets a bit more complicated once you get into the higher-order functions - i.e. functions that take other functions as arguments or return other functions as result. Consider this function:
mkTake n = take (n+5)
The function mkTake has only one parameter, but it returns another function as result. Now, consider this:
x = mkTake 10
y = mkTake 10 [1,2,3]
On the first line, the function mkTake is, obviously, "fully applied", because it is given one argument, which is exactly how many arguments it expects. But the second line is also valid: since mkTake 10 returns a function, I can then call this function with another parameter. So what does that make mkTake? "Overly applied", I guess?
Then consider the fact that (barring compiler optimizations) all functions are, mathematically speaking, functions of exactly one argument. How could this be? When you declare a function take n l = ..., what you're "conceptually" saying is take = \n -> \l -> ... - that is, take is a function that takes argument n and returns another function that takes argument l and returns some result.
So the bottom line is that the concept of "partial application" isn't really that strictly defined, it's just a handy shorthand to refer to functions that "ought to" (as in common sense) take N arguments, but are instead given M < N arguments.
Strictly speaking, partial application refers to a situation where you supply fewer arguments than expected to a function, and get back a new function created on the fly that expects the remaining arguments.
Also strictly speaking, this does not apply in Haskell, because every function takes exactly one argument. There is no partial application, because you either apply the function to its argument or you don't.
However, Haskell provides syntactic sugar for defining functions that includes emulating multi-argument functions. In Haskell, then, "partial application" refers to supplying fewer than the full number of arguments needed to obtain a value that cannot be further applied to another argument. Using everyone's favorite add example,
add :: Int -> Int -> Int
add x y = x + y
the type indicates that add takes one argument of type Int and returns a function of type Int -> Int. The -> type constructor is right-associative, so it helps to explicitly parenthesize it to emphasize the one-argument nature of a Haskell function: Int -> (Int -> Int).
When calling such a "multi-argument" function, we take advantage of the fact the function application is left-associative, so we can write add 3 5 instead of (add 3) 5. The call add 3 is thought of as partial application, because we could further apply the result to another argument.
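A quick sketch showing both readings (add as above; inc is my name for the partial application):
add :: Int -> Int -> Int
add x y = x + y

inc :: Int -> Int
inc = add 3          -- partial application: (add 3) still awaits one Int

main :: IO ()
main = print (inc 5) -- 8, the same as add 3 5 and (add 3) 5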
I mentioned the syntactic sugar Haskell provides to ease defining complex higher-order functions. There is one fundamental way to define a function: using a lambda expression. For example, to define a function that adds 3 to its argument, we write
\x -> x + 3
For ease of reference, we can assign a name to this function:
add = \x -> x + 3
To further ease the definition, Haskell lets us write in its place
add x = x + 3
hiding the explicit lambda expression.
For higher-order, "multi-argument" functions, we would write a function to add two values as
\x -> \y -> x + y
To hide the currying, we can write instead
\x y -> x + y
Combined with the syntactic sugar of replacing a lambda expression with a parameterized name, all of the following are equivalent definitions for the explicitly typed function add :: Num a => a -> a -> a:
add = \x -> \y -> x + y -- or \x y -> x + y as noted above
add x = \y -> x + y
add x y = x + y
This is best understood by example. Here's the type of the addition operator:
(+) :: Num a => a -> a -> a
What does it mean? As you know, it takes two numbers and outputs another one. Right? Well, sort of.
You see, a -> a -> a actually means this: a -> (a -> a). Whoa, looks weird! This means that (+) is a function that takes one argument and outputs a function(!!) that also takes one argument and outputs a value.
What this means is you can supply that one value to (+) to get another, partially applied, function:
(+ 5) :: Num a => a -> a
This function takes one argument and adds five to it. Now, call that new function with some argument:
Prelude> (+5) 3
8
Here you go! Partial function application at work!
Note the type of (+ 5) 3:
(+5) 3 :: Num a => a
Look what we end up with:
(+) :: Num a => a -> a -> a
(+ 5) :: Num a => a -> a
(+5) 3 :: Num a => a
Do you see how the number of as in the type signature decreases by one every time you add a new argument?
Function with one argument that outputs a function with one argument that in turn, outputs a value
Function with one argument that outputs a value
The value itself
"...something like the built-in function take is considered to be partially applied" - I don't quite understand what this means. The function take itself is not partially applied. A function is partially applied when it is the result of supplying between 1 and (N - 1) arguments to a function that takes N arguments. So, take 5 is a partially applied function, but take and take 5 [1..10] are not.

Haskell 2014.0.0: Two 'Could not deduce' errors occur when executing simple function

I want to program a function 'kans' (which calculates the chance of a repetition code of length n being correctly decoded) the following way:
fac 0 = 1
fac n = n * fac (n-1)
comb n r = ceiling (fac n / ((fac (n - r))*(fac r)))
kans 0 = 1
kans n = sum [ (comb (n+1) i) * 0.01^i * 0.99^(n+1-i) | i <- [0..(div n 2)]]
Now, calling 'kans 2' in GHCi gives the following error message: http://i.imgur.com/wEZRXgu.jpg
The functions 'fac' and 'comb' work normally. I even get an error message when I call 'kans 0', which I defined separately.
I would greatly appreciate your help.
The error messages both contain telling parts of the form
Could not deduce [...]
from the context (Integral a, Fractional a)
This says that you are trying to use a single type as both a member of the class Integral and as a member of the class Fractional. There are no such types in Haskell: The standard numerical types all belong to exactly one of them. And you are using operations that require one or the other:
/ is "floating point" or "field" division, and only works with Fractional.
div is "integer division", and so works only with Integral.
the second argument of ^ must be Integral.
What you probably should do is to decide exactly which of your variables and results are integers (and thus in the Integral class), and which are floating-point (and thus in the Fractional class.) Then use the fromIntegral function to convert from the former to the latter whenever necessary. You will need to do this conversion in at least one of your functions.
On mathematical principle I recommend you do it in kans, and change comb to be purely integral by using div instead of /.
As @Mephy suggests, you should probably write type signatures for your functions; this helps make error messages clearer. However, Num won't be enough here. For kans you can use either of
kans :: Integer -> Double
kans :: (Integral a, Fractional b) => a -> b
The latter is more general, but the former pair of concrete types is likely what you'll actually use.
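Putting this advice together, one possible repaired version (a sketch using the first, concrete signature, with comb made purely integral as recommended above):
fac :: Integer -> Integer
fac 0 = 1
fac n = n * fac (n - 1)

-- Purely integral: `div` replaces both / and the ceiling.
comb :: Integer -> Integer -> Integer
comb n r = fac n `div` (fac (n - r) * fac r)

kans :: Integer -> Double
kans 0 = 1
kans n = sum [ fromIntegral (comb (n + 1) i) * 0.01 ^ i * 0.99 ^ (n + 1 - i)
             | i <- [0 .. n `div` 2] ]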
short dirty answer:
Because both fac and comb always return Integers and only accept Integers, use this code:
fac 0 = 1
fac n = n * fac (n-1)
comb n r = fac n `div` (fac (n - r) * fac r)
kans 0 = 1
kans n = sum [ fromIntegral (comb (n+1) i) * 0.01^i * 0.99^(n+1-i) | i <- [0..(div n 2)]]
actual explanation:
Looking at your error messages, GHC complains about ambiguity.
Let's say you try to run this piece of code:
x = show (read "abc")
GHC will complain that it can't compute read "abc" because it doesn't have enough information to determine which definition of read to use. (There are multiple definitions; you might want to read about type classes, or maybe you already did. For example, there is one of type String -> Int which parses integers, one of type String -> Float, etc.) This is called ambiguity, because the result type of read "abc" is ambiguous.
In Haskell, when using arithmetic operators (which, like read, usually have multiple definitions in the same way), there is ambiguity all the time. So when there is ambiguity, GHC checks whether the types Integer or Double can be plugged in without problems; if one of them fits, it resolves the ambiguity by picking that type, and otherwise it complains about the ambiguity. This is called defaulting.
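A tiny example of defaulting at work: nothing below pins down the type of the literals, so GHC defaults the ambiguous Num/Show constraints to Integer and the program compiles.
main :: IO ()
main = print (2 + 3)  -- the literals default to Integer; prints 5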
The problem with your code is that, because of the use of ceiling and /, the types require instances of Fractional and RealFrac while also requiring instances of Integral for the same type variables. But no type is an instance of both at once, so GHC complains about ambiguity.
The way to fix this is to notice that comb only works on whole numbers, and that fac n / (fac (n - r) * fac r) is always a whole number, so the ceiling is redundant.
Then we should replace / in comb's definition with `div` to express that only whole numbers will be the output.
The last thing to do is to replace comb (n+1) i with fromIntegral (comb (n+1) i) (fromIntegral is a function which converts whole numbers to arbitrary numeric types), because otherwise the types would mismatch again: adding this allows us to separate the type of comb (n+1) i (a whole number) from the type of 0.01^i (which is of course a floating-point number).
This solves all the problems and results in the code I wrote above in the "short dirty answer" section.

What are some interesting uses of higher-order functions?

I'm currently doing a Functional Programming course and I'm quite amused by the concept of higher-order functions and functions as first class citizens. However, I can't yet think of many practically useful, conceptually amazing, or just plain interesting higher-order functions. (Besides the typical and rather dull map, filter, etc functions).
Do you know examples of such interesting functions?
Maybe functions that return functions, functions that return lists of functions (?), etc.
I'd appreciate examples in Haskell, which is the language I'm currently learning :)
Well, have you noticed that Haskell has no syntax for loops? No while or do or for. That's because these are all just higher-order functions:
map :: (a -> b) -> [a] -> [b]
foldr :: (a -> b -> b) -> b -> [a] -> b
filter :: (a -> Bool) -> [a] -> [a]
unfoldr :: (b -> Maybe (a, b)) -> b -> [a]
iterate :: (a -> a) -> a -> [a]
Higher-order functions replace the need for baked in syntax in the language for control structures, meaning pretty much every Haskell program uses these functions -- making them quite useful!
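For instance, a few loop-free "loops" (a sketch; the function names are mine):
import Data.List (unfoldr)

sumOfSquares :: Int -> Int
sumOfSquares n = sum (map (^ 2) [1 .. n])  -- a for-loop with an accumulator

powersOfTwo :: Int -> [Int]
powersOfTwo n = take n (iterate (* 2) 1)   -- an infinite loop, cut short lazily

countdown :: Int -> [Int]
countdown = unfoldr (\k -> if k < 0 then Nothing else Just (k, k - 1))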
They are the first step towards good abstraction because we can now plug custom behavior into a general purpose skeleton function.
In particular, monads are only possible because we can chain together and manipulate functions to create programs.
The fact is, life is pretty boring when it is first-order. Programming only gets interesting once you have higher-order.
Many techniques used in OO programming are workarounds for the lack of higher order functions.
This includes a number of the design patterns that are ubiquitous in object-oriented programming. For example, the visitor pattern is a rather complicated way to implement a fold. The workaround is to create a class with methods and pass an instance of that class in as an argument, as a substitute for passing in a function.
The strategy pattern is another example of a scheme that often passes objects as arguments as a substitute for what is actually intended, namely functions.
Similarly, dependency injection often involves some clunky scheme to pass a proxy for functions when it would often be better to simply pass in the functions directly as arguments.
So my answer would be that higher-order functions are often used to perform the same kinds of tasks that OO programmers perform, but directly, and with a lot less boilerplate.
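For example, here's the strategy pattern reduced to an ordinary argument (a sketch using only the standard Data.List and Data.Ord):
import Data.List (sortBy)
import Data.Ord (Down (..), comparing)

-- The "strategy" is just a function passed directly; no wrapper class needed.
longestFirst :: [String] -> [String]
longestFirst = sortBy (comparing (Down . length))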
I really started to feel the power when I learned a function can be part of a data structure. Here is a "consumer monad" (technobabble: free monad over (i ->)).
data Coro i a
    = Return a
    | Consume (i -> Coro i a)
So a Coro can either instantly yield a value, or be another Coro depending on some input. For example, this is a Coro Int Int:
Consume $ \x -> Consume $ \y -> Consume $ \z -> Return (x+y+z)
This consumes three integer inputs and returns their sum. You could also have it behave differently according to the inputs:
sumStream :: Coro Int Int
sumStream = Consume (go 0)
  where
    go accum 0 = Return accum
    go accum n = Consume (\x -> go (accum + x) (n - 1))
This consumes an Int and then consumes that many more Ints before yielding their sum. It can be thought of as a function that takes arbitrarily many arguments, constructed without any language magic, just higher-order functions.
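To actually run one of these, here's a minimal interpreter sketch (feed is my own name, not part of the answer above):
-- Feed a list of inputs to a Coro until it returns (Nothing if input runs out).
feed :: Coro i a -> [i] -> Maybe a
feed (Return a)  _        = Just a
feed (Consume k) (x : xs) = feed (k x) xs
feed (Consume _) []       = Nothing

-- feed sumStream [3, 10, 20, 30] == Just 60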
Functions in data structures are a very powerful tool that was not part of my vocabulary before I started doing Haskell.
Check out the paper 'Even Higher-Order Functions for Parsing or Why Would Anyone Ever Want To Use a Sixth-Order Function?' by Chris Okasaki. It's written using ML, but the ideas apply equally to Haskell.
Joel Spolsky wrote a famous essay demonstrating how Map-Reduce works using Javascript's higher order functions. A must-read for anyone asking this question.
Higher-order functions are also required for currying, which Haskell uses everywhere. Essentially, a function taking two arguments is equivalent to a function taking one argument and returning another function taking one argument. When you see a type signature like this in Haskell:
f :: A -> B -> C
...the (->) can be read as right-associative, showing that this is in fact a higher-order function returning a function of type B -> C:
f :: A -> (B -> C)
A non-curried function of two arguments would instead have a type like this:
f' :: (A, B) -> C
So any time you use partial application in Haskell, you're working with higher-order functions.
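The two shapes are interconvertible via the standard curry and uncurry functions (a quick sketch):
add :: Int -> Int -> Int
add x y = x + y

addPair :: (Int, Int) -> Int
addPair = uncurry add  -- uncurry add (3, 4) == 7; curry addPair 3 4 == 7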
Martín Escardó provides an interesting example of a higher-order function:
equal :: ((Integer -> Bool) -> Int) -> ((Integer -> Bool) -> Int) -> Bool
Given two functionals f, g :: (Integer -> Bool) -> Int, then equal f g decides if f and g are (extensionally) equal or not, even though f and g don't have a finite domain. In fact, the codomain, Int, can be replaced by any type with a decidable equality.
The code Escardó gives is written in Haskell, but the same algorithm should work in any functional language.
You can use the same techniques that Escardó describes to compute definite integrals of any continuous function to arbitrary precision.
One interesting and slightly crazy thing you can do is simulate an object-oriented system using a function and storing data in the function's scope (i.e. in a closure). It's higher-order in the sense that the object generator function is a function which returns the object (another function).
My Haskell is rather rusty so I can't easily give you a Haskell example, but here's a simplified Clojure example which hopefully conveys the concept:
(defn make-object [initial-value]
  (let [data (atom {:value initial-value})]
    (fn [op & args]
      (case op
        :set (swap! data assoc :value (first args))
        :get (:value @data)))))
Usage:
(def a (make-object 10))
(a :get)
=> 10
(a :set 40)
(a :get)
=> 40
The same principle would work in Haskell (except that you'd probably need to change the set operation to return a new function, since Haskell is purely functional).
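For the curious, here's a pure Haskell sketch of that idea (all names are mine): the "object" is a record of functions, and setting returns a brand-new object instead of mutating anything.
data Obj = Obj
    { getVal :: Int
    , setVal :: Int -> Obj
    }

makeObject :: Int -> Obj
makeObject v = Obj { getVal = v, setVal = makeObject }

-- getVal (setVal (makeObject 10) 40) == 40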
I'm a particular fan of higher-order memoization:
memo :: HasTrie t => (t -> a) -> (t -> a)
(Given any function, return a memoized version of that function. Limited by the fact that the arguments of the function must be able to be encoded into a trie.)
This is from http://hackage.haskell.org/package/MemoTrie
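A typical use is memoized Fibonacci (a sketch, assuming MemoTrie's HasTrie instance for Integer): each recursive call goes back through the trie.
import Data.MemoTrie (memo)

fib :: Integer -> Integer
fib = memo fib'
  where
    fib' 0 = 0
    fib' 1 = 1
    fib' n = fib (n - 1) + fib (n - 2)  -- recurses via the memoized fib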
There are several examples here: http://www.haskell.org/haskellwiki/Higher_order_function
I would also recommend this book: http://www.cs.nott.ac.uk/~gmh/book.html which is a great introduction to all of Haskell and covers higher order functions.
Higher-order functions often take an accumulator, so they can be used when forming a list of elements that conform to a given rule from a larger list.
Here's a small paraphrased code snippet:
rays :: ChessPieceType -> [[(Int, Int)]]
rays Bishop = do
    dx <- [1, -1]
    dy <- [1, -1]
    return $ iterate (addPos (dx, dy)) (dx, dy)
... -- Other piece types

-- takeUntilIncluding is an inclusive version of takeUntil
takeUntilIncluding :: (a -> Bool) -> [a] -> [a]

possibleMoves board piece = do
    relRay <- rays (pieceType piece)
    let ray = map (addPos src) relRay
    takeUntilIncluding (not . isNothing . pieceAt board)
                       (takeWhile notBlocked ray)
  where
    notBlocked pos =
        inBoard pos &&
        all isOtherSide (pieceAt board pos)
    isOtherSide = (/= pieceSide piece) . pieceSide
This uses several "higher order" functions:
iterate :: (a -> a) -> a -> [a]
takeUntilIncluding -- not a standard function
takeWhile :: (a -> Bool) -> [a] -> [a]
all :: (a -> Bool) -> [a] -> Bool
map :: (a -> b) -> [a] -> [b]
(.) :: (b -> c) -> (a -> b) -> a -> c
(>>=) :: Monad m => m a -> (a -> m b) -> m b
(.) is the . operator, and (>>=) is the do-notation "line break operator".
When programming in Haskell you just use them. It's when you don't have higher-order functions that you realize just how incredibly useful they are.
Here's a pattern that I haven't seen anyone else mention yet that really surprised me the first time I learned about it. Consider a statistics package where you have a list of samples as your input and you want to calculate a bunch of different statistics on them (there are also plenty of other ways to motivate this). The bottom line is that you have a list of functions that you want to run. How do you run them all?
statFuncs :: [[Double] -> Double]
statFuncs = [minimum, maximum, mean, median, mode, stddev]

runWith funcs samples = map ($ samples) funcs
There's all kinds of higher order goodness going on here, some of which has been mentioned in other answers. But I want to point out the '$' function. When I first saw this use of '$', I was blown away. Before that I hadn't considered it to be very useful other than as a convenient replacement for parentheses...but this was almost magical...
One thing that's kind of fun, if not particularly practical, is Church Numerals. It's a way of representing integers using nothing but functions. Crazy, I know. <shamelessPlug>Here's an implementation in JavaScript that I made. It might be easier to understand than a Lisp/Haskell implementation. (But probably not, to be honest. JavaScript wasn't really meant for this kind of thing.)</shamelessPlug>
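For a taste in Haskell rather than JavaScript, here is a minimal sketch (requires RankNTypes; the names are mine):
{-# LANGUAGE RankNTypes #-}

-- A Church numeral n means "apply f, n times".
type Church = forall a. (a -> a) -> a -> a

zero, one, two :: Church
zero _ x = x
one  f x = f x
two  f x = f (f x)

suc :: Church -> Church
suc n f x = f (n f x)

toInt :: Church -> Int
toInt n = n (+ 1) 0  -- toInt (suc two) == 3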
It’s been mentioned that JavaScript supports certain higher-order functions, including an essay from Joel Spolsky. Mark Jason Dominus wrote an entire book called Higher-Order Perl; the book’s source is available for free download in a variety of fine formats, including PDF.
Ever since at least Perl 3, Perl has supported functionality more reminiscent of Lisp than of C, but it wasn’t until Perl 5 that full support for closures and all that follows from that was available. And one of the first Perl 6 implementations was written in Haskell, which has had a lot of influence on how that language’s design has progressed.
Examples of functional programming approaches in Perl show up in everyday programming, especially with map and grep:
@ARGV = map { /\.gz$/ ? "gzip -dc < $_ |" : $_ } @ARGV;
@unempty = grep { defined && length } @many;
Since sort also admits a closure, the map/sort/map pattern is super common:
@txtfiles = map  { $_->[1] }
            sort {
                $b->[0] <=> $a->[0]
                    ||
                lc $a->[1] cmp lc $b->[1]
                    ||
                $b->[1] cmp $a->[1]
            }
            map  { [ -s => $_ ] }
            grep { -f && -T }
            glob("/etc/*");
or
@sorted_lines = map { $_->[0] }
                sort {
                    $a->[4] <=> $b->[4]
                        ||
                    $a->[-1] cmp $b->[-1]
                        ||
                    $a->[3] <=> $b->[3]
                        ||
                    ...
                }
                map { [$_ => reverse split /:/] } @lines;
The reduce function makes list hackery easy without looping:
$sum = reduce { $a + $b } @numbers;
$max = reduce { $a > $b ? $a : $b } $MININT, @numbers;
There’s a lot more than this, but this is just a taste. Closures make it easy to create function generators, writing your own higher-order functions, not just using the builtins. In fact, one of the more common exception models,
try {
something();
} catch {
oh_drat();
};
is not a built-in. It is, however, almost trivially defined with try being a function that takes two arguments: a closure in the first arg and a function that takes a closure in the second one.
Perl 5 doesn’t have currying built in, although there is a module for that. Perl 6, though, has currying and first-class continuations built right into it, plus a lot more.

Generic function composition in Haskell

I was reading here, and I noticed that, for example, if I have the following function definitions:
a :: Integer->Integer->Integer
b :: Integer->Bool
The following expression is invalid:
(b . a) 2 3
It's quite strange that the functions of the composition must have only one parameter.
Is this restriction due to some problem in defining the most generic composition in Haskell, or is there some other reason?
I'm new to Haskell, so I'm asking maybe useless questions.
When you do a 2 3, you're not applying a to 2 arguments. You're actually applying a to its only argument, which results in a function, and then applying that function to 3. So you actually do 2 applications. In that sense, what you have is not equivalent to this:
a :: (Integer, Integer) -> Integer
b :: Integer -> Bool
(b . a) (2, 3)
You could've done this, btw
(b . a 2) 3
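If you really do want to compose past two arguments, the usual trick is a "compose twice" operator (a sketch; the .: name is just a common convention):
(.:) :: (c -> d) -> (a -> b -> c) -> a -> b -> d
(.:) = (.) . (.)

-- With the question's a and b:
-- (b .: a) 2 3 == b (a 2 3)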

Uses for Haskell id function

What are the uses of the id function in Haskell?
It's useful as an argument to higher order functions (functions which take functions as arguments), where you want some particular value left unchanged.
Example 1: Leave a value alone if it is in a Just, otherwise, return a default of 7.
Prelude Data.Maybe> :t maybe
maybe :: b -> (a -> b) -> Maybe a -> b
Prelude Data.Maybe> maybe 7 id (Just 2)
2
Example 2: building up a function via a fold:
Prelude Data.Maybe> :t foldr (.) id [(+2), (*7)]
foldr (.) id [(+2), (*7)] :: (Num a) => a -> a
Prelude Data.Maybe> let f = foldr (.) id [(+2), (*7)]
Prelude Data.Maybe> f 7
51
We built a new function f by folding a list of functions together with (.), using id as the base case.
Example 3: the base case for functions as monoids (simplified).
instance Monoid (a -> a) where
    mempty = id
    f `mappend` g = f . g
Similar to our example with fold, functions can be treated as concatenable values, with id serving for the empty case, and (.) as append.
Example 4: a trivial hash function.
Data.HashTable> h <- new (==) id :: IO (HashTable Data.Int.Int32 Int)
Data.HashTable> insert h 7 2
Data.HashTable> Data.HashTable.lookup h 7
Just 2
Hashtables require a hashing function. But what if your key is already hashed? Then pass the id function, to fill in as your hashing method, with zero performance overhead.
If you manipulate numbers, particularly with addition and multiplication, you'll have noticed the usefulness of 0 and 1. Likewise, if you manipulate lists, the empty list turns out to be quite handy. Similarly, if you manipulate functions (very common in functional programming), you'll come to notice the same sort of usefulness of id.
In functional languages, functions are first-class values that you can pass as a parameter. So one of the most common uses of id comes up when you pass a function as a parameter to another function to tell it what to do. One of the choices of what to do is likely to be "just leave it alone" - in that case, you pass id as the parameter.
Suppose you're searching for some kind of solution to a puzzle where you make a move at each turn. You start with a candidate position pos. At each stage there is a list of possible transformations you could make to pos (e.g. sliding a piece in the puzzle). In a functional language it's natural to represent transformations as functions, so now you can make a list of moves using a list of functions. If "doing nothing" is a legal move in this puzzle, then you would represent that with id. If you didn't do that then you'd need to handle "doing nothing" as a special case that works differently from "doing something". By using id you can handle all cases uniformly in a single list.
This is probably the reason why almost all uses of id exist. To handle "doing nothing" uniformly with "doing something".
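A small sketch of that idea (the position type and the moves are made up):
type Pos = (Int, Int)

moves :: [Pos -> Pos]
moves = [id, \(x, y) -> (x + 1, y), \(x, y) -> (x, y + 1)]  -- id is "do nothing"

successors :: Pos -> [Pos]
successors pos = map ($ pos) moves  -- every move, including "stay put", handled uniformly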
For a different sort of answer:
I'll often do this when chaining multiple functions via composition:
foo = id
    . bar
    . baz
    . etc
over
foo = bar
    . baz
    . etc
It keeps things easier to edit. One can do similar things with other 'zero' elements, such as
foo = return
    >>= bar
    >>= baz

foos = []
    ++ bars
    ++ bazs
Since we're collecting nice applications of id: here, have a palindrome :)
import Control.Applicative
pal :: [a] -> [a]
pal = (++) <$> id <*> reverse
Imagine you are a computer, i.e. you can execute a sequence of steps. If I want you to stay in your current state, but I always have to give you an instruction (I cannot just go mute and let time pass), what instruction do I give you? id is the function created for that: it returns its argument unchanged (in the case of the previous computer, the argument would be its state) and gives that operation a name. The necessity appears only when you have higher-order functions, when you operate with functions without considering what's inside them, which forces you to represent symbolically even the "do nothing" implementation. Analogously, 0, seen as a quantity of something, is a symbol for the absence of quantity. In fact, in algebra both 0 and id are considered the neutral elements of the operations + and ∘ (function composition) respectively, or more formally:
for all x of type number:
0 + x = x
x + 0 = x
for all f of type function:
id ∘ f = f
f ∘ id = f
id can also help improve your golf score. Instead of using
($)
you can save a single character by using id.
e.g.
zipWith id [(+1), succ] [2,3,4]
An interesting result, more than a useful one.
Whenever you need to have a function somewhere but want it to do more than just hold its place (as 'undefined' would).
It's also useful, as (soon-to-be) Dr. Stewart mentioned above, for when you need to pass a function as an argument to another function:
join = (>>= id)
or as the result of a function:
let f = id in f 10
(presumably, you will edit the above function later to do something more "interesting"... ;)
As others have mentioned, id is a wonderful place-holder for when you need a function somewhere.