How does a function that increments a value work?

I'm trying to learn Haskell after years of OOP. I'm reading Happy Haskell, which provides this code:
plus :: Int -> Int -> Int
plus x y = x + y
plus' :: Int -> Int -> Int
plus' = \x -> \y -> x + y
increment :: Int -> Int
increment = plus 1
increment' :: Int -> Int
increment' = (\x -> \y -> x + y) 1
I understand how plus and plus' work (they're the same, different syntax).
But increment, I don't get.
increment :: Int -> Int
means it takes an int, and returns an int, right? But right after that, the actual function is:
increment = plus 1
Question:
Where is the integer value that increment takes? Shouldn't there be an x or something on the right of the = sign to signify the integer value the function takes as input? Something like:
increment _ = plus 1 x
Edit: Also, shouldn't the type of increment be Int -> (Int -> Int), since it takes an int and passes it to a function that takes an int and returns an int?

Partial Application
In Haskell, you have currying and partial application of functions. Have a look at the Haskell Wiki: Partial Application
In particular, if you look at the type signature of any function, there's no real distinction between its inputs (arguments) and its output. That's because your function plus :: Int -> Int -> Int is really a function that, when given an Int, returns another function of type Int -> Int, which itself takes the remaining argument and returns the final Int. This is called partial application.
This means that when you write increment = plus 1, you are saying that increment is equal to -- remember the partial application -- the function returned by plus 1, which itself takes an integer and returns an integer.
As Haskell is a functional programming language, anything with an equals sign is not an assignment but a definition, so an easy way to understand partial application is to follow the equals signs:
increment y = plus 1 y = 1 + y
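For instance, a quick check in GHCi (a sketch, assuming the definitions above are loaded):
Prelude> increment 5
6
Prelude> plus 1 5
6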
Main uses
As you can see, partial application can be used to define more specific functions, like "add 1 to a number", from more general ones, like "add two numbers". It also enables point-free style, where you compose functions without naming their arguments.
Also note that with infix functions like (+), you can partially apply on either the left or the right, which can be useful for non-commutative functions, for example:
divBy2 :: Float -> Float
divBy2 = (/2)
div2by :: Float -> Float
div2by = (2/)
Prelude> divBy2 3
1.5
Prelude> div2by 2
1.0

It would be increment x = plus 1 x, but generally foo x = bar x is the same thing as foo = bar: if f is a function that returns g x whenever called with any argument x, then f is the same function as g. So increment = plus 1 works just as well.
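A minimal sketch of that equivalence (the names here are invented for illustration); dropping the trailing argument from both sides is known as eta-reduction:
incrementLong :: Int -> Int
incrementLong x = plus 1 x   -- mentions the argument explicitly

incrementShort :: Int -> Int
incrementShort = plus 1      -- eta-reduced: the x on both sides is dropped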

This is because all functions in Haskell are implicitly curried. As such, there is no distinction between a function which returns a function taking an argument and a function which takes two arguments returning a value (both have the type a -> a -> a†). So calling plus (or any other function) with too few arguments simply returns a new function with the already-given arguments applied. In most languages, this would be an argument error. See also point-free style.
† The -> type constructor is right-associative, so a -> a -> a -> a is equivalent to a -> (a -> (a -> a)).
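To see the footnote concretely, this hypothetical alias with explicit parentheses in its signature typechecks exactly like the original plus:
plusExplicit :: Int -> (Int -> Int)  -- the same type as Int -> Int -> Int
plusExplicit = plus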

The examples of plus and plus' are instructive. You see how the latter seems to have no arguments, at least on the left of the equals sign:
plus' :: Int -> Int -> Int
plus' = \x -> \y -> x + y
Let's make another pair of versions of increment (I'll name them after "bumping" a number—by 1) that go halfway to the final versions you gave:
bump :: Int -> Int
bump y = 1 + y
bump' :: Int -> Int
bump' = \y -> 1 + y
The analogy between these two definitions is just like the one between plus and plus', so these should make sense, including the latter even though it has no formal arguments on the left-hand side of the equal sign.
Now, the understanding you have of bump' is exactly the understanding you need for increment' as you gave it in your question! In fact, we're defining bump' to be equal to something which is exactly what increment' is equal to.
That is (as we'll see shortly), the right-hand side of bump''s definition,
\y -> 1 + y
is something that is equal to
plus 1
The two notations, or expressions, are two syntactic ways of defining "the function that takes a number and returns one more than it."
But what makes them equal?! Well (as other answerers have explained), the expression plus 1 is partially applied. The compiler, in a way, knows that plus requires two arguments (it was declared that way, after all), so when plus appears here applied to just one argument, the compiler knows it is still waiting for one more. It represents that "waiting" by giving you a function: supply one more argument, now or later, and the application becomes complete, at which point the program actually jumps to the function body of plus (computing x + y for the two arguments that were given, the literal 1 from the expression plus 1 and the "one more" argument given later).
A key part of the joy and the value of Haskell is thinking about functions as things in themselves, which can be passed around and very flexibly transformed from one into another. Partial application is just such a way of transforming one thing (a function with "too many arguments", when you want to fix the value of the extras) into a function of "just the right many." You might pass the partially-applied function to an interface which expects a specific number of arguments. Or you might simply want to define multiple, specialized, functions based on one general definition (as we can define the general plus and more specific functions like plus 1 and plus 7).
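For example, a partially applied function slots directly into a higher-order interface such as the standard map (a quick sketch, assuming plus from the question):
Prelude> map (plus 1) [1, 2, 3]
[2,3,4]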


Partially applied functions

While studying functional programming, the concept of partially applied functions comes up a lot. In Haskell, something like the built-in function take is considered to be partially applied.
I am still unclear as to what it exactly means for a function to be partially applied or the use/implication of it.
The classic example is:
add :: Int -> Int -> Int
add x y = x + y
The add function takes two arguments and adds them; we can now implement
increment = add 1
by partially applying add, which now waits for the other argument.
A function by itself can't be "partially applied" or not. It's a meaningless concept.
When you say that a function is "partially applied", you refer to how the function is called (aka "applied"). If the function is called with all its parameters, then it is said to be "fully applied". If some of the parameters are missing, then the function is said to be "partially applied".
For example:
-- The function `take` is fully applied here:
oneTwoThree = take 3 [1,2,3,4,5]
-- The function `take` is partially applied here:
take10 = take 10 -- see? it's missing one last argument
-- The function `take10` is fully applied here:
oneToTen = take10 [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,42]
The result of a partial function application is another function - a function that still "expects" to get its missing arguments - like the take10 in the example above, which still "expects" to receive the list.
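You can ask GHCi for the type of such a partial application to see the remaining argument it expects:
Prelude> :t take 10
take 10 :: [a] -> [a]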
Of course, it all gets a bit more complicated once you get into the higher-order functions - i.e. functions that take other functions as arguments or return other functions as result. Consider this function:
mkTake n = take (n+5)
The function mkTake has only one parameter, but it returns another function as result. Now, consider this:
x = mkTake 10
y = mkTake 10 [1,2,3]
On the first line, the function mkTake is, obviously, "fully applied", because it is given one argument, which is exactly how many arguments it expects. But the second line is also valid: since mkTake 10 returns a function, I can then call this function with another parameter. So what does that make mkTake? "Overly applied", I guess?
Then consider the fact that (barring compiler optimizations) all functions are, mathematically speaking, functions of exactly one argument. How could this be? When you declare a function take n l = ..., what you're "conceptually" saying is take = \n -> \l -> ... - that is, take is a function that takes argument n and returns another function that takes argument l and returns some result.
So the bottom line is that the concept of "partial application" isn't really that strictly defined, it's just a handy shorthand to refer to functions that "ought to" (as in common sense) take N arguments, but are instead given M < N arguments.
Strictly speaking, partial application refers to a situation where you supply fewer arguments than expected to a function, and get back a new function created on the fly that expects the remaining arguments.
Also strictly speaking, this does not apply in Haskell, because every function takes exactly one argument. There is no partial application, because you either apply the function to its argument or you don't.
However, Haskell provides syntactic sugar for defining functions that includes emulating multi-argument functions. In Haskell, then, "partial application" refers to supplying fewer than the full number of arguments needed to obtain a value that cannot be further applied to another argument. Using everyone's favorite add example,
add :: Int -> Int -> Int
add x y = x + y
the type indicates that add takes one argument of type Int and returns a function of type Int -> Int. The -> type constructor is right-associative, so it helps to explicitly parenthesize it to emphasize the one-argument nature of a Haskell function: Int -> (Int -> Int).
When calling such a "multi-argument" function, we take advantage of the fact the function application is left-associative, so we can write add 3 5 instead of (add 3) 5. The call add 3 is thought of as partial application, because we could further apply the result to another argument.
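A quick GHCi check of that left-associativity (assuming add as defined above):
Prelude> add 3 5
8
Prelude> (add 3) 5
8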
I mentioned the syntactic sugar Haskell provides to ease defining complex higher-order functions. There is one fundamental way to define a function: using a lambda expression. For example, to define a function that adds 3 to its argument, we write
\x -> x + 3
For ease of reference, we can assign a name to this function:
add = \x -> x + 3
To further ease the definition, Haskell lets us write in its place
add x = x + 3
hiding the explicit lambda expression.
For higher-order, "multi-argument" functions, we would write a function to add two values as
\x -> \y -> x + y
To hide the currying, we can write instead
\x y -> x + y
Combined with the syntactic sugar of replacing a lambda expression with a parameterized name, all of the following are equivalent definitions for the explicitly typed function add :: Num a => a -> a -> a:
add = \x -> \y -> x + y -- or \x y -> x + y as noted above
add x = \y -> x + y
add x y = x + y
This is best understood by example. Here's the type of the addition operator:
(+) :: Num a => a -> a -> a
What does it mean? As you know, it takes two numbers and outputs another one. Right? Well, sort of.
You see, a -> a -> a actually means this: a -> (a -> a). Whoa, looks weird! This means that (+) is a function that takes one argument and outputs a function(!!) that also takes one argument and outputs a value.
What this means is you can supply that one value to (+) to get another, partially applied, function:
(+ 5) :: Num a => a -> a
This function takes one argument and adds five to it. Now, call that new function with some argument:
Prelude> (+5) 3
8
Here you go! Partial function application at work!
Note the type of (+ 5) 3:
(+5) 3 :: Num a => a
Look what we end up with:
(+) :: Num a => a -> a -> a
(+ 5) :: Num a => a -> a
(+5) 3 :: Num a => a
Do you see how the number of as in the type signature decreases by one every time you add a new argument?
Function with one argument that outputs a function with one argument that in turn, outputs a value
Function with one argument that outputs a value
The value itself
"...something like the built-in function take is considered to be partially applied" - I don't quite understand what this means. The function take itself is not partially applied. A function is partially applied when it is the result of supplying between 1 and (N - 1) arguments to a function that takes N arguments. So, take 5 is a partially applied function, but take and take 5 [1..10] are not.

What is the nicest way to make a function that takes Float or Double as input?

Say I want to implement the Fermi function (the simplest example of a logistic curve) so that if it's passed a Float it returns a Float and if it's passed a Double it returns a Double. Here's what I've got:
e = 2.7182845904523536
fermiFunc :: (Floating a) => a -> a
fermiFunc x = let one = fromIntegral 1 in one/(one + e^(-x))
The problem is that ghc says e is a Double. Defining the variable one is also kinda gross. The other solution I've thought of is to just define the function for doubles:
e = 2.7182845904523536
fermiFuncDouble :: Double -> Double
fermiFuncDouble x = 1.0/(1.0 + e^(-x))
Then using Either:
fermiFunc :: (Floating a) => Either Float Double -> a
fermiFunc (Left x) = double2Float (fermiFuncDouble (float2Double x))
fermiFunc (Right x) = fermiFuncDouble x
This isn't very exciting though because I might as well have just written a separate function for the Float case that just handles the casting and calls fermiFuncDouble. Is there a nice way to write a function for both types?
Don't write e^x, ever, in any language. That is not the exponential function, it's the power function.
The exponential function is called exp, and its definition actually has little to do with the power operation – it's defined, depending on your taste, as a Taylor series or as the unique solution to the ordinary differential equation d⁄d𝑥 exp 𝑥 = exp 𝑥 with boundary condition exp 0 = 1. Now, it so happens that, for any rational n, we have exp n ≡ (exp 1)^n, and that motivates also defining the power operation for numbers in ℝ or ℂ in addition to ℚ, namely as
a^z := exp (z · ln a)
...but e^𝑥 should be understood as really just a shortcut for writing exp 𝑥 itself.
So rather than defining e somewhere and trying to take some power of it, you should use exp just as it is.
fermiFunc :: Floating a => a -> a
fermiFunc x = 1/(1 + exp (-x))
...or indeed
fermiFunc = recip . succ . exp . negate
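Either way, the same polymorphic function specializes to both types; a small usage sketch (the names atFloat and atDouble are made up):
atDouble :: Double
atDouble = fermiFunc 0.0  -- 0.5

atFloat :: Float
atFloat = fermiFunc 0.0   -- 0.5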
Assuming that you want a floating-point exponent, that's (**); (^) is for integral exponents. Rewriting your function to use (**) and letting GHC infer the type gives:
fermiFunc x = 1/(1 + e ** (-x))
and
> :t fermiFunc
fermiFunc :: (Floating a) => a -> a
Since Float and Double both have Floating instances, fermiFunc is now sufficiently polymorphic to work with both.
(Note: you may need to declare a polymorphic type for e to get around the monomorphism restriction, i.e., e :: Floating a => a.)
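A minimal sketch of what that note suggests:
e :: Floating a => a
e = exp 1  -- polymorphic, so it works at Float, Double, etc.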
In general, the answer to "How do I write a function that works with multiple types?" is either "Write it so that it works universally for all types." (parametric polymorphism, like map), "Find (or create) one or more typeclasses that they share that provides the behaviour you need." (ad hoc polymorphism, like show), or "Create a new type that is the sum of those types." (like Either).
The latter two have some tradeoffs. For instance, type classes are open (you can add more at any time) while sum types are closed (you must modify the definition to add more types). Sum types require you to know which type you are dealing with (because it must be matched up with a constructor) while type classes let you write polymorphic functions.
You can use :i in GHCi to list a typeclass's instances and methods, which might help you locate a suitable typeclass.

What's the difference between a function and a functor in Haskell? Only definition?

In Haskell, writing a function means we map something (input) to another thing (output). I tried LYAH to understand the definition of Functor: it seems just the same as a normal function.
Is there any restriction that a function could be called a Functor?
Is Functor allowed to have I/O or any other side effect?
If in Haskell "everything is a function", then what's the point of introducing the "Functor" concept? Is it a restricted version of a function, or an enhanced version of one?
Very confused, need your advice.
Thanks.
First of all, it's not true that "everything is a function" in Haskell. Many things are not functions, like 4. Or the string "vik santata".
In Haskell, a function is something which maps some input to an output. A function is a value which you can apply to some other value to get a result. If a value has a -> in its type, chances are that it may be a function (but there are infinitely many exceptions to this rule of thumb ;-)).
Here are some examples of functions (quoting from a GHCI session):
λ: :t fst
fst :: (a, b) -> a
λ: :t even
even :: Integral a => a -> Bool
Here are a few examples of things which are not functions:
A (polymorphic) value which can assume any type a provided that the type is a member of the Num class (e.g. Int would be a valid type). The exact value would be inferred from how the number is used.
Note that this type has => in it, which is something different altogether than ->. It denotes a "class constraint".
λ: :t 5
5 :: Num a => a
A list of functions. Note that this has a -> in its type, but it's not the top level type constructor (the toplevel type is [], i.e. "list"):
λ: :t [fst, snd]
[fst, snd] :: [(a, a) -> a]
Functors are not things you can apply to values. Functors are types whose values can be used with (and returned by) the fmap function (provided that the fmap function complies with certain rules, often called 'laws'). You can find a basic list of types which are instances of Functor using GHCi:
λ: :i Functor
[...]
instance Functor (Either a) -- Defined in ‘Data.Either’
instance Functor [] -- Defined in ‘GHC.Base’
instance Functor Maybe -- Defined in ‘GHC.Base’
[...]
This means you can apply fmap to lists, or to Maybe values, or to Either values.
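For example:
λ: fmap (+1) [1, 2, 3]
[2,3,4]
λ: fmap (+1) (Just 2)
Just 3
λ: fmap (+1) (Right 2 :: Either String Int)
Right 3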
It helps to know a little category theory. A category is just a set of objects with arrows between them. Categories can model many things in mathematics, but for our purposes we are interested in the category of types: Hask is the category of Haskell types, with each type being an object in Hask and each function being an arrow between the argument type and the return type. For example, Int, Char, [Char], and Bool are all objects in Hask, and ord :: Char -> Int, odd :: Int -> Bool, and repeat :: Char -> [Char] are some examples of arrows in Hask.
Each category has several properties:
Every object has an identity arrow.
Arrows compose, so that if a -> b and b -> c are arrows, then so is a -> c.
Identity arrows are both left and right identities for composition.
Composition is associative.
The reason that Hask is a category is that every type has an identity function, and functions compose. That is, id :: Int -> Int and id :: Char -> Char are identity arrows for the category, and odd . ord :: Char -> Bool is an example of a composed arrow.
(Ignore for now that we think of id as a polymorphic function with type a -> a instead of a bunch of separate functions with concrete types. This hints at a concept in category theory called a natural transformation that you don't need to think about now.)
In category theory, a functor F is a mapping between two categories; it maps each object of one category to an object of the other, and it also maps each arrow of one category to an arrow of the other. If a is an object in one category, we say that F a is the corresponding object in the other category. We also say that if f is an arrow in the first category, the corresponding arrow in the other is F f.
Not just any mapping is a functor. It has to obey two properties which should look familiar.
F has to map the identity arrow for an object a to the identity arrow of the object F a.
F has to preserve composition. That means that the composition of two arrows in the first category has to be mapped to the composition of the corresponding arrows in the other category. That is, if h = g ∘ f is in the first category, then h is mapped to F h = F g ∘ F f in the other.
Finally, an endofunctor is a special name for a functor that maps one category to itself. In Hask, the typeclass Functor captures the idea of an endofunctor from Hask to Hask. The type constructor itself maps the types, and fmap is used to map the arrows.
Let's take Maybe as an example. The type constructor Maybe is an endofunctor, because it maps objects in Hask (types) to other objects in Hask (other types). (This point is obscured a little bit since we don't have new names for the target types, so think of Maybe as mapping the type Int to the type Maybe Int.)
To map an arrow a -> b to Maybe a -> Maybe b, we provide a definition for fmap in the Functor instance for Maybe.
That is, Maybe also maps functions, under the name fmap. The functor laws it must obey are the same as the two listed in the definition of a functor:
fmap id = id (maps id :: Int -> Int to id :: Maybe Int -> Maybe Int).
fmap (f . g) = fmap f . fmap g (that is, fmap (odd . ord) $ x has to return the same value as fmap odd . fmap ord $ x for any possible value x of type Maybe Char).
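A quick (non-exhaustive) sanity check of both laws in GHCi, using ord from Data.Char:
λ: import Data.Char (ord)
λ: fmap id (Just 'a') == id (Just 'a')
True
λ: (fmap odd . fmap ord) (Just 'a') == fmap (odd . ord) (Just 'a')
True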
As an unrelated tangent, others have pointed out that some things in Haskell are not functions, namely literal values like 4 and "hello". While true in the programming language (you can't, for instance, compose 4 with another function that takes an Int as a value), in category theory you can replace values with functions from the unit type () to the type of the value. That is, the literal value 4 can be thought of as an arrow 4 :: () -> Int that, when applied to the (only) value of type (), returns a value of type Int corresponding to the integer 4. This arrow composes like any other; odd . 4 :: () -> Bool would map the value of the unit type to a Boolean indicating whether the integer 4 is odd.
Mathematically, this is nice. We don't have to define any structure for types; they just are, and since we already have the idea of a type defined, we don't need a separate definition for what a value of a type is; we just define values in terms of functions. (You might notice we still need an actual value from the unit type, though. There might be a way of avoiding that in our definition, but I don't know category theory well enough to explain it one way or the other.)
For the actual implementation of our programming language, think of literal values as being an optimization to avoid the conceptual and performance overhead of having to use 4 () in place of 4 every time we just want a constant value.
Actually, a functor is two functions, but only one of them is a Haskell function (and I'm not sure it's the function you suspect it to be).
A type-level function. The objects of the Hask category are types of kind *, and a functor maps such types to other types. You can see this aspect of functors in GHCi, using the :kind query:
Prelude> :k Maybe
Maybe :: * -> *
Prelude> :k []
[] :: * -> *
Prelude> :k IO
IO :: * -> *
What these functions do is rather boring: they map, for instance,
Int to Maybe Int
() to IO ()
String to [[Char]].
By this I don't mean that they map integer numbers to maybe-integers etc. – that's a more specific operation, not possible for every functor. I just mean, they map the type Int, as a single entity, to the type Maybe Int.
A value-level function, which maps morphisms (i.e. Haskell functions) to morphisms. The target morphism always maps between the types that result from applying the type-level function to the domain and codomain of the original function. This value-level function is what you get with fmap:
fmap :: (Int -> Double) -> (Maybe Int -> Maybe Double)
fmap :: (() -> Bool) -> (IO () -> IO Bool)
fmap :: (Char -> String) -> (String -> [String])
For something to be a functor, you need two things:
a container type*
a special function that converts a function over the contained values into a function that converts containers
The first depends on your own definition, but the second has been codified in the "interface" called Functor, and the conversion function has been named fmap.
Thus you always start with something like
data Maybe a = Just a | Nothing
instance Functor Maybe where
  -- fmap :: (a -> b) -> Maybe a -> Maybe b
  fmap f (Just a) = Just (f a)
  fmap _ Nothing  = Nothing
Functions, on the other hand, do not need a container to work, so they are not related to Functor in that way. Every Functor, though, has to implement fmap to earn its name.
Moreover, by convention a Functor should adhere to certain laws, but these cannot be enforced by the compiler/type checker.
*: this container can also be a phantom type, e.g. data Proxy a = Proxy. In this case the name "container" is debatable, but I would still use it.
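For completeness, a minimal sketch of that phantom-type case:
data Proxy a = Proxy

instance Functor Proxy where
  -- there is nothing inside to map over, yet the instance is lawful
  fmap _ Proxy = Proxy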
Not everything in Haskell is a function. Non-functions include "Hello World", (3 :: Int, 'a'), and Just 'x'. Things whose types include => are not necessarily functions either, although GHC (generally) translates them into functions in its intermediate representation.
What is a functor? Given categories C and D, a functor f from C to D consists of a mapping fo from the objects of C into the objects of D and a mapping fm from the morphisms of C into the morphisms of D such that
If x and y are objects in C and p is a morphism from x to y then fm(p) is a morphism from fo(x) to fo(y).
If x is an object in C and id is the identity morphism from x to x then fm(id) is the identity morphism from fo(x) to fo(x).
If x, y, and z are objects in C, p is a morphism from y to z, and q is a morphism from x to y, then fm(p . q) = fm(p) . fm(q), where the dot represents morphism composition.
How does this relate to Haskell? We like to think of Haskell types and Haskell functions between them as forming a category. This is only approximately true, for various reasons, but it's a useful guide to intuition. The Functor class then represents injective endofunctors from this Hask category to itself. In particular, a Functor consists of a mapping (specifically a type constructor or partial application of type constructor) from types to types, along with a mapping (the fmap function) from functions to functions which obeys the above laws.

Haskell 2014.0.0: Two 'Could not deduce' errors occur when executing simple function

I want to program a function 'kans' (which calculates the chance of a repetition code of length n being correctly decoded) the following way:
fac 0 = 1
fac n = n * fac (n-1)
comb n r = ceiling (fac n / ((fac (n - r))*(fac r)))
kans 0 = 1
kans n = sum [ (comb (n+1) i) * 0.01^i * 0.99^(n+1-i) | i <- [0..(div n 2)]]
Now, calling 'kans 2' in GHCi gives the following error message: http://i.imgur.com/wEZRXgu.jpg
The functions 'fac' and 'comb' work normally. I even get an error message when I call 'kans 0', which I defined separately.
I would greatly appreciate your help.
The error messages both contain telling parts of the form
Could not deduce [...]
from the context (Integral a, Fractional a)
This says that you are trying to use a single type as both a member of the class Integral and as a member of the class Fractional. There are no such types in Haskell: The standard numerical types all belong to exactly one of them. And you are using operations that require one or the other:
/ is "floating point" or "field" division, and only works with Fractional.
div is "integer division", and so works only with Integral.
the second argument of ^ must be Integral.
What you probably should do is to decide exactly which of your variables and results are integers (and thus in the Integral class), and which are floating-point (and thus in the Fractional class.) Then use the fromIntegral function to convert from the former to the latter whenever necessary. You will need to do this conversion in at least one of your functions.
On mathematical principle I recommend you do it in kans, and change comb to be purely integral by using div instead of /.
As #Mephy suggests, you probably should write type signatures for your functions, this helps make error messages clearer. However Num won't be enough here. For kans you can use either of
kans :: Integer -> Double
kans :: (Integral a, Fractional b) => a -> b
The latter is more general, but the former names the two specific types you are most likely to actually use.
Short dirty answer:
Because fac and comb always accept and return whole numbers, use this code:
fac 0 = 1
fac n = n * fac (n-1)
comb n r = fac n `div` (fac (n - r) * fac r)
kans 0 = 1
kans n = sum [ fromIntegral (comb (n+1) i) * 0.01^i * 0.99^(n+1-i) | i <- [0..(div n 2)]]
Actual explanation:
Looking at your error messages, GHC complains about ambiguity.
Let's say you try to run this piece of code:
x = show (read "abc")
GHC will complain that it can't compute read "abc" because it doesn't have enough information to determine which definition of read to use (there are multiple definitions; you might want to read about type classes, or maybe you already did: for example, one of type String -> Int which parses integers, one of type String -> Float, etc.). This is called ambiguity, because the result type of read "abc" is ambiguous.
In Haskell, when using arithmetic operators (which, like read, usually have multiple definitions in the same way), there is ambiguity all the time. When there is ambiguity, GHC checks whether the types Integer or Double can be plugged in without problems; if one can, it picks that type, resolving the ambiguity. Otherwise, it complains about the ambiguity. This is called defaulting.
The problem with your code is that, because of the use of ceiling and /, the types require instances of Fractional and RealFrac while also requiring instances of Integral for the same type variables, but no type is an instance of both at once, so GHC complains about ambiguity.
The way to fix this is to notice that comb should only work on whole numbers, and that fac n / ((fac (n - r)) * (fac r)) is always a whole number, so the ceiling is redundant.
Then we should replace / in comb's definition by `div` to express that only whole numbers will be the output.
The last thing to do is to replace comb (n+1) i with fromIntegral (comb (n+1) i) (fromIntegral is a function which converts whole numbers to arbitrary numeric types), because otherwise the types would mismatch again. Adding this allows us to separate the type of comb (n+1) i (which is a whole number) from the type of 0.01^i, which is of course a floating-point number.
This solves all the problems and results in the code given in the "Short dirty answer" section above.

Uses for Haskell id function

What are the uses of the id function in Haskell?
It's useful as an argument to higher order functions (functions which take functions as arguments), where you want some particular value left unchanged.
Example 1: Leave a value alone if it is in a Just, otherwise, return a default of 7.
Prelude Data.Maybe> :t maybe
maybe :: b -> (a -> b) -> Maybe a -> b
Prelude Data.Maybe> maybe 7 id (Just 2)
2
Example 2: building up a function via a fold:
Prelude Data.Maybe> :t foldr (.) id [(+2), (*7)]
foldr (.) id [(+2), (*7)] :: (Num a) => a -> a
Prelude Data.Maybe> let f = foldr (.) id [(+2), (*7)]
Prelude Data.Maybe> f 7
51
We built a new function f by folding a list of functions together with (.), using id as the base case.
Example 3: the base case for functions as monoids (simplified).
instance Monoid (a -> a) where
  mempty = id
  f `mappend` g = f . g
Similar to our example with fold, functions can be treated as concatenable values, with id serving for the empty case, and (.) as append.
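(In base, this instance actually exists via the Endo newtype wrapper; a sketch of the same fold written with it:
λ: import Data.Monoid (Endo(..))
λ: appEndo (foldMap Endo [(+2), (*7)]) 7
51
The result matches the foldr example above.)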
Example 4: a trivial hash function.
Data.HashTable> h <- new (==) id :: IO (HashTable Data.Int.Int32 Int)
Data.HashTable> insert h 7 2
Data.HashTable> Data.HashTable.lookup h 7
Just 2
Hashtables require a hashing function. But what if your key is already hashed? Then pass the id function, to fill in as your hashing method, with zero performance overhead.
If you manipulate numbers, particularly with addition and multiplication, you'll have noticed the usefulness of 0 and 1. Likewise, if you manipulate lists, the empty list turns out to be quite handy. Similarly, if you manipulate functions (very common in functional programming), you'll come to notice the same sort of usefulness of id.
In functional languages, functions are first-class values that you can pass as parameters. So one of the most common uses of id comes up when you pass a function as a parameter to another function to tell it what to do. One of the choices of what to do is likely to be "just leave it alone"; in that case, you pass id as the parameter.
Suppose you're searching for some kind of solution to a puzzle where you make a move at each turn. You start with a candidate position pos. At each stage there is a list of possible transformations you could make to pos (eg. sliding a piece in the puzzle). In a functional language it's natural to represent transformations as functions so now you can make a list of moves using a list of functions. If "doing nothing" is a legal move in this puzzle, then you would represent that with id. If you didn't do that then you'd need to handle "doing nothing" as a special case that works differently from "doing something". By using id you can handle all cases uniformly in a single list.
This is probably the reason why almost all uses of id exist. To handle "doing nothing" uniformly with "doing something".
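A tiny illustrative sketch (the puzzle and the names are invented): positions are numbers, moves are functions, and id is the "do nothing" move:
moves :: [Int -> Int]
moves = [id, (+ 1), subtract 1]  -- stay put, slide right, slide left

nextPositions :: Int -> [Int]
nextPositions pos = map ($ pos) moves  -- nextPositions 5 == [5,6,4]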
For a different sort of answer:
I'll often do this when chaining multiple functions via composition:
foo = id
    . bar
    . baz
    . etc
over
foo = bar
    . baz
    . etc
It keeps things easier to edit. One can do similar things with other 'zero' elements, such as
foo = return
    >>= bar
    >>= baz
foos = []
    ++ bars
    ++ bazs
Since we are finding nice applications of id: here, have a palindrome :)
import Control.Applicative
pal :: [a] -> [a]
pal = (++) <$> id <*> reverse
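For example:
Prelude> pal "abc"
"abccba"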
Imagine you are a computer, i.e. you can execute a sequence of steps. If I want you to stay in your current state, but I always have to give you an instruction (I cannot just stay silent and let time pass), what instruction do I give you? id is the function created for that: it returns its argument unchanged (in the case of the computer, the argument would be its state), and it gives that behaviour a name. That necessity appears only when you have higher-order functions, when you operate on functions without considering what's inside them; that forces you to represent even the "do nothing" implementation symbolically. Analogously, 0, seen as a quantity of something, is a symbol for the absence of quantity. In fact, in algebra both 0 and id are considered the neutral elements of the operations + and ∘ (function composition) respectively, or more formally:
for all x of type number:
0 + x = x
x + 0 = x
for all f of type function:
id ∘ f = f
f ∘ id = f
id can also help improve your golf score. Instead of using
($)
you can save a single character by using id.
e.g.
zipWith id [(+1), succ] [2,3,4]
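Here id stands in for function application: each function in the first list is applied to the corresponding element of the second (the leftover 4 has no partner, so it is dropped):
Prelude> zipWith id [(+1), succ] [2,3,4]
[3,4]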
An interesting, more than useful result.
Whenever you need to have a function somewhere but want it to do more than just hold its place (as 'undefined' would).
It's also useful, as (soon-to-be) Dr. Stewart mentioned above, for when you need to pass a function as an argument to another function:
join = (>>= id)
or as the result of a function:
let f = id in f 10
(presumably, you will edit the above function later to do something more "interesting"... ;)
As others have mentioned, id is a wonderful place-holder for when you need a function somewhere.