Does Haskell have variadic functions/tuples?

The uncurry function only works for functions taking two arguments:
uncurry :: (a -> b -> c) -> (a, b) -> c
If I want to uncurry functions with an arbitrary number of arguments, I could just write separate functions:
uncurry2 f (a, b) = f a b
uncurry3 f (a, b, c) = f a b c
uncurry4 f (a, b, c, d) = f a b c d
uncurry5 f (a, b, c, d, e) = f a b c d e
But this gets tedious quickly. Is there any way to generalize this, so I only have to write one function?

Try uncurryN from the tuple package. Like all forms of overloading, it's implemented using type classes. In this case, it's done by manually spelling out the instances up to 15-tuples, which should be more than enough.
Variadic functions are also possible using type classes. One example of this is Text.Printf. In this case, it's done by structural induction on the function type. Simplified, it works like this:
class Foo t
instance Foo (IO a)
instance Foo b => Foo (a -> b)
foo :: Foo t => t
It shouldn't be hard to see that foo can be instantiated to the types IO a, a -> IO b, a -> b -> IO c and so on. QuickCheck also uses this technique.
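To make the skeleton concrete, here is a minimal runnable sketch (the names SumRes, sumOf, and sumOf' are mine, not from any library): a variadic sum built by structural induction on the function type, with Int as the base case instead of IO a:

{-# LANGUAGE FlexibleInstances #-}

class SumRes r where
  sumOf' :: Int -> r  -- the Int is the running accumulator

instance SumRes Int where
  sumOf' = id  -- base case: no more arguments, return the sum

instance SumRes r => SumRes (Int -> r) where
  sumOf' acc x = sumOf' (acc + x)  -- inductive case: consume one more Int

sumOf :: SumRes r => r
sumOf = sumOf' 0

-- ghci> sumOf (1 :: Int) (2 :: Int) (3 :: Int) :: Int
-- 6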
Structural induction won't work on tuples, though, since an n-tuple is completely unrelated to an (n+1)-tuple; that's why the instances have to be spelled out manually.

Finding ways to fake this sort of thing using overwrought type system tricks is one of my hobbies, so trust me when I say that the result is pretty ugly. In particular, note that tuples aren't defined recursively, so there's no real way to abstract over them directly; as far as Haskell's type system is concerned, every tuple size is completely distinct.
Any viable approach for working with tuples directly will therefore require code generation--either using Template Haskell, or an external tool as with the tuple package.
To fake it without using generated code, you have to first resort to using recursive definitions--typically right-nested pairs with a "nil" value to mark the end, either (,) and () or something equivalent to them. You may notice that this is similar to the definition of lists in terms of (:) and []--and in fact, recursively defined faux-tuples of this sort can be seen as either type-level data structures (a list of types) or as heterogeneous lists (e.g., HList works this way).
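To make the nested-pair idea concrete, here is a small sketch of an "uncurry" over such faux-tuples (my own illustration, using GHC type families rather than any particular package):

{-# LANGUAGE TypeFamilies #-}

-- 'Fun t r' computes the curried function type matching a nested pair t.
class UncurryR t where
  type Fun t r
  uncurryR :: Fun t r -> t -> r

instance UncurryR () where
  type Fun () r = r
  uncurryR r () = r  -- no arguments left, just return the result

instance UncurryR t => UncurryR (a, t) where
  type Fun (a, t) r = a -> Fun t r
  uncurryR f (a, t) = uncurryR (f a) t  -- peel off one argument

-- ghci> uncurryR (\a b c -> a + b + c) (1, (2, (3, ()))) :: Int
-- 6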
The downsides include, but are not limited to, the fact that actually using things built this way can be more awkward than it's worth, the code to implement the type system tricks is usually baffling and completely non-portable, and the end result is not necessarily equivalent anyway--there are multiple nontrivial differences between (a, (b, (c, ()))) and (a, b, c), for instance.
If you want to see how horrible it becomes you can look at the stuff I have on GitHub, particularly the bits here.

There is no straightforward way to write a single definition of uncurry that will work for different numbers of arguments.
However, it is possible to use Template Haskell to generate the many different variants that you would otherwise have to write by hand.
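For example, a Template Haskell generator along these lines (a sketch; this uncurryN is my own helper, distinct from the tuple package's) builds the lambda \f (x1, ..., xn) -> f x1 ... xn for a given n:

{-# LANGUAGE TemplateHaskell #-}
module UncurryTH where

import Language.Haskell.TH

uncurryN :: Int -> Q Exp
uncurryN n = do
  f  <- newName "f"
  xs <- mapM (\i -> newName ("x" ++ show i)) [1 .. n]
  pure $ LamE [VarP f, TupP (map VarP xs)]
              (foldl AppE (VarE f) (map VarE xs))

-- Used from another module:
-- $(uncurryN 3) (\a b c -> a + b + c) (1, 2, 3)  -- => 6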

What's the difference between a function and a functor in Haskell? Only definition?

In Haskell, writing a function means mapping some input to some output. I tried LYAH to understand the definition of Functor, but it seems just the same as a normal function.
Is there any restriction on when a function can be called a Functor?
Is a Functor allowed to have I/O or any other side effect?
If in Haskell "everything is a function", then what's the point of introducing the "Functor" concept? Is it a restricted version of a function, or an enhanced version of one?
Very confused; I need your advice.
Thanks.
First of all, it's not true that "everything is a function" in Haskell. Many things are not functions, like 4. Or the string "vik santata".
In Haskell, a function is something which maps some input to an output. A function is a value which you can apply to some other value to get a result. If a value has a -> in its type, chances are that it may be a function (but there are infinitely many exceptions to this rule of thumb ;-)).
Here are some examples of functions (quoting from a GHCi session):
λ: :t fst
fst :: (a, b) -> a
λ: :t even
even :: Integral a => a -> Bool
Here are a few examples of things which are not functions:
A (polymorphic) value which can assume any type a, provided that the type is a member of the Num class (e.g. Int would be a valid type). The exact type is inferred from how the number is used:
λ: :t 5
5 :: Num a => a
Note that this type has => in it, which is something altogether different from ->. It denotes a "class constraint".
A list of functions. Note that this has a -> in its type, but it's not the top level type constructor (the toplevel type is [], i.e. "list"):
λ: :t [fst, snd]
[fst, snd] :: [(a, a) -> a]
Functors are not things you can apply to values. Functors are types whose values can be used with (and returned by) the fmap function (provided that the fmap implementation complies with certain rules, often called 'laws'). You can find a basic list of types which are instances of Functor using GHCi:
λ: :i Functor
[...]
instance Functor (Either a) -- Defined in ‘Data.Either’
instance Functor [] -- Defined in ‘GHC.Base’
instance Functor Maybe -- Defined in ‘GHC.Base’
[...]
This means you can apply fmap to lists, or to Maybe values, or to Either values.
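For example, continuing the GHCi session:
λ: fmap (+1) (Just 41)
Just 42
λ: fmap (+1) [1, 2, 3]
[2,3,4]
λ: fmap (+1) (Right 41 :: Either String Int)
Right 42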
It helps to know a little category theory. A category is just a set of objects with arrows between them. Categories can model many things in mathematics, but for our purposes we are interested in the category of types: Hask is the category of Haskell types, with each type being an object in Hask and each function being an arrow between its argument type and its return type. For example, Int, Char, [Char], and Bool are all objects in Hask, and ord :: Char -> Int, odd :: Int -> Bool, and repeat :: Char -> [Char] are some examples of arrows in Hask.
Each category has several properties:
Every object has an identity arrow.
Arrows compose, so that if a -> b and b -> c are arrows, then so is a -> c.
Identity arrows are both left and right identities for composition.
Composition is associative.
The reason that Hask is a category is that every type has an identity function, and functions compose. That is, id :: Int -> Int and id :: Char -> Char are identity arrows for the category, and odd . ord :: Char -> Bool is a composed arrow.
(Ignore for now that we think of id as a polymorphic function with type a -> a instead of a bunch of separate functions with concrete types. This glosses over a concept in category theory called a natural transformation that you don't need to think about now.)
In category theory, a functor F is a mapping between two categories; it maps each object of one category to an object of the other, and it also maps each arrow of one category to an arrow of the other. If a is an object in one category, we say that F a is the corresponding object in the other category. We also say that if f is an arrow in the first category, the corresponding arrow in the other is F f.
Not just any mapping is a functor. It has to obey two properties which should look familiar.
F has to map the identity arrow for an object a to the identity arrow of the object F a.
F has to preserve composition. That means that the composition of two arrows in the first category has to be mapped to the composition of the corresponding arrows in the other category. That is, if h = g ∘ f is in the first category, then h is mapped to F h = F g ∘ F f in the other.
Finally, an endofunctor is a special name for a functor that maps one category to itself. In Hask, the typeclass Functor captures the idea of an endofunctor from Hask to Hask. The type constructor itself maps the types, and fmap is used to map the arrows.
Let's take Maybe as an example. The type constructor Maybe is an endofunctor, because it maps objects in Hask (types) to other objects in Hask (other types). (This point is obscured a little bit since we don't have new names for the target types, so think of Maybe as mapping the type Int to the type Maybe Int.)
To map an arrow a -> b to Maybe a -> Maybe b, we provide a definition for fmap in the Functor instance for Maybe.
So Maybe also maps functions, just under the name fmap. The functor laws it must obey are the same as the two listed in the definition of a functor:
fmap id = id (maps id :: Int -> Int to id :: Maybe Int -> Maybe Int).
fmap f . fmap g = fmap (f . g) (that is, fmap odd . fmap ord $ x has to return the same value as fmap (odd . ord) $ x for any possible value x of type Maybe Int).
As an unrelated tangent, others have pointed out that some things in Haskell are not functions, namely literal values like 4 and "hello". While that's true in the programming language (you can't, for instance, compose 4 with another function that takes an Int as a value), in category theory you can replace values with functions from the unit type () to the type of the value. That is, the literal value 4 can be thought of as an arrow 4 :: () -> Int that, when applied to the (only) value of type (), returns a value of type Int corresponding to the integer 4. This arrow would compose like any other; odd . 4 :: () -> Bool would map the value from the unit type to a Boolean value indicating whether the integer 4 is odd or not.
Mathematically, this is nice. We don't have to define any structure for types; they just are, and since we already have the idea of a type defined, we don't need a separate definition for what a value of a type is; we just define values in terms of functions. (You might notice we still need an actual value from the unit type, though. There might be a way of avoiding that in our definition, but I don't know category theory well enough to explain that one way or the other.)
For the actual implementation of our programming language, think of literal values as being an optimization to avoid the conceptual and performance overhead of having to use 4 () in place of 4 every time we just want a constant value.
Actually, a functor is two functions, but only one of them is a Haskell function (and I'm not sure it's the function you suspect it to be).
A type level function. The objects of the Hask category are types with kind *, and a functor maps such types to other types. You can see this aspect of functors in ghci, using the :kind query:
Prelude> :k Maybe
Maybe :: * -> *
Prelude> :k []
[] :: * -> *
Prelude> :k IO
IO :: * -> *
What these functions do is rather boring: they map, for instance,
Int to Maybe Int
() to IO ()
String to [[Char]].
By this I don't mean that they map integer numbers to maybe-integers etc. – that's a more specific operation, not possible for every functor. I just mean, they map the type Int, as a single entity, to the type Maybe Int.
A value-level function, which maps morphisms (i.e. Haskell functions) to morphisms. The target morphism always maps between the types that result from applying the type-level function to the domain and codomain of the original function. This function is what you get with fmap:
fmap :: (Int -> Double) -> (Maybe Int -> Maybe Double)
fmap :: (() -> Bool) -> (IO () -> IO Bool)
fmap :: (Char -> String) -> (String -> [String])
For something to be a functor, you need two things:
a container type*
a special function that converts a function on containees into a function on containers
The first depends on your own definition, but the second has been codified in the "interface" called Functor, and the conversion function has been named fmap.
Thus you always start with something like
data Maybe a = Just a | Nothing

instance Functor Maybe where
  -- fmap :: (a -> b) -> Maybe a -> Maybe b
  fmap f (Just a) = Just (f a)
  fmap _ Nothing  = Nothing
Functions, on the other hand, do not need a container to work, so they are not related to Functor in that way. On the other hand, every Functor has to implement the function fmap to earn its name.
Moreover, convention requires a Functor to adhere to certain laws, but these cannot be enforced by the compiler/type checker.
*: this container can also be a phantom type, e.g. data Proxy a = Proxy. In this case the name "container" is debatable, but I would still use it.
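A minimal sketch of that phantom-type case: Proxy stores no value of type a at all, yet the instance below is a perfectly lawful Functor.

data Proxy a = Proxy

instance Functor Proxy where
  fmap _ Proxy = Proxy  -- nothing to map over, so nothing changes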
Not everything in Haskell is a function. Non-functions include "Hello World", (3 :: Int, 'a'), and Just 'x'. Things whose types include => are not necessarily functions either, although GHC (generally) translates them into functions in its intermediate representation.
What is a functor? Given categories C and D, a functor f from C to D consists of a mapping fo from the objects of C into the objects of D and a mapping fm from the morphisms of C into the morphisms of D such that
If x and y are objects in C and p is a morphism from x to y then fm(p) is a morphism from fo(x) to fo(y).
If x is an object in C and id is the identity morphism from x to x then fm(id) is the identity morphism from fo(x) to fo(x).
If x, y, and z are objects in C, p is a morphism from y to z, and q is a morphism from x to y, then fm(p . q) = fm(p) . fm(q), where the dot represents morphism composition.
How does this relate to Haskell? We like to think of Haskell types and Haskell functions between them as forming a category. This is only approximately true, for various reasons, but it's a useful guide to intuition. The Functor class then represents injective endofunctors from this Hask category to itself. In particular, a Functor consists of a mapping (specifically a type constructor or partial application of type constructor) from types to types, along with a mapping (the fmap function) from functions to functions which obeys the above laws.

Is it possible to find most of the errors in programs written in Haskell by adding a new feature, something I call a "subtype system", to Haskell?

I have been learning Haskell for about a year, and came up with a question: could the talented compiler writers add a new feature, which I call "subsets", to enhance Haskell's type system so that many errors, including IOExceptions, are caught at the compilation stage? I'm a novice in the theory of types, so forgive my wishful thinking.
My initial purpose is not to ask how to solve the problem, but to find out whether a related solution already exists and, for some reason, has not been introduced to Haskell.
Haskell is nearly perfect in my mind except for a few little things, and I will express my wish for the Haskell of the future in the following lines.
The following is the major one:
If we could define a type which is just a "subset" of Int, assuming Haskell allowed us to do so, like below:
data IntNotZero = Int {except `0`} -- certainly not legal Haskell; I just assume Haskell lets us define a type as a "subset" of an existing type. I'm a novice in the theory of types, so forgive me.
Then if a function needs an Int parameter, a value of IntNotZero, which is just a "subset" of Int, could also be passed to it. But if a function needs an IntNotZero, then an Int would be illegal.
For example:
div' :: Int -> IntNotZero -> Int
div' = div

aFunction :: Int -> Int -> Int -- if we casually write this, the compiler will complain about a type conflict
aFunction = div'

aFunction2 :: Int -> Int -> Int -- we have to distinguish between `Int` and `IntNotZero`
aFunction2 m n = type n of -- an assumed syntax like `case ... of` to separate a "subset" from its complement; `case ... of` only works on different patterns
  IntNotZero -> m `div` n
  otherwise  -> m + n
For a more useful example:
data HandleNotClosed = Handle {not closed} -- this type means a Handle that has not been closed

hGetContents' :: HandleNotClosed -> IO String -- this function needs a HandleNotClosed, so a plain Handle causes a type conflict
hGetContents' = hGetContents

wrongMain = do
  ...
  h <- openFile "~/xxx/aa" ReadMode
  ... -- we do many tasks with h and may accidentally close it
  contents <- hGetContents' h -- this raises a type conflict, because h has type Handle, not HandleNotClosed
  ...
rightMain = do
  ...
  h <- openFile "~/xxx/aa" ReadMode
  ... -- we do many tasks with h and may accidentally close it
  type h of -- the new syntax
    HandleNotClosed -> do
      contents <- hGetContents' h
      ...
    otherwise -> ...
If we combined ordinary IO with Exception into such a new "superset", then we might get free of IOErrors.
What you want sounds similar to "refinement types" à la Liquid Haskell. This is an external tool that allows you to "refine" your Haskell types by specifying additional predicates that hold over your types. To check that these hold, you use an SMT solver to verify all the constraints have been satisfied.
The following code snippets are taken from their introductory blog post.
For example, you could write a type expressing that zero is exactly 0:
{-@ zero :: { v : Int | v = 0 } @-}
zero :: Int
zero = 0
You'll notice that the syntax for types looks just like set notation from math--you're defining a new type as a subset of the old one. In this case, you're defining the type of Ints that are equal to 0.
You can use this system to write a safe divide function:
{-@ divide :: Int -> { v : Int | v != 0 } -> Int @-}
divide :: Int -> Int -> Int
divide n 0 = error "Cannot divide by 0."
divide n d = n `div` d
When you actually try to compile this program, Liquid Haskell will see that having 0 as the denominator violates the predicate and so the call to error cannot happen. Moreover, when you try to use divide, it will check that the argument you pass in cannot be 0.
Of course, to make this useful, you have to be able to add information about the postconditions of your functions, not just the preconditions. You can just do this by refining the result type of the function; for example, you can imagine the following type for abs:
{-@ abs :: Int -> { v : Int | 0 <= v } @-}
Now the type system knows that the result of calling abs will never be negative, and it can take advantage of this fact when it needs to verify your program.
Like other people mentioned, using this sort of type system means you will have to have proofs in your code. The advantage of Liquid Haskell is that it uses an SMT solver to generate the proof for you automatically--you just have to write the assertions.
Liquid Haskell is still a research project, and it's limited by what can reasonably be done with an SMT solver. I haven't used it myself, but it looks really awesome and seems to be exactly what you want. One thing I'm not sure about is how it interacts with custom types and IO--something you might want to look into yourself.
The problem is that the compiler can't determine if something is of type IntNotZero. For example:

f :: Int -> IntNotZero
f x = someExtremelyComplexComputation

the compiler would have to prove that someExtremelyComplexComputation never produces a zero result, which is in general impossible.
One way to approach this in plain Haskell is to create a module that hides the representation of IntNotZero and publishes only a smart constructor, such as
module MyMod (IntNotZero(), intNotZero) where

newtype IntNotZero = IntNotZero Int

intNotZero :: Int -> IntNotZero
intNotZero 0 = error "Zero argument"
intNotZero x = IntNotZero x

-- etc
The obvious drawback is that the constraint is checked only at runtime.
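A common variation (my addition, not part of the answer above) is to report the failure in the type instead of crashing, which at least forces callers to handle the zero case:

mkIntNotZero :: Int -> Maybe IntNotZero
mkIntNotZero 0 = Nothing             -- the check still happens at runtime,
mkIntNotZero x = Just (IntNotZero x) -- but the possible failure is now visible in the type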
There are more complex systems than Haskell that use dependent types. These are types that depend on values, and they allow you to express just what you want. Unfortunately such systems are rather complex and not very widespread. If you're interested in the subject, I suggest you read Certified Programming with Dependent Types by Adam Chlipala.
We have types and we have values. A type is a set of (possibly infinitely many) values. String is a type, and all the possible string values are part of the String set. Now, the most important distinction between types and values is this: types are about compile time, and values are available at runtime.
Your first example asks for a new type which is a subtype (or subset) of Int, such that "the value of the Int can't be zero". In other words, you want a type that puts a restriction on a value. But types are a compile-time thing and values are a runtime thing, and a compile-time thing can't, in general, restrict a runtime thing, because the runtime thing is not there yet for the compile-time thing to consume.
Similarly, the handle value is a runtime thing, and only at runtime can you know whether it is closed or not; for that you have functions to check whether the handle is closed.
IO is all about runtime, and you can't use a type system to get free of IOErrors.
For modeling runtime failures you can use data types like Maybe or Either to indicate that a function may not be able to do what it was supposed to do, and since these data types are functors, monads, etc., you can easily compose them.
A type system is more of a structuring/design tool which makes things more explicit and clear and makes you think harder about your design, but it can't do what functions are supposed to do.
The film is: Typed Lambda Calculus. Lambda in the lead role, Typed in a supporting role :)
To expand on @augustss's comment, it's quite possible using dependently typed languages. Haskell is not exactly dependently typed, but it's close enough that dependent types can be "faked".
People today don't commonly use dependent types, for several reasons:
they're still very much research topics
they complicate and weaken type inference
they're somewhat more difficult to use in some circumstances
they can cause type-checking to take much longer or even fail to terminate, and
it's just more difficult to create production dependently-typed compilers.
That said, proponents of dependent typing find the error reduction you're looking for quite tenable. They also anticipate better safety and faster compiled binaries. Finally, dependently typed systems can be used as "proof systems" for mathematics. For a very current example, consider the "Homotopy Type Theory" Agda code, which formally proves many of the assertions of a new field of dependently typed mathematics.
For a taste of dependent typing you can read/explore either Pierce's Software Foundations or Chlipala's Certified Programming with Dependent Types.
With dependent types you might introduce a type like this
div :: Int -> (x :: Int) -> (Inequal x 0) -> Int
where the second argument introduces a dependency of the type on the actual value of the argument and the third argument demands a proof of the proposition that x /= 0. With such a proof in hand (so long as nobody cheats and uses undefined as that proof) it's possible to feel confident dividing by the second argument could never be undefined.
The challenge comes from creating (automatically or manually) a value to pass in as the third argument. For such a simple example it may not be too difficult, but it becomes possible to encode demands for proofs that are very difficult to generate, or even impossible.
As an example of another advantage, consider
fold1 :: (f :: a -> a -> a) -> Associative f -> [a] -> a
which, ignoring the second argument, is just a regular fold. The second argument could be a proof that f associates and thus allows us to use a tree-like merging algorithm with log complexity instead of linear. But, in order to "prove" Associative we need to embed a theory of application and association into our types and have the competency to create proofs within it.
Simpler invariants exist, such as the ever-present Vec type of "fixed-length vectors". These are lists where the length of the list (a value) is included in the type, allowing us to have nice things like
(++) :: Vec n a -> Vec m a -> Vec (n + m) a
which shows that, if we have some good theory of addition (or, more generally, of magmas, monoids, and groups) in our type system, it isn't too difficult to create a result type which holds information about the way lengths of Vecs interact under concatenation.
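Here is a sketch of such a Vec in GHC Haskell (my own version, with a hand-rolled Nat rather than any particular library):

{-# LANGUAGE DataKinds, GADTs, TypeFamilies #-}

data Nat = Z | S Nat

data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a

-- Type-level addition of lengths.
type family Add (n :: Nat) (m :: Nat) :: Nat where
  Add 'Z     m = m
  Add ('S n) m = 'S (Add n m)

-- The result type records that lengths add under concatenation.
append :: Vec n a -> Vec m a -> Vec (Add n m) a
append Nil         ys = ys
append (Cons x xs) ys = Cons x (append xs ys)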

What are some interesting uses of higher-order functions?

I'm currently doing a Functional Programming course and I'm quite amused by the concept of higher-order functions and functions as first class citizens. However, I can't yet think of many practically useful, conceptually amazing, or just plain interesting higher-order functions. (Besides the typical and rather dull map, filter, etc functions).
Do you know examples of such interesting functions?
Maybe functions that return functions, functions that return lists of functions (?), etc.
I'd appreciate examples in Haskell, which is the language I'm currently learning :)
Well, have you noticed that Haskell has no syntax for loops? No while, do-while, or for. That's because these are all just higher-order functions:
map :: (a -> b) -> [a] -> [b]
foldr :: (a -> b -> b) -> b -> [a] -> b
filter :: (a -> Bool) -> [a] -> [a]
unfoldr :: (b -> Maybe (a, b)) -> b -> [a]
iterate :: (a -> a) -> a -> [a]
Higher-order functions replace the need for baked in syntax in the language for control structures, meaning pretty much every Haskell program uses these functions -- making them quite useful!
They are the first step towards good abstraction because we can now plug custom behavior into a general purpose skeleton function.
In particular, monads are only possible because we can chain together, and manipulate functions, to create programs.
The fact is, life is pretty boring when it is first-order. Programming only gets interesting once you have higher-order.
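As a small illustration of the point about control structures, even a while loop is just an ordinary higher-order function (a sketch of my own, for monadic conditions and bodies):

import Control.Monad (when)

while :: Monad m => m Bool -> m () -> m ()
while cond body = do
  ok <- cond
  when ok (body >> while cond body)  -- re-check the condition after each pass

-- ghci> import Data.IORef
-- ghci> r <- newIORef (0 :: Int)
-- ghci> while ((< 3) <$> readIORef r) (modifyIORef r (+ 1))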
Many techniques used in OO programming are workarounds for the lack of higher order functions.
This includes a number of the design patterns that are ubiquitous in OO programming. For example, the visitor pattern is a rather complicated way to implement a fold. The workaround is to create a class with methods and pass an instance of that class in as an argument, as a substitute for passing in a function.
The strategy pattern is another example of a scheme that often passes objects as arguments as a substitute for what is actually intended, functions.
Similarly dependency injection often involves some clunky scheme to pass a proxy for functions when it would often be better to simply pass in the functions directly as arguments.
So my answer would be that higher-order functions are often used to perform the same kinds of tasks that OO programmers perform, but directly, and with a lot less boilerplate.
I really started to feel the power when I learned a function can be part of a data structure. Here is a "consumer monad" (technobabble: free monad over (i ->)).
data Coro i a
  = Return a
  | Consume (i -> Coro i a)
So a Coro can either instantly yield a value, or be another Coro depending on some input. For example, this is a Coro Int Int:
Consume $ \x -> Consume $ \y -> Consume $ \z -> Return (x+y+z)
This consumes three integer inputs and returns their sum. You could also have it behave differently according to the inputs:
sumStream :: Coro Int Int
sumStream = Consume (go 0)
  where
    go accum 0 = Return accum
    go accum n = Consume (\x -> go (accum + x) (n - 1))
This consumes an Int and then consumes that many more Ints before yielding their sum. This can be thought of as a function that takes arbitrarily many arguments, constructed without any language magic, just higher order functions.
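To actually run one of these you need a driver; here is a small one (my addition, not from the original answer) that feeds a Coro from a list, yielding Nothing if the inputs run out first:

feed :: Coro i a -> [i] -> Maybe a
feed (Return a)  _        = Just a         -- done: yield the result
feed (Consume k) (i : is) = feed (k i) is  -- hand over the next input
feed (Consume _) []       = Nothing        -- ran out of inputs

-- ghci> feed sumStream [3, 10, 20, 30]
-- Just 60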
Functions in data structures are a very powerful tool that was not part of my vocabulary before I started doing Haskell.
Check out the paper 'Even Higher-Order Functions for Parsing or Why Would Anyone Ever Want To Use a Sixth-Order Function?' by Chris Okasaki. It's written in ML, but the ideas apply equally to Haskell.
Joel Spolsky wrote a famous essay demonstrating how Map-Reduce works using Javascript's higher order functions. A must-read for anyone asking this question.
Higher-order functions are also required for currying, which Haskell uses everywhere. Essentially, a function taking two arguments is equivalent to a function taking one argument and returning another function taking one argument. When you see a type signature like this in Haskell:
f :: A -> B -> C
...the (->) can be read as right-associative, showing that this is in fact a higher-order function returning a function of type B -> C:
f :: A -> (B -> C)
A non-curried function of two arguments would instead have a type like this:
f' :: (A, B) -> C
So any time you use partial application in Haskell, you're working with higher-order functions.
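A quick illustration of both points at once, partial application producing a new function:

add :: Int -> Int -> Int
add a b = a + b

addFive :: Int -> Int
addFive = add 5  -- supply only the first argument

-- ghci> map addFive [1, 2, 3]
-- [6,7,8]
-- ghci> map (add 5) [1, 2, 3]  -- or inline, without naming it
-- [6,7,8]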
Martín Escardó provides an interesting example of a higher-order function:
equal :: ((Integer -> Bool) -> Int) -> ((Integer -> Bool) -> Int) -> Bool
Given two functionals f, g :: (Integer -> Bool) -> Int, then equal f g decides if f and g are (extensionally) equal or not, even though f and g don't have a finite domain. In fact, the codomain, Int, can be replaced by any type with a decidable equality.
The code Escardó gives is written in Haskell, but the same algorithm should work in any functional language.
You can use the same techniques that Escardó describes to compute definite integrals of any continuous function to arbitrary precision.
One interesting and slightly crazy thing you can do is simulate an object-oriented system using a function and storing data in the function's scope (i.e. in a closure). It's higher-order in the sense that the object generator function is a function which returns the object (another function).
My Haskell is rather rusty so I can't easily give you a Haskell example, but here's a simplified Clojure example which hopefully conveys the concept:
(defn make-object [initial-value]
  (let [data (atom {:value initial-value})]
    (fn [op & args]
      (case op
        :set (swap! data assoc :value (first args))
        :get (:value @data)))))
Usage:
(def a (make-object 10))
(a :get)
=> 10
(a :set 40)
(a :get)
=> 40
The same principle would work in Haskell (except that you'd probably need the set operation to return a new function, since Haskell is purely functional).
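Here is one purely functional sketch of that (my own transliteration, not from the Clojure above): the "object" is a record of closures, and set returns a new object.

data Obj = Obj { get :: Int, set :: Int -> Obj }

makeObject :: Int -> Obj
makeObject v = Obj { get = v, set = makeObject }  -- set just builds a fresh object

-- ghci> let a = makeObject 10
-- ghci> get a            -- 10
-- ghci> get (set a 40)   -- 40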
I'm a particular fan of higher-order memoization:
memo :: HasTrie t => (t -> a) -> (t -> a)
(Given any function, return a memoized version of that function. This is limited by the fact that the function's arguments must be encodable into a trie.)
This is from http://hackage.haskell.org/package/MemoTrie
There are several examples here: http://www.haskell.org/haskellwiki/Higher_order_function
I would also recommend this book: http://www.cs.nott.ac.uk/~gmh/book.html which is a great introduction to all of Haskell and covers higher order functions.
Higher-order functions often take an accumulator, so they can be used when forming a list of elements that conform to a given rule from a larger list.
Here's a small paraphrased code snippet:
rays :: ChessPieceType -> [[(Int, Int)]]
rays Bishop = do
  dx <- [1, -1]
  dy <- [1, -1]
  return $ iterate (addPos (dx, dy)) (dx, dy)
... -- Other piece types

-- takeUntilIncluding is an inclusive version of takeUntil
takeUntilIncluding :: (a -> Bool) -> [a] -> [a]

possibleMoves board piece = do
  relRay <- rays (pieceType piece)
  let ray = map (addPos src) relRay
  takeUntilIncluding (not . isNothing . pieceAt board)
    (takeWhile notBlocked ray)
  where
    notBlocked pos =
      inBoard pos &&
      all isOtherSide (pieceAt board pos)
    isOtherSide = (/= pieceSide piece) . pieceSide
This uses several "higher order" functions:
iterate :: (a -> a) -> a -> [a]
takeUntilIncluding -- not a standard function
takeWhile :: (a -> Bool) -> [a] -> [a]
all :: (a -> Bool) -> [a] -> Bool
map :: (a -> b) -> [a] -> [b]
(.) :: (b -> c) -> (a -> b) -> a -> c
(>>=) :: Monad m => m a -> (a -> m b) -> m b
(.) is the . operator, and (>>=) is the do-notation "line break operator".
When programming in Haskell you just use them. It's when you don't have higher-order functions that you realize just how incredibly useful they were.
Here's a pattern that I haven't seen anyone else mention yet that really surprised me the first time I learned about it. Consider a statistics package where you have a list of samples as your input and you want to calculate a bunch of different statistics on them (there are also plenty of other ways to motivate this). The bottom line is that you have a list of functions that you want to run. How do you run them all?
statFuncs :: [ [Double] -> Double ]
statFuncs = [minimum, maximum, mean, median, mode, stddev]

runWith funcs samples = map ($ samples) funcs
There's all kinds of higher order goodness going on here, some of which has been mentioned in other answers. But I want to point out the '$' function. When I first saw this use of '$', I was blown away. Before that I hadn't considered it to be very useful other than as a convenient replacement for parentheses...but this was almost magical...
One thing that's kind of fun, if not particularly practical, is Church Numerals. It's a way of representing integers using nothing but functions. Crazy, I know. <shamelessPlug>Here's an implementation in JavaScript that I made. It might be easier to understand than a Lisp/Haskell implementation. (But probably not, to be honest. JavaScript wasn't really meant for this kind of thing.)</shamelessPlug>
It’s been mentioned that JavaScript supports certain higher-order functions, including in the essay from Joel Spolsky. Mark Jason Dominus wrote an entire book called Higher-Order Perl; the book’s source is available for free download in a variety of fine formats, including PDF.
Ever since at least Perl 3, Perl has supported functionality more reminiscent of Lisp than of C, but it wasn’t until Perl 5 that full support for closures and all that follows from that became available. And one of the first Perl 6 implementations was written in Haskell, which has had a lot of influence on how that language’s design has progressed.
Examples of functional programming approaches in Perl show up in everyday programming, especially with map and grep:
@ARGV = map { /\.gz$/ ? "gzip -dc < $_ |" : $_ } @ARGV;
@unempty = grep { defined && length } @many;
Since sort also admits a closure, the map/sort/map pattern is super common:
@txtfiles = map { $_->[1] }
            sort {
                $b->[0] <=> $a->[0]
                    ||
                lc $a->[1] cmp lc $b->[1]
                    ||
                $b->[1] cmp $a->[1]
            }
            map  { [ -s => $_ ] }
            grep { -f && -T }
            glob("/etc/*");
or
@sorted_lines = map { $_->[0] }
                sort {
                    $a->[4] <=> $b->[4]
                        ||
                    $a->[-1] cmp $b->[-1]
                        ||
                    $a->[3] <=> $b->[3]
                        ||
                    ...
                }
                map { [$_ => reverse split /:/] } @lines;
The reduce function makes list hackery easy without looping:
$sum = reduce { $a + $b } @numbers;
$max = reduce { $a > $b ? $a : $b } $MININT, @numbers;
There’s a lot more than this, but this is just a taste. Closures make it easy to create function generators, writing your own higher-order functions, not just using the builtins. In fact, one of the more common exception models,
try {
    something();
} catch {
    oh_drat();
};
is not a built-in. It is, however, almost trivially defined with try being a function that takes two arguments: a closure in the first arg and a function that takes a closure in the second one.
Perl 5 doesn’t have currying built in, although there is a module for that. Perl 6, though, has currying and first-class continuations built right into it, plus a lot more.

Is there a relationship between calling a function and instantiating an object in pure functional languages?

Imagine a simple (made up) language where functions look like:
function f(a, b) = c + 42
    where c = a * b
(Say it's a subset of Lisp that includes 'defun' and 'let'.)
Also imagine that it includes immutable objects that look like:
struct s(a, b, c = a * b)
Again analogizing to Lisp (this time a superset), say a struct definition like that would generate functions for:
make-s(a, b)
s-a(s)
s-b(s)
s-c(s)
Now, given the simple set up, it seems clear that there is a lot of similarity between what happens behind the scenes when you either call 'f' or 'make-s'. Once 'a' and 'b' are supplied at call/instantiate time, there is enough information to compute 'c'.
You could think of instantiating a struct as being like calling a function and then storing the resulting symbolic environment for later use when the generated accessor functions are called. Or you could think of evaluating a function as being like creating a hidden struct and then using it as the symbolic environment with which to evaluate the final result expression.
Is my toy model so oversimplified that it's useless? Or is it actually a helpful way to think about how real languages work? Are there any real languages/implementations that someone without a CS background but with an interest in programming languages (i.e. me) should learn more about in order to explore this concept?
Thanks.
EDIT: Thanks for the answers so far. To elaborate a little, I guess what I'm wondering is if there are any real languages where it's the case that people learning the language are told e.g. "you should think of objects as being essentially closures". Or if there are any real language implementations where it's the case that instantiating an object and calling a function actually share some common (non-trivial, i.e. not just library calls) code or data structures.
Does the analogy I'm making, which I know others have made before, go any deeper than mere analogy in any real situations?
You can't get much purer than lambda calculus: http://en.wikipedia.org/wiki/Lambda_calculus. Lambda calculus is in fact so pure, it only has functions!
A standard way of implementing a pair in lambda calculus is like so:
pair = fn a: fn b: fn x: x a b
first = fn a: fn b: a
second = fn a: fn b: b
So pair a b, what you might call a "struct", is actually a function (fn x: x a b). But it's a special type of function called a closure. A closure is essentially a function (fn x: x a b) plus values for all of the "free" variables (in this case, a and b).
So yes, instantiating a "struct" is like calling a function, but more importantly, the actual "struct" itself is like a special type of function (a closure).
If you think about how you would implement a lambda calculus interpreter, you can see the symmetry from the other side: you could implement a closure as an expression plus a struct containing the values of all the free variables.
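The same encoding transliterates almost directly into Haskell (a sketch of my own; the pair is a function waiting for a selector):

type Pair a b r = (a -> b -> r) -> r

pair :: a -> b -> Pair a b r
pair a b = \x -> x a b

first :: Pair a b a -> a
first p = p (\a _ -> a)  -- select the first component

second :: Pair a b b -> b
second p = p (\_ b -> b)  -- select the second component

-- ghci> first (pair 1 'x')    -- 1
-- ghci> second (pair 1 'x')   -- 'x'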
Sorry if this is all obvious and you just wanted some real world example...
Both f and make-s are functions, but the resemblance doesn't go much further. Applying f calls the function and executes its code; applying make-s creates a structure.
In most language implementations and modelizations, make-s is a different kind of object from f: f is a closure, whereas make-s is a constructor (in the functional languages and logic meaning, which is close to the object oriented languages meaning).
If you like to think in an object-oriented way, both f and make-s have an apply method, but they have completely different implementations of this method.
If you like to think in terms of the underlying logic, f and make-s have types built on the same type constructor (the function type constructor), but they are constructed in different ways and have different destruction rules (function application vs. constructor application).
If you'd like to understand that last paragraph, I recommend Types and Programming Languages by Benjamin C. Pierce. Structures are discussed in §11.8.
Is my toy model so oversimplified that it's useless?
Essentially, yes. Your simplified model basically boils down to saying that each of these operations involves performing a computation and putting the result somewhere. But that is so general, it covers anything that a computer does. If you didn't perform a computation, you wouldn't be doing anything useful. If you didn't put the result somewhere, you would have done work for nothing as you have no way to get the result. So anything useful you do with a computer, from adding two registers together, to fetching a web page, could be modeled as performing a computation and putting the result somewhere that it can be accessed later.
There is a relationship between objects and closures. http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/msg03277.html
The following creates what some might call a function, and others might call an object:
Taken from SICP ( http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-21.html )
(define (make-account balance)
  (define (withdraw amount)
    (if (>= balance amount)
        (begin (set! balance (- balance amount))
               balance)
        "Insufficient funds"))
  (define (deposit amount)
    (set! balance (+ balance amount))
    balance)
  (define (dispatch m)
    (cond ((eq? m 'withdraw) withdraw)
          ((eq? m 'deposit) deposit)
          (else (error "Unknown request -- MAKE-ACCOUNT"
                       m))))
  dispatch)

Uses for Haskell id function

What are the uses of the id function in Haskell?
It's useful as an argument to higher order functions (functions which take functions as arguments), where you want some particular value left unchanged.
Example 1: Leave a value alone if it is in a Just, otherwise, return a default of 7.
Prelude Data.Maybe> :t maybe
maybe :: b -> (a -> b) -> Maybe a -> b
Prelude Data.Maybe> maybe 7 id (Just 2)
2
Example 2: building up a function via a fold:
Prelude Data.Maybe> :t foldr (.) id [(+2), (*7)]
foldr (.) id [(+2), (*7)] :: (Num a) => a -> a
Prelude Data.Maybe> let f = foldr (.) id [(+2), (*7)]
Prelude Data.Maybe> f 7
51
We built a new function f by folding a list of functions together with (.), using id as the base case.
Example 3: the base case for functions as monoids (simplified).
instance Monoid (a -> a) where
  mempty = id
  f `mappend` g = f . g
Similar to our example with fold, functions can be treated as concatenable values, with id serving for the empty case, and (.) as append.
Example 4: a trivial hash function.
Data.HashTable> h <- new (==) id :: IO (HashTable Data.Int.Int32 Int)
Data.HashTable> insert h 7 2
Data.HashTable> Data.HashTable.lookup h 7
Just 2
Hashtables require a hashing function. But what if your key is already hashed? Then pass the id function, to fill in as your hashing method, with zero performance overhead.
If you manipulate numbers, particularly with addition and multiplication, you'll have noticed the usefulness of 0 and 1. Likewise, if you manipulate lists, the empty list turns out to be quite handy. Similarly, if you manipulate functions (very common in functional programming), you'll come to notice the same sort of usefulness of id.
In functional languages, functions are first-class values that you can pass as parameters. So one of the most common uses of id comes up when you pass a function as a parameter to another function to tell it what to do. One of the choices of what to do is likely to be "just leave it alone": in that case, you pass id as the parameter.
Suppose you're searching for some kind of solution to a puzzle where you make a move at each turn. You start with a candidate position pos. At each stage there is a list of possible transformations you could make to pos (e.g. sliding a piece in the puzzle). In a functional language it's natural to represent transformations as functions, so now you can make a list of moves using a list of functions. If "doing nothing" is a legal move in this puzzle, then you would represent that with id. If you didn't do that, you'd need to handle "doing nothing" as a special case that works differently from "doing something". By using id you can handle all cases uniformly in a single list.
This is probably why almost all uses of id exist: to handle "doing nothing" uniformly with "doing something".
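A toy version of that puzzle idea (the names are mine):

type Pos = Int

moves :: [Pos -> Pos]
moves = [id, (+ 1), subtract 1]  -- "stay", "slide right", "slide left"

-- All positions reachable in one turn, with "do nothing" handled
-- uniformly alongside the real moves:
-- ghci> map ($ 5) moves
-- [5,6,4]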
For a different sort of answer:
I'll often do this when chaining multiple functions via composition:
foo = id
    . bar
    . baz
    . etc
over
foo = bar
    . baz
    . etc
It keeps things easier to edit. One can do similar things with other 'zero' elements, such as
foo = return
  >>= bar
  >>= baz

foos = []
  ++ bars
  ++ bazs
Since we are finding nice applications of id: here, have a palindrome :)
import Control.Applicative
pal :: [a] -> [a]
pal = (++) <$> id <*> reverse
Imagine you are a computer, i.e. you can execute a sequence of steps. If I want you to stay in your current state, but I always have to give you an instruction (I cannot just stay silent and let time pass), what instruction do I give you? id is the function created for that: it returns its argument unchanged (in the case of the computer, the argument would be its state) and gives that operation a name.
That necessity appears only when you have higher-order functions, when you operate on functions without considering what's inside them, which forces you to represent symbolically even the "do nothing" implementation. Analogously, 0, seen as a quantity of something, is a symbol for the absence of quantity. In algebra, 0 and id are considered the neutral elements of the operations + and ∘ (function composition) respectively, or more formally:
for all x of type number:
0 + x = x
x + 0 = x
for all f of type function:
id ∘ f = f
f ∘ id = f
I can also help improve your golf score. Instead of using ($), you can save a single character by using id.
e.g.
zipWith id [(+1), succ] [2,3,4]
An interesting, more than useful result.
Whenever you need to have a function somewhere but want it to do more than just hold the place (the way 'undefined' would).
It's also useful, as (soon-to-be) Dr. Stewart mentioned above, for when you need to pass a function as an argument to another function:
join = (>>= id)
or as the result of a function:
let f = id in f 10
(presumably, you will edit the above function later to do something more "interesting"... ;)
As others have mentioned, id is a wonderful place-holder for when you need a function somewhere.