Generate a powerset with the help of a binary representation - binary

I know that "a powerset is simply any number between 0 and 2^N-1 where N is number of set members and one in binary presentation denotes presence of corresponding member".
(Hynek -Pichi- Vychodil)
I would like to generate a powerset using this mapping from the binary representation to the actual set elements.
How can I do this with Erlang?
I have tried to modify this, but with no success.
UPD: My goal is to write an iterative algorithm that generates a powerset of a set without keeping a stack. I tend to think that binary representation could help me with that.
Here is the successful solution in Ruby, but I need to write it in Erlang.
UPD2: Here is the solution in pseudocode, I would like to make something similar in Erlang.

First of all, I would note that with Erlang a recursive solution does not necessarily imply it will consume extra stack. When a function is tail-recursive (i.e., the last thing it does is the recursive call), the compiler rewrites it into modifying the parameters followed by a jump to the beginning of the function. This is fairly standard for functional languages.
To generate a list of all the numbers A to B, use the library function lists:seq(A, B).
To translate a list of values (such as the list from 0 to 2^N-1) into another list of values (such as the set generated from its binary representation), use lists:map or a list comprehension.
Instead of splitting a number into its binary representation, you might want to consider turning that around and checking whether the corresponding bit is set in each M value (in 0 to 2^N-1) by generating a list of power-of-2-bitmasks. Then, you can do a binary AND to see if the bit is set.
Putting all of that together, you get a solution such as:
generate_powerset(List) ->
    % Do some pre-processing of the list to help with checks later.
    % This involves modifying the list to combine the element with
    % the bitmask it will need later on, such as:
    % [a, b, c, d, e] ==> [{1,a}, {2,b}, {4,c}, {8,d}, {16,e}]
    PowersOf2 = [1 bsl (X-1) || X <- lists:seq(1, length(List))],
    ListWithMasks = lists:zip(PowersOf2, List),
    % Generate the list from 0 to 2^N - 1
    AllMs = lists:seq(0, (1 bsl length(List)) - 1),
    % For each value, generate the corresponding subset
    lists:map(fun (M) -> generate_subset(M, ListWithMasks) end, AllMs).
    % or, using a list comprehension:
    % [generate_subset(M, ListWithMasks) || M <- AllMs].

generate_subset(M, ListWithMasks) ->
    % List comprehension: choose each element where the Mask value has
    % the corresponding bit set in M.
    [Element || {Mask, Element} <- ListWithMasks, M band Mask =/= 0].
However, you can also achieve the same thing using tail recursion without consuming stack space. It also doesn't need to generate or keep around the list from 0 to 2^N-1.
generate_powerset(List) ->
    % same preliminary steps as above...
    PowersOf2 = [1 bsl (X-1) || X <- lists:seq(1, length(List))],
    ListWithMasks = lists:zip(PowersOf2, List),
    % call the tail-recursive helper function -- it can have the same name
    % as long as it has a different arity.
    generate_powerset(ListWithMasks, (1 bsl length(List)) - 1, []).

generate_powerset(_ListWithMasks, -1, Acc) -> Acc;
generate_powerset(ListWithMasks, M, Acc) ->
    generate_powerset(ListWithMasks, M-1,
                      [generate_subset(M, ListWithMasks) | Acc]).

% same as above...
generate_subset(M, ListWithMasks) ->
    [Element || {Mask, Element} <- ListWithMasks, M band Mask =/= 0].
Note that when generating the list of subsets, you'll want to put new elements at the head of the list. Lists are singly-linked and immutable, so if you want to put an element anywhere but the beginning, it has to update the "next" pointers, which causes the list to be copied. That's why the helper function puts the Acc list at the tail instead of doing Acc ++ [generate_subset(...)]. In this case, since we're counting down instead of up, we're already going backwards, so it ends up coming out in the same order.
So, in conclusion,
Looping in Erlang is idiomatically done via a tail recursive function or using a variation of lists:map.
In many (most?) functional languages, including Erlang, tail recursion does not consume extra stack space since it is implemented using jumps.
List construction is typically done backwards (i.e., [NewElement | ExistingList]) for efficiency reasons.
You generally don't want to find the Nth item in a list (using lists:nth) since lists are singly-linked: it would have to iterate the list over and over again. Instead, find a way to iterate the list once, such as how I pre-processed the bit masks above.
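For readers who want to see the same binary-representation mapping outside Erlang, here is a minimal sketch in Haskell (a comparison only, not part of the Erlang answer); it maps each number M in 0..2^N-1 to the subset whose elements correspond to the set bits of M:

import Data.Bits (shiftL, testBit)

-- powerset via the bitmask mapping: subset number m keeps element i
-- exactly when bit i of m is set.
powerset :: [a] -> [[a]]
powerset xs =
    [ [ x | (i, x) <- zip [0 ..] xs, testBit m i ]
    | m <- [0 .. (1 `shiftL` length xs) - 1] :: [Int] ]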

Related

Haskell Syntax with functions, specifically findMin function

I'm struggling with Haskell's syntax. It's very simple, but I'm stuck.
I'm trying to write a findMin function that takes a list and finds the minimum. Here's my code; I have tried so many syntactical things that I'm up for any help I can get.
findMin [] = [0]
findMin list = if any < head list then findMin(tail) else take 1
And I get all sorts of type errors. What is going wrong??
(If it helps at all, I have a background in object-oriented programming.)
I see that you've got things figured out in the comments, but I'll add some things here that will hopefully help. I should also quickly mention that Haskell already has a minimum function, just in case anyone stumbles upon this who isn't just trying to learn the language and actually needs the function for something.
First of all, let's talk about types. I would normally expect a findMin style of function to return the minimum value rather than that value inside a list, so the type will be:
findMin :: (Num a, Ord a) => [a] -> a
The things before => add context to the function type. Ord a restricts the things that a can be to only things that have an order (otherwise how can we find a minimum?). Secondly, Num a forces a to be a number; this is necessary because you specified that the case for the empty list should be 0.
I'll explain two other ways to write the findMin function, trying to make them more concise than your definition (one of the benefits of Haskell is how concise it can be, and I also find it helps when learning to see the multiple possibilities). The first will use recursion and guards, the second a list comprehension.
We can't do much with findMin [] = 0, so we'll move on to the lists with stuff in them.
We need to be careful with a recursive definition, because eventually we would evaluate findMin [] and always get 0, so we need to stop the recursion before that by defining a case for a single value:
findMin [x] = x
When passing a list as an argument to a function you can separate out its elements and give them each a name so (x:xs) means a value x is the first element followed by a list of elements xs.
For this definition we will define the first two elements on their own followed by the rest of the elements:
findMin (x:y:xs)
    | x < y     = findMin (x:xs)
    | otherwise = findMin (y:xs)
The guards allow us to have multiple definitions for the function depending on a condition. If x < y we want to get rid of y as it cannot be the minimum so we find the minimum of x and the remaining elements, xs. If x is not smaller than y then the minimum value is either y or one of the values in xs.
The second way to define this function is using a list comprehension (this is my favourite as it is particularly concise).
We aren't using recursion, so we don't need the case for one element: we can keep our definition for an empty list and go straight to any list with elements:
findMin xs = head [x | x <- xs, all (>= x) xs]
So what's going on here? [x | x <- xs] creates a list of x values where x ranges over all the elements of xs. We then add a condition to say we only want those values if all (>= x) xs holds, meaning all elements of xs are greater than or equal to x.
This results in a list of the minimum elements. This might have one element if the minimum occurs once or it may have several if it occurs multiple times. Either way they are all the same so we just take the first using head.
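For reference, here are the pieces above assembled into complete definitions (both keep the 0-for-the-empty-list convention from the question; findMin' is just a distinct name for the comprehension variant):

findMin :: (Num a, Ord a) => [a] -> a
findMin []  = 0
findMin [x] = x
findMin (x:y:xs)
    | x < y     = findMin (x:xs)
    | otherwise = findMin (y:xs)

-- the list-comprehension variant
findMin' :: (Num a, Ord a) => [a] -> a
findMin' [] = 0
findMin' xs = head [x | x <- xs, all (>= x) xs]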
Hope this helps and hope you have fun learning Haskell. Feel free to ask if you have any questions :)
In GHCi, this function seems to do the trick:
let findMin x = if length x > 1 then min (head x) (findMin (tail x)) else head x
I'm learning as I try to answer some questions here, so any feedback would be appreciated.

How to write own List.map function in F#

I have to write my own List.map function, using 'for elem in list' and also tail/non-tail recursion. I have been looking all around Google for some tips, but didn't find much. I am used to Python, and it's pretty hard not to think about using its methods, but of course these languages are very different from each other.
For the first one I started with something like:
let myMapFun funcx list =
for elem in list do
funcx elem::[]
Tail recursive:
let rec myMapFun2 f list =
let cons head tail = head :: tail
But anyway, I know it's wrong; it feels wrong. I think I am not yet used to F#'s structure. Can anyone give me a hand?
Thanks.
As a general rule, when you're working through a list in F#, you want to write a recursive function that does something to the head of the list, then calls itself on the tail of the list. Like this:
// NON-tail-recursive version
let rec myListFun list =
    match list with
    | [] -> valueForEmptyList // Decision point 1
    | head :: tail ->
        let newHead = doSomethingWith head // Decision point 2
        newHead :: (myListFun tail) // Return value might be different, too
There are two decisions you need to make: What do I do if the list is empty? And what do I do with each item in the list? For example, if the thing you're wanting to do is to count the number of items in the list, then your "value for empty list" is probably 0, and the thing you'll do with each item is to add 1 to the length. I.e.,
// NON-tail-recursive version of List.length
let rec myListLength list =
    match list with
    | [] -> 0 // Empty lists have length 0
    | head :: tail ->
        let headLength = 1 // The head is one item, so its "length" is 1
        headLength + (myListLength tail)
But this function has a problem, because it will add a new recursive call to the stack for each item in the list. If the list is too long, the stack will overflow. The general pattern, when you're faced with recursive calls that aren't tail-recursive (like this one), is to change your recursive function around so that it takes an extra parameter that will be an "accumulator". So instead of passing a result back from the recursive function and THEN doing a calculation, you perform the calculation on the "accumulator" value, and then pass the new accumulator value to the recursive function in a truly tail-recursive call. Here's what that looks like for the myListLength function:
let rec myListLength acc list =
    match list with
    | [] -> acc // Empty list means I've finished, so return the accumulated number
    | head :: tail ->
        let headLength = 1 // The head is one item, so its "length" is 1
        myListLength (acc + headLength) tail
Now you'd call this as myListLength 0 list. And since that's a bit annoying, you can "hide" the accumulator by making it a parameter in an "inner" function, whose definition is hidden inside myListLength. Like this:
let myListLength list =
    let rec innerFun acc list =
        match list with
        | [] -> acc // Empty list means I've finished, so return the accumulated number
        | head :: tail ->
            let headLength = 1 // The head is one item, so its "length" is 1
            innerFun (acc + headLength) tail
    innerFun 0 list
Notice how myListLength is no longer recursive, and it only takes one parameter, the list whose length you want to count.
Now go back and look at the generic, NON-tail-recursive myListFun that I presented in the first part of my answer. See how it corresponds to the myListLength function? Well, its tail-recursive version also corresponds well to the tail-recursive version of myListLength:
let myListFun list =
    let rec innerFun acc list =
        match list with
        | [] -> acc // Decision point 1: return accumulated value, or do something more?
        | head :: tail ->
            let newHead = doSomethingWith head
            innerFun (newHead :: acc) tail
    innerFun [] list
... Except that if you write your map function this way, you'll notice that it actually comes out reversed. The solution is to change innerFun [] list in the last line to innerFun [] list |> List.rev, but the reason why it comes out reversed is something that you'll benefit from working out for yourself, so I won't tell you unless you ask for help.
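For comparison (not the F# answer itself), the same accumulate-then-reverse shape can be sketched in Haskell; the name myMap here is made up for the example:

-- a sketch of the accumulate-then-reverse pattern
myMap :: (a -> b) -> [a] -> [b]
myMap f = go []
  where
    go acc []     = reverse acc        -- the accumulator is built backwards, so reverse it
    go acc (x:xs) = go (f x : acc) xs  -- tail call: cons the new head onto the accumulator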
And now, by the way, you have the general pattern for doing all sorts of things with lists, recursively. Writing List.map should be easy. For an extra challenge, try writing List.filter next: it will use the same pattern.
let myMapFun funcx list =
    [for elem in list -> funcx elem]

myMapFun ((+)1) [1;2;3]

let rec myMapFun2 f = function       // [1]
    | [] -> []                       // [2]
    | h::t -> (f h)::myMapFun2 f t   // [3]

myMapFun2 ((+)1) [1;2;3]             // [4]

let myMapFun3 f xs =                 // [6]
    let rec g f xs =                 // [7]
        match xs with                // [1]
        | [] -> []                   // [2]
        | h::t -> (f h)::g f t       // [3]
    g f xs

myMapFun3 ((+)1) [1;2;3]             // [4]
// [5] see the reference notes below for a comment on value vs variable.
// [8] see the reference notes below for a comment on the top-down scoping of F#.
(*
Reference:
convention: I've used a, b, c, etc. to refer to distinct aspects of each numbered reference
[1] roughly, function is equivalent to the use of match on the function's last argument.
It's also idiomatic in OCaml, where function is a standard shorthand for an immediate
match. With function, and the style that is used here, we can shave
off a whole two lines from our definitions(!) Therefore, readability is increased(!)
If you end up writing many functions scrolling less to be on top
of the breadth of what is happening is more desirable than the
niceties of using match. "Match" can be
a more "rounded" form. Sometimes I've found a glitch with function.
I tend to change to match, when readability is better served.
It's a style thing.
[1b] when I discovered "function" in the F# compiler source code + its prevalence in OCaml,
I was a little annoyed that it took so long to discover it + that it is deemed such an
underground, confusing and divisive tool by our esteemed F# brethren.
[1c] "function" is arguably more flexible. You can also slot it into pipelines really
quickly. Whereas match requires assignment or a variable name (perhaps an argument).
If you are into pipelines |> and <| (and cousins such as ||> etc), then you should
check it out.
[1d] on style, typically, (fun x->x) is the standard way, however, if you've ever
appreciated the way you can slot in functions from Seq, List, and Module, then it's
nice to skip the extra baggage. For me, function falls into this category.
[2a] "[]" is used in two ways, here. How annoying. Once it grows on you, it's cool.
Firstly [] is an empty list. Visually, it's a list without the stuff in it
(like [1;2;3], etc). Left of the "->" we're in the "pattern" part of the pattern
matching expression. So, when the input to the function (let's call it "x" to stay
in tune with our earliest memories of maths or "math" classes) is an empty list,
follow the arrow and do the statement on the right.
Incidentally, sometimes it's really nice to skip the definition of x altogether.
Behold, the built-in "id" identity function (essentially fun x -> x -- i.e. do nothing).
It's more useful than you realise, at first. I digress.
[2b] "[]" on the right of [] means return an empty list from this code block. Match or
function symantics being the expression "block" in this case. Block being the same
meaning as you'll have come across in other languages. The difference in F#, being
that there's *always* a return from any expression unless you return unit which is
defined as (). I digress, again.
[3a] "::" is the "cons" operator. Its history goes back a long way. F# really only
implements two such operators (the other being append #). These operators are
list specific.
[3b] on the lhs of "->" we have a pattern match on a list. So the first element
on the lhs of :: goes into the value (h)ead, and the rest of the list, the tail,
goes into the (t)ail value.
[3c] Head/tail use is very specific in F#. Another language that I like a lot, has
a nicer terminology for obviously interesting parts of a list, but, you know, it's
nice to go with an opinionated simplification, sometimes.
[3d] on the rhs of the "->", the "::", surprisingly, means join a single element
to a list. In this case, the result of the function f or funcx.
[3e] when we are talking about lists, specifically, we're talking about a linked
structure with pointers behind the scenes. All we have the power to do is to
follow the cotton thread of pointers from structure to structure. So, with a
simple "match" based device, we abstract away from the messy .Value and .Next()
operations you may have to use in other languages (or which get hidden inside
an enumerator -- it'd be nice to have these operators for Seq, too, but
a Sequence could be an infinite sequence, on purpose, so these decisions for
List make sense). It's all about increasing readability.
[3f] A list of "what"? What it contains is typically encoded into 't (or <T> in C#,
which is also valid in F#). Idiomatically, you tend to see 'someLowerCaseLetter in
F# a lot more. What can be nice is to pair such definitions (x:'x).
i.e. the value x which is of type 'x.
[4a] more verbosely, ((+)1) is equivalent to (fun x -> x + 1). We rely on partial
application, here. Although "+" is an operator, it is first and foremost also a
function... and functions... you get the picture.
[4b] partial application is a topic that is more useful than it sounds, too.
[5] value Vs variable. As an often stated goal, we aim to have values that
never ever change, because, when a value doesn't change, it's easier to
think and reason about. There are nice side-effects that flow from that
choice, that mean that threading and locking are a lot simpler. Now we
get into that "stateless" topic. More often than not, a value is all you
need. So, "value" it is for our cannon regarding sensible defaults.
A variable, implies, that it can be changed. Not strictly true, but in
the programming world this is the additional meaning that has been strapped
on to the notion of variable. Upon hearing the word variable, one's mind might
start jumping through the different kinds of variable "hoops". It's more stuff
that you need to hold in the context of your mind. Apparently, western people
are only able to hold about 7 things in their minds at once. Introduce mutability
and value in the same context, and there goes two slots. I'm told that more uniform
languages like Chinese allow you to hold up to 10 things in your mind at once.
I can't verify the latter. I have a language with warlike Saxon and elegant
French blended together to use (which I love for other reasons).
Anyway, when I hear "value", I feel peace. That can only mean one thing.
[6] this variation really only achieves hiding of the recursive function. Perhaps
it's nice to be a little terser inside the function, and more descriptive to
the outside world. Long names lead to bloat. Sometimes, it's just simpler.
[7a] type inference and recursion. F# is one of the nicest
languages that I've come across for elegantly dealing with recursive algorithms.
Initially, it's confusing, but once you get past that, it becomes second nature.
[7b] If you are interested in solving real problems, forget about "tail"
recursion, for now. It's a cool compiler trick. When you get performance conscious,
or on a rainy day, it
might be a useful thing to look up.
Look it up by all means if you are curious, though. If you are writing recursive
stuff, just be aware that the compiler geeks have you covered (sometimes), and
that horrible "recursive" performance hole (that is often associated with
recursive techniques -- ie. perhaps avoid at all costs in ancient programming
history) may just be turned into a regular loop for you, gratis. This auto-to-loop
conversion has always been a compiler geek promise. In F#, though, you can rely on it more:
it's more predictable as to when "tail recursion" kicks in. I digress.
Step 1 correctly and elegantly solve useful problems.
Step 2 (or 3, etc) work out why the silicon is getting hot.
NB. depending on the context, performance may be an equally important thing
to think about. Many don't have that problem. Bear in mind that by writing
functionally, you are structuring solutions in such a way that they are
more easily streamlineable (in the cycling sense). So... it's okay not to
get caught in the weeds. Probably best for another discussion.
[8] on the way the file system is top down and the way code is top down.
From day one we are encouraged in an opinionated way (some might say coerced) into
writing code that has flow + code that is readable and easier to navigate.
There are some nice side-effects from this friendly coercion.

Trying to understand this block of code in OCaml

I am trying to understand what this block of code is doing:
let rec size x =
  match x with
    [] -> 0
  | _::tail -> 1 + (size tail) ;;
I know that this expression computes the size of a list, but I don't understand where in the code it reduces the list one by one. For example, I think it needs to go from [1;2;3] to [2;3] to [3], but where or how does it do it? I don't get it.
Thanks.
A list in OCaml is built recursively using the empty list ([]) and the cons (::) constructor. So [1; 2; 3] is syntactic sugar for 1::2::3::[].
The size is computed by reducing x in each step using the pattern _::tail (_ denotes that we ignore the head of the list) and calling the same function size on tail. The function eventually terminates when the list is empty and the pattern of [] succeeds.
Here is a short illustration of how size [1; 2; 3] is computed:
size (1::2::3::[])
~> 1 + size (2::3::[])      (* match the case of _::tail *)
~> 1 + 1 + size (3::[])     (* match the case of _::tail *)
~> 1 + 1 + 1 + size []      (* match the case of _::tail *)
~> 1 + 1 + 1 + 0            (* match the case of [] *)
~> 3
As a side note, you can see from the figure that a lot of information needs to be stored in the stack to compute size. That means your function could result in a stack overflow error if the input list is long, but it is another story.
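If you are curious about that other story, the usual fix is to carry an accumulator so the recursive call is the last thing the function does. Here is a minimal sketch of that idea in Haskell (for comparison only; the name size' and the use of seq are mine, not from the question):

size' :: [a] -> Int
size' = go 0
  where
    -- the count so far is passed along, so nothing is left to do after the recursive call
    go acc []       = acc
    go acc (_:rest) = let acc' = acc + 1 in acc' `seq` go acc' rest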
Actually, this piece of code uses the power of pattern matching to compute the size of the list.
The match means you try to match x against one of the following patterns.
The first one, [], means that your list is empty, so its size is 0. The second one, _::tail, means you have one element (*), followed by the rest of the list, so basically the size is 1 + size (rest of the list).
(*) The underscore means you don't care about the value of this element.
Any time you match against a list, you can match a pattern of the form head::tail where head will get the value of the first element and tail will get the remainder. This pattern will match any non-empty list because tail can be empty, but head must exist.
Second, in any pattern you're matching in OCaml, you can replace a variable with an underscore to say "match something here, but I'm not going to actually use it, so I'm not giving it a name". So, in this program, instead of writing head::tail -> 1 + (size tail), they write _::tail -> 1 + (size tail), since they aren't actually using the first element, just ensuring that it exists.
As per this article, which actually has that exact example you're discussing:
As we have seen, a list can be either empty (the list is of the form []), or composed of a first element (its head) and a sublist (its tail). The list is then of the form h::t.
The statement provided simply gives you 0 if the list matches an empty list or uses pattern matching to extract the head (first item) and tail (all other items), then uses recursion to get the length of the tail.
So, it's the _::tail which reduces the list, and 1 + (size tail) which calculates the size. The bit before the | is, of course, the terminating condition for the recursion.
It may be more understandable (in my opinion) if viewed as:
let rec size x = match x with
    []      -> 0
  | _::tail -> 1 + (size tail)
;;
(this is actually very similar to the format used in the linked page, I've just changed it slightly to line up the -> symbols).
It uses a pattern match to extract the tail of the list (naming it tail), then calls itself with the tail. Maybe the missing piece for you is the pattern matching.

Function types declarations in Mathematica

I have bumped into this problem several times: what kinds of type declarations for input data does Mathematica understand for functions?
It seems Mathematica understands the following type declarations:
_Integer,
_List,
_?MatrixQ,
_?VectorQ
However, _Real and _Complex declarations, for instance, sometimes cause the function not to compute. Any idea why?
What's the general rule here?
When you do something like f[x_]:=Sin[x], what you are doing is defining a pattern replacement rule. If you instead say f[x_smth]:=5 (if you try both, do Clear[f] before the second example), you are really saying "wherever you see f[x], check if the head of x is smth and, if it is, replace by 5". Try, for instance,
Clear[f]
f[x_smth]:=5
f[5]
f[smth[5]]
So, to answer your question, the rule is that in f[x_hd]:=1;, hd can be anything and is matched to the head of x.
One can also have more complicated definitions, such as f[x_] := Sin[x] /; x > 12, which will match if x>12 (of course this can be made arbitrarily complicated).
Edit: I forgot about the Real part. You can certainly define Clear[f];f[x_Real]=Sin[x] and it works for eg f[12.]. But you have to keep in mind that, while Head[12.] is Real, Head[12] is Integer, so that your definition won't match.
Just a quick note since no one else has mentioned it. You can pattern match for multiple Heads - and this is quicker than using the conditional matching of ? or /;.
f[x:(_Integer|_Real)] := True (* function definition goes here *)
For simple functions acting on Real or Integer arguments, it runs in about 75% of the time as the similar definition
g[x_] /; Element[x, Reals] := True (* function definition goes here *)
(which as WReach pointed out, runs in 75% of the time
as g[x_?(Element[#, Reals]&)] := True).
The advantage of the latter form is that it works with Symbolic constants such as Pi - although if you want a purely numeric function, this can be fixed in the former form with the use of N.
The most likely problem is the input you're using to test the functions. For instance,
f[x_Complex]:= Conjugate[x]
f[x + I y]
f[3 + I 4]
returns
f[x + I y]
3 - I 4
The reason the second one works while the first one doesn't is revealed when looking at their FullForms
x + I y // FullForm == Plus[x, Times[ Complex[0,1], y]]
3 + I 4 // FullForm == Complex[3,4]
Internally, Mathematica transforms 3 + I 4 into a Complex object because each of the terms is numeric, but x + I y does not get the same treatment as x and y are Symbols. Similarly, if we define
g[x_Real] := -x
and using them
g[ 5 ] == g[ 5 ]
g[ 5. ] == -5.
The key here is that 5 is an Integer which is not recognized as a subset of Real, but by adding the decimal point it becomes Real.
As acl pointed out, the pattern _Something means match to anything with Head === Something, and both the _Real and _Complex cases are very restrictive in what is given those Heads.

Uses for Haskell id function

What are the uses of the id function in Haskell?
It's useful as an argument to higher order functions (functions which take functions as arguments), where you want some particular value left unchanged.
Example 1: Leave a value alone if it is in a Just, otherwise, return a default of 7.
Prelude Data.Maybe> :t maybe
maybe :: b -> (a -> b) -> Maybe a -> b
Prelude Data.Maybe> maybe 7 id (Just 2)
2
Example 2: building up a function via a fold:
Prelude Data.Maybe> :t foldr (.) id [(+2), (*7)]
:: (Num a) => a -> a
Prelude Data.Maybe> let f = foldr (.) id [(+2), (*7)]
Prelude Data.Maybe> f 7
51
We built a new function f by folding a list of functions together with (.), using id as the base case.
Example 3: the base case for functions as monoids (simplified).
instance Monoid (a -> a) where
    mempty = id
    f `mappend` g = (f . g)
Similar to our example with fold, functions can be treated as concatenable values, with id serving for the empty case, and (.) as append.
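In the standard libraries, this composition monoid is provided via the Endo newtype from Data.Monoid (the bare a -> a instance above is a simplification). A quick sketch:

import Data.Monoid (Endo(..))

-- Endo wraps a -> a; mempty is Endo id and (<>) is composition.
pipeline :: Endo Int
pipeline = Endo (*2) <> Endo (+3) <> mempty   -- applies (+3) first, then (*2)

-- appEndo pipeline 5 == 16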
Example 4: a trivial hash function.
Data.HashTable> h <- new (==) id :: IO (HashTable Data.Int.Int32 Int)
Data.HashTable> insert h 7 2
Data.HashTable> Data.HashTable.lookup h 7
Just 2
Hashtables require a hashing function. But what if your key is already hashed? Then pass the id function, to fill in as your hashing method, with zero performance overhead.
If you manipulate numbers, particularly with addition and multiplication, you'll have noticed the usefulness of 0 and 1. Likewise, if you manipulate lists, the empty list turns out to be quite handy. Similarly, if you manipulate functions (very common in functional programming), you'll come to notice the same sort of usefulness of id.
In functional languages, functions are first class values that you can pass as a parameter. So one of the most common uses of id comes up when you pass a function as a parameter to another function to tell it what to do. One of the choices of what to do is likely to be "just leave it alone" - in that case, you pass id as the parameter.
Suppose you're searching for some kind of solution to a puzzle where you make a move at each turn. You start with a candidate position pos. At each stage there is a list of possible transformations you could make to pos (eg. sliding a piece in the puzzle). In a functional language it's natural to represent transformations as functions so now you can make a list of moves using a list of functions. If "doing nothing" is a legal move in this puzzle, then you would represent that with id. If you didn't do that then you'd need to handle "doing nothing" as a special case that works differently from "doing something". By using id you can handle all cases uniformly in a single list.
This is probably the reason why almost all uses of id exist. To handle "doing nothing" uniformly with "doing something".
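As a toy illustration of that idea (the positions and moves here are made up for the example):

-- a position is just a list of numbers in this toy example
type Position = [Int]

-- "doing nothing" is represented by id, and it sits in the list like any other move
moves :: [Position -> Position]
moves = [id, reverse, map (+1)]

-- apply every candidate move uniformly, with no special case for "do nothing"
nextPositions :: Position -> [Position]
nextPositions pos = [m pos | m <- moves]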
For a different sort of answer:
I'll often do this when chaining multiple functions via composition:
foo = id
    . bar
    . baz
    . etc
over
foo = bar
    . baz
    . etc
It keeps things easier to edit. One can do similar things with other 'zero' elements, such as
foo = return
    >>= bar
    >>= baz

foos = []
    ++ bars
    ++ bazs
Since we are finding nice applications of id: here, have a palindrome :)
import Control.Applicative
pal :: [a] -> [a]
pal = (++) <$> id <*> reverse
Imagine you are a computer, i.e. you can execute a sequence of steps. Then if I want you to stay in your current state, but I always have to give you an instruction (I cannot just stay mute and let the time pass), what instruction do I give you? id is the function created for that: it returns its argument unchanged (in the case of the computer, the argument would be its state) and gives that operation a name. That necessity appears only when you have higher-order functions, when you operate with functions without considering what's inside them, which forces you to represent even the "do nothing" implementation symbolically. Analogously, 0 seen as a quantity of something is a symbol for the absence of quantity. Actually, in algebra both 0 and id are considered the neutral elements of the operations + and ∘ (function composition) respectively, or more formally:
for all x of type number:
    0 + x = x
    x + 0 = x
for all f of type function:
    id ∘ f = f
    f ∘ id = f
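A quick way to convince yourself of those composition laws in GHCi (f here is just an arbitrary example function):

f :: Int -> Int
f = (+1)

-- both evaluate to True: composing with id on either side changes nothing
checkLeft, checkRight :: Bool
checkLeft  = (id . f) 10 == f 10
checkRight = (f . id) 10 == f 10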
I can also help improve your golf score. Instead of using ($), you can save a single character by using id. E.g.
zipWith id [(+1), succ] [2,3,4]
An interesting, more than useful result.
Whenever you need a function somewhere, but want it to do more than just hold its place (the way 'undefined' would, for example).
It's also useful, as (soon-to-be) Dr. Stewart mentioned above, for when you need to pass a function as an argument to another function:
join = (>>= id)
or as the result of a function:
let f = id in f 10
(presumably, you will edit the above function later to do something more "interesting"... ;)
As others have mentioned, id is a wonderful place-holder for when you need a function somewhere.