How to write a recursive multiplication function in Pyret

My programming languages class uses Pyret. I've only really learned C++, so it's a little confusing for me right now. We were given a homework assignment regarding natural numbers. The "S()" just means the successor of a natural number (which is itself another natural number), so pretty much anything but zero. The following function is provided:
fun nat-plus(n :: Nat, m :: Nat) -> Nat:
  cases (Nat) n:
    | O => m
    | S(nn) => S(nat-plus(nn, m))
  end
end
This function just adds n and m. I think the main problem is that I'm not 100% sure how this function works, but I have a bit of an idea.
The next step in the homework is to write a function that multiplies n and m using the nat-plus function above. Is anyone able to point me in the right direction for writing this function?
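No Pyret answer is recorded here, but the shape of the solution mirrors nat-plus: recurse on n, and where addition wraps the recursive result in one S, multiplication adds m once per S peeled off n. As a hint, here is a minimal sketch of the same idea transcribed into Haskell (hypothetical names; the structure carries over directly to a Pyret cases expression):
-- A hypothetical transcription of the Pyret datatype and functions.
data Nat = O | S Nat

natPlus :: Nat -> Nat -> Nat
natPlus O m      = m
natPlus (S nn) m = S (natPlus nn m)

-- Multiplication is repeated addition, recursing on n the same way:
-- O * m = O, and S(nn) * m = m + (nn * m).
natTimes :: Nat -> Nat -> Nat
natTimes O _      = O
natTimes (S nn) m = natPlus m (natTimes nn m)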

What is the nicest way to make a function that takes Float or Double as input?

Say I want to implement the Fermi function (the simplest example of a logistic curve) so that if it's passed a Float it returns a Float and if it's passed a Double it returns a Double. Here's what I've got:
e = 2.7182845904523536
fermiFunc :: (Floating a) => a -> a
fermiFunc x = let one = fromIntegral 1 in one/(one + e^(-x))
The problem is that ghc says e is a Double. Defining the variable one is also kinda gross. The other solution I've thought of is to just define the function for doubles:
e = 2.7182845904523536
fermiFuncDouble :: Double -> Double
fermiFuncDouble x = 1.0/(1.0 + e^(-x))
Then using Either:
fermiFunc :: (Floating a) => Either Float Double -> a
fermiFunc (Left x) = double2Float (fermiFuncDouble (float2Double x))
fermiFunc (Right x) = fermiFuncDouble x
This isn't very exciting though because I might as well have just written a separate function for the Float case that just handles the casting and calls fermiFuncDouble. Is there a nice way to write a function for both types?
Don't write e^x, ever, in any language. That is not the exponential function, it's the power function.
The exponential function is called exp, and its definition actually has little to do with the power operation – it's defined, depending on your taste, as a Taylor series or as the unique solution to the ordinary differential equation d/dx exp x = exp x with boundary condition exp 0 = 1. Now, it so happens that, for any rational n, we have exp n ≡ (exp 1)^n, and that motivates also defining the power operation for numbers in ℝ or ℂ in addition to ℚ, namely as
a^z := exp (z · ln a)
...but e^x should be understood as really just a shortcut for writing exp x itself.
So rather than defining e somewhere and trying to take some power of it, you should use exp just as it is.
fermiFunc :: Floating a => a -> a
fermiFunc x = 1/(1 + exp (-x))
...or indeed
fermiFunc = recip . succ . exp . negate
Assuming that you want a floating-point exponent, that's (**); (^) takes an integral exponent. Rewriting your function to use (**) and letting GHC infer the type gives:
fermiFunc x = 1/(1 + e ** (-x))
and
> :t fermiFunc
fermiFunc :: (Floating a) => a -> a
Since Float and Double both have Floating instances, fermiFunc is now sufficiently polymorphic to work with both.
(Note: you may need to declare a polymorphic type for e to get around the monomorphism restriction, i.e., e :: Floating a => a.)
In general, the answer to "How do I write a function that works with multiple types?" is either "Write it so that it works universally for all types." (parametric polymorphism, like map), "Find (or create) one or more typeclasses that they share that provides the behaviour you need." (ad hoc polymorphism, like show), or "Create a new type that is the sum of those types." (like Either).
The latter two have some tradeoffs. For instance, type classes are open (you can add more at any time) while sum types are closed (you must modify the definition to add more types). Sum types require you to know which type you are dealing with (because it must be matched up with a constructor) while type classes let you write polymorphic functions.
You can use :i in GHCi to inspect a typeclass's methods and instances, which might help you to locate a suitable typeclass.
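For example, asking GHCi about Floating shows both its methods and its instances (output abbreviated here; the exact formatting varies a little between GHC versions):
Prelude> :i Floating
class Fractional a => Floating a where
  pi :: a
  exp :: a -> a
  log :: a -> a
  ...
instance Floating Float -- Defined in ‘GHC.Float’
instance Floating Double -- Defined in ‘GHC.Float’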

What is a polymorphic lambda?

The concept of lambdas (anonymous functions) is very clear to me. And I'm aware of polymorphism in terms of classes, with runtime/dynamic dispatch used to call the appropriate method based on the instance's most derived type. But how exactly can a lambda be polymorphic? I'm yet another Java programmer trying to learn more about functional programming.
You will observe that I don't talk about lambdas much in the following answer. Remember that in functional languages, any function is simply a lambda bound to a name, so what I say about functions translates to lambdas.
Polymorphism
Note that polymorphism doesn't really require the kind of "dispatch" that OO languages implement through derived classes overriding virtual methods. That's just one particular kind of polymorphism, subtyping.
Polymorphism itself simply means a function allows not just for one particular type of argument, but is able to act accordingly for any of the allowed types. The simplest example: you don't care for the type at all, but simply hand on whatever is passed in. Or, to make it not quite so trivial, wrap it in a single-element container. You could implement such a function in, say, C++:
#include <vector>

template<typename T> std::vector<T> wrap1elem( T val ) {
    return std::vector<T>{ val };
}
but you couldn't implement it as a lambda, because C++ (time of writing: C++11) doesn't support polymorphic lambdas.
Untyped values
...At least not in this way, that is. C++ templates implement polymorphism in rather an unusual way: the compiler actually generates a monomorphic function for every type that anybody passes to the function, in all the code it encounters. This is necessary because of C++' value semantics: when a value is passed in, the compiler needs to know the exact type (its size in memory, possible child-nodes etc.) in order to make a copy of it.
In most newer languages, almost everything is just a reference to some value, and when you call a function it doesn't get a copy of the argument objects but just a reference to the already-existing ones. Older languages require you to explicitly mark arguments as reference / pointer types.
A big advantage of reference semantics is that polymorphism becomes much easier: pointers always have the same size, so the same machine code can deal with references to any type at all. That makes, very uglily[1], a polymorphic container-wrapper possible even in C:
typedef struct{
    void** contents;
    int size;
} vector;

vector wrap1elem_by_voidptr(void* ptr) {
    vector v;
    v.contents = malloc(sizeof(&ptr));
    v.contents[0] = ptr;
    v.size = 1;
    return v;
}

#define wrap1elem(val) wrap1elem_by_voidptr(&(val))
Here, void* is just a pointer to any unknown type. The obvious problem thus arising: vector doesn't know what type(s) of elements it "contains"! So you can't really do anything useful with those objects. Except if you do know what type it is!
int sum_contents_int(vector v) {
    int acc = 0, i;
    for(i=0; i<v.size; ++i) {
        acc += * (int*) (v.contents[i]);
    }
    return acc;
}
Obviously, this is extremely laborious. What if the type is double? What if we want the product, not the sum? Of course, we could write each case by hand. Not a nice solution.
What would be better is if we had a generic function that takes the instruction of what to do as an extra argument! C has function pointers:
int accum_contents_int(vector v, void (*combine)(int*, int)) {
    int acc = 0, i;
    for(i=0; i<v.size; ++i) {
        combine(&acc, * (int*) (v.contents[i]));
    }
    return acc;
}
That could then be used like
void multon(int* acc, int x) {
    *acc *= x;
}

int main() {
    int a = 3, b = 5;
    vector v = wrap2elems(a, b);
    printf("%i\n", accum_contents_int(v, multon));
}
Apart from still being cumbersome, all the above C code has one huge problem: it's completely unchecked whether the container elements actually have the right type! The casts from void* will happily fire on any type, but in doubt the result will be complete garbage[2].
Classes & Inheritance
That problem is one of the main issues which OO languages solve by trying to bundle all operations you might perform right together with the data, in the object, as methods. While compiling your class, the types are monomorphic so the compiler can check the operations make sense. When you try to use the values, it's enough if the compiler knows how to find the method. In particular, if you make a derived class, the compiler knows "aha, it's ok to call that method from the base class even on a derived object".
Unfortunately, that would mean all you achieve by polymorphism is equivalent to composing the data and simply calling the (monomorphic) methods on a single field. To actually get different behaviour (but controlledly!) for different types, OO languages need virtual methods. What this amounts to is basically that the class has extra fields with pointers to the method implementations, much like the pointer to the combine function I used in the C example – with the difference that you can only implement an overriding method by adding a derived class, for which the compiler again knows the type of all the data fields etc., so you're safe and all.
Sophisticated type systems, checked parametric polymorphism
While inheritance-based polymorphism obviously works, I can't help saying it's just crazy stupid[3] – sure, a bit limiting. If you want to use just one particular operation that happens not to be implemented as a class method, you need to make an entire derived class. Even if you just want to vary an operation in some way, you need to derive and override a slightly different version of the method.
Let's revisit our C code. On the face of it, we notice it should be perfectly possible to make it type-safe, without any method-bundling nonsense. We just need to make sure no type information is lost – not during compile-time, at least. Imagine (Read ∀T as "for all types T")
∀T: {
    typedef struct{
        T** contents;
        int size;
    } vector<T>;
}

∀T: {
    vector<T> wrap1elem(T* elem) {
        vector<T> v;
        v.contents = malloc(sizeof(T*));
        v.contents[0] = elem;
        v.size = 1;
        return v;
    }
}

∀T: {
    void accum_contents(vector<T> v, void (*combine)(T*, const T*), T* acc) {
        int i;
        for(i=0; i<v.size; ++i) {
            combine(acc, v.contents[i]);
        }
    }
}
Observe how, even though the signatures look a lot like the C++ template thing on top of this post (which, as I said, really is just auto-generated monomorphic code), the implementation actually is pretty much just plain C. There are no T values in there, just pointers to them. No need to compile multiple versions of the code: at runtime, the type information isn't needed, we just handle generic pointers. At compile time, we do know the types and can use the function head to make sure they match. I.e., if you wrote
void evil_sumon (int* acc, double* x) { *acc += *x; }
and tried to do
vector<float> v; char acc;
accum_contents(v, evil_sumon, &acc);
the compiler would complain because the types don't match: in the declaration of accum_contents it says the type may vary, but all occurrences of T do need to resolve to the same type.
And that is exactly how parametric polymorphism works in languages of the ML family as well as Haskell: the functions really don't know anything about the polymorphic data they're dealing with. But they are given the specialised operators which have this knowledge, as arguments.
In a language like Java (prior to lambdas), parametric polymorphism doesn't gain you much: since the compiler makes it deliberately hard to define "just a simple helper function" in favour of having only class methods, you may as well go the derive-from-class way right away. But in functional languages, defining small helper functions is the easiest thing imaginable: lambdas!
And so you can write incredibly terse code in Haskell:
Prelude> foldr (+) 0 [1,4,6]
11
Prelude> foldr (\x y -> x+y+1) 0 [1,4,6]
14
Prelude> let f start = foldr (\_ (xl,xr) -> (xr, xl)) start
Prelude> :t f
f :: (t, t) -> [a] -> (t, t)
Prelude> f ("left", "right") [1]
("right","left")
Prelude> f ("left", "right") [1, 2]
("left","right")
Note how in the lambda I defined as a helper for f, I didn't have any clue about the type of xl and xr, I merely wanted to swap a tuple of these elements which requires the types to be the same. So that would be a polymorphic lambda, with the type
\_ (xl, xr) -> (xr, xl) :: ∀ a t. a -> (t,t) -> (t,t)
[1] Apart from the weird explicit malloc stuff, type safety etc.: code like that is extremely hard to work with in languages without a garbage collector, because somebody always needs to clean up memory once it's not needed anymore – but not before checking that nobody still holds a reference to the data and might in fact still need it. That's nothing you have to worry about in Java, Lisp, Haskell...
[2] There is a completely different approach to this: the one dynamic languages choose. In those languages, every operation needs to make sure it works with any type (or, if that's not possible, raise a well-defined error). Then you can arbitrarily compose polymorphic operations, which is on one hand "nicely trouble-free" (not as trouble-free as with a really clever type system like Haskell's, though) but OTOH incurs quite a heavy overhead, since even primitive operations need type-decisions and safeguards around them.
[3] I'm of course being unfair here. The OO paradigm has more to it than just type-safe polymorphism; it enables many things that e.g. old ML with its Hindley-Milner type system couldn't do (ad-hoc polymorphism: Haskell has type classes for that, SML has modules), and even some things that are pretty hard in Haskell (mainly, storing values of different types in a variable-size container). But the more you get accustomed to functional programming, the less need you will feel for such stuff.
In C++, a polymorphic (or generic) lambda, available since C++14, is a lambda that can take any type as an argument. Basically, it's a lambda that has an auto parameter type:
auto lambda = [](auto){};
Is there a context that you've heard the term "polymorphic lambda"? We might be able to be more specific.
The simplest way that a lambda can be polymorphic is to accept arguments whose type is (partly-)irrelevant to the final result.
e.g. the lambda
\(head:tail) -> tail
has the type [a] -> [a] – i.e. it's fully polymorphic in the inner type of the list.
Other simple examples are the likes of
\_ -> 5 :: Num n => a -> n
\x f -> f x :: a -> (a -> b) -> b
\n -> n + 1 :: Num n => n -> n
etc.
(Notice the Num n examples which involve typeclass dispatch)

What are some interesting uses of higher-order functions?

I'm currently doing a Functional Programming course and I'm quite amused by the concept of higher-order functions and functions as first class citizens. However, I can't yet think of many practically useful, conceptually amazing, or just plain interesting higher-order functions. (Besides the typical and rather dull map, filter, etc functions).
Do you know examples of such interesting functions?
Maybe functions that return functions, functions that return lists of functions (?), etc.
I'd appreciate examples in Haskell, which is the language I'm currently learning :)
Well, you notice that Haskell has no syntax for loops? No while or do or for. Because these are all just higher-order functions:
map :: (a -> b) -> [a] -> [b]
foldr :: (a -> b -> b) -> b -> [a] -> b
filter :: (a -> Bool) -> [a] -> [a]
unfoldr :: (b -> Maybe (a, b)) -> b -> [a]
iterate :: (a -> a) -> a -> [a]
Higher-order functions replace the need for baked in syntax in the language for control structures, meaning pretty much every Haskell program uses these functions -- making them quite useful!
They are the first step towards good abstraction because we can now plug custom behavior into a general purpose skeleton function.
In particular, monads are only possible because we can chain together and manipulate functions to create programs.
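For instance, (>>=) is itself just a higher-order function that takes "the rest of the program" as its function argument; a minimal sketch:
-- Each step hands its result to the function representing the rest of
-- the program; do-notation is sugar for exactly this chaining.
main :: IO ()
main = getLine >>= \name -> putStrLn ("hello, " ++ name)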
The fact is, life is pretty boring when it is first-order. Programming only gets interesting once you have higher-order.
Many techniques used in OO programming are workarounds for the lack of higher order functions.
This includes a number of the design patterns that are ubiquitous in OO programming. For example, the visitor pattern is a rather complicated way to implement a fold. The workaround is to create a class with methods and pass an instance of that class in as an argument, as a substitute for passing in a function.
The strategy pattern is another example of a scheme that passes objects as arguments as a substitute for what is actually intended: functions.
Similarly dependency injection often involves some clunky scheme to pass a proxy for functions when it would often be better to simply pass in the functions directly as arguments.
So my answer would be that higher-order functions are often used to perform the same kinds of tasks that OO programmers perform, but directly, and with a lot less boilerplate.
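To make that concrete, here is a minimal Haskell sketch (names invented for illustration) of what the strategy pattern collapses to when functions can be passed directly:
-- The "strategy" is just a function argument; no interface, no wrapper class.
totalWith :: (Double -> Double) -> [Double] -> Double
totalWith discount = sum . map discount

-- Two interchangeable "strategies".
fullPrice, halfOff :: Double -> Double
fullPrice = id
halfOff   = (/ 2)

-- totalWith halfOff [10, 20] == 15.0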
I really started to feel the power when I learned a function can be part of a data structure. Here is a "consumer monad" (technobabble: free monad over (i ->)).
data Coro i a
  = Return a
  | Consume (i -> Coro i a)
So a Coro can either instantly yield a value, or be another Coro depending on some input. For example, this is a Coro Int Int:
Consume $ \x -> Consume $ \y -> Consume $ \z -> Return (x+y+z)
This consumes three integer inputs and returns their sum. You could also have it behave differently according to the inputs:
sumStream :: Coro Int Int
sumStream = Consume (go 0)
  where
    go accum 0 = Return accum
    go accum n = Consume (\x -> go (accum+x) (n-1))
This consumes an Int and then consumes that many more Ints before yielding their sum. This can be thought of as a function that takes arbitrarily many arguments, constructed without any language magic, just higher order functions.
Functions in data structures are a very powerful tool that was not part of my vocabulary before I started doing Haskell.
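To make the example runnable, here is one possible driver (not part of the original answer, just a sketch): it feeds the coroutine inputs from a list until it returns.
-- Feed inputs one at a time; Nothing if the input list runs dry first.
feed :: Coro i a -> [i] -> Maybe a
feed (Return a)  _      = Just a
feed (Consume _) []     = Nothing
feed (Consume f) (x:xs) = feed (f x) xs

-- feed sumStream [3, 10, 20, 30] == Just 60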
Check out the paper 'Even Higher-Order Functions for Parsing or Why Would Anyone Ever Want To Use a Sixth-Order Function?' by Chris Okasaki. It's written using ML, but the ideas apply equally to Haskell.
Joel Spolsky wrote a famous essay demonstrating how Map-Reduce works using Javascript's higher order functions. A must-read for anyone asking this question.
Higher-order functions are also required for currying, which Haskell uses everywhere. Essentially, a function taking two arguments is equivalent to a function taking one argument and returning another function taking one argument. When you see a type signature like this in Haskell:
f :: A -> B -> C
...the (->) can be read as right-associative, showing that this is in fact a higher-order function returning a function of type B -> C:
f :: A -> (B -> C)
A non-curried function of two arguments would instead have a type like this:
f' :: (A, B) -> C
So any time you use partial application in Haskell, you're working with higher-order functions.
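A trivial sketch of that in practice:
add :: Int -> Int -> Int
add x y = x + y

-- Partial application: add 1 is a brand-new function of one argument.
increment :: Int -> Int
increment = add 1

-- map increment [1, 2, 3] == [2, 3, 4]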
Martín Escardó provides an interesting example of a higher-order function:
equal :: ((Integer -> Bool) -> Int) -> ((Integer -> Bool) -> Int) -> Bool
Given two functionals f, g :: (Integer -> Bool) -> Int, then equal f g decides if f and g are (extensionally) equal or not, even though f and g don't have a finite domain. In fact, the codomain, Int, can be replaced by any type with a decidable equality.
The code Escardó gives is written in Haskell, but the same algorithm should work in any functional language.
You can use the same techniques that Escardó describes to compute definite integrals of any continuous function to arbitrary precision.
One interesting and slightly crazy thing you can do is simulate an object-oriented system using a function and storing data in the function's scope (i.e. in a closure). It's higher-order in the sense that the object generator function is a function which returns the object (another function).
My Haskell is rather rusty so I can't easily give you a Haskell example, but here's a simplified Clojure example which hopefully conveys the concept:
(defn make-object [initial-value]
(let [data (atom {:value initial-value})]
(fn [op & args]
(case op
:set (swap! data assoc :value (first args))
:get (:value #data)))))
Usage:
(def a (make-object 10))
(a :get)
=> 10
(a :set 40)
(a :get)
=> 40
Same principle would work in Haskell (except that you'd probably need to change the set operation to return a new function, since Haskell is purely functional).
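A minimal sketch of that purely functional variant (hypothetical names): the "object" is a record of functions closed over its value, and set returns a whole new object instead of mutating anything.
data Object = Object
  { get :: Int             -- read the current value
  , set :: Int -> Object   -- "mutation" returns a fresh object
  }

makeObject :: Int -> Object
makeObject v = Object { get = v, set = makeObject }

-- get (set (makeObject 10) 40) == 40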
I'm a particular fan of higher-order memoization:
memo :: HasTrie t => (t -> a) -> (t -> a)
(Given any function, return a memoized version of that function. Limited by the fact that the arguments of the function must be able to be encoded into a trie.)
This is from http://hackage.haskell.org/package/MemoTrie
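A typical use, assuming the memotrie package is available, is tying a recursive function's knot through memo so that each argument is computed only once:
import Data.MemoTrie (memo)

-- Naive Fibonacci is exponential; routing the recursion through memo
-- makes it linear, with no change to the algorithm itself.
fib :: Integer -> Integer
fib = memo fib'
  where
    fib' 0 = 0
    fib' 1 = 1
    fib' n = fib (n - 1) + fib (n - 2)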
There are several examples here: http://www.haskell.org/haskellwiki/Higher_order_function
I would also recommend this book: http://www.cs.nott.ac.uk/~gmh/book.html which is a great introduction to all of Haskell and covers higher order functions.
Higher-order functions often take an accumulator, so they can be used when forming a list of the elements which conform to a given rule from a larger list.
Here's a small paraphrased code snippet:
rays :: ChessPieceType -> [[(Int, Int)]]
rays Bishop = do
  dx <- [1, -1]
  dy <- [1, -1]
  return $ iterate (addPos (dx, dy)) (dx, dy)
... -- Other piece types

-- takeUntilIncluding is an inclusive version of takeUntil
takeUntilIncluding :: (a -> Bool) -> [a] -> [a]

possibleMoves board piece = do
  relRay <- rays (pieceType piece)
  let ray = map (addPos src) relRay
  takeUntilIncluding (not . isNothing . pieceAt board)
    (takeWhile notBlocked ray)
  where
    notBlocked pos =
      inBoard pos &&
      all isOtherSide (pieceAt board pos)
    isOtherSide = (/= pieceSide piece) . pieceSide
This uses several "higher order" functions:
iterate :: (a -> a) -> a -> [a]
takeUntilIncluding -- not a standard function
takeWhile :: (a -> Bool) -> [a] -> [a]
all :: (a -> Bool) -> [a] -> Bool
map :: (a -> b) -> [a] -> [b]
(.) :: (b -> c) -> (a -> b) -> a -> c
(>>=) :: Monad m => m a -> (a -> m b) -> m b
(.) is the . operator, and (>>=) is the do-notation "line break operator".
When programming in Haskell you just use them. It's when you don't have higher-order functions that you realize just how incredibly useful they were.
Here's a pattern that I haven't seen anyone else mention yet that really surprised me the first time I learned about it. Consider a statistics package where you have a list of samples as your input and you want to calculate a bunch of different statistics on them (there are also plenty of other ways to motivate this). The bottom line is that you have a list of functions that you want to run. How do you run them all?
statFuncs :: [[Double] -> Double]
statFuncs = [minimum, maximum, mean, median, mode, stddev]

runWith funcs samples = map ($ samples) funcs
There's all kinds of higher order goodness going on here, some of which has been mentioned in other answers. But I want to point out the '$' function. When I first saw this use of '$', I was blown away. Before that I hadn't considered it to be very useful other than as a convenient replacement for parentheses...but this was almost magical...
One thing that's kind of fun, if not particularly practical, is Church Numerals. It's a way of representing integers using nothing but functions. Crazy, I know. <shamelessPlug>Here's an implementation in JavaScript that I made. It might be easier to understand than a Lisp/Haskell implementation. (But probably not, to be honest. JavaScript wasn't really meant for this kind of thing.)</shamelessPlug>
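The idea also fits in a few lines of Haskell (a sketch, not the linked JavaScript implementation):
-- A Church numeral n is the function that applies f to x n times.
type Church a = (a -> a) -> a -> a

zero :: Church a
zero _ x = x

suc :: Church a -> Church a
suc n f x = f (n f x)

toInt :: Church Int -> Int
toInt n = n (+ 1) 0

-- toInt (suc (suc zero)) == 2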
It’s been mentioned that JavaScript supports certain higher-order functions, with a link to an essay from Joel Spolsky. Mark Jason Dominus wrote an entire book called Higher-Order Perl; the book’s source is available for free download in a variety of fine formats, including PDF.
Since at least Perl 3, Perl has supported functionality more reminiscent of Lisp than of C, but it wasn’t until Perl 5 that full support for closures and all that follows from that was available. And one of the first Perl 6 implementations was written in Haskell, which has had a lot of influence on how that language’s design has progressed.
Examples of functional programming approaches in Perl show up in everyday programming, especially with map and grep:
@ARGV = map { /\.gz$/ ? "gzip -dc < $_ |" : $_ } @ARGV;
@unempty = grep { defined && length } @many;
Since sort also admits a closure, the map/sort/map pattern is super common:
@txtfiles = map { $_->[1] }
            sort {
                $b->[0] <=> $a->[0]
                          ||
                lc $a->[1] cmp lc $b->[1]
                          ||
                $b->[1] cmp $a->[1]
            }
            map  { [ -s => $_ ] }
            grep { -f && -T }
            glob("/etc/*");
or
@sorted_lines = map { $_->[0] }
                sort {
                    $a->[4] <=> $b->[4]
                              ||
                    $a->[-1] cmp $b->[-1]
                              ||
                    $a->[3] <=> $b->[3]
                              ||
                    ...
                }
                map { [$_ => reverse split /:/] } @lines;
The reduce function makes list hackery easy without looping:
$sum = reduce { $a + $b } @numbers;
$max = reduce { $a > $b ? $a : $b } $MININT, @numbers;
There’s a lot more than this, but this is just a taste. Closures make it easy to create function generators, writing your own higher-order functions, not just using the builtins. In fact, one of the more common exception models,
try {
    something();
} catch {
    oh_drat();
};
is not a built-in. It is, however, almost trivially defined with try being a function that takes two arguments: a closure in the first arg and a function that takes a closure in the second one.
Perl 5 doesn’t have currying built-in, although there is a module for that. Perl 6, though, has currying and first-class continuations built right into it, plus a lot more.

Higher Order Function

I am having trouble understanding what my lecturer want me to do from this question. Can anyone help explain to me what he wants me to do?
Define a higher order version of the insertion sort algorithm. That is define
functions
insertBy :: Ord b => (a->b) -> a -> [a] -> [a]
inssortBy :: Ord b => (a->b) -> [a] -> [a]
and this bit is where it got me confused:
such that inssort f l sorts the list l such that an element x comes before an element y if f x < f y.
If you were sorting numbers, then it's clear what x < y means. But what if you were sorting letters? Or customers? Or anything else without a clear (to the computer) ordering?
So you are supposed to create a function f() that defines that ordering for the sorting procedure. That f() will take the letters or customers or whatever and will return an integer for each one that the computer can actually sort on.
At least, that's how the problem is described. I personally would have designed a predicate that accepts two items, x and y, and returns a boolean indicating whether x < y. But whichever is fine.
The question wants you to rewrite the insertion sort algorithm, but taking a function as a parameter – thus a higher-order function.
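For a sense of the shape (a minimal sketch, not necessarily what the lecturer expects):
-- Insert x into an already-sorted list, comparing by the key function f.
insertBy :: Ord b => (a -> b) -> a -> [a] -> [a]
insertBy _ x [] = [x]
insertBy f x (y:ys)
  | f x < f y = x : y : ys
  | otherwise = y : insertBy f x ys

-- Sort by inserting each element into the sorted rest of the list.
inssortBy :: Ord b => (a -> b) -> [a] -> [a]
inssortBy f = foldr (insertBy f) []

-- inssortBy negate [1, 3, 2] == [3, 2, 1]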
I would like to point out that this code, typo included, seems to stem from a piece of work currently due at a certain university - I found this page while searching for "insertion sort algortihm", as I copy pasted the term out of the document as well, typo included.
Seeking code from the internet is a risky business. Might I recommend the insertion sort algorithm Wikipedia entry, or the Haskell code provided in your lecture slides (you are looking for the 'insertion sort algorithm' and for 'higher-order functions'), as opposed to the several queries you have placed on Stack Overflow?

Is there a relationship between calling a function and instantiating an object in pure functional languages?

Imagine a simple (made up) language where functions look like:
function f(a, b) = c + 42
where c = a * b
(Say it's a subset of Lisp that includes 'defun' and 'let'.)
Also imagine that it includes immutable objects that look like:
struct s(a, b, c = a * b)
Again analogizing to Lisp (this time a superset), say a struct definition like that would generate functions for:
make-s(a, b)
s-a(s)
s-b(s)
s-c(s)
Now, given the simple set up, it seems clear that there is a lot of similarity between what happens behind the scenes when you either call 'f' or 'make-s'. Once 'a' and 'b' are supplied at call/instantiate time, there is enough information to compute 'c'.
You could think of instantiating a struct as being like calling a function, and then storing the resulting symbolic environment for later use when the generated accessor functions are called. Or you could think of evaluating a function as being like creating a hidden struct and then using it as the symbolic environment with which to evaluate the final result expression.
Is my toy model so oversimplified that it's useless? Or is it actually a helpful way to think about how real languages work? Are there any real languages/implementations that someone without a CS background but with an interest in programming languages (i.e. me) should learn more about in order to explore this concept?
Thanks.
EDIT: Thanks for the answers so far. To elaborate a little, I guess what I'm wondering is if there are any real languages where it's the case that people learning the language are told e.g. "you should think of objects as being essentially closures". Or if there are any real language implementations where it's the case that instantiating an object and calling a function actually share some common (non-trivial, i.e. not just library calls) code or data structures.
Does the analogy I'm making, which I know others have made before, go any deeper than mere analogy in any real situations?
You can't get much purer than lambda calculus: http://en.wikipedia.org/wiki/Lambda_calculus. Lambda calculus is in fact so pure, it only has functions!
A standard way of implementing a pair in lambda calculus is like so:
pair = fn a: fn b: fn x: x a b
first = fn a: fn b: a
second = fn a: fn b: b
So pair a b, what you might call a "struct", is actually a function (fn x: x a b). But it's a special type of function called a closure. A closure is essentially a function (fn x: x a b) plus values for all of the "free" variables (in this case, a and b).
So yes, instantiating a "struct" is like calling a function, but more importantly, the actual "struct" itself is like a special type of function (a closure).
If you think about how you would implement a lambda calculus interpreter, you can see the symmetry from the other side: you could implement a closure as an expression plus a struct containing the values of all the free variables.
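Transcribing that encoding into Haskell makes the closure types visible (just a sketch):
-- A pair is the function that hands its two components to a selector.
pair :: a -> b -> ((a -> b -> c) -> c)
pair a b = \x -> x a b

first :: a -> b -> a
first a _ = a

second :: a -> b -> b
second _ b = b

-- pair 1 "one" first  == 1
-- pair 1 "one" second == "one"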
Sorry if this is all obvious and you just wanted some real world example...
Both f and make-s are functions, but the resemblance doesn't go much further. Applying f calls the function and executes its code; applying make-s creates a structure.
In most language implementations and formal models, make-s is a different kind of object from f: f is a closure, whereas make-s is a constructor (in the functional-languages-and-logic sense, which is close to its object-oriented meaning).
If you like to think in an object-oriented way, both f and make-s have an apply method, but they have completely different implementations of this method.
If you like to think in terms of the underlying logic, f and make-s have types built on the same type constructor (the function type constructor), but they are constructed in different ways and have different destruction rules (function application vs. constructor application).
If you'd like to understand that last paragraph, I recommend Types and Programming Languages by Benjamin C. Pierce. Structures are discussed in §11.8.
Is my toy model so oversimplified that it's useless?
Essentially, yes. Your simplified model basically boils down to saying that each of these operations involves performing a computation and putting the result somewhere. But that is so general, it covers anything that a computer does. If you didn't perform a computation, you wouldn't be doing anything useful. If you didn't put the result somewhere, you would have done work for nothing as you have no way to get the result. So anything useful you do with a computer, from adding two registers together, to fetching a web page, could be modeled as performing a computation and putting the result somewhere that it can be accessed later.
There is a relationship between objects and closures. http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/msg03277.html
The following creates what some might call a function, and others might call an object:
Taken from SICP ( http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-21.html )
(define (make-account balance)
  (define (withdraw amount)
    (if (>= balance amount)
        (begin (set! balance (- balance amount))
               balance)
        "Insufficient funds"))
  (define (deposit amount)
    (set! balance (+ balance amount))
    balance)
  (define (dispatch m)
    (cond ((eq? m 'withdraw) withdraw)
          ((eq? m 'deposit) deposit)
          (else (error "Unknown request -- MAKE-ACCOUNT"
                       m))))
  dispatch)
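For comparison, here is a rough Haskell analogue (a sketch that reaches for IORef, since the Scheme version relies on mutation): the "object" is again just a function closed over its private state.
import Data.IORef

makeAccount :: Int -> IO (String -> Int -> IO Int)
makeAccount initial = do
  ref <- newIORef initial
  let dispatch "withdraw" amount = do
        balance <- readIORef ref
        if balance >= amount
          then do writeIORef ref (balance - amount)
                  return (balance - amount)
          else error "Insufficient funds"
      dispatch "deposit" amount = do
        balance <- readIORef ref
        writeIORef ref (balance + amount)
        return (balance + amount)
      dispatch m _ = error ("Unknown request -- MAKE-ACCOUNT: " ++ m)
  return dispatch

-- Usage:
--   acc <- makeAccount 100
--   acc "withdraw" 30   -- 70
--   acc "deposit" 10    -- 80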