I am reading through a book for homework, and I understand that using #' treats the variable as a function instead of a variable. But I am a little hazy on FUNCALL. I understand that Lisp makes objects out of variables, so is the function name just a 'pointer' (maybe a bad word, but hopefully you get what I mean), in which case you use #' to invoke it, or is funcall the only way to invoke them? For example:
(defun plot (fn min max step)
  (loop for i from min to max by step do
        (loop repeat (funcall fn i) do (format t "*"))
        (format t "~%")))
couldn't I just do:
(defun plot (fn min max step)
  (loop for i from min to max by step do
        (loop repeat #'(fn i) do (format t "*"))
        (format t "~%")))
I guess my confusion lies in what exactly is in the function names. When I read the book, it said that the variable's value is what will be the function object.
#'function-name is (function function-name). Nothing is called; evaluating either results in the function associated with function-name (the object representing the function). funcall is used to call functions.
See funcall and function in the HyperSpec.
Sample session using both:
CL-USER> (defun square (x) (* x x))
SQUARE
CL-USER> #'square
#<FUNCTION SQUARE>
CL-USER> (function square)
#<FUNCTION SQUARE>
CL-USER> (funcall #'square 3)
9
CL-USER> (funcall 'square 3)
9
The second invocation of funcall works because it also accepts a symbol as function designator (see the link for funcall above for details).
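One practical difference between the two designators, as a small sketch (the local definition here is contrived for contrast): a symbol always designates the global function, while #' also sees lexically local functions.

(defun square (x) (* x x))      ; the global SQUARE from the session above

(flet ((square (x) (* x x x)))  ; a local SQUARE that cubes, just for contrast
  (list (funcall #'square 3)    ; => 27: #'SQUARE denotes the lexically visible function
        (funcall 'square 3)))   ; => 9: a symbol designates the global definition
;; => (27 9)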
The #' and funcall notations are needed in Common Lisp because this language is a so-called "Lisp-2" where a given symbol can have two separate and unrelated main "meanings" normally listed as
When used as first element of a form it means a function
When used in any other place it means a variable
These are only approximate explanations; as you will see in the following example, "first element of a form" and "any other place" are not precise definitions.
Consider for example:
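(defun square (x) (* x x))

(let ((square 12))
  (print (square square)))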
The above code prints 144... it may seem surprising at first, but the reason is that the same name square is used with two different meanings: the function that, given an argument, returns the result of multiplying the argument by itself, and the local variable square with value 12.
In the first and third uses of the name square, the meaning is the function named square. The second and fourth uses are instead about the variable named square.
How can Common Lisp decide which is which? The point is the position: after defun it's clearly, in this case, a function name, just as it's a function name in the first part of (square square). Likewise, as the first element of a list inside a let form it's clearly a variable name, and it's also a variable name in the second part of (square square).
This looks pretty psychotic... doesn't it? Well, there is indeed some split in the Lisp community over whether this dual meaning makes things simpler or more complex, and it's one of the main differences between Common Lisp and Scheme.
Without getting into the details, I'll just say that this apparently crazy choice was made to make Lisp macros more useful, providing enough hygiene to make them work nicely without the added complexity and reduced expressive power of fully hygienic macros. It's certainly a complication that makes the language harder to explain to someone who is learning it (which is why Scheme is considered a better (simpler) language for teaching), but many expert Lispers think it's a good choice that makes Lisp a better tool for real problems.
In human languages, too, context plays an important role, and it's not a serious problem for humans that the very same word can sometimes be used with different meanings (e.g. as a noun or as a verb, as in "California is the state I live in" versus "State your opinion").
Even in a Lisp-2, however, you need to use functions as values, for example passing them as parameters or storing them in a data structure, and you need to use values as functions, for example calling a function that has been received as a parameter (your plot case) or that has been stored somewhere. This is where #' and funcall come into play...
#'foo is indeed just a shortcut for (function foo), exactly like 'x is a shortcut for (quote x). This function special form, given a name (foo in this case), returns the associated function as a value that you can store in variables or pass around:
(defvar *fn* #'square)
In the above code, for example, the variable *fn* receives the function defined before. A function value can be manipulated like any other value, such as a string or a number.
funcall is the opposite: it allows you to call a function not by its name but through a value...
(print (funcall *fn* 12))
The above code will display 144, because the function that was stored in the variable *fn* is now being called with 12 as its argument.
If you know the "C" programming language, an analogy is that (let ((p #'square)) ...) is like taking the address of the function square (as with { int (*p)(int) = &square; ... }), and (funcall p 12) is like calling a function through a pointer (as with (*p)(12), which "C" allows to be abbreviated to p(12)).
The admittedly confusing part in Common Lisp is that you can have both a function named square and a variable named square in the same scope, and the variable will not hide the function. funcall and function are the two tools you use when you need to call the value of a variable as a function or when you want a function as a value, respectively.
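As a minimal sketch of that coexistence, reusing the square from the session above: the let binds a variable square without disturbing the function square, and #' and funcall pick out each meaning explicitly.

(let ((square 12))            ; a variable SQUARE; the function SQUARE still exists
  (funcall #'square square))  ; => 144: #' fetches the function, the variable supplies 12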
Related
I am reading the SICP book, in the section about the imperative programming model, and I could not understand the illustration on two points:
W.r.t. the arrow from square to the "pair" (the two circles): what does this arrow mean? Though throughout this section an arrow means "enclosing environment", this specific arrow does not seem to point to an environment (square's environment is the global env, not the "pair").
Is the following a correct understanding: in the value of a procedure definition, its "code text" part (the left circle) has no interpretation of the symbols within. They are just "text". Only at procedure application do they gain meaning inside the context/environment of the application.
If 2 is correct, why is the arrow from the environment part of the pair (the right circle) to the enclosing environment necessary? (Since there is no meaning to interpret for the symbols within the procedure code at definition time.)
SICP's arrow notation is a little overloaded. I'll quote the relevant portion of the text to understand this diagram.
The procedure object is a pair whose code specifies that the procedure has one formal parameter, namely x, and a procedure body (* x x). The environment part of the procedure is a pointer to the global environment, since that is the environment in which the lambda expression was evaluated to produce the procedure. A new binding, which associates the procedure object with the symbol square, has been added to the global frame. In general, define creates definitions by adding bindings to frames.
So, let's analyze each arrow.
"global env" → the square. This arrow appears to simply be labeling the square as symbolizing the global environment. Notably, this environment is the only stack frame alive since define was called in the global environment.
"square" → the two dots. This arrow appears to be stating that whatever those two dots represent is stored at the name "square" which is found in the global environment.
left dot → "parameters"/"body". This arrow indicates that the left dot is an "object" thought to be storing two pieces of data, the "list of formal parameters" and the "procedure body".
right dot → the square. This arrow indicates that the right dot contains a "pointer" back to the global environment.
This diagram is giving a highly operational POV on how symbols derive meaning in Lisp. In particular, a symbol is "evaluated" in a particular "context". A context is a linked list of "environment frames" each containing some set of name→value mappings. To evaluate a symbol one follows that linked list and returns the first value which is mapped from the symbol's name. Diagrammatically an example would be
"foo" → { "bar" : 3 → { "foo" : 8 } → { "foo" : 10 }
, "baz" : 4 }
where evaluating foo returns 8 by "skipping" the first frame and finding the value 8 in the second frame while ignoring the third frame. This ignoring feature is important---it suggests that some contexts might have names which shadow values from larger contexts.
So the whole picture here is indicating the following:
Calling define in the global context adds a new name→value mapping to the global frame.
Storing a lambda object stores two pieces of information (two dots)
The left dot contains the text of the body of the lambda along with a listing of the symbols which are to be considered "formal parameters".
The right dot contains a reference to some stack frame which may or may not be the global frame, although it happens to be the global frame in this picture
Finally, we ought to talk about what it means to evaluate a lambda. To evaluate a lambda you must pass it a list of values. It uses that list of input values and matches them against the formal parameter list it stored in order to generate a new environment frame which maps formal parameters to input values. Then, it evaluates the body of the lambda using that new frame as the primary frame and the linked frame as the follow-up context. Diagrammatically, let's say square looked like
       +--- Formal parameter list
       |   +--- Body of function
       |   |
(left: (x) (* x x))    (right: {global frame})
Then when we evaluate it like (square 3) we create a new frame using 3 and the formal parameter list
{ "x" : 3 }
and evaluate the body. First we look up the name *. Since it's not in our new local frame we have to find it in the global frame.
"*" → { "x" : 3 } → { global frame }
It turns out to exist there and is the definition of multiplication. We thus need to pass it some values so we look up "x"
"x" → { "x" : 3 } → { global frame }
since x is stored in the local frame we find it there and pass 3 and 3 as arguments to the multiplication function we found.
The important part is that the local frame shadows the global frame. That means that if x also had meaning in the global frame we would override it in the context of evaluating the body of square.
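The same shadowing can be seen directly with nested let (a small sketch mirroring the frame chain above):

(let ((x 100))  ; outer frame: { "x" : 100 }
  (let ((x 3))  ; inner frame: { "x" : 3 } → { "x" : 100 }
    (* x x)))   ; => 9: lookup finds the inner x first and never reaches 100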
Finally, since I was asked to answer this question in the context of questions about what the meaning of "variable" is, it's important to note that the above is a very particular implementation of a very particular semantics of variables. At its surface, you can always say that "variables in Lisp mean exactly that this process occurs". That can be a little challenging to work with, though.
Another semantics of the word "variable" (one which I and much of mathematics favor) is the idea that a variable in a context stands for a particular, fixed but unknown value in a domain. If we examine the definition of the lambda in the body of square
(lambda (x) (* x x))
we see that this is more-or-less the intended semantics of this phrase---in interpreting (* x x) we see x as being some value (e.g. a number) but one that we don't know anything about. In interpreting (lambda (x) (* x x)) we see that in order to understand the meaning of the phrase inside of the lambda we must provide it a meaning for x. This is roughly the standard semantics of variables and functions used everywhere.
The challenge is that the stack frame implementation described here is also set up to easily violate this semantics---in fact, it does so very subtly in this example. To be particular: define breaks semantics. The reason is apparent in the following fragment of code
(define foo 3)
foo
(define foo 4)
foo
In this fragment we evaluate each phrase sequentially and see that the (supposedly "fixed but unknown") value of the variable foo changes from line 2 to line 4. This is because define allows us to edit the stack frame that's live in a context rather than merely create a new context which shadows the old one, as lambda does. This means that we must consider variables not as "fixed but unknown" but instead as a series of mutable slots which cannot be guaranteed to maintain their value over time---a much more sophisticated semantics which perhaps should force us to call foo a "slot" or an "assignable".
We can also see this as a leaky abstraction. We would like variables to have the standard "fixed but unknown" semantics, but due to the mechanism of stack frames and the behavior of define we do not completely adhere to that meaning.
As a final note, Lisps often give you a form called let which can be used to replicate the previous example without throwing away variable semantics:
(let ((foo 3))
  foo
  (let ((foo 4))
    foo)
  foo)
In this case, the foo on line 2 takes the value 3, the foo on line 4 exists within a different variable context and thus only shadows the foo on line 2... and thus takes the different fixed value 4, finally the foo on line 5 is again identical to the foo on line 2 and takes the same value.
In other words, let allows us to create arbitrary local contexts (coincidentally by creating new stack frames behind the scenes, as you might expect). The golden rule which lets us know these semantics are safe is called, somewhat unfortunately, α-conversion. This rule states that if you rename a variable everywhere and uniformly within a single context, then the meaning of the program does not change.
Thus the previous example is, by α-conversion, identical in meaning to this one
(let ((foo 3))
  foo
  (let ((bar 4))
    bar)
  foo)
and perhaps slightly less confusing since we no longer need to worry about the effects of shadowing foo.
So can we make Lisp's define semantics safer? Kind of. You might imagine the following transformation:
Disallow cyclic dependencies in sets of define, e.g. (define x y) (define y x) is disallowed while (define x 3) (define y x) isn't.
Move all defines up to the very beginning of any given context (stack frame) and put them in dependency order.
Make it an error to "redefine" any variable
It turns out that this transformation is a little tricky (code movement is tough, and so are cyclic dependencies), but if you iron out some small problems you'll see that in any context a variable can only take exactly one fixed-but-unknown value.
You'll also find the following to hold: any program of the following transformed form
(define x ... definition of x ...)
(define y ... definition of y ...)
(define z ... definition of z ...)
... body ...
is equivalent to the following
(let ((x ... definition of x ...))
  (let ((y ... definition of y ...))
    (let ((z ... definition of z ...))
      ... body ...)))
which is another way of showing that our nice, simple "variable as fixed but unknown quantity" semantics hold.
(This question is a follow-up to the previous one, asked while studying Haskell.)
I used to find the distinction between "variable" and "value" confusing. Therefore I read the wiki page on the lambda calculus as well as the previous answer above, and came up with the interpretations below.
May I confirm whether these are correct? I just want to double-check, because these concepts are quite basic but essential to functional programming. Any advice is welcome.
Premises from wiki:
Lambda Calculus syntax
exp → ID
    | (exp)
    | λ ID.exp    // abstraction
    | exp exp     // application
(Notation: "<=>" means "equivalent to".)
Interpretations:
"value": it is the actual data or instructions stored in computer.
"variable": it is a way locating the data, a value-replacing reference , but not itself the set of data or instruction stored in computer.
"abstraction" <=> "function" ∈ syntactic form. (https://stackoverflow.com/a/25329157/3701346)
"application": it takes an input of "abstraction", and an input of "lambda expression", results in an "lambda expression".
"abstraction" is called "abstraction" because in usual function definition, we abbreviate the (commonly longer) function body into a much shorter form, i.e. a function identifier followed by a list of formal parameters. (Though lambda abstractions are anonymous functions, other functions usually do have name.)
"variable" <=> "symbol" <=> "reference"
a "variable" is associated with a "value" via a process called "binding".
"constant" ∈ "variable"
"literal" ∈ "value"
"formal parameter" ∈ "variable"
"actual parameter"(argument) ∈ "value"
A "variable" can have a "value" of "data"
=> e.g. variable "a" has a value of 3
A "variable"can also have a "value" of "a set of instructions"
=> e.g. an operator "+" is a variable
"value": it is the actual data or instructions stored in computer.
You're trying to think of it very concretely in terms of the machine, which I'm afraid may confuse you. It's better to think of it in terms of math: a value is just a thing that never changes, like the number 42, the letter 'H', or the sequence of letters that constitutes "Hello world".
Another way to think of it is in terms of mental models. We invent mental models in order to reason indirectly about the world; by reasoning about the mental models, we make predictions about things in the real world. We write computer programs to help us work with these mental models reliably and in large volumes.
Values are then things in the mental model. The bits and bytes are just encodings of the model into the computer's architecture.
"variable": it is a way locating the data, a value-replacing reference , but not itself the set of data or instruction stored in computer.
A variable is just a name that stands for a value in a certain scope of the program. Every time a variable is evaluated, its value needs to be looked up in an environment. There are several implementations of this concept in computer terms:
A stack frame in an eager language is an implementation of an environment for looking up the values of local variables on each invocation of a routine.
A linker provides environments for looking up global-scope names when a program is compiled or loaded into memory.
"abstraction" <=> "function" ∈ syntactic form.
Abstraction and function are not equivalent. In the lambda calculus, "abstraction" is a type of syntactic expression, but a function is a value.
One analogy that's not too shabby is names and descriptions vs. things. Names and descriptions are part of language, while things are part of the world. You could say that the meaning of a name or description is the thing that it names or describes.
Languages contain both simple names for things (e.g., 12 is a name for the number twelve) and more complex descriptions of things (5 + 7 is a description of the number twelve). A lambda abstraction is a description of a function; e.g., the expression \x -> x + 7 is a description of the function that adds seven to its argument.
The trick is that when descriptions get very complex, it's not easy to figure out what thing they're describing. If I give you 12345 + 67890, you need to do some amount of work to figure out what number I just described. Computers are machines that do this work way faster and more reliably than we can do it.
"application": it takes an input of "abstraction", and an input of "lambda expression", results in an "lambda expression".
An application is just an expression with two subexpressions, which describes a value by this means:
1. The first subexpression stands for a function.
2. The second subexpression stands for some value.

The application as a whole stands for the value that results from applying the function in (1) to the value from (2).
In formal semantics (and don't be scared of that word) we often use the double brackets ⟦∙⟧ to stand for "the meaning of"; e.g. ⟦dog⟧ = "the meaning of dog." Using that notation:
⟦e1 e2⟧ = ⟦e1⟧(⟦e2⟧)
where e1 and e2 are any two expressions or terms (any variable, abstraction or application).
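For example, ⟦(\x -> x + 7) 5⟧ = ⟦\x -> x + 7⟧(⟦5⟧): the add-seven function applied to the number five, which is the number twelve.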
"abstraction" is called "abstraction" because in usual function definition, we abbreviate the (commonly longer) function body into a much shorter form, i.e. a function identifier followed by a list of formal parameters. (Though lambda abstractions are anonymous functions, other functions usually do have name.)
To tell you the truth, I've never stopped to think whether the term "abstraction" is a good term for this or why it was picked. Generally, with math, it doesn't pay to ask questions like that unless the terms have been very badly picked and mislead people.
"constant" ∈ "variable"
"literal" ∈ "value"
The lambda calculus, in and of itself, doesn't have the concepts of "constant" nor "literal." But one way to define these would be:
A literal is an expression that, because of the rules of the language, always has the same value no matter where it occurs.
A constant, in a purely functional language, is a variable at the topmost scope of a program. Every (non-shadowed) use of that variable will always have the same value in the program.
"formal parameter" ∈ "variable"
"actual parameter"(argument) ∈ "value"
Formal parameter is one kind of use of a variable. In any expression of the form λv.e (where v is a variable and e is an expression), v is used as a formal parameter.
An argument is any expression (not value!) that occurs as the second subexpression of an application.
A "variable" can have a "value" of "data" => e.g. variable "a" has a value of 3
All expressions have values, not just variables. For example, 5 + 7 is an application, and it has the value of twelve.
A "variable"can also have a "value" of "a set of instructions" => e.g. an operator "+" is a variable
The value of + is a function—it's the function that adds its arguments. The set of instructions is an implementation of that function.
Think of a function as an abstract table that says, for each combination of argument values, what the result is. The way the instructions come in is this:
For a lot of functions we cannot literally implement them as a table. In the case of addition it's because the table would be infinitely large.
Even for functions where we can enumerate the cases, we want to implement them much more briefly and efficiently.
But the way you check whether a function implementation is correct is, in some sense, to check that in every case it does the same thing the "infinite table" would do. Two sets of instructions that both check out in this way are really two different implementations of the same function.
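A tiny sketch of that check, in Lisp syntax (the names add-v1 and add-v2 are made up for illustration):

(defun add-v1 (x y) (+ x y))      ; one set of instructions
(defun add-v2 (x y) (- x (- y)))  ; a different set of instructions

;; Both agree on every "table" entry we probe, so they implement the same function:
(loop for (x y) in '((1 2) (0 0) (-3 7))
      always (= (add-v1 x y) (add-v2 x y)))  ; => T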
The word "abstraction" is used because we can't "look inside" a function and see what's going on for the most part so it's "abstract" (contrast with "concrete"). Application is the process of applying a function to an argument. This means that its body is run, but with the thing that's being applied to it replacing the argument name (avoiding any capture). Hopefully this example will explain better than I can (in Haskell syntax. \ represents lambda):
(\x -> x + x) 5 <=> 5 + 5
Here we are applying the lambda expression on the left to the value 5 on the right. We get 5 + 5 as our result (which then may be further reduced to 10).
A "reference" might refer to something somewhat different in the context of Haskell (IORefs and STRefs), but, internally, all bindings ("variables") in Haskell have a layer of indirection like references in other languages (actually, they have even more indirection than that in a way because of the non-strict evaluation).
This mostly looks okay except for the reference issue I mentioned above.
In Haskell, there isn't really a distinction between a variable and a constant.
A "literal" usually is specifically a constructor for a value. For example, 20 constructs the the number 20, but a function application (\x -> 2 * x) 10 wouldn't be considered a literal for 20 because it has an extra step before you get the value.
Right, not all variables are parameters. A parameter is something that is passed to a function. The xs in the lambda expressions above are examples of parameters. A non-example would be something like let a = 15 in a * a. a is a "variable" but not a parameter. Actually, I would call a a "binding" here because it can never change or take on a different value (vary).
The formal parameter vs actual parameter part looks about right.
That looks okay.
I would say that a variable can be a function instead. Usually, in functional programming, we typically think in terms of functions and function applications instead of lists of instructions.
I'd like to point out also that you might get in trouble by thinking of functions as just syntactic forms. You can create new functions by applying certain kinds of higher order functions without using one of the syntactic forms to construct a function directly. A simple example of this is function composition, (.) in Haskell
(f . g) x = f (g x) -- Definition of (.)
(* 10) . (+ 1) <=> \x -> ((* 10) ((+ 1) x)) <=> \x -> 10 * (x + 1)
Writing it as (* 10) . (+ 1) doesn't directly use the lambda syntax or the function definition syntax to create the new function.
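The same point can be made in Lisp with a small helper (a sketch; compose2 is a made-up name, as Common Lisp has no built-in composition operator):

(defun compose2 (f g)
  ;; Return a new function computing (f (g x)), built without writing
  ;; that lambda at the use site.
  (lambda (x) (funcall f (funcall g x))))

(funcall (compose2 (lambda (x) (* 10 x))
                   (lambda (x) (+ 1 x)))
         4)
;; => 50, like ((* 10) . (+ 1)) 4 in Haskell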
How is this possible? What is going on there?
Is there a name for this?
What other languages have this same behavior?
Any without a strong typing system?
This behaviour is really simple and intuitive if you look at the types. To avoid the complications of infix operators like +, I'm going to use the function plus instead. I'm also going to specialise plus to work only on Int, to reduce the typeclass line noise.
Say we have a function plus, of type Int -> Int -> Int. One way to read that is "a function of two Ints that returns an Int". But that notation is a little clumsy for that reading, isn't it? The return type isn't singled out specially anywhere. Why would we write function type signatures this way? Because the -> is right associative, an equivalent type would be Int -> (Int -> Int). This looks much more like it's saying "a function from an Int to (a function from an Int to an Int)". But those two types are in fact exactly the same, and the latter interpretation is the key to understanding how this behaviour works.
Haskell views all functions as being from a single argument to a single result. There may be computations you have in mind where the result depends on two or more inputs (such as plus). Haskell says that the function plus is a function that takes a single input, and produces an output which is another function. This second function takes a single input and produces an output which is a number. Because the second function was computed by the first (and will be different for different inputs to the first function), the "final" output can depend on both inputs, so we can implement computations with multiple inputs using functions that take only single inputs.
I promised this would be really easy to understand if you looked at the types. Here's some example expressions with their types explicitly annotated:
plus :: Int -> Int -> Int
plus 2 :: Int -> Int
plus 2 3 :: Int
If something is a function and you apply it to an argument, to get the type of the result of that application all you need to do is remove everything up to the first arrow from the function's type. If that leaves a type that has more arrows, then you still have a function! As you add arguments to the right of an expression, you remove parameter types from the left of its type. The type makes it immediately clear what the type of every intermediate result is, and why plus 2 is a function which can be further applied (its type has an arrow) and plus 2 3 is not (its type doesn't have an arrow).
"Currying" is the process of turning a function of two arguments into a function of one argument that returns a function of another argument that returns whatever the original function returned. It's also used to refer to the property of languages like Haskell that automatically have all functions work this way; people will say that Haskell "is a curried language" or "has currying", or "has curried functions".
Note that this works particularly elegantly because Haskell's syntax for function application is simple token adjacency. You are free to read plus 2 3 as the application of plus to 2 arguments, or the application of plus to 2 and then the application of the result to 3; you can mentally model it whichever way most fits what you're doing at the time.
In languages with C-like function application by parenthesised argument list, this breaks down a bit. plus(2, 3) is very different from plus(2)(3), and in languages with this syntax the two versions of plus involved would probably have different types. So languages with that kind of syntax tend not to have all functions be curried all the time, or even to have automatic currying of any function you like. But such languages have historically also tended not to have functions as first class values, which makes the lack of currying a moot point.
In Haskell, all functions take exactly 1 input, and produce exactly 1 output. Sometimes, the input to or output of a function can be another function. The input to or output of a function can also be a tuple. You can simulate a function with multiple inputs in one of two ways:
Use a tuple as input
(in1, in2) -> out
Use a function as output*
in1 -> (in2 -> out)
Likewise, you can simulate a function with multiple outputs in one of two ways:
Use a tuple as output*
in -> (out1, out2)
Use a function as a "second input" (a la function-as-output)
in -> ((out1 -> (out2 -> a)) -> a)
*this way is typically favored by Haskellers
The (+) function simulates taking 2 inputs in the typical Haskell way of producing a function as output. (Specializing to Int for ease of communication:)
(+) :: Int -> (Int -> Int)
For the sake of convenience, -> is right-associative, so the type signature for (+) can also be written
(+) :: Int -> Int -> Int
(+) is a function that takes in a number, and produces another function from number to number.
(+) 5 is the result of applying (+) to the argument 5, therefore, it is a function from number to number.
(5 +) is another way to write (+) 5
2 + 3 is another way of writing (+) 2 3. Function application is left-associative, so this is another way of writing (((+) 2) 3). In other words: Apply the function (+) to the input 2. The result will be a function. Take that function, and apply it to the input 3. The result of that is a number.
Therefore, (+) is a function, (5 +) is a function, and (+) 2 3 is a number.
In Haskell, you can take a function of two arguments, apply it to one argument, and get a function of one argument. In fact, strictly speaking, + isn't a function of two arguments, it's a function of one argument that returns a function of one argument.
In layman's terms, + is the actual function, and it waits to receive its parameters (in this case 2) before it returns a result. If you don't give it both parameters, then it remains a function waiting for the other parameter.
It's called Currying
Lots of functional languages (Scala, Scheme, etc.).
Most functional languages are strongly typed, but in the end this is a good thing because it reduces errors, which matters in enterprise or critical systems.
As a side note, the language Haskell is named after Haskell Curry, who rediscovered the phenomenon of functional currying while working on combinatory logic.
Languages like Haskell or OCaml have a syntax that lends itself particularly well to currying, but you can do it in other languages with dynamic typing, like currying in Scheme.
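For instance, a hand-curried plus in Lisp (a sketch; curried-plus is a made-up name):

(defun curried-plus (x)
  (lambda (y) (+ x y)))        ; take X now, return a function awaiting Y

(funcall (curried-plus 2) 3)   ; => 5, the analogue of Haskell's plus 2 3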
Here is a strange function in Scheme:
(define f
  (call/cc
    (lambda (x) x)))

(((f 'f) f) 1)
When f is called at the command line, the result displayed is f.
What is the explanation of this mechanism ?..
Thanks!
You've just stumbled upon 'continuations', possibly the hardest thing of Scheme to understand.
call/cc is an abbreviation for call-with-current-continuation; what the procedure does is take a single-argument function as its own argument and call it with the current 'continuation'.
So what's a continuation? That's infamously hard to explain and you should probably google it to get a better explanation than mine. But a continuation is simply a function of one argument, whose body represents a certain 'continuation' of a value.
For example, when we have (+ 2 (* 2 exp)), with exp being some expression: if we evaluate that expression, there is a 'continuation' waiting for its result, a place where evaluation continues. If it evaluates to 3, for instance, that value is inserted into the expression (* 2 3), and evaluation goes on from there with the next 'continuation', the place where evaluation continues, which is (+ 2 ...).
In almost all contexts in programming languages, the place where computation continues with the value is the same place it started from, though the return statement in many languages is a key counterexample: the continuation is at a totally different place than the return statement itself.
In Scheme, you have direct control over your continuations; you can 'capture' them, as done here. What f does is nothing more than evaluate to the current continuation: after all, when (lambda (x) x) is called with the current continuation, it just evaluates to it, and so the entire function body does. As I said, continuations are themselves functions whose body can be seen as the continuation they capture, and the designers of Scheme famously showed that continuations can be expressed simply as lambda abstractions.
So in the code, f first evaluates to the continuation it was captured in. Then this continuation, as a function, is applied to 'f (a symbol). This means that the symbol is brought back to that continuation, where it is evaluated again as a symbol to reveal the function it is bound to, which again is called with a symbol as its argument, which is finally displayed.
Kind of mind-boggling. If you've seen the film 'Primer', maybe this explains it:
http://thisdomainisirrelevant.net/1047
I'm trying to pass a list to a function in Lisp, and change the contents of that list within the function without affecting the original list. I've read that Lisp is pass-by-value, and it's true, but there is something else going on that I don't quite understand. For example, this code works as expected:
(defun test ()
  (setf original '(a b c))
  (modify original)
  (print original))

(defun modify (n)
  (setf n '(x y z))
  n)
If you call (test), it prints (a b c) even though (modify) returns (x y z).
However, it doesn't work that way if you try to change just part of the list. I assume this has something to do with lists that have the same content being the same in memory everywhere or something like that? Here is an example:
(defun test ()
  (setf original '(a b c))
  (modify original)
  (print original))

(defun modify (n)
  (setf (first n) 'x)
  n)
Then (test) prints (x b c). So how do I change some elements of a list parameter in a function, as if that list was local to that function?
Lisp lists are based on cons cells. Variables are like pointers to cons cells (or other Lisp objects). Changing a variable will not change other variables. Changing cons cells will be visible in all places where there are references to those cons cells.
A good book is Touretzky, Common Lisp: A Gentle Introduction to Symbolic Computation.
There is also software that draws trees of lists and cons cells.
If you pass a list to a function like this:
(modify (list 1 2 3))
Then you have three different ways to use the list:
destructive modification of cons cells
(defun modify (list)
  (setf (first list) 'foo))  ; This sets the CAR of the first cons cell to 'foo.
structure sharing
(defun modify (list)
  (cons 'bar (rest list)))
The above returns a list that shares structure with the passed-in list: the rest elements are the same in both lists.
copying
(defun modify (list)
  (cons 'baz (copy-list (rest list))))
The version above is similar to the structure-sharing one, but no cons cells are shared, since the rest of the list is copied.
Needless to say, destructive modification should often be avoided unless there is a real reason to do it (like saving memory when it is worth it).
Notes:
never destructively modify literal constant lists
Don't do: (let ((l '(a b c))) (setf (first l) 'bar))
Reason: the list may be write protected, or may be shared with other lists (arranged by the compiler), etc.
Also:
Introduce variables, like this:
(let ((original (list 'a 'b 'c)))
  (setf (first original) 'bar))
or like this
(defun foo (original-list)
  (setf (first original-list) 'bar))
never SETF an undefined variable.
SETF modifies a place. n can be a place. The first element of the list n points to can also be a place.
In both cases, the list held by original is passed to modify as its parameter n. This means that both original in the function test and n in the function modify now hold the same list, which means that both original and n now point to its first element.
After SETF modifies n in the first case, it no longer points to that list but to a new list. The list pointed to by original is unaffected. The new list is then returned by modify, but since this value is not assigned to anything, it fades out of existence and will soon be garbage collected.
In the second case, SETF modifies not n, but the first element of the list n points to. This is the same list original points to, so, afterwards, you can see the modified list also through this variable.
In order to copy a list, use COPY-LIST.
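For the original question, that means copying inside modify before mutating (a minimal sketch):

(defun modify (n)
  (let ((local (copy-list n)))  ; fresh cons cells; only the elements are shared
    (setf (first local) 'x)
    local))                     ; the caller's list is left untouched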
It's nearly the same as this example in C:
void modify1(char *p) {
    p = "hi";    /* rebinds the local copy of the pointer; the caller's pointer is unaffected */
}

void modify2(char *p) {
    p[0] = 'h';  /* writes through the pointer; the caller sees the change */
}
In both cases a pointer is passed. If you change the pointer, you're changing the function's local copy of the pointer value (which is on the stack); if you change the contents, you're changing the value of whatever object was pointed to.
You probably have problems because, even though Lisp is pass-by-value, the values passed are references to objects, as in Java or Python. Your cons cells contain references which you modify, so you modify both the original and the local one.
IMO, you should try to write functions in a more functional style to avoid such problems. Even though Common Lisp is multi-paradigm, a functional style is usually more appropriate:
(defun modify (n)
  (cons 'x (cdr n)))
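For example:

(let ((original (list 'a 'b 'c)))
  (list (modify original) original))
;; => ((X B C) (A B C)) ; the new list shares (B C), and ORIGINAL is unchanged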