Checking understanding of: "variable" vs. "value", and "function" vs. "abstraction"

(This question is a follow-up to this one, asked while studying Haskell.)
I used to find the distinction between "variable" and "value" confusing. Therefore I read the Wikipedia page on the lambda calculus as well as the previous answer above, and came up with the interpretations below.
May I confirm whether these are correct? I just want to double-check, because these concepts are quite basic but essential to functional programming. Any advice is welcome.
Premises from wiki:
Lambda Calculus syntax
exp → ID
| (exp)
| λ ID.exp // abstraction
| exp exp // application
(Notation: "<=>" means "is equivalent to".)
Interpretations:
"value": it is the actual data or instructions stored in computer.
"variable": it is a way locating the data, a value-replacing reference , but not itself the set of data or instruction stored in computer.
"abstraction" <=> "function" ∈ syntactic form. (https://stackoverflow.com/a/25329157/3701346)
"application": it takes an input of "abstraction", and an input of "lambda expression", results in an "lambda expression".
"abstraction" is called "abstraction" because in usual function definition, we abbreviate the (commonly longer) function body into a much shorter form, i.e. a function identifier followed by a list of formal parameters. (Though lambda abstractions are anonymous functions, other functions usually do have name.)
"variable" <=> "symbol" <=> "reference"
a "variable" is associated with a "value" via a process called "binding".
"constant" ∈ "variable"
"literal" ∈ "value"
"formal parameter" ∈ "variable"
"actual parameter"(argument) ∈ "value"
A "variable" can have a "value" of "data"
=> e.g. variable "a" has a value of 3
A "variable"can also have a "value" of "a set of instructions"
=> e.g. an operator "+" is a variable

"value": it is the actual data or instructions stored in computer.
You're trying to think of it very concretely in terms of the machine, which I'm afraid may confuse you. It's better to think of it in terms of math: a value is just a thing that never changes, like the number 42, the letter 'H', or the sequence of letters that constitutes "Hello world".
Another way to think of it is in terms of mental models. We invent mental models in order to reason indirectly about the world; by reasoning about the mental models, we make predictions about things in the real world. We write computer programs to help us work with these mental models reliably and in large volumes.
Values are then things in the mental model. The bits and bytes are just encodings of the model into the computer's architecture.
"variable": it is a way locating the data, a value-replacing reference , but not itself the set of data or instruction stored in computer.
A variable is just a name that stands for a value in a certain scope of the program. Every time a variable is evaluated, its value needs to be looked up in an environment. There are several implementations of this concept in computer terms:
A stack frame in an eager language is an implementation of an environment for looking up the values of local variables, on each invocation of a routine.
A linker provides environments for looking up global-scope names when a program is compiled or loaded into memory.
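As a very rough sketch of what "looking a variable up in an environment" means (in Haskell; Env and lookupVar are illustrative names, and values are simplified to Int):
type Env = [(String, Int)]       -- an environment: variable names paired with their values

lookupVar :: String -> Env -> Maybe Int
lookupVar name env = lookup name env

-- e.g. lookupVar "a" [("a", 3), ("b", 7)] == Just 3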
"abstraction" <=> "function" ∈ syntactic form.
Abstraction and function are not equivalent. In the lambda calculus, an "abstraction" is a kind of syntactic expression, but a function is a value.
One analogy that's not too shabby is names and descriptions vs. things. Names and descriptions are part of language, while things are part of the world. You could say that the meaning of a name or description is the thing that it names or describes.
Languages contain both simple names for things (e.g., 12 is a name for the number twelve) and more complex descriptions of things (5 + 7 is a description of the number twelve). A lambda abstraction is a description of a function; e.g., the expression \x -> x + 7 is a description of the function that adds seven to its argument.
The trick is that when descriptions get very complex, it's not easy to figure out what thing they're describing. If I give you 12345 + 67890, you need to do some amount of work to figure out what number I just described. Computers are machines that do this work way faster and more reliably than we can do it.
"application": it takes an input of "abstraction", and an input of "lambda expression", results in an "lambda expression".
An application is just an expression with two subexpressions, which describes a value as follows:
The first subexpression stands for a function.
The second subexpression stands for some value.
The application as a whole stands for the value that results from applying the function in (1) to the value from (2).
In formal semantics (and don't be scared of that word) we often use the double brackets ⟦∙⟧ to stand for "the meaning of"; e.g. ⟦dog⟧ = "the meaning of dog." Using that notation:
⟦e1 e2⟧ = ⟦e1⟧(⟦e2⟧)
where e1 and e2 are any two expressions or terms (any variable, abstraction or application).
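To make that equation concrete, here is a toy evaluator sketch in Haskell (Expr, Value and eval are made-up names for illustration; the Lit case is an extra so there is something to compute):
data Value = VInt Int | VFun (Value -> Value)

data Expr
  = Lit Int           -- a numeric literal (an extra, for the example)
  | Var String        -- ID
  | Lam String Expr   -- abstraction
  | App Expr Expr     -- application

eval :: [(String, Value)] -> Expr -> Value
eval _   (Lit n)     = VInt n
eval env (Var x)     = case lookup x env of
                         Just v  -> v
                         Nothing -> error ("unbound variable: " ++ x)
eval env (Lam x e)   = VFun (\v -> eval ((x, v) : env) e)
eval env (App e1 e2) =                               -- ⟦e1 e2⟧ = ⟦e1⟧(⟦e2⟧)
  case eval env e1 of
    VFun f -> f (eval env e2)
    VInt _ -> error "applied a non-function"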
"abstraction" is called "abstraction" because in usual function definition, we abbreviate the (commonly longer) function body into a much shorter form, i.e. a function identifier followed by a list of formal parameters. (Though lambda abstractions are anonymous functions, other functions usually do have name.)
To tell you the truth, I've never stopped to think whether the term "abstraction" is a good term for this or why it was picked. Generally, with math, it doesn't pay to ask questions like that unless the terms have been very badly picked and mislead people.
"constant" ∈ "variable"
"literal" ∈ "value"
The lambda calculus, in and of itself, doesn't have the concepts of "constant" or "literal." But one way to define these would be:
A literal is an expression that, because of the rules of the language, always has the same value no matter where it occurs.
A constant, in a purely functional language, is a variable at the topmost scope of a program. Every (non-shadowed) use of that variable will always have the same value in the program.
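In Haskell terms, a minimal sketch of the two notions:
answer :: Int
answer = 42      -- answer is a top-level binding, a "constant" in the sense above;
                 -- 42 itself is a literal: it denotes the same value wherever it appears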
"formal parameter" ∈ "variable"
"actual parameter"(argument) ∈ "value"
A formal parameter is one kind of use of a variable. In any expression of the form λv.e (where v is a variable and e is an expression), v is a formal parameter.
An argument is any expression (not value!) that occurs as the second subexpression of an application.
A "variable" can have a "value" of "data" => e.g. variable "a" has a value of 3
All expressions have values, not just variables. For example, 5 + 7 is an application, and it has the value of twelve.
A "variable"can also have a "value" of "a set of instructions" => e.g. an operator "+" is a variable
The value of + is a function—it's the function that adds its arguments. The set of instructions is an implementation of that function.
Think of a function as an abstract table that says, for each combination of argument values, what the result is. The way the instructions come in is this:
For a lot of functions we cannot literally implement them as a table. In the case of addition it's because the table would be infinitely large.
Even for functions where we can enumerate the cases, we want to implement them much more briefly and efficiently.
But the way you check whether a function implementation is correct is, in some sense, to check that in every case it does the same thing the "infinite table" would do. Two sets of instructions that both check out in this way are really two different implementations of the same function.
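For example (a small sketch; doubleA and doubleB are made-up names), two different sets of instructions that agree on every input are implementations of the same function:
doubleA :: Int -> Int
doubleA n = n + n           -- one set of instructions

doubleB :: Int -> Int
doubleB n = 2 * n           -- a different set of instructions

-- A spot check over a finite range (the full "infinite table" can't be enumerated):
agreeOnSamples :: Bool
agreeOnSamples = all (\n -> doubleA n == doubleB n) [-1000 .. 1000]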

The word "abstraction" is used because we can't "look inside" a function and see what's going on for the most part so it's "abstract" (contrast with "concrete"). Application is the process of applying a function to an argument. This means that its body is run, but with the thing that's being applied to it replacing the argument name (avoiding any capture). Hopefully this example will explain better than I can (in Haskell syntax. \ represents lambda):
(\x -> x + x) 5 <=> 5 + 5
Here we are applying the lambda expression on the left to the value 5 on the right. We get 5 + 5 as our result (which then may be further reduced to 10).
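The same kind of reduction works when the argument is itself a function (a small sketch; example1 and example2 are illustrative names):
example1 :: Int
example1 = (\x -> x + x) 5           -- reduces to 5 + 5, i.e. 10

example2 :: Int
example2 = (\f -> f (f 3)) (+ 1)     -- reduces to (+ 1) ((+ 1) 3), i.e. 5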
A "reference" might refer to something somewhat different in the context of Haskell (IORefs and STRefs), but, internally, all bindings ("variables") in Haskell have a layer of indirection like references in other languages (actually, they have even more indirection than that in a way because of the non-strict evaluation).
This mostly looks okay except for the reference issue I mentioned above.
In Haskell, there isn't really a distinction between a variable and a constant.
A "literal" usually is specifically a constructor for a value. For example, 20 constructs the the number 20, but a function application (\x -> 2 * x) 10 wouldn't be considered a literal for 20 because it has an extra step before you get the value.
Right, not all variables are parameters. A parameter is something that is passed to a function. The xs in the lambda expressions above are examples of parameters. A non-example would be something like let a = 15 in a * a. a is a "variable" but not a parameter. Actually, I would call a a "binding" here because it can never change or take on a different value (vary).
The formal parameter vs actual parameter part looks about right.
That looks okay.
I would say that a variable can be a function instead. Usually, in functional programming, we typically think in terms of functions and function applications instead of lists of instructions.
I'd like to point out also that you might get in trouble by thinking of functions as just syntactic forms. You can create new functions by applying certain kinds of higher order functions without using one of the syntactic forms to construct a function directly. A simple example of this is function composition, (.) in Haskell
(f . g) x = f (g x) -- Definition of (.)
(* 10) . (+ 1) <=> \x -> ((* 10) ((+ 1) x)) <=> \x -> 10 * (x + 1)
Writing it as (* 10) . (+ 1) doesn't directly use the lambda syntax or the function definition syntax to create the new function.
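A quick sanity check (a small sketch; the name timesTenAfterPlusOne is just for illustration):
timesTenAfterPlusOne :: Int -> Int
timesTenAfterPlusOne = (* 10) . (+ 1)

-- timesTenAfterPlusOne 4 == 50, the same result as (\x -> 10 * (x + 1)) 4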

Related

SML : why functions always take one-argument make language flexible

I have learned (from an SML book) that functions in SML always take just one argument: a tuple. A function that takes multiple arguments is just a function that takes one tuple as an argument, implemented with a tuple binding in the function binding. I understand this point.
But after this, the book says something that I don't understand:
this point makes the SML language a flexible and elegant design, and you can do something useful that you cannot do in Java.
Why does this design make the language flexible? What is the text referring to that SML can do but Java cannot?
Using tuples instead of multiple arguments adds flexibility in the sense that higher-order functions can work with functions of any "arity". For example to create the list [f x, f y, f z], you can use the higher-order function map like this:
map f [x, y, z]
That's easy enough - you can do that in any language. But now let's consider the case where f actually needs two arguments. If f were a true binary function (supposing SML had such functions), we'd need a different version of map that can work with binary functions instead of unary functions (and if we wanted to use a 3-ary function, we'd need a version for those as well). However, using tuples we can just write it like this:
map f [(x,a), (y,b), (z,c)]
This will create the list [f (x,a), f (y,b), f (z,c)].
PS: It's not really true that all functions that need multiple arguments take tuples in SML. Often functions use currying, not tuples, to represent multiple arguments, but I suppose your book hasn't gotten to currying yet. Curried functions can't be used in the same way as described above, so they're not as general in that sense.
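For comparison, the same trick reads almost identically in Haskell, where an uncurried function also takes a single tuple argument (a small sketch; f and results are illustrative names):
f :: (Int, Int) -> Int
f (x, a) = x + a                              -- a "two-argument" function taking one pair

results :: [Int]
results = map f [(1, 10), (2, 20), (3, 30)]   -- [11, 22, 33]; plain unary map suffices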
Actually I don't think you really understand this at all.
First of all, functions in SML don't take a tuple as an argument; they can take anything as an argument. It is just sometimes convenient to use tuples as a means of passing multiple arguments. For example a function may take a record as argument, an integer, a string, or it may even take another function as argument. One could also say that it can take "no arguments" in the sense that it may take unit as the argument.
If I understand your statement correctly about functions that take "multiple arguments", you are talking about currying. For example
fun add x y = x + y
In SML, currying is implemented as a derived form (syntactic sugar). See this answer for an elaboration on how this actually works. In summary, there are only anonymous functions in SML; however, we can bind them to names such that they may be "referred to"/used later.
Behold, ramblings about to start.
Before talking about flexibility of anything, I think it would be in order to state how I think of it. I quite like this definition of flexibility of programming languages: "[...] the unexpectedly many ways in which utterings in the language can be used"
In the case of SML, a small and simple core language has been chosen. This makes implementing compilers and interpreters easy. The flexibility comes in the form that many features of the SML language have been implemented using these core language features, such as anonymous functions, pattern matching and the fact that SML has higher-order functions.
Examples of this are currying, case expressions, record selectors, if-then-else expressions, and expression sequences.
I would say that this makes the SML core language very flexible and frankly quite elegant.
I'm not quite sure where the author was going regarding what SML can do that Java can't (in this context). However, I'm quite sure that the author might be a bit biased, as you can do anything in Java as well. It might just take an immense amount of coding :)

In Haskell, (+) is a function, ((+) 2) is a function, ((+) 2 3) is 5. What exactly is going on there?

How is this possible, what is going on there?
Is there a name for this?
What other languages have this same behavior?
Any without the strong typing system?
This behaviour is really simple and intuitive if you look at the types. To avoid the complications of infix operators like +, I'm going to use the function plus instead. I'm also going to specialise plus to work only on Int, to reduce the typeclass line noise.
Say we have a function plus, of type Int -> Int -> Int. One way to read that is "a function of two Ints that returns an Int". But that notation is a little clumsy for that reading, isn't it? The return type isn't singled out specially anywhere. Why would we write function type signatures this way? Because the -> is right associative, an equivalent type would be Int -> (Int -> Int). This looks much more like it's saying "a function from an Int to (a function from an Int to an Int)". But those two types are in fact exactly the same, and the latter interpretation is the key to understanding how this behaviour works.
Haskell views all functions as being from a single argument to a single result. There may be computations you have in mind where the result of the computation depends on two or more inputs (such as plus). Haskell says that the function plus is a function that takes a single input, and produces an output which is another function. This second function takes a single input and produces an output which is a number. Because the second function was computed by the first (and will be different for different inputs to the first function), the "final" output can depend on both the inputs, so we can implement computations with multiple inputs with these functions that take only single inputs.
I promised this would be really easy to understand if you looked at the types. Here's some example expressions with their types explicitly annotated:
plus :: Int -> Int -> Int
plus 2 :: Int -> Int
plus 2 3 :: Int
If something is a function and you apply it to an argument, to get the type of the result of that application all you need to do is remove everything up to the first arrow from the function's type. If that leaves a type that has more arrows, then you still have a function! As you add arguments to the right of an expression, you remove parameter types from the left of its type. The type makes it immediately clear what the types of all the intermediate results are, and why plus 2 is a function which can be further applied (its type has an arrow) and plus 2 3 is not (its type doesn't have an arrow).
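One practical payoff of this (a small sketch, reusing the specialised plus from above): because plus 2 is itself a function of type Int -> Int, it can be handed directly to a higher-order function such as map:
plus :: Int -> Int -> Int
plus = (+)

incremented :: [Int]
incremented = map (plus 2) [1, 2, 3]   -- [3, 4, 5]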
"Currying" is the process of turning a function of two arguments into a function of one argument that returns a function of another argument that returns whatever the original function returned. It's also used to refer to the property of languages like Haskell that automatically have all functions work this way; people will say that Haskell "is a curried language" or "has currying", or "has curried functions".
Note that this works particularly elegantly because Haskell's syntax for function application is simple token adjacency. You are free to read plus 2 3 as the application of plus to 2 arguments, or the application of plus to 2 and then the application of the result to 3; you can mentally model it whichever way most fits what you're doing at the time.
In languages with C-like function application by parenthesised argument list, this breaks down a bit. plus(2, 3) is very different from plus(2)(3), and in languages with this syntax the two versions of plus involved would probably have different types. So languages with that kind of syntax tend not to have all functions be curried all the time, or even to have automatic currying of any function you like. But such languages have historically also tended not to have functions as first class values, which makes the lack of currying a moot point.
In Haskell, all functions take exactly 1 input, and produce exactly 1 output. Sometimes, the input to or output of a function can be another function. The input to or output of a function can also be a tuple. You can simulate a function with multiple inputs in one of two ways:
Use a tuple as input
(in1, in2) -> out
Use a function as output*
in1 -> (in2 -> out)
Likewise, you can simulate a function with multiple outputs in one of two ways:
Use a tuple as output*
in -> (out1, out2)
Use a function as a "second input" (a la function-as-output)
in -> ((out1 -> (out2 -> a)) -> a)
*this way is typically favored by Haskellers
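The Prelude's curry and uncurry functions convert between the two input encodings above (a small sketch; addPair and addCurried are made-up names):
addPair :: (Int, Int) -> Int        -- "use a tuple as input"
addPair (x, y) = x + y

addCurried :: Int -> (Int -> Int)   -- "use a function as output"
addCurried x y = x + y

-- curry addPair 2 3         == 5
-- uncurry addCurried (2, 3) == 5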
The (+) function simulates taking 2 inputs in the typical Haskell way of producing a function as output. (Specializing to Int for ease of communication:)
(+) :: Int -> (Int -> Int)
For the sake of convenience, -> is right-associative, so the type signature for (+) can also be written
(+) :: Int -> Int -> Int
(+) is a function that takes in a number, and produces another function from number to number.
(+) 5 is the result of applying (+) to the argument 5, therefore, it is a function from number to number.
(5 +) is another way to write (+) 5
2 + 3 is another way of writing (+) 2 3. Function application is left-associative, so this is another way of writing (((+) 2) 3). In other words: Apply the function (+) to the input 2. The result will be a function. Take that function, and apply it to the input 3. The result of that is a number.
Therefore, (+) is a function, (5 +) is a function, and (+) 2 3 is a number.
In Haskell, you can take a function of two arguments, apply it to one argument, and get a function of one argument. In fact, strictly speaking, + isn't a function of two arguments, it's a function of one argument that returns a function of one argument.
In layman's terms, + is the actual function, and it waits to receive all of its parameters (in this case 2) before producing a result. If you don't give it both parameters, then it remains a function waiting for the remaining parameter.
It's called currying.
Lots of functional languages (Scala, Scheme, etc.)
Most functional languages are strongly typed, but this is good in the end because it reduces errors, which works well in enterprise or critical systems.
As a side note, the language Haskell is named after Haskell Curry, who re-discovered the phenomenon of currying while working on combinatory logic.
Languages like Haskell or OCaml have a syntax that lends itself particularly to currying, but you can do it in other languages with dynamic typing, like currying in Scheme.

Does the term "monad" apply to values of types like Maybe or List, or does it instead apply only to the types themselves?

I've noticed that the word "monad" seems to be used in a somewhat inconsistent way. I've come to believe that this is because many (if not most) of the monad tutorials out there are written by folks who have only just started to figure monads out themselves (eg: nuclear waste spacesuit burritos), and so the term ends up getting kind of overloaded/corrupted.
In particular, I'm wondering whether the term "monad" can be applied to individual values of types like Maybe, List or IO, or if the term "monad" should really only be applied to the types themselves.
This is a subtle distinction, so perhaps an analogy might make it more clear. In mathematics we have rings, fields, groups, etc. These terms apply to an entire set of values along with the operations that can be performed on them, rather than to individual elements. For example, integers (along with the operations of addition, negation and multiplication) form a ring. You could say "Integer is a ring", but you would never say "5 is a ring".
So, can you say "Just 5 is a monad", or would that be as wrong as saying "5 is a ring"? I don't know category theory, but I'm under the impression that it really only makes sense to say "Maybe is a monad" and not "Just 5 is a monad".
"Monad" (and "Functor") are popularly misused as describing values.
No value is a monad, functor, monoid, applicative functor, etc.
Only types & type constructors (higher-kinded types) can be.
When you hear (and you will) that "lists are monoids" or "functions are monads", etc, or "this function takes a monad as an argument", don't believe it.
Ask the speaker "How can any value be a monoid (or monad or ...), considering that Haskell's classes classify types (including higher-order ones) rather than values?"
Lists are not monoids (etc). List a is.
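One way to see this in code (a small sketch; combineAll is an illustrative name): a class constraint applies to the type m, never to any particular value:
combineAll :: Monoid m => [m] -> m   -- "m is a monoid" constrains the *type* m
combineAll = mconcat

-- e.g. combineAll [[1, 2], [3]] == [1, 2, 3]   (here m is the type [Int])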
My guess is that this popular misuse stems from mainstream languages having value classes and not type classes, so that habitual, unconscious value-class thinking sneaks in.
Why does it matter whether we use language precisely?
Because we think in language and we build & convey understandings via language.
So in order to have clear thoughts, it helps to have clear language (or be able to at any time).
"The slovenliness of our language makes it easier for us to have foolish thoughts. The point is that the process is reversible." - George Orwell, Politics and the English Language
Edit: These remarks apply to Haskell, not to the more general setting of category theory.
List is a monad, List a is a type, and [] is a List a (an element of a type).
Technically, a monad is a functor with extra structure; and in Haskell we only use functors from the category of Haskell types to itself.
It is thus in particular a "function" which takes a type and returns another type (it has kind * -> *).
List, State s, Maybe, etc are monads. State is not a monad, since it has kind * -> * -> *.
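A small sketch of the kind discipline (wrap, justFive and listFive are illustrative names): a Monad constraint can only be satisfied by a type constructor of kind * -> *:
wrap :: Monad m => a -> m a
wrap = return

justFive :: Maybe Int
justFive = wrap 5      -- fine: Maybe has kind * -> *

listFive :: [Int]
listFive = wrap 5      -- fine: [] has kind * -> *

-- wrap 5 :: Int       -- would be rejected: Int has kind *, not * -> *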
(aside: to confuse matters, Monads are just functors, and if I give myself a partially ordered set A, then it forms a category, with Hom(a, b) = { 1 element } if a <= b and Hom(a, b) = empty otherwise. Now any increasing function f : A -> A forms a functor, and monads are those functions which satisfy x <= f(x) and f(f(x)) <= f(x), hence f(f(x)) = f(x) -- monads here are technically "elements of A -> A". See also closure operators.)
(aside 2: since you appear to know some mathematics, I encourage you to read about category theory. You'll see among others that algebraic structures can be seen as arising from monads. See this excellent blog entry from the excellent blog by Dan Piponi for a teaser.)
To be exact, monads are structures from category theory. They don't have a direct code counterpart. For simplicity let's talk about general functors instead of monads. In the case of Haskell roughly speaking a functor is a mapping from a class of types to a class of types that also maps functions in the first class to functions in the second. The Functor instance gives you access to the mapping function, but doesn't directly capture the concept of functors.
It is however fair to say that the type constructor as mentioned in the Functor instance is the actual functor:
instance Functor Tree
In this case Tree is the functor. However, because Tree is a type constructor, it can't stand for both of the mappings (on types and on functions) that make up a functor at the same time. The function that maps functions is called fmap. So if you want to be precise you have to say that the tuple (Tree, fmap) is the functor, where fmap is the particular fmap from Tree's Functor instance. For convenience, again, we say that Tree is the functor, because the corresponding fmap follows from its Functor instance.
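For concreteness, one possible Tree and its Functor instance (a sketch; this particular shape of Tree is only an assumption for illustration):
data Tree a = Leaf | Node (Tree a) a (Tree a)

instance Functor Tree where
  fmap _ Leaf         = Leaf
  fmap f (Node l x r) = Node (fmap f l) (f x) (fmap f r)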
Note that functors are always types of kind * -> *. So Maybe Int is not a functor – the functor is Maybe. Also people often talk about "the state monad", which is also imprecise. State is a whole family of infinitely many state monads, as you can see in the instance:
instance Monad (State s)
For every type s the type constructor State s (of kind * -> *) is a state monad, one of many.
So, can you say "Just 5 is a monad", or would that be as wrong as saying "5 is a ring"?
Your intuition is exactly right. Int is to Ring (or AbelianGroup or whatever) as Maybe is to Monad (or Functor or whatever). Values (5, Just 5, etc.) are unimportant.
In algebra, we say the set of integers form a ring; in Haskell we would say (informally) that Int is a member of the Ring typeclass, or (slightly more formally) that there exists a Ring instance for Int. You might find this proposal fun and/or useful. Anyway, same deal with monads.
I don't know category theory, but ...
Whatever, if you know a thing or two about abstract algebra, you're golden.
I would say "Just 5 is of a type that is an instance of Monad", like I would say "5 is a number whose type (Integer) is a ring".
I use the term "instance" because that is how in Haskell you declare an implementation of a typeclass, and Monad is one of them.

Clojure - test for equality of function expression?

Suppose I have the following clojure functions:
(defn a [x] (* x x))
(def b (fn [x] (* x x)))
(def c (eval (read-string "(defn d [x] (* x x))")))
Is there a way to test for the equality of the function expression - some equivalent of
(eqls a b)
returns true?
It depends on precisely what you mean by "equality of the function expression".
These functions are going to end up as bytecode, so I could for example dump the bytecode corresponding to each function to a byte[] and then compare the two bytecode arrays.
However, there are many different ways of writing semantically equivalent methods, that wouldn't have the same representation in bytecode.
In general, it's impossible to tell what a piece of code does without running it. So it's impossible to tell whether two bits of code are equivalent without running both of them, on all possible inputs.
This is at least as bad, computationally speaking, as the halting problem, and possibly worse.
The halting problem is undecidable as it is, so the general-case answer here is definitely no (and not just for Clojure but for every programming language).
I agree with the above answers with regard to Clojure not having a built-in ability to determine the equivalence of two functions, and that it has been proven that you cannot test programs functionally (also known as black box testing) to determine equality, due to the halting problem (unless the input set is finite and defined).
I would like to point out that it is possible to algebraically determine the equivalence of two functions, even if they have different forms (different byte code).
The method for proving the equivalence algebraically was developed in the 1930s by Alonzo Church and is known as beta reduction in the lambda calculus. This method is certainly applicable to the simple forms in your question (which would also yield the same bytecode) and also to more complex forms that would yield different bytecodes.
I cannot add to the excellent answers by others, but would like to offer another viewpoint that helped me. If you are e.g. testing that the correct function is returned from your own function, instead of comparing the function object you might get away with just returning the function as a 'symbol.
I know this probably is not what the author asked for but for simple cases it might do.

What is the difference between FUNCALL and #'function-name in common lisp?

I am reading through a book for homework, and I understand that using #' is treating the variable as a function instead of a variable. But I am a little hazy on FUNCALL. I understand that Lisp makes objects out of variables, so is the function name just a 'pointer' (maybe a bad word, but hopefully you get what I mean), in which case you use #' to invoke it, or is funcall the only way to invoke them? For example:
(defun plot (fn min max step)
  (loop for i from min to max by step do
    (loop repeat (funcall fn i) do (format t "*"))
    (format t "~%")))
couldn't I just do:
(defun plot (fn min max step)
  (loop for i from min to max by step do
    (loop repeat #'(fn i) do (format t "*"))
    (format t "~%")))
I guess my confusion lies in what exactly is in the function names. When I read the book, it said that the variable's value is what will be the function object.
#'function-name is (function function-name). Nothing is called; evaluating either results in the function associated with function-name (the object representing the function). funcall is used to call functions.
See funcall and function in the HyperSpec.
Sample session using both:
CL-USER> (defun square (x) (* x x))
SQUARE
CL-USER> #'square
#<FUNCTION SQUARE>
CL-USER> (function square)
#<FUNCTION SQUARE>
CL-USER> (funcall #'square 3)
9
CL-USER> (funcall 'square 3)
9
The second invocation of funcall works because it also accepts a symbol as function designator (see the link for funcall above for details).
The #' and funcall notations are needed in Common Lisp because this language is a so-called "Lisp-2" where a given symbol can have two separate and unrelated main "meanings" normally listed as
When used as first element of a form it means a function
When used in any other place it means a variable
These are approximate explanations; as you will see in the following example, "first element of a form" and "any other place" are not exact definitions.
Consider for example:
(defun square (x) (* x x))
(print (let ((square 12))
         (square square)))
the above code prints 144... it may seem surprising at first, but the reason is that the same name square is used with two different meanings: the function that, given an argument, returns the result of multiplying the argument by itself, and the local variable square with value 12.
In the first and third uses, the name square means the function named square; in the second and fourth uses, it instead refers to the variable named square.
How can Common Lisp decide which is which? the point is the position... after defun it's clearly in this case a function name, like it's a function name in the first part of (square square). Likewise as first element of a list inside a let form it's clearly a variable name and it's also a variable name in the second part of (square square).
This looks pretty psychotic... doesn't it? Well, there is indeed some split in the Lisp community about whether this dual meaning makes things simpler or more complex, and it's one of the main differences between Common Lisp and Scheme.
Without getting into the details I'll just say that this apparently crazy choice was made to make Lisp macros more useful, providing enough hygiene to make them work nicely without the added complexity and reduced expressive power of fully hygienic macros. For sure it's a complication that makes it harder to explain the language to whoever is learning it (and that's why Scheme is considered a better (simpler) language for teaching), but many expert Lispers think that it's a good choice that makes the Lisp language a better tool for real problems.
Also in human languages the context plays an important role anyway and there it's not a serious problem for humans that sometimes the very same word can be used with different meanings (e.g. as a noun or as a verb like "California is the state I live in" or "State your opinion").
Even in a Lisp-2, however, you need to use functions as values, for example passing them as parameters or storing them in a data structure, or you need to use values as functions, for example calling a function that has been received as a parameter (your plot case) or that has been stored somewhere. This is where #' and funcall come into play...
#'foo is indeed just a shortcut for (function foo), exactly like 'x is a shortcut for (quote x). This "function" thing is a special form that given a name (foo in this case) returns the associated function as a value, that you can store in variables or pass around:
(defvar *fn* #'square)
in the above code for example the variable *fn* is going to receive the function defined before. A function value can be manipulated as any other value like a string or a number.
funcall is the opposite, allowing to call a function not using its name but by using a value...
(print (funcall *fn* 12))
the above code will display 144... because the function that was stored in the variable *fn* now is being called passing 12 as argument.
If you know the "C" programming language an analogy is considering (let ((p #'square))...) like taking the address of the function square (as with { int (*p)(int) = &square; ...}) and instead (funcall p 12) is like calling a function using a pointer (as with (*p)(12) that "C" allows to be abbreviated to p(12)).
The admittedly confusing part in Common Lisp is that you can have both a function named square and a variable named square in the same scope, and the variable will not hide the function. funcall and function are two tools you can use when you need to use the value of a variable as a function or when you want a function as a value, respectively.