I had this question in an exam; how would you solve it?
CONS is a fundamental Common Lisp function. Which functionality must the Common Lisp environment provide to make it work? What would happen to this code without it?
(defun test (n l1 l2)
  (when (plusp n)
    (append l1 l2)
    (something (1- n) l1 l2)))
prompt> (test fourtytwo '(4) '(2))
From which perspective?
From a language implementer's perspective, you need memory and a data type that holds two pointers, plus perhaps flags for the type and the garbage collector, unless that information is embedded in the pointer itself.
For a developer, it is a function of two arguments that can hold any data. Both the reader and APPEND use it, so without it you would have no cons cells and therefore no lists either.
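As a quick, hedged illustration of that last point (plain cons/car/cdr, which behave the same way in Common Lisp and Scheme), both the list literals in the question and the APPEND call boil down to allocating cons cells:

(cons 4 '())           ; => (4)    the reader builds '(4) out of exactly this cell
(cons 4 (cons 2 '()))  ; => (4 2)
(append '(4) '(2))     ; => (4 2)  APPEND conses a fresh cell for each element of its
                       ;           first argument, so every call allocates new cells
(car (cons 4 '(2)))    ; => 4
(cdr (cons 4 '(2)))    ; => (2)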
It has come to my attention that there are several ways to deal with universal quantification when working with Isabelle/HOL Isar. I am trying to write some proofs in a style that is suitable for undergraduate students to understand and reproduce (that's why I'm using Isar!) and I am confused about how to express universal quantification in a nice way.
In Coq for example, I can write forall x, P(x) and then I may say "induction x" and that will automatically generate goals according to the corresponding induction principle. However, in Isabelle/HOL Isar, if I want to directly apply an induction principle I must state the theorem without any quantification, like this:
lemma foo: "P(x)"
proof (induct x)
And this works fine, as x is then treated as a schematic variable, as if it were universally quantified. However, it lacks the universal quantification in the statement, which is not very educational. Another way I have found is to use \<And> and \<forall>. However, I cannot directly apply the induction principle if I state the lemma this way; I have to fix the universally quantified variables first... which again seems inconvenient from an educational point of view:
lemma foo: "\<And>x. P(x)"
proof -
fix x
show "P(x)"
proof (induct x)
What is a nice proof pattern for expressing universal quantification that does not require me to explicitly fix variables before induction?
You can use induct_tac, case_tac, etc. These are the legacy variants of the induct/induction and cases methods used in proper Isar. They can operate on bound, meta-universally-quantified variables in the goal state, like the x in your second example:
lemma foo: "⋀x. P(x :: nat)"
proof (induct_tac x)
One disadvantage of induct_tac compared to induction is that it does not provide cases, so you cannot just write case (Suc x), then use Suc.IH and show ?case in your proof. Another disadvantage is that addressing bound variables is, in general, rather fragile, since their names are often generated automatically by Isabelle and may change when Isabelle changes (not in the case you have shown above, of course).
This is one of the reasons why Isar proofs are preferred these days. I would strongly advise against showing your students ‘bad’ Isabelle with the intention that it is easier for them to understand.
The facts are these: free variables in a theorem statement in Isabelle are logically equivalent to universally-quantified variables and Isabelle automatically converts them to schematic variables after you have proven it. This convention is not unique to Isabelle; it is common in mathematics and logic, and it helps to reduce clutter. Isar in particular tries to avoid explicit use of the ⋀ operator in goal statements (i.e. have/show; they still appear in assume).
Or, in short: free variables in theorems are universally quantified by default. I doubt that students will find this hard to understand; I certainly did not when I started with Isabelle as a BSc student. In fact, I found it much more natural to state a theorem as xs @ (ys @ zs) = (xs @ ys) @ zs instead of ∀xs ys zs. xs @ (ys @ zs) = (xs @ ys) @ zs.
Suppose I have the following Clojure functions:
(defn a [x] (* x x))
(def b (fn [x] (* x x)))
(def c (eval (read-string "(defn d [x] (* x x))")))
Is there a way to test for the equality of the function expression - some equivalent of
(eqls a b)
returns true?
It depends on precisely what you mean by "equality of the function expression".
These functions are going to end up as bytecode, so I could for example dump the bytecode corresponding to each function to a byte[] and then compare the two bytecode arrays.
However, there are many different ways of writing semantically equivalent methods that wouldn't have the same representation in bytecode.
In general, it's impossible to tell what a piece of code does without running it. So it's impossible to tell whether two bits of code are equivalent without running both of them, on all possible inputs.
This is at least as bad, computationally speaking, as the halting problem, and possibly worse.
The halting problem is undecidable as it is, so the general-case answer here is definitely no (and not just for Clojure but for every programming language).
I agree with the above answers regarding Clojure not having a built-in ability to determine the equivalence of two functions, and that it has been proven that you cannot test programs functionally (also known as black-box testing) to determine equality, due to the halting problem (unless the input set is finite and defined).
I would like to point out that it is possible to algebraically determine the equivalence of two functions, even if they have different forms (different byte code).
The method for proving the equivalence algebraically was developed in the 1930s by Alonzo Church and is known as beta reduction in the lambda calculus. This method is certainly applicable to the simple forms in your question (which would also yield the same byte code) and also to more complex forms that would yield different byte code.
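For what it's worth, the piece you can do cheaply is a purely syntactic check: compare the source expressions rather than the compiled functions. A minimal sketch (written in Scheme rather than Clojure; the idea carries over to read-string and =), showing why structural equality is much weaker than the semantic or algebraic equivalence discussed above:

(define form-1 '(lambda (x) (* x x)))
(define form-2 '(lambda (x) (* x x)))
(define form-3 '(lambda (y) (* y y)))

(equal? form-1 form-2)  ; => #t  identical expressions compare equal
(equal? form-1 form-3)  ; => #f  alpha-equivalent and semantically the same,
                        ;        but structurally different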
I cannot add to the excellent answers by others, but would like to offer another viewpoint that helped me. If you are e.g. testing that the correct function is returned from your own function, instead of comparing the function object you might get away with just returning the function as a 'symbol.
I know this probably is not what the author asked for but for simple cases it might do.
My Programming Paradigms textbook, Essentials of Programming Languages (3rd ed), Chapter 1 has this exercise:
Exercise 1.12
Eliminate the one call to subst-in-s-exp in subst by
replacing it by its definition and simplifying the resulting
procedure. The result will be a version of subst that does not need
subst-in-s-exp. This technique is called inlining, and is used by
optimizing compilers.
The original code has two functions, subst and subst-in-s-exp, which basically substitute all occurrences of the old symbol with the new symbol in the input list.
(define subst
  (lambda (new old slist)
    (if (null? slist) '()
        (cons
          (subst-in-s-exp new old (car slist))
          (subst new old (cdr slist))))))
(define subst-in-s-exp
  (lambda (new old sexp)
    (if (symbol? sexp)
        (if (eqv? sexp old) new sexp)
        (subst new old sexp))))
The answer to this exercise is to eliminate subst-in-s-exp, which gives this:
(define subst
  (lambda (slist old new)
    (cond
      [(null? slist) '()]
      [(eqv? (car slist) old) (cons new (subst (cdr slist) old new))]
      [else (cons (car slist) (subst (cdr slist) old new))])))
Why is inlining better, besides the fact that the code may be a lot shorter (less space)? Does the size of the recursion change? In other words, does this inlining create fewer stack frames?
Moreover, how can I use this idea to make my C++, Python, and Java code faster? Can I extend this idea easily? Thanks.
I tagged this in Scheme (actually, Racket) because this is the choice of language in the book.
Inlining is a pretty standard compiler optimization, but as AoeAoe said, it's generally better to write your code so that it's readable and let the compiler do all the inlining for you.
The immediate benefit of inlining is that it eliminates branches in your code. It means your CPU can keep reading straight down your code rather than having to spend a couple of clock cycles finding the next section of code to execute.
However, inlining has some other benefits as well. You end up with bigger chunks of code, which means the compiler has more code and data to play with. It might be able to stick more things in registers, or do constant folding to eliminate more computations. The compiler can also do a better job of instruction scheduling, because it has more instructions to move around.
The drawback is that inlining increases your resulting code size. Especially with modern CPUs running so much faster than memory, it can often be better to inline less code in order to keep all of a hot section of code in L1 cache.
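To make the mechanics concrete, here is a tiny hand-inlining sketch in Scheme (hypothetical helper names, not from the book): the second definition is roughly what a compiler produces from the first, and with constant arguments the inlined body can then be folded away entirely.

(define (square x) (* x x))

(define (sum-of-squares a b)
  (+ (square a) (square b)))      ; two calls, each a jump out of the straight-line code

(define (sum-of-squares-inlined a b)
  (+ (* a a) (* b b)))            ; the calls replaced by their bodies

;; With constants, the inlined version can be folded at compile time:
;; (sum-of-squares-inlined 3 4)  =>  (+ (* 3 3) (* 4 4))  =>  25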
Adding just a bit to Eric's answer, inlining can be a big win in dynamically typed languages, where inlining a call may make it possible for a compiler to specialize the implementation to the kinds of data that appear.
For instance: suppose I have a function called f:
(define (f x) (+ (* x x) 3.0))
... and I call it:
(+ (f 3.2) (g 3.9))
In this case, the inlined code makes it clear that the multiplication and addition can't be called with non-numbers, so this error check can be elided.
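A sketch of what that looks like after inlining (hypothetical, just to illustrate the specialization argument): the call site effectively becomes

(+ (+ (* 3.2 3.2) 3.0) (g 3.9))
;; x is known to be the flonum 3.2, so the multiply and add can use
;; float-specific instructions, skip their runtime type checks, and
;; (* 3.2 3.2) can even be constant-folded.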
In answer to: "How can I use this idea to make my C++, Python, and Java code faster?"
I don't think that the Python runtime does things like automatically inlining small methods. (If this is wrong, someone please correct me!) So in Python, if you have a piece of code which is very performance-sensitive, perhaps running tens of thousands or millions of times in an inner loop, you could try inlining manually. Only do this if the code is really a bottleneck, and it really needs to be as fast as possible, and always measure to see if such optimizations are actually helping anything. (If you try inlining something and it doesn't help, it's better to undo the optimization, because inlining will generally make your code harder to read.)
In Java and C++, any good compiler will inline small methods for you. The thing which you can (sometimes) do, is help the compiler see that a method can be inlined. If the exact method which is called depends on the run-time type of an object (as when using virtual methods in C++), the compiler will not be able to inline the call. static methods in Java can easily be inlined, and declaring methods as final (when it makes sense to do so) may also make it possible for the compiler to inline.
If you learn more in the future about compilers and how they work, you will be better able to see how to write performance-sensitive code in a way that the compiler can optimize for you.
Imagine a simple (made up) language where functions look like:
function f(a, b) = c + 42
where c = a * b
(Say it's a subset of Lisp that includes 'defun' and 'let'.)
Also imagine that it includes immutable objects that look like:
struct s(a, b, c = a * b)
Again analogizing to Lisp (this time a superset), say a struct definition like that would generate functions for:
make-s(a, b)
s-a(s)
s-b(s)
s-c(s)
Now, given the simple set up, it seems clear that there is a lot of similarity between what happens behind the scenes when you either call 'f' or 'make-s'. Once 'a' and 'b' are supplied at call/instantiate time, there is enough information to compute 'c'.
You could think of instantiating a struct as being like calling a function and then storing the resulting symbolic environment for later use, when the generated accessor functions are called. Or you could think of evaluating a function as being like creating a hidden struct and then using it as the symbolic environment with which to evaluate the final result expression.
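(For concreteness, here is a Scheme sketch of the toy make-s implemented exactly that way, as a function whose result is a stored environment, i.e. a closure; the names follow the made-up language above and are not real library calls:)

(define (make-s a b)
  (let ((c (* a b)))          ; the derived field is computed at "instantiation" time
    (lambda (field)           ; the "struct" is just a closure over a, b and c
      (cond ((eq? field 'a) a)
            ((eq? field 'b) b)
            ((eq? field 'c) c)))))

(define (s-a s) (s 'a))
(define (s-b s) (s 'b))
(define (s-c s) (s 'c))

(s-c (make-s 6 7))            ; => 42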
Is my toy model so oversimplified that it's useless? Or is it actually a helpful way to think about how real languages work? Are there any real languages/implementations that someone without a CS background but with an interest in programming languages (i.e. me) should learn more about in order to explore this concept?
Thanks.
EDIT: Thanks for the answers so far. To elaborate a little, I guess what I'm wondering is if there are any real languages where it's the case that people learning the language are told e.g. "you should think of objects as being essentially closures". Or if there are any real language implementations where it's the case that instantiating an object and calling a function actually share some common (non-trivial, i.e. not just library calls) code or data structures.
Does the analogy I'm making, which I know others have made before, go any deeper than mere analogy in any real situations?
You can't get much purer than lambda calculus: http://en.wikipedia.org/wiki/Lambda_calculus. Lambda calculus is in fact so pure, it only has functions!
A standard way of implementing a pair in lambda calculus is like so:
pair = fn a: fn b: fn x: x a b
first = fn a: fn b: a
second = fn a: fn b: b
So pair a b, what you might call a "struct", is actually a function (fn x: x a b). But it's a special type of function called a closure. A closure is essentially a function (fn x: x a b) plus values for all of the "free" variables (in this case, a and b).
So yes, instantiating a "struct" is like calling a function, but more importantly, the actual "struct" itself is like a special type of function (a closure).
If you think about how you would implement a lambda calculus interpreter, you can see the symmetry from the other side: you could implement a closure as an expression plus a struct containing the values of all the free variables.
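If it helps to see it run, here is a direct Scheme transcription of that encoding (make-pair/fst/snd are my names, chosen to avoid clashing with built-ins):

(define (make-pair a b) (lambda (x) (x a b)))   ; the "struct": a closure over a and b
(define (fst a b) a)                            ; selectors just pick out an argument
(define (snd a b) b)

(define p (make-pair 4 2))
(p fst)   ; => 4
(p snd)   ; => 2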
Sorry if this is all obvious and you just wanted some real world example...
Both f and make-s are functions, but the resemblance doesn't go much further. Applying f calls the function and executes its code; applying make-s creates a structure.
In most language implementations and formal models, make-s is a different kind of object from f: f is a closure, whereas make-s is a constructor (in the functional-languages and logic sense, which is close to the object-oriented-languages sense).
If you like to think in an object-oriented way, both f and make-s have an apply method, but they have completely different implementations of this method.
If you like to think in terms of the underlying logic, f and make-s have types built on the same type constructor (the function type constructor), but they are constructed in different ways and have different destruction rules (function application vs. constructor application).
If you'd like to understand that last paragraph, I recommend Types and Programming Languages by Benjamin C. Pierce. Structures are discussed in §11.8.
Is my toy model so oversimplified that it's useless?
Essentially, yes. Your simplified model basically boils down to saying that each of these operations involves performing a computation and putting the result somewhere. But that is so general, it covers anything that a computer does. If you didn't perform a computation, you wouldn't be doing anything useful. If you didn't put the result somewhere, you would have done work for nothing as you have no way to get the result. So anything useful you do with a computer, from adding two registers together, to fetching a web page, could be modeled as performing a computation and putting the result somewhere that it can be accessed later.
There is a relationship between objects and closures. http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/msg03277.html
The following creates what some might call a function, and others might call an object:
Taken from SICP ( http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-21.html )
(define (make-account balance)
  (define (withdraw amount)
    (if (>= balance amount)
        (begin (set! balance (- balance amount))
               balance)
        "Insufficient funds"))
  (define (deposit amount)
    (set! balance (+ balance amount))
    balance)
  (define (dispatch m)
    (cond ((eq? m 'withdraw) withdraw)
          ((eq? m 'deposit) deposit)
          (else (error "Unknown request -- MAKE-ACCOUNT"
                       m))))
  dispatch)
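A short usage sketch (essentially how SICP itself exercises it) shows why the result reads like an object: the closure returned by make-account carries its own private balance, and you interact with it by sending messages:

(define acc (make-account 100))
((acc 'withdraw) 30)   ; => 70
((acc 'withdraw) 90)   ; => "Insufficient funds"
((acc 'deposit) 40)    ; => 110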
This site makes the following claim:
http://hyperpolyglot.wikidot.com/lisp#ten-primitives
McCarthy introduced the ten primitives of lisp in 1960. All other pure lisp
functions (i.e. all functions which don't do I/O or interact with the environment)
can be implemented with these primitives. Thus, when implementing or porting lisp,
these are the only functions which need to be implemented in a lower language. The
way the non-primitives of lisp can be constructed from primitives is analogous to
the way theorems can be proven from axioms in mathematics.
The primitives are: atom, quote, eq, car, cdr, cons, cond, lambda, label, apply.
My question is - can you really do this without type predicates such as numberp? Surely there is a point when writing a higher level function that you need to do a numeric operation - which the primitives above don't allow for.
Some numbers can be represented with just those primitives; it's just rather inconvenient and difficult to conceptualize the first time you see it.
Similar to how the natural numbers are represented with sets increasing in size, they can be simulated in Lisp as nested cons cells.
Zero would be the empty list, or (). One would be the singleton cons cell, or (() . ()). Two would be one plus one, or the successor of one, where we define the successor of x to be (cons () x), which is of course (() . (() . ())). If you accept the Infinity Axiom (and a few more, but mostly the Infinity Axiom for our purposes so far), and ignore the memory limitations of real computers, this can accurately represent all the natural numbers.
It's easy enough to extend this to represent all the integers and then the rationals [1], but representing the reals in this notation would be (I think) impossible. Fortunately, this doesn't dampen our fun, as we can't represent all the reals on our computers anyway; we make do with floats and doubles. So our representation is just as powerful.
In a way, 1 is just syntactic sugar for (() . ()).
Hurray for set theory! Hurray for Lisp!
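Here is that construction as a runnable Scheme sketch (zero/succ/add are my names; only cons, car/cdr, a null test, and lambda/define are really doing the work):

(define zero '())
(define (succ n) (cons '() n))     ; the successor of n is (() . n)

(define one (succ zero))           ; (())       i.e. (() . ())
(define two (succ one))            ; (() ())    i.e. (() . (() . ()))

(define (add m n)                  ; addition: peel cells off m, pushing them onto n
  (if (null? m)
      n
      (succ (add (cdr m) n))))

(add two one)                      ; => (() () ())   our "three"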
EDIT: Ah, for further clarification, let me address your question about type predicates, though at this point the answer may already be clear. Since your numbers have a distinct form, you can test these linked lists with a function of your own creation that tests for this particular structure. My Scheme isn't good enough anymore to write it in Scheme, but I can attempt it in Clojure.
Regardless, you may be saying that it could give you false positives: perhaps you're simply trying to represent sets and you end up having the same structure as a number in this system. To that I reply: well, in that case, you do in fact have a number.
So you can see, we've got a pretty decent representation of numbers here, aside from how much memory they take up (not our concern), how ugly they look when printed at the REPL (also not our concern), and how inefficient it is to operate on them (e.g. we have to define our addition etc. in terms of list operations: slow and a bit complicated). But none of these are our concern: the speed really should and could depend on the implementation details, not on what you're doing with the language.
So here, in Clojure (but using only things we basically have access to in our simple Lisp), is numberp. (I hope; feel free to correct me, I'm groggy as hell, etc., excuses etc.)
(defn numberp
  [x]
  (cond
    ;; zero: the empty list (nil also accepted)
    (or (nil? x) (and (coll? x) (empty? x))) true
    ;; a successor (cons '() n): an () on the front of another number
    (and (coll? x) (coll? (first x)) (empty? (first x))) (numberp (rest x))
    :else false))
[1] For the integers, represent them as cons cells of naturals. Let the first element of the cons cell be the "negative" portion of the integer, and the second element be the "positive" portion. In this way, -2 can be represented as (2, 0) or (4, 2) or (5, 3), etc. For the rationals, represent them as cons cells of integers, e.g. (-2, 3), etc. This does give us the possibility of different data structures representing the same number; however, this can be remedied by writing functions that test two numbers to see if they're equivalent: we'd define these functions in terms of the already-existing equivalence relations set theory offers us. Fun stuff :)
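Continuing the Scheme sketch from above, a hedged illustration of this footnote (add works on the nested-cons naturals; nat=?, make-int and int=? are hypothetical helpers):

(define (add m n)                          ; natural addition, as before
  (if (null? m) n (cons '() (add (cdr m) n))))

(define (nat=? m n)                        ; equality of naturals: strip cells in lockstep
  (cond ((and (null? m) (null? n)) #t)
        ((or (null? m) (null? n)) #f)
        (else (nat=? (cdr m) (cdr n)))))

;; An integer is a cons of two naturals: (negative-part . positive-part).
(define (make-int neg pos) (cons neg pos))

;; (a . b) and (c . d) denote the same integer iff b - a = d - c, i.e. a + d = b + c,
;; which is exactly the equivalence relation the footnote is talking about.
(define (int=? i j)
  (nat=? (add (car i) (cdr j))
         (add (cdr i) (car j))))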