I am currently reading the first volume of the Software Foundations series. In one of the exercises, I am supposed to write a function that turns a natural number (in unary form) into the equivalent binary number.
This is my code/approach:
Inductive bin : Type :=
| Z
| B0 (n : bin)
| B1 (n : bin).
Fixpoint evenb (n:nat) : bool :=
  match n with
  | O => true
  | S O => false
  | S (S n') => evenb n'
  end.
Fixpoint nat_to_bin (n:nat) : bin :=
  match n with
  | 0 => Z
  | 1 => B1 Z
  | 2 => B0 (B1 Z)
  | m => match evenb(m) with
         | true => B0 (nat_to_bin (Nat.div m 2))
         | false => B1 (nat_to_bin (Nat.modulo m 2))
         end
  end.
I am using https://jscoq.github.io/scratchpad.html to work on these exercises.
Now I get this error message:
Recursive definition of nat_to_bin is ill-formed. In environment
nat_to_bin : nat -> bin
n : nat
n0 : nat
n1 : nat
n2 : nat
Recursive call to nat_to_bin has principal argument equal to "Nat.div
n 2 " instead of one of the following variables: "n0" "n1" "n2" .
Recursive definition is: "fun n : nat => match n with
| 0 => Z
| 1 => B1 Z
| 2 => B0 (B1 Z )
| S (S (S _) ) =>
if evenb n then B0 (nat_to_bin (Nat.div n 2 ) )
else B1 (nat_to_bin (Nat.modulo n 2 ) )
end " .
To retain good logical properties, all functions definable in Coq are terminating. To enforce that, there is a restriction on fixpoint definitions, like the one you are trying to do, called the guard condition. This restriction is roughly that the recursive call can only be done on subterms of the argument of the function.
This is not the case in your definition, where you apply nat_to_bin to the terms (Nat.div n 2) and (Nat.modulo n 2), which are functions applied to n. Although you can mathematically prove that these values are always smaller than n, they are not subterms of n, so your function does not respect the guard condition.
If you wanted to define nat_to_bin in the way you are doing, you would need to resort to well-founded induction, which would use the well-foundedness of the order on nat to allow you to call your function on any term you can prove smaller than n. However, this solution is quite complex, because it would force you to do some proofs that are not that easy.
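For the curious, here is a rough sketch of what that route could look like with Program Fixpoint and a measure, assuming the bin type from the question. The name nat_to_bin', the use of Nat.even, and the admitted obligations are mine; the obligations (essentially that Nat.div n 2 < n when n is nonzero) are exactly the proofs alluded to above, and are left admitted rather than proved here:
Require Import Program.

Program Fixpoint nat_to_bin' (n : nat) {measure n} : bin :=
  (* Recursion is justified by the measure n, not by a structural subterm.
     Unlike the original attempt, the odd case also recurses on n / 2;
     recursing on n mod 2 would not compute the right bits. *)
  if Nat.eqb n 0 then Z
  else if Nat.even n
       then B0 (nat_to_bin' (Nat.div n 2))
       else B1 (nat_to_bin' (Nat.div n 2)).
Admit Obligations.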
Instead, I would advise going another way: just above in the book, it is suggested to define a function incr : bin -> bin that increments a binary number by one. You can use that one to define nat_to_bin by a simple recursion on n, like this:
Fixpoint nat_to_bin (n:nat) : bin :=
  match n with
  | 0 => Z
  | S n' => incr (nat_to_bin n')
  end.
As for incr itself, you should also be able to write it down using a simple recursion on your binary numbers, since they are written with the low-order bit outermost.
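In case it helps (at the risk of spoiling the book's exercise), one possible shape for incr, together with a quick check of the resulting nat_to_bin, is the following sketch; note that incr has to come before nat_to_bin in the actual file:
Fixpoint incr (m : bin) : bin :=
  match m with
  | Z => B1 Z              (* 0 + 1 = 1 *)
  | B0 m' => B1 m'         (* ...0 + 1 = ...1, higher bits unchanged *)
  | B1 m' => B0 (incr m')  (* ...1 + 1 = ...0, with a carry into the higher bits *)
  end.

Compute nat_to_bin 5. (* ==> B1 (B0 (B1 Z)), i.e. 1*1 + 0*2 + 1*4 = 5 *)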
I want a function maxfunct that takes a function f and an integer n, applies f to every input from 0 to n, and returns the maximum of the outputs.
I am quite new to Haskell; what I tried is something like this:
maxfunct f n
| n < 0 = 0
| otherwise = maximum [k | k <- [\(f, x)-> f x], x<- [0..n]]
The idea is that I store every output of f in a list and then look for the maximum in this list.
How can I achieve that?
You're close. First, let's note the type of the function we're trying to write. Starting with the type, in addition to helping you get a better feel for the function, also lets the compiler give us better error messages. It looks like you're expecting a function and an integer. The result of the function should be compatible with maximum (i.e. should satisfy Ord) and also needs to have a reasonable "zero" value (so we'll just say it needs Num, for simplicity's sake; in reality, we might consider using Bounded or Monoid or something, depending on your needs, but Num will suffice for now).
So here's what I propose as the type signature.
maxfunct :: (Num a, Ord a) => (Int -> a) -> Int -> a
Technically, we could generalize a bit more and make the Int a type argument as well (requires Num, Enum, and Ord), but that's probably overkill. Now, let's look at your implementation.
maxfunct f n
| n < 0 = 0
| otherwise = maximum [k | k <- [\(f, x)-> f x], x<- [0..n]]
Not bad. The first case is definitely good. But I think you may have gotten a bit confused in the list comprehension syntax. What we want to say is: take every value from 0 to n, apply f to it, and then maximize.
maxfunct :: (Num a, Ord a) => (Int -> a) -> Int -> a
maxfunct f n
| n < 0 = 0
| otherwise = maximum [f x | x <- [0..n]]
and there you have it. For what it's worth, you can also do this with map pretty easily.
maxfunct :: (Num a, Ord a) => (Int -> a) -> Int -> a
maxfunct f n
| n < 0 = 0
| otherwise = maximum $ map f [0..n]
It's just a matter of which you find more easily readable. I'm a map / filter guy myself, but lots of folks prefer list comprehensions, so to each his own.
I'm pretty much new to Haskell, so if I'm missing a key concept, please point it out.
Let's say we have these two functions:
fact n
| n == 0 = 1
| n > 0 = n * (fact (n - 1))
The polymorphic type of fact is (Eq t, Num t) => t -> t, because n is used in the guards and must have a valid type for the == check. Therefore t must be a Num, and t can be any type within the class constraint Eq t.
fib n
| n == 1 = 1
| n == 2 = 1
| n > 2 = fib (n - 1) + fib (n - 2)
Then why is the polymorphic type of fib (Eq a, Num a, Num t) => a -> t?
I don't understand, please help.
Haskell always aims to derive the most generic type signature.
Now for fact, we know that the type of the output should be the same as the type of the input:
fact n | n == 0 = 1
| n > 0 = n * (fact (n - 1))
This is due to the last line: we use n * (fact (n-1)), i.e. a multiplication (*) :: a -> a -> a. Multiplication thus takes two members of the same type and returns a member of that type. Since we multiply by n, and n is the input, the output is of the same type as the input. Since we use n == 0, we know that (==) :: Eq a => a -> a -> Bool, so the input type needs an Eq a constraint; furthermore 0 :: Num a => a. So the resulting type is fact :: (Num a, Eq a) => a -> a.
Now for fib, we see:
fib n | n == 1 = 1
| n == 2 = 1
| n > 2 = fib (n - 1) + fib (n - 2)
Now we know that for n, the type constraints are again Eq a, Num a, since we use n == 1, and (==) :: Eq a => a -> a -> Bool and 1 :: Num a => a. But the input n is never used directly in the output. Indeed, the last line has fib (n-1) + fib (n-2), but here we use n-1 and n-2 as inputs of new calls. So that means we can safely assume that the input type and the output type act independently. The output type still has a type constraint, Num t: this is because we return 1 for the first two cases, and 1 :: Num t => t, and we also return the addition of two outputs, fib (n-1) + fib (n-2), so again (+) :: Num t => t -> t -> t.
The difference is that in fact, you use the argument directly in an arithmetic expression which makes up the final result:
fact n | ... = n * ...
IOW, if you write out the expanded arithmetic expression, n appears in it:
fact 3 ≡ n * (n-1) * (n-2) * 1
This fixes that the argument must have the same type as the result, because
(*) :: Num n => n -> n -> n
Not so in fib: here the actual result is only composed of literals and of sub-results. IOW, the expanded expression looks like
fib 3 ≡ (1 + 1) + 1
No n in here, so no unification between argument and result required.
Of course, in both cases you also used n to decide how this arithmetic expression looks, but for that you've just used equality comparisons with literals, whose type is not connected to the final result.
Note that you can also give fib a type-preserving signature: (Eq a, Num a, Num t) => a -> t is strictly more general than (Eq t, Num t) => t -> t. Conversely, you can make a fact that doesn't require input and output to be the same type, by following it with a conversion function:
fact' :: (Eq a, Integral a, Num t) => a -> t
fact' = fromIntegral . fact
This doesn't make a lot of sense though, because Integer is pretty much the only type that can reliably be used in fact, but to achieve that in the above version you need to start out with Integer. Hence if anything, you should do the following:
fact'' :: (Eq t, Integral a, Num t) => a -> t
fact'' = fact . fromIntegral
This can then be used also as Int -> Integer, which is somewhat sensible.
I'd recommend to just keep the signature (Eq t, Num t) => t -> t though, and only add conversion operations where it's actually needed. Or really, what I'd recommend is to not use fact at all – this is a very expensive function that's hardly ever really useful in practice; most applications that naïvely end up with a factorial really just need something like binomial coefficients, and those can be implemented more efficiently without a factorial.
Is it possible to define a single notation for multiple constructors in Coq? If the constructors differ by their argument types, they might be inferrable from them. A minimal (non-)working example:
Inductive A : Set := a | b | c: C -> A | d: D -> A
with C: Set := c1 | c2
with D: Set := d1 | d2.
Notation "' x" := (_ x) (at level 19).
Check 'c1. (*?6 c1 : ?8*)
In this case, constructor inference doesn't work. Maybe there's another way to specify a constructor as a variable?
You can create a typeclass with the constructors as instances and let the instance resolution mechanism infer the constructor to call for you:
Class A_const (X:Type) : Type :=
a_const : X -> A.
Instance A_const_c : A_const C := c.
Instance A_const_d : A_const D := d.
Check a_const c1.
Check a_const d2.
By the way, with Coq 8.5, if you really want a notation ' x to result in the exact constructor applied to x, rather than e.g. @a_const C A_const_c c1, then you can use ltac-terms to accomplish that:
Notation "' x" := ltac:(match constr:(a_const x) with
| #a_const _ ?f _ =>
let unfolded := (eval unfold f in f) in
exact (unfolded x)
end) (at level 0).
Check 'c1. (* c c1 : A *)
Check 'd2. (* d d2 : A *)
In fact, the idea of using an ltac-term leads to an entirely different solution from the other one I posted:
Notation "' x" := ltac:(let T := (type of x) in
let T' := (eval hnf in T) in
match T' with
| C => exact (c x)
| D => exact (d x)
end) (at level 0).
Check 'c1. (* c c1 : A *)
Check 'd2. (* d d2 : A *)
(Here the eval hnf part lets it work even if the type of the argument isn't syntactically equal to C or D, but it does reduce to one of them.)
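To illustrate that remark, here is a small hypothetical test (the alias C' and the constant y are mine): the declared type of the argument is only convertible to C, yet the notation still finds the right constructor.
Definition C' := C.
Definition y : C' := c1.
Check 'y. (* expected: c y : A *)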
Apparently, it's easy:
Notation "' x" := ((_:_->A) x) (at level 19).
Check 'c1. (*' c1 : A*)
According to this course, all constructors (for inductive types) are injective and disjoint:
...Similar principles apply to all inductively defined types: all
constructors are injective, and the values built from distinct
constructors are never equal. For lists, the cons constructor is
injective and nil is different from every non-empty list. For
booleans, true and false are unequal.
(And the inversion tactic is based on this assumption.)
I am just wondering how we know this assumption holds.
How do we know that, e.g., we cannot define natural numbers based on
1) a Successor and maybe a "Double" constructor like this:
Inductive num: Type :=
| O : num
| S : num -> num
| D : num -> num.
and
2) some plus function, so that the number 2 can be reached via two different sequences of constructors, S (S O) and D (S O)?
What's the mechanism in Coq that ensures the above will never happen?
P.S.
I am not suggesting the above num example is possible. I am just wondering what makes it impossible.
Thanks
When you define an inductive data type in Coq, you are essentially defining a tree type. Each constructor gives a kind of node that is allowed to occur in your tree, and its arguments determine the children and elements that that node can have. Finally, functions defined on inductive types (with the match clause) can check the constructors that were used to produce a value of that type in arbitrary ways. This makes Coq constructors very different from constructors you see in an OO language, for instance. An object constructor is implemented as a regular function that initializes a value of a given type; Coq constructors, on the other hand, enumerate the possible values that the representation of our type allows.
To understand this difference better, we can compare the different functions we can define on an object in a traditional OO language and on an element of an inductive type in Coq. Let's use your num type as an example. Here's an object-oriented definition:
class Num {
    int val;

    private Num(int v) {
        this.val = v;
    }

    /* These are the three "constructors", even though they wouldn't
       correspond to what is called a "constructor" in Java, for instance */
    public static Num zero() {
        return new Num(0);
    }

    public static Num succ(Num n) {
        return new Num(n.val + 1);
    }

    public static Num doub(Num n) {
        return new Num(2 * n.val);
    }
}
And here's a definition in Coq:
Inductive num : Type :=
| zero : num
| succ : num -> num
| doub : num -> num.
In the OO example, when we write a function that takes a Num argument, there's no way of knowing which "constructor" was used to produce that value, because this information is not stored in the val field. In particular, Num.doub(Num.succ(Num.zero())) and Num.succ(Num.succ(Num.zero())) would be equal values.
In the Coq example, on the other hand, things change, because we can determine which constructor was used to form a num value, thanks to the match statement. For instance, using Coq strings, we could write a function like this:
Require Import Coq.Strings.String.
Open Scope string_scope.
Definition cons_name (n : num) : string :=
match n with
| zero => "zero"
| succ _ => "succ"
| doub _ => "doub"
end.
In particular, even though your intended meaning for the constructors implies that succ (succ zero) and doub (succ zero) should be "morally" equal, we can distinguish them by applying the cons_name function to them:
Compute cons_name (doub (succ zero)). (* ==> "doub" *)
Compute cons_name (succ (succ zero)). (* ==> "succ" *)
As a matter of fact, we can use match to distinguish between succ and doub in arbitrary ways:
match n with
| zero => false
| succ _ => false
| doub _ => true
end
Now, a = b in Coq means that there is no possible way we can distinguish between a and b. The above examples show why doub (succ zero) and succ (succ zero) cannot be equal, because we can write functions that don't respect the meaning that we had in mind when we wrote that type.
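To make this concrete, here is a small sketch (the names is_doub and doub_succ_neq are mine) that turns the match above into a named observer and derives the disequality from it:
Definition is_doub (n : num) : bool :=
  match n with
  | zero => false
  | succ _ => false
  | doub _ => true
  end.

Lemma doub_succ_neq : doub (succ zero) <> succ (succ zero).
Proof.
  intros H.
  (* If the two values were equal, is_doub would have to agree on them. *)
  assert (Hobs : is_doub (doub (succ zero)) = is_doub (succ (succ zero))).
  { rewrite H. reflexivity. }
  simpl in Hobs. (* Hobs : true = false *)
  discriminate Hobs.
Qed.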
This explains why constructors are disjoint. That they are injective is actually also a consequence of pattern-matching. For instance, suppose that we wanted to prove the following statement:
forall n m, succ n = succ m -> n = m
We can begin the proof with
intros n m H.
Leading us to
n, m : num
H : succ n = succ m
===============================
n = m
Notice that this goal is by simplification equivalent to
n, m : num
H : succ n = succ m
===============================
match succ n with
| succ n' => n' = m
| _ => True
end
If we do rewrite H, we obtain
n, m : num
H : succ n = succ m
===============================
match succ m with
| succ n' => n' = m
| _ => True
end
which simplifies to
n, m : num
H : succ n = succ m
===============================
m = m
At this point, we can conclude with reflexivity. This technique is quite general, and is actually at the core of what inversion does.
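Spelled out as a proof script, the same idea looks like this (a sketch: the projection pred_or and the lemma name succ_inj are mine, and in practice the inversion tactic does all of this for you):
(* Project the argument out of a succ, returning the default d otherwise. *)
Definition pred_or (d n : num) : num :=
  match n with
  | succ n' => n'
  | _ => d
  end.

Lemma succ_inj : forall n m, succ n = succ m -> n = m.
Proof.
  intros n m H.
  change (pred_or m (succ n) = m). (* convertible to n = m, as explained above *)
  rewrite H.                       (* pred_or m (succ m) = m *)
  reflexivity.
Qed.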
There is none: the constructors O, S and D are indeed disjoint and injective, but the semantics for num that you have in your head is not, as a function, injective.
That is why num would usually be considered a bad representation of the natural numbers: working up to equivalence is quite annoying.
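For instance, writing the intended semantics as a function into nat (a sketch reusing the num type from the question; the name to_nat is mine) makes the non-injectivity visible:
Fixpoint to_nat (n : num) : nat :=
  match n with
  | O => 0                 (* zero *)
  | S n' => 1 + to_nat n'  (* successor *)
  | D n' => 2 * to_nat n'  (* double *)
  end.

Compute to_nat (S (S O)). (* ==> 2 *)
Compute to_nat (D (S O)). (* ==> 2, two distinct terms with the same meaning *)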
Trying to prove the correctness of an insertion function for elements of a BST, I got stuck trying to prove a seemingly trivial lemma.
My attempt so far:
Inductive tree : Set :=
| leaf : tree
| node : tree -> nat -> tree -> tree.
Fixpoint In (n : nat) (T : tree) {struct T} : Prop :=
match T with
| leaf => False
| node l v r => In n l \/ v = n \/ In n r
end.
(* all_lte is the proposition that all nodes in tree t
have value at most n *)
Definition all_lte (n : nat) (t : tree) : Prop :=
forall x, In x t -> (x <= n).
Lemma all_lte_trans: forall n m t, n <= m /\ all_lte n t -> all_lte m t.
Proof.
intros.
destruct H.
unfold all_lte in H0.
unfold all_lte.
intros.
Clearly, if everything in the tree is at most n and n <= m, then everything is at most m, but I cannot seem to make Coq believe me. How do I continue?
You have to use the le_trans theorem:
le_trans: forall n m p : nat, n <= m -> m <= p -> n <= p
that comes from the Le module.
It means that you have to import Le or, more generally, Arith, with:
Require Import Arith.
at the beginning of your file. Then, you can do:
eapply le_trans.
eapply H0; trivial.
trivial.
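For completeness, here is one way the whole proof can look (a sketch against the definitions from the question; I use Nat.le_trans, which is the name this lemma goes by inside the Nat module in recent standard library versions):
Require Import Arith.

Lemma all_lte_trans : forall n m t, n <= m /\ all_lte n t -> all_lte m t.
Proof.
  unfold all_lte.
  intros n m t [Hnm Hall] x Hin.
  (* x <= m follows from x <= n (by Hall) and n <= m (by Hnm). *)
  apply Nat.le_trans with (m := n).
  - apply Hall. assumption.
  - assumption.
Qed.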