Using Coq, trying to prove a simple lemma on trees

Trying to prove the correctness of an insertion function of elements into a BST, I got stuck trying to prove a seemingly trivial lemma.
My attempt so far:
Inductive tree : Set :=
| leaf : tree
| node : tree -> nat -> tree -> tree.
Fixpoint In (n : nat) (T : tree) {struct T} : Prop :=
match T with
| leaf => False
| node l v r => In n l \/ v = n \/ In n r
end.
(* all_lte is the proposition that all nodes in tree t
have value at most n *)
Definition all_lte (n : nat) (t : tree) : Prop :=
forall x, In x t -> (x <= n).
Lemma all_lte_trans: forall n m t, n <= m /\ all_lte n t -> all_lte m t.
Proof.
intros.
destruct H.
unfold all_lte in H0.
unfold all_lte.
intros.
Clearly, if everything in the tree is at most n and n <= m, then everything is at most m, but I cannot seem to make Coq believe me. How do I continue?

You have to use the le_trans theorem:
le_trans: forall n m p : nat, n <= m -> m <= p -> n <= p
which comes from the Le module.
It means that you have to import Le, or more generally Arith, with:
Require Import Arith.
at the beginning of your file. Then, you can do:
eapply le_trans.
eapply H0; trivial.
trivial.
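Putting the pieces together, the whole proof might then look like this (an untested sketch of the steps above; on recent Coq versions, Nat.le_trans can be used in place of le_trans):
Require Import Arith.
Lemma all_lte_trans: forall n m t, n <= m /\ all_lte n t -> all_lte m t.
Proof.
intros n m t H.
destruct H as [Hnm Hall].
unfold all_lte in *.
intros x Hin.
(* x <= n and n <= m, so x <= m by le_trans *)
eapply le_trans.
apply Hall. exact Hin.
exact Hnm.
Qed.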

Related

Finding inverse functions [duplicate]

In pure functional languages like Haskell, is there an algorithm to get the inverse of a function (when it is bijective)? And is there a specific way to program your function so it is?
In some cases, yes! There's a beautiful paper called Bidirectionalization for Free! which discusses a few cases (when your function is sufficiently polymorphic) where it is possible, completely automatically, to derive an inverse function. (It also discusses what makes the problem hard when the functions are not polymorphic.)
What you get out in the case your function is invertible is the inverse (with a spurious input); in other cases, you get a function which tries to "merge" an old input value and a new output value.
No, it's not possible in general.
Proof: consider bijective functions of type
type F = [Bit] -> [Bit]
with
data Bit = B0 | B1
Assume we have an inverter inv :: F -> F such that inv f . f ≡ id. Say we have tested it for the function f = id, by confirming that
inv f (repeat B0) -> (B0 : ls)
Since this first B0 in the output must have come after some finite time, we have an upper bound n both on the depth to which inv had actually evaluated our test input to obtain this result and on the number of times it can have called f. Define now a family of functions
g j (B1 : B0 : ... (n+j times) ... B0 : ls)
    = B0 : ... (n+j times) ... B0 : B1 : ls
g j (B0 : ... (n+j times) ... B0 : B1 : ls)
    = B1 : B0 : ... (n+j times) ... B0 : ls
g j l = l
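For concreteness, here is one possible Haskell rendering of g (a sketch, not part of the original argument: the bound n is passed as an explicit extra argument, and a small helper avoids needing an Eq instance for Bit):
-- Hypothetical concrete version of g: swaps the prefix "B1 followed by n+j
-- zeros" with "n+j zeros followed by B1"; leaves every other list unchanged.
g :: Int -> Int -> [Bit] -> [Bit]
g n j l = case splitAt (n + j + 1) l of
    (B1 : zs, rest) | length zs == n + j, all isB0 zs ->
        zs ++ [B1] ++ rest
    (zs, rest) | length zs == n + j + 1, all isB0 (init zs), not (isB0 (last zs)) ->
        B1 : init zs ++ rest
    _ -> l
  where
    isB0 B0 = True
    isB0 _  = False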
Clearly, for all 0<j≤n, g j is a bijection, in fact self-inverse. So we should be able to confirm
inv (g j) (replicate (n+j) B0 ++ B1 : repeat B0) -> (B1 : ls)
but to fulfill this, inv (g j) would have needed to either
evaluate g j (B1 : repeat B0) to a depth of n+j > n, or
evaluate head $ g j l for at least n different lists matching replicate (n+j) B0 ++ B1 : ls.
Up to that point, at least one of the g j is indistinguishable from f, and since inv f hadn't done either of these evaluations, inv could not possibly have told it apart – short of doing some runtime-measurements on its own, which is only possible in the IO Monad.
                                                                                                                                   ⬜
You can look it up on Wikipedia; it's called Reversible Computing.
In general you can't do it, though, and none of the functional languages have that option. For example:
f :: a -> Int
f _ = 1
This function does not have an inverse.
Not in most functional languages, but in logic programming or relational programming, most functions you define are in fact not functions but "relations", and these can be used in both directions. See for example Prolog or Kanren.
Tasks like this are almost always undecidable. You can have a solution for some specific functions, but not in general.
Here, you cannot even recognize which functions have an inverse. Quoting Barendregt, H. P. The Lambda Calculus: Its Syntax and Semantics. North Holland, Amsterdam (1984):
A set of lambda-terms is nontrivial if it is neither the empty nor the full set. If A and B are two nontrivial, disjoint sets of lambda-terms closed under (beta) equality, then A and B are recursively inseparable.
Let's take A to be the set of lambda terms that represent invertible functions and B the rest. Both are non-empty and closed under beta equality. So it's not possible to decide whether a function is invertible or not.
(This applies to the untyped lambda calculus. TBH I don't know if the argument can be directly adapted to a typed lambda calculus when we know the type of a function that we want to invert. But I'm pretty sure it will be similar.)
If you can enumerate the domain of the function and can compare elements of the range for equality, you can - in a rather straightforward way. By enumerate I mean having a list of all the elements available. I'll stick to Haskell, since I don't know Ocaml (or even how to capitalise it properly ;-)
What you want to do is run through the elements of the domain and see if they're equal to the element of the range you're trying to invert, and take the first one that works:
inv :: Eq b => [a] -> (a -> b) -> (b -> a)
inv domain f b = head [ a | a <- domain, f a == b ]
Since you've stated that f is a bijection, there's bound to be one and only one such element. The trick, of course, is to ensure that your enumeration of the domain actually reaches all the elements in a finite time. If you're trying to invert a bijection from Integer to Integer, using [0,1 ..] ++ [-1,-2 ..] won't work as you'll never get to the negative numbers. Concretely, inv ([0,1 ..] ++ [-1,-2 ..]) (+1) (-3) will never yield a value.
However, 0 : concatMap (\x -> [x,-x]) [1..] will work, as this runs through the integers in the following order [0,1,-1,2,-2,3,-3, and so on]. Indeed inv (0 : concatMap (\x -> [x,-x]) [1..]) (+1) (-3) promptly returns -4!
The Control.Monad.Omega package can help you run through lists of tuples etcetera in a good way; I'm sure there are more packages like that - but I don't know them.
Of course, this approach is rather low-brow and brute-force, not to mention ugly and inefficient! So I'll end with a few remarks on the last part of your question, on how to 'write' bijections. The type system of Haskell isn't up to proving that a function is a bijection - you really want something like Agda for that - but it is willing to trust you.
(Warning: untested code follows)
So you can define a datatype of bijections between types a and b:
data Bi a b = Bi {
  apply :: a -> b,
  invert :: b -> a
}
along with as many constants (where you can say 'I know they're bijections!') as you like, such as:
notBi :: Bi Bool Bool
notBi = Bi not not
add1Bi :: Bi Integer Integer
add1Bi = Bi (+1) (subtract 1)
and a couple of smart combinators, such as:
idBi :: Bi a a
idBi = Bi id id
invertBi :: Bi a b -> Bi b a
invertBi (Bi a i) = (Bi i a)
composeBi :: Bi a b -> Bi b c -> Bi a c
composeBi (Bi a1 i1) (Bi a2 i2) = Bi (a2 . a1) (i1 . i2)
mapBi :: Bi a b -> Bi [a] [b]
mapBi (Bi a i) = Bi (map a) (map i)
bruteForceBi :: Eq b => [a] -> (a -> b) -> Bi a b
bruteForceBi domain f = Bi f (inv domain f)
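In the same spirit, further combinators could be added; for example, a combinator for pairs might look like this (a sketch, not part of the original answer; the name pairBi is made up):
-- A bijection on pairs, built component-wise from two bijections.
pairBi :: Bi a b -> Bi c d -> Bi (a, c) (b, d)
pairBi (Bi a1 i1) (Bi a2 i2) =
  Bi (\(x, y) -> (a1 x, a2 y))
     (\(x, y) -> (i1 x, i2 y))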
I think you could then do invert (mapBi add1Bi) [1,5,6] and get [0,4,5]. If you pick your combinators in a smart way, I think the number of times you'll have to write a Bi constant by hand could be quite limited.
After all, if you know a function is a bijection, you'll hopefully have a proof-sketch of that fact in your head, which the Curry-Howard isomorphism should be able to turn into a program :-)
I've recently been dealing with issues like this, and no, I'd say that (a) it's not difficult in many cases, but (b) it's not efficient at all.
Basically, suppose you have f :: a -> b, and that f is indeed a bijection. You can compute the inverse f' :: b -> a in a really dumb way:
import Data.List
-- | Class for types whose values are recursively enumerable.
class Enumerable a where
  -- | Produce the list of all values of type #a#.
  enumerate :: [a]
-- | Note, this is only guaranteed to terminate if #f# is a bijection!
invert :: (Enumerable a, Eq b) => (a -> b) -> b -> Maybe a
invert f b = find (\a -> f a == b) enumerate
If f is a bijection and enumerate truly produces all values of a, then you will eventually hit an a such that f a == b.
Types that have a Bounded and an Enum instance can trivially be made Enumerable (a sketch of such an instance appears after the Either instance below). Pairs of Enumerable types can also be made Enumerable:
instance (Enumerable a, Enumerable b) => Enumerable (a, b) where
  enumerate = crossWith (,) enumerate enumerate
crossWith :: (a -> b -> c) -> [a] -> [b] -> [c]
crossWith f _ [] = []
crossWith f [] _ = []
crossWith f (x0:xs) (y0:ys) =
  f x0 y0 : interleave (map (f x0) ys)
                       (interleave (map (flip f y0) xs)
                                   (crossWith f xs ys))
interleave :: [a] -> [a] -> [a]
interleave xs [] = xs
interleave [] ys = []
interleave (x:xs) ys = x : interleave ys xs
Same goes for disjunctions of Enumerable types:
instance (Enumerable a, Enumerable b) => Enumerable (Either a b) where
  enumerate = enumerateEither enumerate enumerate
enumerateEither :: [a] -> [b] -> [Either a b]
enumerateEither [] ys = map Right ys
enumerateEither xs [] = map Left xs
enumerateEither (x:xs) (y:ys) = Left x : Right y : enumerateEither xs ys
The fact that we can do this both for (,) and Either probably means that we can do it for any algebraic data type.
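For the Bounded/Enum case mentioned above, an instance might be written along these lines (a sketch, not from the original answer; enumerateBoundedEnum is a made-up helper name). With an Enumerable Bool instance like this, invert not True should evaluate to Just False:
-- Any type with Bounded and Enum instances can enumerate all of its values.
enumerateBoundedEnum :: (Bounded a, Enum a) => [a]
enumerateBoundedEnum = [minBound .. maxBound]
instance Enumerable Bool where
  enumerate = enumerateBoundedEnum
instance Enumerable Char where
  enumerate = enumerateBoundedEnum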
Not every function has an inverse. If you limit the discussion to one-to-one functions, the ability to invert an arbitrary function grants the ability to crack any cryptosystem. We kind of have to hope this isn't feasible, even in theory!
In some cases, it is possible to find the inverse of a bijective function by converting it into a symbolic representation. Based on this example, I wrote this Haskell program to find inverses of some simple polynomial functions:
bijective_function x = x*2+1
main = do
  print $ bijective_function 3
  print $ inverse_function bijective_function (bijective_function 3)
data Expr = X | Const Double |
            Plus Expr Expr | Subtract Expr Expr | Mult Expr Expr | Div Expr Expr |
            Negate Expr | Inverse Expr |
            Exp Expr | Log Expr | Sin Expr | Atanh Expr | Sinh Expr | Acosh Expr |
            Cosh Expr | Tan Expr | Cos Expr | Asinh Expr | Atan Expr | Acos Expr |
            Asin Expr | Abs Expr | Signum Expr | Integer
  deriving (Show, Eq)
instance Num Expr where
  (+) = Plus
  (-) = Subtract
  (*) = Mult
  abs = Abs
  signum = Signum
  negate = Negate
  fromInteger a = Const $ fromIntegral a
instance Fractional Expr where
  recip = Inverse
  fromRational a = Const $ realToFrac a
  (/) = Div
instance Floating Expr where
  pi = Const pi
  exp = Exp
  log = Log
  sin = Sin
  atanh = Atanh
  sinh = Sinh
  cosh = Cosh
  acosh = Acosh
  cos = Cos
  tan = Tan
  asin = Asin
  acos = Acos
  atan = Atan
  asinh = Asinh
fromFunction f = f X
toFunction :: Expr -> (Double -> Double)
toFunction X = \x -> x
toFunction (Negate a) = \x -> negate (toFunction a x)
toFunction (Const a) = const a
toFunction (Plus a b) = \x -> (toFunction a x) + (toFunction b x)
toFunction (Subtract a b) = \x -> (toFunction a x) - (toFunction b x)
toFunction (Mult a b) = \x -> (toFunction a x) * (toFunction b x)
toFunction (Div a b) = \x -> (toFunction a x) / (toFunction b x)
with_function func x = toFunction $ func $ fromFunction x
simplify X = X
simplify (Div (Const a) (Const b)) = Const (a/b)
simplify (Mult (Const a) (Const b)) | a == 0 || b == 0 = 0 | otherwise = Const (a*b)
simplify (Negate (Negate a)) = simplify a
simplify (Subtract a b) = simplify ( Plus (simplify a) (Negate (simplify b)) )
simplify (Div a b) | a == b = Const 1.0 | otherwise = simplify (Div (simplify a) (simplify b))
simplify (Mult a b) = simplify (Mult (simplify a) (simplify b))
simplify (Const a) = Const a
simplify (Plus (Const a) (Const b)) = Const (a+b)
simplify (Plus a (Const b)) = simplify (Plus (Const b) (simplify a))
simplify (Plus (Mult (Const a) X) (Mult (Const b) X)) = (simplify (Mult (Const (a+b)) X))
simplify (Plus (Const a) b) = simplify (Plus (simplify b) (Const a))
simplify (Plus X a) = simplify (Plus (Mult 1 X) (simplify a))
simplify (Plus a X) = simplify (Plus (Mult 1 X) (simplify a))
simplify (Plus a b) = (simplify (Plus (simplify a) (simplify b)))
simplify a = a
inverse X = X
inverse (Const a) = simplify (Const a)
inverse (Mult (Const a) (Const b)) = Const (a * b)
inverse (Mult (Const a) X) = (Div X (Const a))
inverse (Plus X (Const a)) = (Subtract X (Const a))
inverse (Negate x) = Negate (inverse x)
inverse a = inverse (simplify a)
inverse_function x = with_function inverse x
This example only works with arithmetic expressions, but it could probably be generalized to work with lists as well. There are also several implementations of computer algebra systems in Haskell that may be used to find the inverse of a bijective function.
No, not all functions even have inverses. For instance, what would the inverse of this function be?
f x = 1

Proving if n = m and m = o, then n + m = m + o in Idris?

I am trying to improve my Idris skills by looking at some of the exercises in Software Foundations (originally for Coq, but I am hoping the translation to Idris is not too bad). I am having trouble with the "Exercise: 1 star (plus_id_exercise)" which reads:
Remove "Admitted." and fill in the proof.
Theorem plus_id_exercise : ∀ n m o : nat,
n = m → m = o → n + m = m + o.
Proof.
(* FILL IN HERE *) Admitted.
I have translated to the following problem in Idris:
plusIdExercise : (n : Nat) ->
                 (m : Nat) ->
                 (o : Nat) ->
                 (n == m) = True ->
                 (m == o) = True ->
                 (n + m == m + o) = True
I am trying to perform a case by case analysis and I am having a lot of issues. The first case:
plusIdExercise Z Z Z n_eq_m n_eq_o = Refl
seems to work, but then I want to say for instance:
plusIdExercise (S n) Z Z n_eq_m n_eq_o = absurd
But this doesn't work and gives:
When checking right hand side of plusIdExercise with expected type
S n + 0 == 0 + 0 = True
Type mismatch between
t -> a (Type of absurd)
and
False = True (Expected type)
Specifically:
Type mismatch between
\uv => t -> uv
and
(=) False
Unification failure
I am trying to say this case can never happen because n == m, but Z (= m) is never the successor of any number (n). Is there anything I can do to fix this? Am I approaching this correctly? I am somewhat confused.
I would argue that the translation is not entirely correct. The lemma stated in Coq does not use boolean equality on natural numbers; it uses the so-called propositional equality. In Coq you can ask the system to give you more information about things:
Coq < About "=".
eq : forall A : Type, A -> A -> Prop
The above means that = (it is syntactic sugar for the eq type) takes two arguments of some type A and produces a proposition, not a boolean value.
That means that a direct translation would be the following snippet
plusIdExercise : (n = m) -> (m = o) -> (n + m = m + o)
plusIdExercise Refl Refl = Refl
And when you pattern-match on values of the equality type, Idris essentially rewrites terms according to the corresponding equation (it's roughly equivalent to Coq's rewrite tactic).
By the way, you might find the Software Foundations in Idris project useful.

Proving identity for binary operator on Fin

I've defined an operator, +- (ignore the terrible name), as follows:
infixr 10 +-
(+-) : Fin (S n) -> Fin (S m) -> Fin (S (n + m))
(+-) {n} {m} FZ f' = rewrite plusCommutative n m in weakenN n f'
(+-) {n = S n} (FS f) f' = FS (f +- f')
The intention is that it behaves exactly like + as defined on Fin, but the upper bound of the result is tighter by 1. As far as I can tell, it works correctly.
The problem I'm having is in trying to prove that (FZ +- f) = f for any f : Fin n. I'm not expecting this to be true in general, because it will usually be the case that FZ +- f has a looser bound than f on account of the call to weakenN. However, in the particular case where the FZ has type Fin 1, then the types (and the values) ought to match up.
Is there any way of indicating to Idris that I only want to assert the equality in that particular case, rather than for all possible types of FZ? Or is there a completely different approach that I ought to be taking?
If we reshuffle the definition for (+-) a bit, the proof becomes easy:
import Data.Fin
infixr 10 +-
total
(+-) : Fin (S n) -> Fin (S m) -> Fin (S (n + m))
(+-) {n = Z} {m = m} a b = b
(+-) {n = (S n)}{m = m} FZ b = rewrite plusCommutative (S n) m in weakenN (S n) b
(+-) {n = (S n)}{m = m} (FS a) b = FS (a +- b)
lem : (f : Fin (S n)) -> the (Fin 1) FZ +- f = f
lem FZ = Refl
lem (FS x) = Refl
This checks out because the rewrite on the right hand side of the (+-) definition happens to normalize to concrete values instead of substitutions/coercions.
On the other hand, if we'd like to stick with the original definition for (+-), then the rewrite doesn't go away, and we're in for a whole world of pain, because now we have to work with heterogeneous equalities. I did a proof with heterogeneous equalities in Agda; however, I couldn't get it to work in Idris on short notice, and I believe making it work would be a rather painful experience. Here it is in Agda.
Note though that we would have to add one more case to the original definition in order to make proving properties about it feasible in the first place. That's because it doesn't pass the coverage checker as it is. It's evident to us that Fin 1 only has FZ as constructor, but this has to be also explained to the compiler:
(+-) : Fin (S n) -> Fin (S m) -> Fin (S (n + m))
(+-) {n} {m} FZ f' = rewrite plusCommutative n m in weakenN n f'
(+-) {n = Z} (FS FZ) f' impossible
(+-) {n = S n} (FS f) f' = FS (f +- f')

Prove So (0 < m) -> (n ** m = S n)

I'm trying to make an Idris function of type (j : Nat) -> {auto p : So (j < n)} -> Fin n to convert a Nat into a Fin n. To get the Z case to work (and output FZ), I'm trying to prove that a proof of 0 < n is sufficient to be able to make FZ : Fin n. But I can't work out how to do this.
I'm open to making a completely different function, as long as it can convert Nat values into Fin n values (where they exist). My goal is to have some other function that can convert any Nat into a Mod n value, so that, for example, 15 : Nat is mapped to 3 : Mod 4. My Mod type currently has a single constructor, mkMod : Fin n -> Mod n.
After learning about LT : Nat -> Nat -> Type, I took a different approach. I started with the declaration:
natToFin : (j : Nat) -> {auto p : j `LT` n} -> Fin n
natToFin {n} j {p} = ?natToFin_rhs_1
Case-splitting on n, then on p in the n = Z case, resulted in:
natToFin : (j : Nat) -> {auto p : j `LT` n} -> Fin n
natToFin {n = (S k)} j {p = p} = ?natToFin_rhs_2
This is essentially the proof I was asking for. From there, I case-split on j and filled in the zero case, leaving:
natToFin : (j : Nat) -> {auto p : j `LT` n} -> Fin n
natToFin {n = (S k)} Z = FZ
natToFin {n = (S k)} (S j) {p = p} = ?natToFin_rhs_3
I wanted to fill ?natToFin_rhs_3 with FS (natToFin j), but the type checker wasn't letting me. However, after a case split on p, it was fine:
natToFin : (j : Nat) -> {auto p : j `LT` n} -> Fin n
natToFin {n = (S k)} Z = FZ
natToFin {n = (S k)} (S j) {p = (LTESucc x)} = FS (natToFin j)
Finally, I added total, and it all checked out.
The only problem now is that Idris can't seem to find LT proofs automatically. This is what happens:
λΠ> the (Fin 6) (natToFin 2)
When elaborating argument p to function mod2.natToFin:
Can't solve goal
LT (fromInteger 2) (fromInteger 6)
Is there any way to fix that?

Proof on less than and less or equal on nat

Assuming the following definitions (the first two are taken from http://www.cis.upenn.edu/~bcpierce/sf/Basics.html):
Fixpoint beq_nat (n m : nat) : bool :=
match n with
| O => match m with
| O => true
| S m' => false
end
| S n' => match m with
| O => false
| S m' => beq_nat n' m'
end
end.
Fixpoint ble_nat (n m : nat) : bool :=
match n with
| O => true
| S n' =>
match m with
| O => false
| S m' => ble_nat n' m'
end
end.
Definition blt_nat (n m : nat) : bool :=
if andb (ble_nat n m) (negb (beq_nat n m)) then true else false.
I would like to prove the following:
Lemma blt_nat_flip0 : forall (x y : nat),
blt_nat x y = false -> ble_nat y x = true.
Lemma blt_nat_flip : forall (x y : nat),
blt_nat x y = false -> beq_nat x y = false -> blt_nat y x = true.
The furthest I was able to get is proving blt_nat_flip assuming blt_nat_flip0. I spent a lot of time on it and I am almost there, but overall it seems more complex than it should be. Does anybody have a better idea of how to prove the two lemmas?
Here is my attempt so far:
Lemma beq_nat_symmetric : forall (x y : nat),
beq_nat x y = beq_nat y x.
Proof.
intros x. induction x.
intros y. simpl. destruct y.
reflexivity. reflexivity.
intros y. simpl. destruct y.
reflexivity.
simpl. apply IHx.
Qed.
Lemma and_negb_false : forall (b1 b2 : bool),
b2 = false -> andb b1 (negb b2) = b1.
Proof.
intros. rewrite -> H. unfold negb. destruct b1.
simpl. reflexivity.
simpl. reflexivity.
Qed.
Lemma blt_nat_flip0 : forall (x y : nat),
blt_nat x y = false -> ble_nat y x = true.
Proof.
intros x.
induction x.
intros. destruct y.
simpl. reflexivity.
simpl. inversion H.
intros. destruct y. simpl. reflexivity.
simpl. rewrite -> IHx. reflexivity.
(* I am giving up for now at this point ... *)
Admitted.
Lemma blt_nat_flip : forall (x y : nat),
blt_nat x y = false -> beq_nat x y = false ->
blt_nat y x = true.
Proof.
intros.
unfold blt_nat.
rewrite -> beq_nat_symmetric. rewrite -> H0.
rewrite -> and_negb_false.
replace (ble_nat y x) with true.
reflexivity.
rewrite -> blt_nat_flip0. reflexivity. apply H. reflexivity.
Qed.
Coq seems to have trouble doing an inversion on H in the last case of your induction, but if you unfold blt_nat first, it seems to work as intended.
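Following that hint, the first lemma might be finished along these lines (an untested sketch, slightly restructured from the script in the question):
Lemma blt_nat_flip0 : forall (x y : nat),
blt_nat x y = false -> ble_nat y x = true.
Proof.
intros x. induction x.
(* x = 0 *)
intros y H. destruct y.
reflexivity.
(* blt_nat 0 (S y) computes to true, contradicting H *)
unfold blt_nat in H. simpl in H. inversion H.
(* x = S x *)
intros y H. destruct y.
reflexivity.
simpl. apply IHx.
unfold blt_nat in H. simpl in H.
unfold blt_nat. exact H.
Qed.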