Proving ¬ (A ∧ B) → (A → ¬ B) in Lean - proof

I"m trying to prove ¬ (A ∧ B) → (A → ¬ B) with the Lean theorem prover. I've set it up like so.
example : ¬ (A ∧ B) → (A → ¬ B) :=
assume h1: ¬ (A ∧ B),
assume h2: A,
show ¬ B, from sorry
I've tried using and.left and and.right on h1, but these do not work when the conjunction is negated. I haven't been able to find any examples that prove an implication like this starting from a negation. Any help would be much appreciated.

¬ B is defined to be B -> false, so you could start with:
example (A B : Prop): ¬ (A ∧ B) → (A → ¬ B) :=
assume h1: ¬ (A ∧ B),
assume h2: A,
assume h3: B,
show false, from sorry
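One way to discharge the remaining sorry is to rebuild the conjunction with and.intro and feed it to h1 (a sketch; the anonymous constructor ⟨h2, h3⟩ would work just as well):

example (A B : Prop) : ¬ (A ∧ B) → (A → ¬ B) :=
assume h1 : ¬ (A ∧ B),
assume h2 : A,
assume h3 : B,
show false, from h1 (and.intro h2 h3)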

Related

Lawvere's fixed point theorem in agda

I was struggling to prove a more basic version of Lawvere's fixed point theorem in Agda. Precisely, I am trying to figure out the proof of the theorem below.
surjective : {A : _} {B : _} → (A → B) → Set
surjective {B = B} f = (b : B) → ∃ λ a → f a ≡ b
fixedPoint : {A : _} → (A → A) -> Set
fixedPoint f = ∃ λ a → f a ≡ a
lawvere : {A : _} {B : _}
  → (ϕ : A → A → B) → (surjective ϕ) → (f : B → B) →
  fixedPoint f
lawvere = ?
General tips about how to approach similar proofs involving existentials would also be helpful.
I think my problem was hesitancy to use equational reasoning, which I often have trouble with in Agda. The solution I eventually found was:
lawvere : {A : _} {B : _}
  → (ϕ : A → A → B) → (surjective ϕ) → (f : B → B) →
  fixedPoint f
lawvere {A} {B} ϕ surj f = ϕ p p , sym proof
  where
  q = λ a → f (ϕ a a)
  p = Σ.fst (surj q)
  proof =
    begin
      ϕ p p
    ≡⟨ (cong-app (Σ.snd (surj q)) p) ⟩
      q p
    ≡⟨ refl ⟩
      (λ a → f (ϕ a a)) p
    ≡⟨ refl ⟩
      f (ϕ p p)
    ∎
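For what it's worth, since the steps marked refl above are definitional, the same proof can be written without the equational-reasoning block at all (a sketch reusing the same Σ, sym and cong-app as above):

lawvere′ : {A : _} {B : _}
  → (ϕ : A → A → B) → (surjective ϕ) → (f : B → B) →
  fixedPoint f
lawvere′ ϕ surj f = ϕ p p , sym (cong-app (Σ.snd (surj q)) p)
  where
  q = λ a → f (ϕ a a)
  p = Σ.fst (surj q)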

Inferring function type from function definition Haskell

So I was taking a test about Haskell and one question said:
let the function be
lolo g x = ys
  where ys = [x] ++ filter (curry g x) ys
then determine the type of the function called lolo. The options are:
a) (a,b) -> Bool -> b -> [(a,b)]
b) (b -> b -> b) -> Bool -> [b]
c) ((b,b) -> Bool) -> b -> [b]
d) (a -> b -> Bool) -> b -> [c]
Can somebody please explain which one it is and why? I'm really confused about this one... things I do not understand are:
1) The curry function can only be applied to functions, right? Not datatypes that may be tuples? Then you can infer that g is a function in this context? What if g and x are both functions? Is it possible to use curry with n arguments? I've only seen curry used with one argument.
2) The other thing I don't understand very well is the recursion in the definition of ys. So ys is defined in terms of ys, but I don't see a base case in this scenario. Will it ever end? Maybe it's the filter function that makes the recursion end.
3) Also, is curry g x = curry (g x)? (This is a question about precedence in application of functions.)
Thanks a lot
1) The first argument to curry has to be a function; curry is what is known as a higher-order function: it takes a function and returns a new one. While its type is printed in GHCi as
curry :: ((a, b) -> c) -> a -> b -> c
It is more clearly represented (IMO) as
curry :: ((a, b) -> c) -> (a -> b -> c)
Which makes it more obvious that it takes a function and returns a new function. Technically, you could say that curry takes 3 arguments, one of type (a, b) -> c, one of type a, and one of type b. It just takes a function that normally accepts a tuple of arguments and converts it into a function that takes 2 arguments.
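For a quick illustration in GHCi, using the standard fst (which normally takes a tuple):
> fst (1, 'a')
1
> curry fst 1 'a'
1
> :t curry fst
curry fst :: a -> b -> a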
2) The computation for ys will never end; don't bother trying to call length on it, since you'll just run the computation forever. This isn't a problem, though: you can work with infinite lists and non-terminating lists just fine (non-terminating being a list where it takes forever to compute the next element, not just one that has infinitely many elements). You can still use functions like take and drop on it.
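For instance, here is a standalone non-terminating list (nothing to do with lolo, just to illustrate the point):
> let ones = 1 : ones
> take 5 ones
[1,1,1,1,1]
> length ones   -- this never returns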
3) Does curry g x == curry (g x)? No! When you see an expression like a b c d e, all of b, c, d, and e are arguments to a. If you instead saw a b c (d e), then d is applied to e, and that result is passed as the final argument to a b c. Consider filter even [1..10]; this is certainly not the same as filter (even [1..10]), which wouldn't even compile! (even :: Integral a => a -> Bool).
When solving this sort of problem, first look at what functions are used in the expression that you already know the types of:
(++) :: [a] -> [a] -> [a]
filter :: (b -> Bool) -> [b] -> [b]
curry :: ((c, d) -> e) -> c -> d -> e
I've used different type variables in each so that there will be less confusion when trying to line up the types. You can get these types by loading up GHCi, then typing
> :type (++)
(++) :: [a] -> [a] -> [a]
> -- Or just use :t
> :t filter
filter :: (a -> Bool) -> [a] -> [a]
> :t curry
curry :: ((a, b) -> c) -> a -> b -> c
As you can see, I've changed filter to use b instead of a, and curry to use c, d, and e. This doesn't change the meaning any more than f x = x + 1 versus f y = y + 1, it'll just make it easier to talk about.
Now that we've broken down our function into its subcomponents, we can work from the "top" down. By top, I mean the last function that gets called, namely (++). You can picture the expression as a tree:
            (++)
           /    \
        [x]      filter
                /      \
           curry        ys
          /     \
         g       x
So we can clearly see that (++) is at the top. Using that, we can infer that [x] has the type [a], which means that x ~ a (the tilde is the type equality symbol) and consequently ys ~ [a], since ys = [x] ++ something. Now that we know the type of x, we can start filling out the rest of the expression. Next, we work down to filter (curry g x) ys. Since it is the second argument to (++), we can infer that this subexpression also has the type [a]. If we look at the type of filter:
filter :: (b -> Bool) -> [b] -> [b]
The final result is a list of type [b]. Since it's being applied to [x] ++, we can infer that filter (curry g x) ys :: [a]. This means that [b] ~ [a] => b ~ a. For reference, this makes filter's type
filter :: (a -> Bool) -> [a] -> [a]
This now places a constraint on curry g x, it must fit into filter's first argument which has the type a -> Bool. Looking at curry's type again:
curry :: ((c, d) -> e) -> c -> d -> e
This means that e ~ Bool, and d ~ a. If we plug those back in
curry :: ((c, a) -> Bool) -> c -> a -> Bool
Ignoring g for now, we look at the type of x, which we figured out is a. Since x is the second argument to curry, that means that x matches with the argument of type c, implying that c ~ a. Substituting this into what we just computed we get
curry :: ((a, a) -> Bool) -> a -> a -> Bool
With
curry g x :: a -> Bool
filter (curry g x) :: [a] -> [a]
filter (curry g x) ys :: [a]
[x] ++ filter (curry g x) ys :: [a]
From this we can directly infer that lolo's type signature ends with [a], so
lolo :: ??? -> [a]
I'll leave you to do the remaining few steps to figure out what ??? is.
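(If you want to check your work afterwards, you can load the definition without a type signature and let GHC infer it for you:
lolo g x = ys
  where ys = [x] ++ filter (curry g x) ys
> :t lolo
and compare what GHCi prints against the four options.)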

Agda - Reverse-helper

I'm trying to prove this lemma:
reverse-++ : ∀{ℓ}{A : Set ℓ}(l1 l2 : 𝕃 A) → reverse (l1 ++ l2) ≡ (reverse l2) ++ (reverse l1)
reverse-++ [] [] = refl
reverse-++ l1 [] rewrite ++[] l1 = refl
reverse-++ l1 (x :: xs) = {!!}
But another function, reverse-helper, keeps coming up in my goal and I have no idea how to get rid of it. Any guidance or suggestions?
I'm assuming that in the implementation of reverse, you call reverse-helper. In that case, you probably want to prove a lemma about reverse-helper that you can call in the lemma about reverse. This is a general thing: If you are proving something about a function with a helper function, you usually need a proof with a helper proof, because the induction structure of the proof usually matches the recursion structure of the function.
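For instance, if your reverse is defined via an accumulating helper along these lines (this is an assumption about your library; adjust to your actual definitions), the helper lemma could look roughly like the statement below, proved by induction on l (you will probably also need associativity of ++):
-- assumed definitions:
--   reverse-helper h []        = h
--   reverse-helper h (x :: xs) = reverse-helper (x :: h) xs
--   reverse l                  = reverse-helper [] l
reverse-helper-++ : ∀{ℓ}{A : Set ℓ} (h l : 𝕃 A) → reverse-helper h l ≡ reverse l ++ h
reverse-helper-++ h l = {!!}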
I think you should start with induction on the other argument.
Since ++ is probably defined with [] ++ a = a, and reverse (x :: xs) = (reverse xs) ++ (x :: nil), it will be better to prove reverse-++ (x :: xs) ys = cong (\xs -> xs ++ (x :: nil)) (reverse-++ xs ys)

Using the value of a computed function for a proof in agda

I'm still trying to wrap my head around Agda, so I wrote a little tic-tac-toe game type:
data Game : Player -> Vec Square 9 -> Set where
  start : Game x ( - ∷ - ∷ - ∷
                   - ∷ - ∷ - ∷
                   - ∷ - ∷ - ∷ [] )
  xturn : {gs : Vec Square 9} -> (n : ℕ) -> Game x gs -> n < (#ofMoves gs) -> Game o (makeMove gs x n )
  oturn : {gs : Vec Square 9} -> (n : ℕ) -> Game o gs -> n < (#ofMoves gs) -> Game x (makeMove gs o n )
Which will hold a valid game path.
Here #ofMoves gs would return the number of empty Squares,
n < (#ofMoves gs) would prove that the nth move is valid,
and makeMove gs x n replaces the nth empty Square in the game state vector.
After a few stimulating games against myself, I decided to shoot for something more awesome. The goal was to create a function that would take an x player and an o player and pit them against each other in an epic battle to the death.
--two programs enter, one program leaves
gameMaster : {p : Player} -> {gs : Vec Square 9}  --FOR ALL
  -> ({gs : Vec Square 9} -> Game x gs -> (0 < (#ofMoves gs)) -> Game o (makeMove gs x _ ))  --take an x player
  -> ({gs : Vec Square 9} -> Game o gs -> (0 < (#ofMoves gs)) -> Game x (makeMove gs o _ ))  --take an o player
  -> ( Game p gs)       --Take an initial configuration
  -> GameCondition      --return a winner
gameMaster {_} {gs} _ _ game with (gameCondition gs)
... | xWin = xWin
... | oWin = oWin
... | draw = draw
... | ongoing with #ofMoves gs
... | 0 = draw --TODO: really just prove this state doesn't exist, it will always be covered by gameCondition gs = draw
gameMaster {x} {gs} fx fo game | ongoing | suc nn = gameMaster (fx) (fo) (fx game (s≤s z≤n)) -- x move
gameMaster {o} {gs} fx fo game | ongoing | suc nn = gameMaster (fx) (fo) (fo game (s≤s z≤n)) -- o move
Here (0 < (#ofMoves gs)) is "shorthand" for a proof that the game is ongoing,
gameCondition gs will return the game state as you would expect (one of xWin, oWin, draw, or ongoing).
I want to prove that there are valid moves (the s≤s z≤n part). This should be possible since suc nn <= #ofMoves gs. I have no idea how to make this work in Agda.
I'll try to answer some of your questions, but I don't think you're approaching this from the right angle. While you certainly can work with bounded numbers using explicit proofs, you'll most likely be more successful with a dedicated data type instead.
For your makeMove (I've renamed it to move in the rest of the answer), you need a number bounded by the available free squares. That is, when you have 4 free squares, you want to be able to call move only with 0, 1, 2 and 3. There's one very nice way to achieve that.
Looking at Data.Fin, we find this interesting data type:
data Fin : ℕ → Set where
  zero : {n : ℕ} → Fin (suc n)
  suc  : {n : ℕ} (i : Fin n) → Fin (suc n)
Fin 0 is empty (both zero and suc construct Fin n only for n greater than or equal to 1). Fin 1 only has zero, Fin 2 has zero and suc zero, and so on. This represents exactly what we need: a number bounded by n. You might have seen this used in the implementation of safe vector lookup:
lookup : ∀ {a n} {A : Set a} → Fin n → Vec A n → A
lookup zero (x ∷ xs) = x
lookup (suc i) (x ∷ xs) = lookup i xs
The lookup _ [] case is impossible, because Fin 0 has no elements!
How to apply this nicely to your problem? Firstly, we'll have to track how many empty squares we have. This allows us to prove that gameMaster is indeed a terminating function (the number of empty squares is always decreasing). Let's write a variant of Vec which tracks not only length, but also the empty squares:
data Player : Set where
  x o : Player

data SquareVec : (len : ℕ) (empty : ℕ) → Set where
  []  : SquareVec 0 0
  -∷_ : ∀ {l e} → SquareVec l e → SquareVec (suc l) (suc e)
  _∷_ : ∀ {l e} (p : Player) → SquareVec l e → SquareVec (suc l) e
Notice that I got rid of the Square data type; instead, the empty square is baked directly into the -∷_ constructor. Instead of - ∷ rest we have -∷ rest.
We can now write the move function. What should its type be? Well, it'll take a SquareVec with at least one empty square, a Fin e (where e is the number of empty squares) and a Player. The Fin e guarantees us that this function can always find the appropriate empty square:
move : ∀ {l e} → Player → SquareVec l (suc e) → Fin (suc e) → SquareVec l e
move p ( -∷ sqs) zero = p ∷ sqs
move {e = zero} p ( -∷ sqs) (suc ())
move {e = suc e} p ( -∷ sqs) (suc fe) = -∷ move p sqs fe
move p (p′ ∷ sqs) fe = p′ ∷ move p sqs fe
Notice that this function gives us a SquareVec with exactly one empty square filled in. It cannot fill more than one empty square, and it cannot fill none at all!
We walk down the vector looking for an empty square; once we find it, the Fin argument tells us whether it's the square we want to fill in. If it's zero, we fill in the player; if it isn't, we continue searching the rest of the vector but with a smaller number.
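As a quick sanity check, not part of the original answer (it assumes _≡_ and refl are in scope), here is move filling the second of the two empty squares on a three-square board:
test-move : move x ( -∷ (o ∷ ( -∷ []))) (suc zero) ≡ ( -∷ (o ∷ (x ∷ [])))
test-move = refl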
Now, the game representation. Thanks to the extra work we did earlier, we can simplify the Game data type. The move-p constructor just tells us where the move happened and that's it! I got rid of the Player index for simplicity; but it would work just fine with it.
data Game : ∀ {e} → SquareVec 9 e → Set where
  start  : Game empty
  move-p : ∀ {e} {gs} p (fe : Fin (suc e)) → Game gs → Game (move p gs fe)
Oh, what's empty? It's a shortcut for your - ∷ - ∷ ...:
empty : ∀ {n} → SquareVec n n
empty {zero} = []
empty {suc _} = -∷ empty
Now, the states. I separated the states into a state of a possibly running game and a state of an ended game. Again, you can use your original GameCondition:
data State : Set where
  win   : Player → State
  draw  : State
  going : State

data FinalState : Set where
  win  : Player → FinalState
  draw : FinalState
For the following code, we'll need these imports:
open import Data.Empty
open import Data.Product
open import Relation.Binary.PropositionalEquality
And a function to determine the game state. Fill it in with your actual implementation; this one just lets the players play until the board is completely full.
-- Dummy implementation.
state : ∀ {e} {gs : SquareVec 9 e} → Game gs → State
state {zero} gs = draw
state {suc _} gs = going
Next, we need to prove that the State cannot be going when there are no empty squares:
zero-no-going : ∀ {gs : SquareVec 9 0} (g : Game gs) → state g ≢ going
zero-no-going g ()
Again, this is the proof for the dummy state, the proof for your actual implementation will be very different.
Now, we have all the tools we need to implement gameMaster. Firstly, we'll have to decide what its type is. Much like your version, we'll take two functions that represent the AI, one for o and other for x. Then we'll take the game state and produce FinalState. In this version, I'm actually returning the final board so we can see how the game progressed.
Now, the AI functions will return just the turn they want to make instead of returning a whole new game state. This is easier to work with.
Brace yourself, here's the type signature I conjured up:
AI : Set
AI = ∀ {e} {sqv : SquareVec 9 (suc e)} → Game sqv → Fin (suc e)
gameMaster : ∀ {e} {sqv : SquareVec 9 e} (sp : Player)
             (x-move o-move : AI) → Game sqv →
             FinalState × (Σ[ e′ ∈ ℕ ] Σ[ sqv′ ∈ SquareVec 9 e′ ] Game sqv′)
Notice that the AI functions take a game state with at least one empty square and return a move. Now for the implementation.
gameMaster sp xm om g with state g
... | win p = win p , _ , _ , g
... | draw = draw , _ , _ , g
... | going = ?
So, if the current state is win or draw, we'll return the corresponding FinalState and the current board. Now, we have to deal with the going case. We'll pattern match on e (the number of empty squares) to figure out whether the game is at the end or not:
gameMaster {zero} sp xm om g | going = ?
gameMaster {suc e} x xm om g | going = ?
gameMaster {suc e} o xm om g | going = ?
The zero case cannot happen, we proved earlier that state cannot return going when the number of empty squares is zero. How to apply that proof here?
We have pattern matched on state g and we now know that state g ≡ going; but sadly Agda already forgot this information. This is what Dominique Devriese was hinting at: the inspect machinery allows us to retain the proof!
Instead of pattern matching on just state g, we'll also pattern match on inspect state g:
gameMaster sp xm om g with state g | inspect state g
... | win p | _ = win p , _ , _ , g
... | draw | _ = draw , _ , _ , g
gameMaster {zero} sp xm om g | going | [ pf ] = ?
gameMaster {suc e} x xm om g | going | _ = ?
gameMaster {suc e} o xm om g | going | _ = ?
pf is now the proof that state g ≡ going, which we can feed to zero-no-going:
gameMaster {zero} sp xm om g | going | [ pf ]
  = ⊥-elim (zero-no-going g pf)
The other two cases are easy: we just apply the AI function and recursively apply gameMaster to the result:
gameMaster {suc e} x xm om g | going | _
  = gameMaster o xm om (move-p x (xm g) g)
gameMaster {suc e} o xm om g | going | _
  = gameMaster x xm om (move-p o (om g) g)
I wrote some dumb AIs: the first one fills the first available empty square; the second one fills the last one.
player-lowest : AI
player-lowest _ = zero
max : ∀ {e} → Fin (suc e)
max {zero} = zero
max {suc e} = suc max
player-highest : AI
player-highest _ = max
Now, let's match player-lowest against player-lowest! In Emacs, type C-c C-n gameMaster x player-lowest player-lowest start <RET>:
draw ,
0 ,
x ∷ (o ∷ (x ∷ (o ∷ (x ∷ (o ∷ (x ∷ (o ∷ (x ∷ [])))))))) ,
move-p x zero
(move-p o zero
(move-p x zero
(move-p o zero
(move-p x zero
(move-p o zero
(move-p x zero
(move-p o zero
(move-p x zero start))))))))
We can see that all squares are filled and they alternate between x and o. Matching player-lowest with player-highest gives us:
draw ,
0 ,
x ∷ (x ∷ (x ∷ (x ∷ (x ∷ (o ∷ (o ∷ (o ∷ (o ∷ [])))))))) ,
move-p x zero
(move-p o (suc zero)
(move-p x zero
(move-p o (suc (suc (suc zero)))
(move-p x zero
(move-p o (suc (suc (suc (suc (suc zero)))))
(move-p x zero
(move-p o (suc (suc (suc (suc (suc (suc (suc zero)))))))
(move-p x zero start))))))))
If you really want to work with the proofs, then I suggest the following representation of Fin:
Fin₂ : ℕ → Set
Fin₂ n = ∃ λ m → m < n
fin⟶fin₂ : ∀ {n} → Fin n → Fin₂ n
fin⟶fin₂ zero = zero , s≤s z≤n
fin⟶fin₂ (suc n) = map suc s≤s (fin⟶fin₂ n)
fin₂⟶fin : ∀ {n} → Fin₂ n → Fin n
fin₂⟶fin {zero} (_ , ())
fin₂⟶fin {suc _} (zero , _) = zero
fin₂⟶fin {suc _} (suc _ , s≤s p) = suc (fin₂⟶fin (_ , p))
Not strictly related to the question, but inspect uses a rather interesting trick which might be worth mentioning. To understand this trick, we'll have to take a look at how with works.
When you use with on an expression expr, Agda goes through the types of all arguments and replaces any occurrence of expr with a fresh variable; let's call it w. For example:
test : (n : ℕ) → Vec ℕ (n + 0) → ℕ
test n v = ?
Here, the type of v is Vec ℕ (n + 0), as you would expect.
test : (n : ℕ) → Vec ℕ (n + 0) → ℕ
test n v with n + 0
... | w = ?
However, once we abstract over n + 0, the type of v suddenly changes to Vec ℕ w. If you later want to use something which contains n + 0 in its type, the substitution won't take place again - it's a one time deal.
In the gameMaster function, we applied with to state g and pattern matched to find out it's going. By the time we use zero-no-going, state g and going are two separate things which have no relation as far as Agda is concerned.
How do we preserve this information? We somehow need to get state g ≡ state g and have the with replace the state g on only one side - this would give us the needed state g ≡ going.
What inspect does is hide the function application state g. We have to write a function hide in such a way that Agda cannot see that hide state g and state g are in fact the same thing.
One possible way to hide something is to use the fact that for any type A, A and ⊤ → A are isomorphic - that is, we can freely go from one representation to the other without losing any information.
However, we cannot use the ⊤ as defined in the standard library. In a moment I'll show why, but for now, we'll define a new type:
data Unit : Set where
  unit : Unit
And what it means for a value to be hidden:
Hidden : Set → Set
Hidden A = Unit → A
We can easily reveal the hidden value by applying unit to it:
reveal : {A : Set} → Hidden A → A
reveal f = f unit
The last step we need to take is the hide function:
hide : {A : Set} {B : A → Set} →
       ((x : A) → B x) → ((x : A) → Hidden (B x))
hide f x unit = f x
Why wouldn't this work with ⊤? If you declare ⊤ as a record, Agda can figure out on its own that tt is its only value. So, when faced with hide f x, Agda won't stop at the third argument (because it already knows what it must look like) and will automatically reduce it to λ _ → f x. Data types defined with the data keyword don't have these special rules, so hide f x stays that way until someone reveals it, and the type checker cannot see that there's an f x subexpression inside hide f x.
The rest is just arranging stuff so we can get the proof later:
data Reveal_is_ {A : Set} (x : Hidden A) (y : A) : Set where
  [_] : (eq : reveal x ≡ y) → Reveal x is y

inspect : {A : Set} {B : A → Set}
          (f : (x : A) → B x) (x : A) → Reveal (hide f x) is (f x)
inspect f x = [ refl ]
So, there you have it:
inspect state g : Reveal (hide state g) is (state g)
-- pattern match on (state g)
inspect state g : Reveal (hide state g) is going
When you then reveal hide state g, you'll get state g and finally the proof that state g ≡ going.
I think you are looking for an Agda technique known by the name "inspect" or "inspect on steroids". It allows you to obtain an equality proof for the knowledge learned from a with pattern match. I recommend you read the code in the following mail and try to understand how it works. Focus on how the function foo at the bottom needs to remember that "f x = z" and does so by with-matching on "inspect (hide f x)" together with "f x":
https://lists.chalmers.se/pipermail/agda/2011/003286.html
To use this in actual code, I recommend you import Relation.Binary.PropositionalEquality from the Agda standard library and use its version of inspect (which is superficially different from the code above). It has the following example code:
f x y with g x | inspect g x
f x y | c z | [ eq ] = ...
Note: "Inspect on steroids" is an updated version of an older approach at the inspect idiom.
I hope this helps...

Is it possible to map tuple of functions over a list in Haskell?

I'm trying to find a way to do something like this:
(head, last) `someFunction` [1, 2, 3]
to produce the tuple (1, 3) as output.
It seems similar in theory to an applicative functor, but a little backwards. I'm guessing there's a similar function that does this (or some way to make one), but I can't seem to find it/figure it out.
I tried defining a function like this:
fmap' :: ((a -> b), (a -> b)) -> [a] -> (b, b)
fmap' (f1, f2) xs = (f1 xs, f2 xs)
but GHC won't actually compile this.
Any help would be great; thanks!
Edit (a whole year later!):
My fmap' wouldn't compile because the type signature was wrong. Obviously there are better ways to do what I was doing, but the type of my fmap' should instead be:
fmap' :: ((a -> b), (a -> b)) -> a -> (b, b)
In that case, it compiles and runs just fine.
I think you can do this with arrows ((&&&) comes from Control.Arrow).
head &&& last $ [1,2,3]
will return (1,3).
It seems similar in theory to an applicative functor, but a little backwards.
Actually, it's a boring old forwards applicative functor; specifically, the reader ((->) r).
Prelude Control.Applicative> liftA2 (,) head last [1,2,3]
(1,3)
Or, if you're into that kind of thing:
Prelude Control.Applicative> let sequenceA [] = pure []; sequenceA (x:xs) = (:) <$> x <*> sequenceA xs
Prelude Control.Applicative> [head, last] `sequenceA` [1,2,3]
[1,3]
The type of fmap' is wrong. It should be
fmap' :: ([a] -> b, [a] -> b) -> [a] -> (b, b)
or, it can be more generalized
fmap' :: (a -> b, a -> c) -> a -> (b, c)
It doesn't really resemble fmap :: (a -> b) -> f a -> f b.
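For what it's worth, the generalized version is a one-liner and does exactly what the question asked for (a sketch):
fmap' :: (a -> b, a -> c) -> a -> (b, c)
fmap' (f1, f2) x = (f1 x, f2 x)

> fmap' (head, last) [1, 2, 3]
(1,3)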
Something to try in this situation is to omit the type signature and check what GHC infers.
Doing so and asking GHCi :t fmap' yields the signature
fmap' :: (t2 -> t, t2 -> t1) -> t2 -> (t, t1)
which is identical to KennyTM's generalized version, and will give you the behaviour you're looking for.