I want to prove the correctness of a function definition made with the function keyword. Here is the definition of an addition function on the usual inductive definition of the natural numbers:
theory FunctionDefinition
imports Main
begin
datatype natural = Zero | Succ natural
function add :: "natural => natural => natural"
where
"add Zero m = m"
| "add (Succ n) m = Succ (add n m)"
Isabelle/jEdit shows me the following subgoals:
goal (4 subgoals):
1. ⋀P x. (⋀m. x = (Zero, m) ⟹ P) ⟹ (⋀n m. x = (Succ n, m) ⟹ P) ⟹ P
2. ⋀m ma. (Zero, m) = (Zero, ma) ⟹ m = ma
3. ⋀m n ma. (Zero, m) = (Succ n, ma) ⟹ m = Succ (add_sumC (n, ma))
4. ⋀n m na ma. (Succ n, m) = (Succ na, ma) ⟹ Succ (add_sumC (n, m)) = Succ (add_sumC (na, ma))
Auto solve_direct: ⋀m ma. (Zero, m) = (Zero, ma) ⟹ m = ma can be solved directly with
Product_Type.Pair_inject: (?a, ?b) = (?a', ?b') ⟹ (?a = ?a' ⟹ ?b = ?b' ⟹ ?R) ⟹ ?R
Using
apply (auto simp add: Product_Type.Pair_inject)
I get
goal (1 subgoal):
1. ⋀P a b. (⋀m. a = Zero ∧ b = m ⟹ P) ⟹ (⋀n m. a = Succ n ∧ b = m ⟹ P) ⟹ P
It is not clear to me how to proceed. Is this the right way to tackle the problem at all?
I know that Isabelle would do this automatically if I used a fun definition -- I want to learn how to do it manually.
The tutorial on the function package mentions in section 3 that fun f where ... abbreviates
function (sequential) f where ...
by pat_completeness auto
termination by lexicographic_order
Here pat_completeness is a proof method from the function package that automates the completeness proof for patterns of datatype constructors. This is the first subgoal that you have to prove. It is recommended to apply pat_completeness before auto, because auto changes the syntactic structure of the goal, and pat_completeness might not work afterwards.
If you want to prove pattern completeness without pat_completeness, you should try to do case analysis of all function parameters, i.e., case_tac a in your example.
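For example, starting from the goal state shown in the question, something along these lines should work (an untested sketch; case_tac a splits the remaining goal into the Zero and Succ cases, after which each assumption can be discharged):
apply (auto simp add: Product_Type.Pair_inject)
apply (case_tac a)
 apply blast  (* a = Zero: the first assumption applies *)
apply blast   (* a = Succ n: the second assumption applies *)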
Manuel already mentioned it in his comment, but I thought a more detailed example might be helpful anyway. Here is what you can do manually:
First you specify your function as usual
function add :: "natural => natural => natural"
where
"add Zero m = m" |
"add (Succ n) m = Succ (add n m)"
and then you prove that the given patterns cover all cases by
by (pat_completeness) auto
Afterwards you take care of termination. E.g., every datatype comes with a size function and you might note that the first argument of add strictly decreases in size for every recursive call. By default function will bundle all arguments of a function into a tuple for a termination proof, i.e., instead of two arguments n and m, in the termination proof you work with the single pair (n, m). Thus if you want to tell the system that it should use the size of the first argument you can do this as follows:
termination add
apply (relation "measure (size o fst)")
This will yield the remaining goals:
goal (2 subgoals):
1. wf (measure (size o fst))
2. !!n m. ((n, m), Succ n, m) : measure (size o fst)
That is, you have to show that the given relation is well-founded (trivial for measures, which are always well-founded, since they map arguments to natural numbers and use less-than on the naturals as the relation) and that the arguments of every recursive call are actually in the given relation. Both goals are easily dispatched by simp.
apply simp
apply simp
done
I am learning Idris by following along with this book: https://idris-hackers.github.io/software-foundations/pdf/sf-idris-2018.pdf
I kind of hit a snag when getting to the section on proof by simplification (yes, it is right at the start). The small bit of code I am working on is:
namespace Numbers
  data Nat : Type where
    Zero : Numbers.Nat
    Successor : Numbers.Nat -> Numbers.Nat

  plus : Numbers.Nat -> Numbers.Nat -> Numbers.Nat
  plus Zero b = b
  plus (Successor a) b = Successor (plus a b)
These simpler proofs work okay:
One : Numbers.Nat
One = Successor Zero
Two : Numbers.Nat
Two = Successor One
Three : Numbers.Nat
Three = Successor Two
proofOnePlusZero : plus One Zero = One
proofOnePlusZero = Refl
proofOnePlusZero' : plus Zero One = One
proofOnePlusZero' = Refl
However, as I tried to copy in the more complicated proof, I got an error.
-- works
plus_Z_n : (n : Numbers.Nat) -> plus Zero n = n
plus_Z_n n = Refl
-- breaks / errors
plus_Z_n' : (n : Numbers.Nat) -> plus n Zero = n
plus_Z_n' n = Refl
This is the error
When checking right hand side of plus_Z_n' with expected type
plus n One = Successor n
Type mismatch between
Successor n = Successor n (Type of Refl)
and
plus n (Successor Zero) = Successor n (Expected type)
Specifically:
Type mismatch between
Successor n
and
plus n (Successor Zero)
This is the expected behavior; however, the recommendation is that one should be able to understand why. I am at a loss and looking for hints on how to think about this.
Here Idris just follows the definition (“proof by simplification”). So take plus Zero n = n. To simplify this type, the definition of plus helps: one branch defines plus Zero b = b. So we can replace plus Zero n with n to get n = n, voilà. On the other hand, when trying to simplify plus n Zero = n, none of the branches in the definition of plus matches plus n Zero. So no replacement can be done, and Idris is stuck with plus n Zero = n until you help it, e.g. by case-splitting on n.
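For instance, the standard fix is a proof by induction; a hedged, untested sketch using the definitions from the question (plus_n_Z is just a name I picked):
total
plus_n_Z : (n : Numbers.Nat) -> plus n Zero = n
plus_n_Z Zero = Refl
-- goal here: Successor (plus k Zero) = Successor k,
-- so rewriting with the induction hypothesis reduces it to Refl
plus_n_Z (Successor k) = rewrite plus_n_Z k in Refl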
More precisely, when Idris tries to replace plus x Zero, it goes through all of the branches one by one and tries to match them, just as it would when evaluating. If a branch could match, it stops there; but the expression is only replaced if the branch definitely matches plus x Zero. So given:
plus : Numbers.Nat -> Numbers.Nat -> Numbers.Nat
plus Zero b = b
plus a Zero = a
plus (Successor a) b = Successor(plus a b)
plus1 : plus n Zero = n
plus1 = Refl
This won't compile, because plus n Zero could be handled by plus Zero b = b, depending on what n is. But because n is not known, Idris stops right there without replacing anything, so the second branch is never reached.
I found this on Stack Overflow: reversible "binary to number" predicate.
But I don't understand it:
:- use_module(library(clpfd)).
binary_number(Bs0, N) :-
    reverse(Bs0, Bs),
    binary_number(Bs, 0, 0, N).

binary_number([], _, N, N).
binary_number([B|Bs], I0, N0, N) :-
    B in 0..1,
    N1 #= N0 + (2^I0)*B,
    I1 #= I0 + 1,
    binary_number(Bs, I1, N1, N).
Example queries:
?- binary_number([1,0,1], N).
N = 5.
?- binary_number(Bs, 5).
Bs = [1, 0, 1] .
Could somebody explain the code to me?
Especially this: binary_number([], _, N, N). (the _)
Also, what does library(clpfd) do?
And why reverse(Bs0, Bs)? I took it away and it still works fine...
Thanks in advance.
In the original, binary_number([], _, N, N)., the _ means you don't care what the value of the variable is. If you used binary_number([], X, N, N). (not caring what X is), Prolog would issue a singleton-variable warning. Also, what this predicate clause says is that when the first argument is [] (the empty list), the 3rd and 4th arguments are unified.
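For illustration, a made-up query against just that clause (the second argument foo is simply ignored, and the last two arguments are unified):
?- binary_number([], foo, 7, X).
X = 7.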
As explained in the comments, use_module(library(clpfd)) causes Prolog to use the library for Constraint Logic Programming over Finite Domains. You can also find lots of good info on it via Google search of "prolog clpfd".
Normally, in Prolog, arithmetic expressions of comparison require that the expressions be fully instantiated:
X + Y =:= Z + 2. % Requires X, Y, and Z to be instantiated
Prolog would evaluate and do the comparison and yield true or false. It would throw an error if any of these variables were not instantiated. Likewise, for assignment, the is/2 predicate requires that the right hand side expression be fully evaluable with specific variables all instantiated:
Z is X + Y. % Requires X and Y to be instantiated
Using CLPFD you can have Prolog "explore" solutions for you. And you can further specify what domain you'd like to restrict the variables to. So, you can say X + Y #= Z + 2 and Prolog can enumerate possible solutions in X, Y, and Z.
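For example, an illustrative SWI-Prolog session (ins/2 and label/1 are also from library(clpfd); the exact answer formatting may differ):
?- X + Y #= Z + 2, Z = 3, [X,Y] ins 0..5, label([X,Y]).
X = 0, Y = 5, Z = 3 ;
X = 1, Y = 4, Z = 3 ;
...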
As an aside, the original implementation could be refactored a little to avoid the exponentiation each time and to eliminate the reverse:
:- use_module(library(clpfd)).
binary_number(Bin, N) :-
    binary_number(Bin, 0, N).

binary_number([], N, N).
binary_number([Bit|Bits], Acc, N) :-
    Bit in 0..1,
    Acc1 #= Acc*2 + Bit,
    binary_number(Bits, Acc1, N).
This works well for queries such as:
| ?- binary_number([1,0,1,0], N).
N = 10 ? ;
no
| ?- binary_number(B, 10).
B = [1,0,1,0] ? ;
B = [0,1,0,1,0] ? ;
B = [0,0,1,0,1,0] ? ;
...
But it has termination issues, as pointed out in the comments, for cases such as Bs = [1|_], N #=< 5, binary_number(Bs, N). A solution was presented by @false that slightly modifies the above to solve those termination issues. I'll reiterate that solution here for convenience:
:- use_module(library(clpfd)).
binary_number(Bits, N) :-
    binary_number_min(Bits, 0,N, N).

binary_number_min([], N,N, _M).
binary_number_min([Bit|Bits], N0,N, M) :-
    Bit in 0..1,
    N1 #= N0*2 + Bit,
    M #>= N1,
    binary_number_min(Bits, N1,N, M).
So, I'm trying to implement a polyvariadic ZipWithN as described here. Unfortunately, Paczesiowa's code seems to have been compiled with outdated versions of both ghc and HList, so in the process of trying to understand how it works, I've also been porting it up to the most recent versions of both of those (ghc-7.8.3 and HList-0.3.4.1 at this time). It's been fun.
Anyway, I've run into a bug that Google isn't helping me fix for once, in the definition of an intermediary function curryN'. In concept, curryN' is simple: it takes a type-level natural number N (or, strictly speaking, a value of that type) and a function f whose first argument is an HList of length N, and returns an N-ary function that makes an HList out of its first N arguments and returns f applied to that HList. It's curry, but polyvariadic.
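For intuition, the fixed-arity analogue for N = 3 looks like this (an illustrative sketch with a plain tuple standing in for the HList; curry3 is a name I made up):
-- curryN' generalizes this pattern: the tuple becomes an HList and the
-- arity 3 becomes a type-level natural number N.
curry3 :: ((a, b, c) -> r) -> a -> b -> c -> r
curry3 f x y z = f (x, y, z)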
It uses three helper functions/classes:
The first is ResultType/resultType, as I've defined here. resultType takes a single function as an argument, and returns the type of that function after applying it to as many arguments as it will take. (Strictly speaking, again, it returns an undefined value of that type).
For example:
ghci> :t resultType (++)
resultType (++) :: [a]
ghci> :t resultType negate
resultType negate :: (ResultType a result, Num a) => result
(The latter case arises because if a happens to be a function of type x -> y, resultType would have to return y. So it's not perfect when applied to polymorphic functions.)
The second two are Eat/eat and MComp/mcomp, defined together (along with the broken curryN') in a single file, like this.
eat takes as its first argument a value whose type is a natural number N, and returns a function that takes N arguments and returns them combined into an HList:
ghci> :t eat (hSucc (hSucc hZero))
eat (hSucc (hSucc hZero)) :: x -> x1 -> HList '[x, x1]
ghci> eat (hSucc (hSucc hZero)) 5 "2"
H[5, "2"]
As far as I can tell it works perfectly. mcomp is a polyvariadic compose function. It takes two functions, f and g, where f takes some number of arguments N. It returns a function that takes N arguments, applies f to all of them, and then applies g to the result. (The argument order parallels (>>>) more than (.).)
ghci> :t (,,) `mcomp` show
(,,) `mcomp` show :: (Show c, Show b, Show a) => a -> b -> c -> [Char]
ghci> ((,,) `mcomp` show) 4 "str" 'c'
"(4,\"str\",'c')"
Like resultType, it "breaks" on functions whose return types are type variables, but since I only plan on using it on eat (whose ultimate return type is just an HList), it should work (Paczesiowa seems to think so, at least). And it does, if the first argument to eat is fixed:
\f -> eat (hSucc (hSucc hZero)) `mcomp` f
works fine.
curryN' however, is defined like this:
curryN' n f = eat n `mcomp` f
Trying to load this into ghci, however, gets this error:
Part3.hs:51:1:
Could not deduce (Eat n '[] f0)
arising from the ambiguity check for ‘curryN'’
from the context (Eat n '[] f,
MComp f cp d result,
ResultType f cp)
bound by the inferred type for ‘curryN'’:
(Eat n '[] f, MComp f cp d result, ResultType f cp) =>
Proxy n -> (cp -> d) -> result
at Part3.hs:51:1-29
The type variable ‘f0’ is ambiguous
When checking that ‘curryN'’
has the inferred type ‘forall f cp d result (n :: HNat).
(Eat n '[] f, MComp f cp d result, ResultType f cp) =>
Proxy n -> (cp -> d) -> result’
Probable cause: the inferred type is ambiguous
Failed, modules loaded: Part1.
So clearly eat and mcomp don't play as nicely together as I would hope. Incidentally, this is significantly different from the kind of error that mcomp (+) (+1) gives, which complains about overlapping instances for MComp.
Anyway, trying to find information on this error didn't lead me to anything useful; the biggest obstacle for my own debugging is that I have no idea what the type variable f0 even is, as it doesn't appear in any of the type signatures or contexts ghci infers.
My best guess is that mcomp is having trouble recursing through eat's polymorphic return type (even though what that is is fixed by a type-level natural number). But if that is the case, I don't know how I'd go about fixing it.
Additionally (and bizarrely to me), if I try to combine Part1.hs and Part2.hs into a single file, I still get an error... but a different one:
Part3alt.hs:59:12:
Overlapping instances for ResultType f0 cp
arising from the ambiguity check for ‘curryN'’
Matching givens (or their superclasses):
(ResultType f cp)
bound by the type signature for
curryN' :: (MComp f cp d result, Eat n '[] f, ResultType f cp) =>
Proxy n -> (cp -> d) -> result
at Part3alt.hs:(59,12)-(60,41)
Matching instances:
instance result ~ x => ResultType x result
-- Defined at Part3alt.hs:19:10
instance ResultType y result => ResultType (x -> y) result
-- Defined at Part3alt.hs:22:10
(The choice depends on the instantiation of ‘cp, f0’)
In the ambiguity check for:
forall (n :: HNat) cp d result f.
(MComp f cp d result, Eat n '[] f, ResultType f cp) =>
Proxy n -> (cp -> d) -> result
To defer the ambiguity check to use sites, enable AllowAmbiguousTypes
In the type signature for ‘curryN'’:
curryN' :: (MComp f cp d result, Eat n [] f, ResultType f cp) =>
Proxy n -> (cp -> d) -> result
Failed, modules loaded: none.
Again with the mysterious f0 type variable. I'll admit that I'm a little bit over my head here with all this type hackery, so if anyone could help me figure out what exactly the problem is and, more importantly, how I can fix it (if that is, hopefully, possible), I'd be incredibly grateful.
Final note: the reason the two files here are called Part1 and Part3 is that Part2 contains some auxiliary functions used in zipWithN, but not in curryN'. For the most part they work fine, but there are a couple of oddities that I might ask about later.
OK, I am aware that enough questions have already been asked about Monad. I am not trying to bother anyone by asking what a monad is yet again.
Actually, I read What is a monad?, and it is very helpful. I feel I am very close to really understanding it.
I am creating this question just to describe some of my thoughts on Monad and Function, in the hope that someone can correct me or confirm them.
Some answers in that post make me feel that a monad is a little bit like a function.
A monad takes a type and returns a wrapper type (return); it can also take a type, do some operations on it, and return a wrapper type (bind).
From my point of view, it is a little bit like a function: a function takes something, does some operations, and returns something.
Then why do we even need monads? I think one of the key reasons is that a monad provides a better way, or pattern, for sequencing operations on some initial data/type.
For example, suppose we have an initial integer i. In our code, we need to apply 10 functions f1, f2, f3, ..., f10 step by step, i.e., we apply f1 to i first and get a result, then apply f2 to that result, get a new result, then apply f3...
We can do this with raw functions, like f1 i |> f2 |> f3 .... However, the intermediate results along the way are not consistent; also, if we have to handle possible failure somewhere in the middle, things get ugly, and an Option has to be constructed anyway if we don't want the whole process to fail on exceptions. So, naturally, monads come in.
A monad unifies and forces the return types of all the steps. This greatly simplifies the logic and readability of the code (which is also the purpose of design patterns, isn't it?), and it is more bulletproof against errors and bugs. For example, the Option monad forces every intermediate result to be an Option, and it is very easy to implement the fail-fast paradigm.
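To make the fail-fast point concrete, here is a minimal Haskell sketch (halve and pipeline are made-up names; readMaybe is from Text.Read):
import Text.Read (readMaybe)

halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

-- Every step returns Maybe Int, and (>>=) short-circuits on the first Nothing.
pipeline :: String -> Maybe Int
pipeline s = readMaybe s >>= halve >>= halve

-- pipeline "20"  == Just 5
-- pipeline "10"  == Nothing   (5 is odd, so the second halve fails)
-- pipeline "abc" == Nothing   (the parse fails; later steps never run)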
As many posts about monads describe, a monad is a design pattern and a better way to combine functions/steps to build up a process.
Am I understanding it correctly?
It sounds to me like you're discovering the limits of learning by analogy. Monad is precisely defined, both as a type class in Haskell and as an algebraic structure in category theory; any comparison using "... like ..." is going to be imprecise and therefore wrong.
So no: Haskell's monads aren't like functions, since they are 1) implemented as type classes and 2) intended to be used differently from functions.
This answer is probably unsatisfying; are you looking for intuition? If so, I'd suggest doing lots of examples, and especially reading through LYAH. It's very difficult to get an intuitive understanding of abstract things like monads without having a solid base of examples and experience to fall back on.
Why do we even need monads? This is a good question, and maybe there's more than one question here:
Why do we even need the Monad type class? For the same reason that we need any type class.
Why do we even need the monad concept? Because it's useful. Also, it's not a function, so it can't be replaced by a function. (Your example seems like it does not require a Monad; rather, it needs only an Applicative.)
For example, you can implement context-free parser combinators using the Applicative type class. But try implementing a parser for the language consisting of the same string of symbols twice (separated by a space) without Monad, i.e. (a sketch follows the examples below):
a a -> yes
a b -> no
ab ab -> yes
ab ba -> no
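Here is one way to write that language with ReadP from base, as a sketch (sameTwice and parses are my names); the key line is string w, where the parser to run next depends on the result w of an earlier parser, which is exactly what Applicative alone cannot express:
import Text.ParserCombinators.ReadP

sameTwice :: ReadP String
sameTwice = do
  w <- munch1 (/= ' ')  -- parse the first symbol string
  _ <- char ' '
  _ <- string w         -- demand the *same* string again: needs Monad
  eof
  return w

parses :: String -> Bool
parses s = not (null (readP_to_S sameTwice s))

-- parses "ab ab" == True
-- parses "ab ba" == False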
So that's one thing a monad provides: the ability to use previous results to "decide" what to do. Here's another example:
f :: Monad m => m Int -> m [Char]
f m =
  m >>= \x ->
  if x > 2
    then return (replicate x 'a')
    else return []
f (Just 1) -->> Just ""
f (Just 3) -->> Just "aaa"
f [1,2,3,4] -->> ["", "", "aaa", "aaaa"]
Monads (and Functors, and Applicative Functors) can be seen as being about "generalized function application": they all create functions of type f a ⟶ f b where not only the "values inside a context", like types a and b, are involved, but also the "context" -- the same type of context -- represented by f.
So "normal" function application involves functions of type (a ⟶ b), "generalized" function application is with functions of type (f a ⟶ f b). Such functions can too be composed under normal function composition, because of the more uniform types structure: f a ⟶ f b ; f b ⟶ f c ==> f a ⟶ f c.
Each of the three creates them in a different way though, starting from different things (a concrete Maybe session follows the list):
Functors: fmap :: Functor f => (a ⟶ b) ⟶ (f a ⟶ f b)
Applicative Functors: (<*>) :: Applicative f => f (a ⟶ b) ⟶ (f a ⟶ f b)
Monadic Functors: (=<<) :: Monad f => (a ⟶ f b) ⟶ (f a ⟶ f b)
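Concretely, with f = Maybe (an illustrative ghci session):
ghci> fmap (+1) (Just 2)                -- Functor
Just 3
ghci> Just (+1) <*> Just 2              -- Applicative
Just 3
ghci> (\x -> Just (x + 1)) =<< Just 2   -- Monad
Just 3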
In practice, the difference is in how we get to use the resulting value-in-context type, seen as denoting some type of computations.
Writing in generalized do notation,
Functors:               do { x <- a ;            return (g x)   }        g <$> a                      -- fmap
Applicative Functors:   do { x <- a ; y <- b   ; return (g x y) }        g <$> a <*> b
                                                                         (\ x -> g x <$> b  ) =<< a
Monadic Functors:       do { x <- a ; y <- k x ; return (g x y) }        (\ x -> g x <$> k x) =<< a
And their types,
"liftF1" :: (Functor f) => (a ⟶ b) ⟶ f a ⟶ f b -- fmap
liftA2 :: (Applicative f) => (a ⟶ b ⟶ c) ⟶ f a ⟶ f b ⟶ f c
"liftBind" :: (Monad f) => (a ⟶ b ⟶ c) ⟶ f a ⟶ (a ⟶ f b) ⟶ f c
I'm still trying to wrap my head around Agda, so I wrote a little tic-tac-toe game type:
data Game : Player -> Vec Square 9 -> Set where
  start : Game x ( - ∷ - ∷ - ∷
                   - ∷ - ∷ - ∷
                   - ∷ - ∷ - ∷ [] )
  xturn : {gs : Vec Square 9} -> (n : ℕ) -> Game x gs -> n < (#ofMoves gs) -> Game o (makeMove gs x n)
  oturn : {gs : Vec Square 9} -> (n : ℕ) -> Game o gs -> n < (#ofMoves gs) -> Game x (makeMove gs o n)
This will hold a valid game path.
Here #ofMoves gs would return the number of empty Squares,
n < (#ofMoves gs) would prove that the nth move is valid,
and makeMove gs x n replaces the nth empty Square in the game state vector.
After a few stimulating games against myself, I decided to shoot for something more awesome. The goal was to create a function that would take an x player and an o player and pit them against each other in an epic battle to the death.
--two programs enter, one program leaves
gameMaster : {p : Player} -> {gs : Vec Square 9}  -- FOR ALL
    -> ({gs : Vec Square 9} -> Game x gs -> (0 < (#ofMoves gs)) -> Game o (makeMove gs x _))  -- take an x player
    -> ({gs : Vec Square 9} -> Game o gs -> (0 < (#ofMoves gs)) -> Game x (makeMove gs o _))  -- take an o player
    -> Game p gs        -- take an initial configuration
    -> GameCondition    -- return a winner
gameMaster {_} {gs} _ _ game with (gameCondition gs)
... | xWin = xWin
... | oWin = oWin
... | draw = draw
... | ongoing with #ofMoves gs
... | 0 = draw  -- TODO: really just prove this state doesn't exist; it is always covered by gameCondition gs = draw
gameMaster {x} {gs} fx fo game | ongoing | suc nn = gameMaster fx fo (fx game (s≤s z≤n))  -- x move
gameMaster {o} {gs} fx fo game | ongoing | suc nn = gameMaster fx fo (fo game (s≤s z≤n))  -- o move
Here (0 < (#ofMoves gs)) is "shorthand" for a proof that the game is ongoing, and gameCondition gs returns the game state as you would expect (one of xWin, oWin, draw, or ongoing).
I want to prove that there are valid moves (the s≤s z≤n part). This should be possible since suc nn <= #ofMoves gs, but I have no idea how to make this work in Agda.
I'll try to answer some of your questions, but I don't think you're approaching this from the right angle. While you certainly can work with bounded numbers using explicit proofs, you'll most likely be more successful with a data type instead.
For your makeMove (I've renamed it to move in the rest of the answer), you need a number bounded by the available free squares. That is, when you have 4 free squares, you want to be able to call move only with 0, 1, 2 and 3. There's one very nice way to achieve that.
Looking at Data.Fin, we find this interesting data type:
data Fin : ℕ → Set where
  zero : {n : ℕ} → Fin (suc n)
  suc  : {n : ℕ} (i : Fin n) → Fin (suc n)
Fin 0 is empty (both zero and suc construct Fin n for n greater than or equal to 1). Fin 1 has only zero, Fin 2 has zero and suc zero, and so on. This represents exactly what we need: a number bounded by n. You might have seen this used in the implementation of safe vector lookup:
lookup : ∀ {a n} {A : Set a} → Fin n → Vec A n → A
lookup zero (x ∷ xs) = x
lookup (suc i) (x ∷ xs) = lookup i xs
The lookup _ [] case is impossible, because Fin 0 has no elements!
How to apply this nicely to your problem? Firstly, we'll have to track how many empty squares we have. This allows us to prove that gameMaster is indeed a terminating function (the number of empty squares is always decreasing). Let's write a variant of Vec which tracks not only length, but also the empty squares:
data Player : Set where
  x o : Player

data SquareVec : (len : ℕ) (empty : ℕ) → Set where
  []  : SquareVec 0 0
  -∷_ : ∀ {l e} → SquareVec l e → SquareVec (suc l) (suc e)
  _∷_ : ∀ {l e} (p : Player) → SquareVec l e → SquareVec (suc l) e
Notice that I got rid of the Square data type; instead, the empty square is baked directly into the -∷_ constructor. Instead of - ∷ rest we have -∷ rest.
We can now write the move function. What should its type be? Well, it'll take a SquareVec with at least one empty square, a Fin e (where e is the number of empty squares) and a Player. The Fin e guarantees us that this function can always find the appropriate empty square:
move : ∀ {l e} → Player → SquareVec l (suc e) → Fin (suc e) → SquareVec l e
move p ( -∷ sqs) zero = p ∷ sqs
move {e = zero} p ( -∷ sqs) (suc ())
move {e = suc e} p ( -∷ sqs) (suc fe) = -∷ move p sqs fe
move p (p′ ∷ sqs) fe = p′ ∷ move p sqs fe
Notice that this function gives us a SquareVec with exactly one empty square filled in. The function could not have filled in more than one empty square, or none at all!
We walk down the vector looking for an empty square; once we find it, the Fin argument tells us whether it's the square we want to fill in. If it's zero, we fill in the player; if it isn't, we continue searching the rest of the vector but with a smaller number.
Now, the game representation. Thanks to the extra work we did earlier, we can simplify the Game data type. The move-p constructor just tells us where the move happened and that's it! I got rid of the Player index for simplicity; but it would work just fine with it.
data Game : ∀ {e} → SquareVec 9 e → Set where
  start  : Game empty
  move-p : ∀ {e} {gs} p (fe : Fin (suc e)) → Game gs → Game (move p gs fe)
Oh, what's empty? It's a shortcut for your - ∷ - ∷ ...:
empty : ∀ {n} → SquareVec n n
empty {zero} = []
empty {suc _} = -∷ empty
Now, the states. I separated the states into a state of a possibly running game and a state of an ended game. Again, you can use your original GameCondition:
data State : Set where
  win   : Player → State
  draw  : State
  going : State

data FinalState : Set where
  win  : Player → FinalState
  draw : FinalState
For the following code, we'll need these imports:
open import Data.Empty
open import Data.Product
open import Relation.Binary.PropositionalEquality
And a function to determine the game state. Fill it in with your actual implementation; this one just lets players play until the board is completely full.
-- Dummy implementation.
state : ∀ {e} {gs : SquareVec 9 e} → Game gs → State
state {zero} gs = draw
state {suc _} gs = going
Next, we need to prove that the State cannot be going when there are no empty squares:
zero-no-going : ∀ {gs : SquareVec 9 0} (g : Game gs) → state g ≢ going
zero-no-going g ()
Again, this is the proof for the dummy state, the proof for your actual implementation will be very different.
Now, we have all the tools we need to implement gameMaster. Firstly, we'll have to decide what its type is. Much like your version, we'll take two functions that represent the AI, one for o and other for x. Then we'll take the game state and produce FinalState. In this version, I'm actually returning the final board so we can see how the game progressed.
Now, the AI functions will return just the turn they want to make instead of returning whole new game state. This is easier to work with.
Brace yourself, here's the type signature I conjured up:
AI : Set
AI = ∀ {e} {sqv : SquareVec 9 (suc e)} → Game sqv → Fin (suc e)
gameMaster : ∀ {e} {sqv : SquareVec 9 e} (sp : Player)
             (x-move o-move : AI) → Game sqv →
             FinalState × (Σ[ e′ ∈ ℕ ] Σ[ sqv′ ∈ SquareVec 9 e′ ] Game sqv′)
Notice that the AI functions take a game state with at least one empty square and return a move. Now for the implementation.
gameMaster sp xm om g with state g
... | win p = win p , _ , _ , g
... | draw = draw , _ , _ , g
... | going = ?
So, if the current state is win or draw, we'll return the corresponding FinalState and the current board. Now, we have to deal with the going case. We'll pattern match on e (the number of empty squares) to figure out whether the game is at the end or not:
gameMaster {zero} sp xm om g | going = ?
gameMaster {suc e} x xm om g | going = ?
gameMaster {suc e} o xm om g | going = ?
The zero case cannot happen: we proved earlier that state cannot return going when there are no empty squares. But how do we apply that proof here?
We have pattern matched on state g, and we now know that state g ≡ going; but sadly, Agda has already forgotten this information. This is what Dominique Devriese was hinting at: the inspect machinery allows us to retain the proof!
Instead of pattern matching on just state g, we'll also pattern match on inspect state g:
gameMaster sp xm om g with state g | inspect state g
... | win p | _ = win p , _ , _ , g
... | draw | _ = draw , _ , _ , g
gameMaster {zero} sp xm om g | going | [ pf ] = ?
gameMaster {suc e} x xm om g | going | _ = ?
gameMaster {suc e} o xm om g | going | _ = ?
pf is now the proof that state g ≡ going, which we can feed to zero-no-going:
gameMaster {zero} sp xm om g | going | [ pf ]
  = ⊥-elim (zero-no-going g pf)
The other two cases are easy: we just apply the AI function and recursively apply gameMaster to the result:
gameMaster {suc e} x xm om g | going | _
  = gameMaster o xm om (move-p x (xm g) g)
gameMaster {suc e} o xm om g | going | _
  = gameMaster x xm om (move-p o (om g) g)
I wrote some dumb AIs: the first one fills the first available empty square, the second one fills the last one.
player-lowest : AI
player-lowest _ = zero
max : ∀ {e} → Fin (suc e)
max {zero} = zero
max {suc e} = suc max
player-highest : AI
player-highest _ = max
Now, let's match player-lowest against player-lowest! In Emacs, type C-c C-n gameMaster x player-lowest player-lowest start <RET>:
draw ,
0 ,
x ∷ (o ∷ (x ∷ (o ∷ (x ∷ (o ∷ (x ∷ (o ∷ (x ∷ [])))))))) ,
move-p x zero
(move-p o zero
(move-p x zero
(move-p o zero
(move-p x zero
(move-p o zero
(move-p x zero
(move-p o zero
(move-p x zero start))))))))
We can see that all squares are filled and they alternate between x and o. Matching player-lowest with player-highest gives us:
draw ,
0 ,
x ∷ (x ∷ (x ∷ (x ∷ (x ∷ (o ∷ (o ∷ (o ∷ (o ∷ [])))))))) ,
move-p x zero
(move-p o (suc zero)
(move-p x zero
(move-p o (suc (suc (suc zero)))
(move-p x zero
(move-p o (suc (suc (suc (suc (suc zero)))))
(move-p x zero
(move-p o (suc (suc (suc (suc (suc (suc (suc zero)))))))
(move-p x zero start))))))))
If you really want to work with the proofs, then I suggest the following representation of Fin:
Fin₂ : ℕ → Set
Fin₂ n = ∃ λ m → m < n
fin⟶fin₂ : ∀ {n} → Fin n → Fin₂ n
fin⟶fin₂ zero = zero , s≤s z≤n
fin⟶fin₂ (suc n) = map suc s≤s (fin⟶fin₂ n)
fin₂⟶fin : ∀ {n} → Fin₂ n → Fin n
fin₂⟶fin {zero} (_ , ())
fin₂⟶fin {suc _} (zero , _) = zero
fin₂⟶fin {suc _} (suc _ , s≤s p) = suc (fin₂⟶fin (_ , p))
Not strictly related to the question, but inspect uses a rather interesting trick which might be worth mentioning. To understand this trick, we'll have to take a look at how with works.
When you use with on an expression expr, Agda goes through the types of all arguments and replaces any occurrence of expr with a fresh variable, let's call it w. For example:
test : (n : ℕ) → Vec ℕ (n + 0) → ℕ
test n v = ?
Here, the type of v is Vec ℕ (n + 0), as you would expect.
test : (n : ℕ) → Vec ℕ (n + 0) → ℕ
test n v with n + 0
... | w = ?
However, once we abstract over n + 0, the type of v suddenly changes to Vec ℕ w. If you later want to use something which contains n + 0 in its type, the substitution won't take place again; it's a one-time deal.
In the gameMaster function, we applied with to state g and pattern matched to find out it's going. By the time we use zero-no-going, state g and going are two separate things which have no relation as far as Agda is concerned.
How do we preserve this information? We somehow need to get state g ≡ state g and have the with replace state g on only one side; this would give us the needed state g ≡ going.
This is what inspect does: it hides the function application state g. We have to write a function hide in such a way that Agda cannot see that hide state g and state g are in fact the same thing.
One possible way to hide something is to use the fact that for any type A, A and ⊤ → A are isomorphic - that is, we can freely go from one representation to the other without losing any information.
However, we cannot use the ⊤ as defined in the standard library. In a moment I'll show why, but for now, we'll define a new type:
data Unit : Set where
  unit : Unit
And what it means for a value to be hidden:
Hidden : Set → Set
Hidden A = Unit → A
We can easily reveal the hidden value by applying unit to it:
reveal : {A : Set} → Hidden A → A
reveal f = f unit
The last step we need to take is the hide function:
hide : {A : Set} {B : A → Set} →
       ((x : A) → B x) → ((x : A) → Hidden (B x))
hide f x unit = f x
Why wouldn't this work with ⊤? If you declare ⊤ as a record, Agda can figure out on its own that tt is its only value. So, when faced with hide f x, Agda won't stop at the third argument (because it already knows what it must look like) and will automatically reduce it to λ _ → f x. Data types defined with the data keyword don't have these special rules, so hide f x stays that way until someone reveals it, and the type checker cannot see that there's an f x subexpression inside hide f x.
The rest is just arranging stuff so we can get the proof later:
data Reveal_is_ {A : Set} (x : Hidden A) (y : A) : Set where
  [_] : (eq : reveal x ≡ y) → Reveal x is y

inspect : {A : Set} {B : A → Set}
          (f : (x : A) → B x) (x : A) → Reveal (hide f x) is (f x)
inspect f x = [ refl ]
So, there you have it:
inspect state g : Reveal (hide state g) is (state g)
-- pattern match on (state g)
inspect state g : Reveal (hide state g) is going
When you then reveal hide state g, you'll get state g and finally the proof that state g ≡ going.
I think you are looking for an Agda technique known by the name "inspect" or "inspect on steroids". It allows you to obtain an equality proof for the knowledge learned from a with pattern match. I recommend you read the code in the following mail and try to understand how it works. Focus on how the function foo at the bottom needs to remember that "f x = z" and does so by with-matching on "inspect (hide f x)" together with "f x":
https://lists.chalmers.se/pipermail/agda/2011/003286.html
To use this in actual code, I recommend you import Relation.Binary.PropositionalEquality from the Agda standard library and use its version of inspect (which is superficially different from the code above). It has the following example code:
f x y with g x | inspect g x
f x y | c z | [ eq ] = ...
Note: "Inspect on steroids" is an updated version of an older approach at the inspect idiom.
I hope this helps...