In Lean, we can define a function like this:
def f (n : ℕ) : ℕ := n + 1
However, inside a proof this is no longer possible. The following code is invalid:
theorem exmpl (x : ℕ) : false :=
begin
def f (n : ℕ) : ℕ := n + 1,
end
I would assume that it is possible with have instead, but attempts like
theorem exmpl (x : ℕ) : false :=
begin
have f (n : ℕ) : n := n + 1,
have f : ℕ → ℕ := --some definition,
end
did not work for me. Is it possible to define a function inside of a proof in lean and how would you achive that?
(In the example above, it would be possible to define it before the proof, but you could also imagine a function like f (n : ℕ) : ℕ := n + x, which can only be defined after x is introduced)
Inside a tactic proof, you have the have and let tactics for new definitions. The have tactic immediately forgets everything but the type of the new definition, and it is generally used just for propositions. The let tactic, in contrast, remembers the value of the definition.
These tactics don't have syntax for including arguments to the left of the colon, but you can make do with lambda expressions:
theorem exmpl (x : ℕ) : false :=
begin
let f : ℕ → ℕ := λ n, n + 1,
end
(Try changing that let to have to see how the context changes.)
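For reference, the resulting local contexts look roughly like this (the exact display varies by Lean version):

```lean
-- with `let f : ℕ → ℕ := λ n, n + 1`, the context keeps the body:
--   x : ℕ,
--   f : ℕ → ℕ := λ (n : ℕ), n + 1
--   ⊢ false
-- with `have f : ℕ → ℕ := λ n, n + 1`, only the type remains:
--   x : ℕ,
--   f : ℕ → ℕ
--   ⊢ false
```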
Another way to do it is to use let expressions outside the tactic proof. These expressions do have syntax for arguments before the colon. For example,
theorem exmpl (x : ℕ) : false :=
let f (n : ℕ) : ℕ := n + x
in begin
end
When using the cases tactic on an inductive data type, Lean produces multiple cases but does not create a hypothesis stating the assumption of the current case. For example:
inductive color | blue | red
theorem exmpl (c : color) : true :=
begin
cases c,
end
results in the following tactic state
case color.blue
⊢ true
case color.red
⊢ true
but does not create a separate hypothesis (like c = color.red) to work with. How would you get such a hypothesis?
Use cases h : c to get a new hypothesis h for each case. For more detail, see the documentation.
In the example, this would be
theorem exmpl (c : color) : true :=
begin
cases h : c,
end
resulting in
case color.blue
c: color
h: c = color.blue
⊢ true
case color.red
c: color
h: c = color.red
⊢ true
This definition of modular arithmetic doesn't compile in Agda:
data mod (n : nat): n → Set where
zeroM : mod n
S : mod n → mod n
equMod : { x : nat} → (x ≡ n) → (x ≡ zeroM)
Error: nat should be a sort, but isn't
Can someone help me?
When you write n -> Set, you need n to be a type, but it is a natural number. I guess you just want to write data mod (n : nat) : Set, which means that mod : nat -> Set.
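For instance, a version that typechecks could look like this (a sketch; the equMod constructor is dropped because it equates a nat with a mod value, which is ill-typed for a separate reason):

```agda
data mod (n : nat) : Set where
  zeroM : mod n
  S     : mod n → mod n
```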
Following this approach, I'm trying to model functional programs using effect handlers in Coq, based on an implementation in Haskell. There are two approaches presented in the paper:
Effect syntax is represented as a functor and combined with the free monad.
data Prog sig a = Return a | Op (sig (Prog sig a))
Because the positivity checker rejects non-strictly-positive definitions, this data type can't be defined directly in Coq. However, containers can be used to represent strictly positive functors, as described in this paper. This approach works, but since I need to model a scoped effect that requires explicit scoping syntax, mismatched begin/end tags are possible. For reasoning about programs, this is not ideal.
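For comparison, in Haskell the first approach works directly. Here is a minimal runnable sketch with a hypothetical single-operation output effect (the Out functor and the run handler are made up for illustration, not part of the paper):

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Hypothetical effect signature with a single output operation.
data Out k = Out String k deriving Functor

-- The first-order free monad over an effect signature `sig`.
data Prog sig a = Return a | Op (sig (Prog sig a))

instance Functor sig => Functor (Prog sig) where
  fmap f (Return a) = Return (f a)
  fmap f (Op op)    = Op (fmap (fmap f) op)

instance Functor sig => Applicative (Prog sig) where
  pure = Return
  Return f <*> p = fmap f p
  Op op    <*> p = Op (fmap (<*> p) op)

instance Functor sig => Monad (Prog sig) where
  Return a >>= k = k a
  Op op    >>= k = Op (fmap (>>= k) op)

-- Smart constructor for the operation.
out :: String -> Prog Out ()
out s = Op (Out s (Return ()))

-- A handler: run the program, collecting all outputs in order.
run :: Prog Out a -> (a, [String])
run (Return a)     = (a, [])
run (Op (Out s k)) = let (a, ss) = run k in (a, s : ss)
-- run (out "a" >> out "b")  evaluates to  ((), ["a", "b"])
```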
The second approach uses higher-order functors, i.e. the following data type.
data Prog sig a = Return a | Op (sig (Prog sig) a)
Now sig has the type (* -> *) -> * -> *. The data type can't be defined in Coq for the same reasons as before. I'm looking for ways to model this data type, so that I can implement scoped effects without explicit scoping tags.
My attempts of defining a container for higher-order functors have not been fruitful and I can't find anything about this topic. I'm thankful for pointers in the right direction and helpful comments.
Edit: One example of scoped syntax from the paper that I would like to represent is the following data type for exceptions.
data HExc e m a = Throw′ e | forall x. Catch′ (m x) (e -> m x) (x -> m a)
Edit2: I have merged the suggested idea with my approach.
Inductive Ext Shape (Pos : Shape -> Type -> Type -> Type) (F : Type -> Type) A :=
ext : forall s, (forall X, Pos s A X -> F X) -> Ext Shape Pos F A.
Class HContainer (H : (Type -> Type) -> Type -> Type):=
{
Shape : Type;
Pos : Shape -> Type -> Type -> Type;
to : forall M A, Ext Shape Pos M A -> H M A;
from : forall M A, H M A -> Ext Shape Pos M A;
to_from : forall M A (fx : H M A), #to M A (#from M A fx) = fx;
from_to : forall M A (e : Ext Shape Pos M A), #from M A (#to M A e) = e
}.
Section Free.
Variable H : (Type -> Type) -> Type -> Type.
Inductive Free (HC__F : HContainer H) A :=
| pure : A -> Free HC__F A
| impure : Ext Shape Pos (Free HC__F) A -> Free HC__F A.
End Free.
The code can be found here. The Lambda Calculus example works and I can prove that the container representation is isomorphic to the data type.
I have tried to do the same for a simplified version of the exception handler data type, but it does not fit the container representation.
Defining a smart constructor does not work, either. In Haskell, the constructor works by applying Catch' to a program where an exception may occur and a continuation, which is empty in the beginning.
catch :: (HExc <: sig) => Prog sig a -> Prog sig a
catch p = inject (Catch' p return)
The main issue I see in the Coq implementation is that the shape needs to be parameterized over a functor, which leads to all sorts of problems.
This answer gives more intuition about how to derive containers from functors than my previous one. I'm taking quite a different angle, so I'm making a new answer instead of revising the old one.
Simple recursive types
Let's consider a simple recursive type first to understand non-parametric containers, and for comparison with the parameterized generalization. Lambda calculus, without caring about scopes, is given by the following functor:
Inductive LC_F (t : Type) : Type :=
| App : t -> t -> LC_F t
| Lam : t -> LC_F t
.
There are two pieces of information we can learn from this type:
The shape tells us about the constructors (App, Lam), and potentially also auxiliary data not relevant to the recursive nature of the syntax (none here). There are two constructors, so the shape has two values. Shape := App_S | Lam_S (bool also works, but declaring shapes as standalone inductive types is cheap, and named constructors also double as documentation.)
For every shape (i.e., constructor), the position tells us about recursive occurrences of syntax in that constructor. App contains two subterms, hence we can define their two positions as booleans; Lam contains one subterm, hence its position is a unit. One could also make Pos (s : Shape) an indexed inductive type, but that is a pain to program with (just try).
(* Lambda calculus *)
Inductive ShapeLC :=
| App_S (* The shape App _ _ *)
| Lam_S (* The shape Lam _ *)
.
Definition PosLC s :=
match s with
| App_S => bool
| Lam_S => unit
end.
Parameterized recursive types
Now, properly scoped lambda calculus:
Inductive LC_F (f : Type -> Type) (a : Type) : Type :=
| App : f a -> f a -> LC_F f a
| Lam : f (unit + a) -> LC_F f a
.
In this case, we can still reuse the Shape and Pos data from before. But this functor encodes one more piece of information: how each position changes the type parameter a. I call this parameter the context (Ctx).
Definition CtxLC (s : ShapeLC) : PosLC s -> Type -> Type :=
match s with
| App_S => fun _ a => a (* subterms of App reuse the same context *)
| Lam_S => fun _ a => unit + a (* Lam introduces one variable in the context of its subterm *)
end.
This container (ShapeLC, PosLC, CtxLC) is related to the functor LC_F by an isomorphism: between the sigma { s : ShapeLC & forall p : PosLC s, f (CtxLC s p a) } and LC_F f a. In particular, note how the function y : forall p, f (CtxLC s p a) tells you exactly how to fill the shape s = App_S or s = Lam_S to construct a value App (y true) (y false) : LC_F f a or Lam (y tt) : LC_F f a.
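A sketch of one direction of that isomorphism, reusing the names above (modulo implicit-argument handling for the constructors of LC_F):

```coq
Definition toLC {f : Type -> Type} {a : Type} (s : ShapeLC)
    (y : forall p : PosLC s, f (CtxLC s p a)) : LC_F f a :=
  match s return (forall p : PosLC s, f (CtxLC s p a)) -> LC_F f a with
  | App_S => fun y => App (y true) (y false)  (* both positions at context a *)
  | Lam_S => fun y => Lam (y tt)              (* one position at context unit + a *)
  end y.
```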
My previous representation encoded Ctx in some additional type indices of Pos. The representations are equivalent, but this one here looks tidier.
Exception handler calculus
We'll consider just the Catch constructor. It has four fields: a type X, the main computation (which returns an X), an exception handler (which also recovers an X), and a continuation (consuming the X).
Inductive Exc_F (E : Type) (F : Type -> Type) (A : Type) :=
| ccatch : forall X, F X -> (E -> F X) -> (X -> F A) -> Exc_F E F A.
The shape is a single constructor, but you must include X. Essentially, look at all the fields (possibly unfolding nested inductive types) and keep all the data that doesn't mention F; that's your shape.
Inductive ShapeExc :=
| ccatch_S (X : Type) (* The shape ccatch X _ (fun e => _) (fun x => _) *)
.
(* equivalently, Definition ShapeExc := Type. *)
The position type lists all the ways to get an F out of an Exc_F of the corresponding shape. In particular, a position contains the arguments to apply functions with, and possibly any data to resolve branching of any other sort. Note that you need to know the exception type to store exceptions for the handler.
Inductive PosExc (E : Type) (s : ShapeExc) : Type :=
| main_pos (* F X *)
| handle_pos (e : E) (* E -> F X *)
| continue_pos (x : getX s) (* X -> F A *)
.
(* The function getX takes the type X contained in a ShapeExc value, by pattern-matching: getX (ccatch_S X) := X. *)
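For completeness, that getX pattern-match can be written out directly (a sketch matching the description above):

```coq
Definition getX (s : ShapeExc) : Type :=
  match s with
  | ccatch_S X => X
  end.
```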
Finally, for each position, you need to decide how the context changes, i.e., whether you're now computing an X or an A:
Definition Ctx (E : Type) (s : ShapeExc) (p : PosExc E s) : Type -> Type :=
match p with
| main_pos | handle_pos _ => fun _ => getX s
| continue_pos _ => fun A => A
end.
Using the conventions from your code, you can then encode the Catch constructor as follows:
Definition Catch' {E X A}
(m : Free (C__Exc E) X)
(h : E -> Free (C__Exc E) X)
(k : X -> Free (C__Exc E) A) : Free (C__Exc E) A :=
impure (#ext (C__Exc E) (Free (C__Exc E)) A (ccatch_S X) (fun p =>
match p with
| main_pos => m
| handle_pos e => h e
| continue_pos x => k x
end)).
(* I had problems with type inference for some reason, hence #ext is explicitly applied *)
Full gist https://gist.github.com/Lysxia/6e7fb880c14207eda5fc6a5c06ef3522
The main trick in the "first-order" free monad encoding is to encode a functor F : Type -> Type as a container, which is essentially a dependent pair { Shape : Type ; Pos : Shape -> Type }, so that, for all a, the type F a is isomorphic to the sigma type { s : Shape & Pos s -> a }.
Taking this idea further, we can encode a higher-order functor F : (Type -> Type) -> (Type -> Type) as a container { Shape : Type & Pos : Shape -> Type -> (Type -> Type) }, so that, for all f and a, F f a is isomorphic to { s : Shape & forall x : Type, Pos s a x -> f x }.
I don't quite understand what the extra Type parameter in Pos is doing there, but It Works™, at least to the point that you can construct some lambda calculus terms in the resulting type.
For example, the lambda calculus syntax functor:
Inductive LC_F (f : Type -> Type) (a : Type) : Type :=
| App : f a -> f a -> LC_F f a
| Lam : f (unit + a) -> LC_F f a
.
is represented by the container (Shape, Pos) defined as:
(* LC container *)
Shape : Type := bool; (* Two values in bool = two constructors in LC_F *)
Pos (b : bool) : Type -> (Type -> Type) :=
match b with
| true => App_F
| false => Lam_F
end;
where App_F and Lam_F are given by:
Inductive App_F (a : Type) : TyCon :=
| App_ (b : bool) : App_F a a
.
Inductive Lam_F (a : Type) : TyCon :=
| Lam_ : Lam_F a (unit + a)
.
Then the free-like monad Prog (implicitly parameterized by (Shape, Pos)) is given by:
Inductive Prog (a : Type) : Type :=
| Ret : a -> Prog a
| Op (s : Shape) : (forall b, Pos s a b -> Prog b) -> Prog a
.
Having defined some boilerplate, you can write the following example:
(* \f x -> f x x *)
Definition omega {a} : LC a :=
Lam (* f *) (Lam (* x *)
(let f := Ret (inr (inl tt)) in
let x := Ret (inl tt) in
App (App f x) x)).
Full gist: https://gist.github.com/Lysxia/5485709c4594b836113736626070f488
Assume the following definition:
def app {α : Type} : Π{m n : ℕ}, vector α m → vector α n → vector α (n + m)
| 0 _ [] v := by simp [add_zero]; assumption
| (nat.succ _) _ (h :: t) v' := begin apply vector.cons,
exact h,
apply app t v'
end
Do note that (n + m) is flipped in the definition, so as to avoid plugging add_comm into the definition. Also, remember that add / + recurses on its right-hand argument in Lean. vector is a hand-rolled, nil/cons-defined, length-indexed list.
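To see why the flipped order helps, recall roughly how nat.add computes in Lean 3 core:

```lean
-- nat.add recurses on its second argument:
protected def nat.add : ℕ → ℕ → ℕ
| n nat.zero     := n
| n (nat.succ m) := nat.succ (nat.add n m)
-- hence `n + 0 = n` and `n + succ m = succ (n + m)` hold definitionally,
-- while `0 + n = n` (zero_add) needs a proof by induction.
```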
So anyway, first we have a lemma that follows from definition:
theorem nil_app_v {α : Type} : ∀{n : ℕ} (v : vector α n),
v = [] ++ v := assume n v, rfl
Now we have a lemma that doesn't follow from definition, as such I use eq.rec to formulate it.
theorem app_nil_v {α : Type} : ∀{n : ℕ} (v : vector α n),
v = eq.rec (v ++ []) (zero_add n)
Note that eq.rec is just C y → Π {a : X}, y = a → C a.
The idea of the proof is induction on v. The base case follows immediately from the definition, and the recursive case should follow immediately from the inductive hypothesis and the definition, but I can't convince Lean of this.
begin
intros n v,
induction v,
-- base case
refl,
-- inductive case
end
The inductive hypothesis I get from Lean is a_1 = eq.rec (a_1 ++ vector.nil) (zero_add n_1).
How do I use it with conclusion a :: a_1 = eq.rec (a :: a_1 ++ vector.nil) (zero_add (nat.succ n_1))? I can unfold app to reduce the term a :: a_1 ++ vector.nil to a :: (a_1 ++ vector.nil), and now I am stuck.
I'm still trying to wrap my head around Agda, so I wrote a little type for tic-tac-toe games:
data Game : Player -> Vec Square 9 -> Set where
start : Game x ( - ∷ - ∷ - ∷
- ∷ - ∷ - ∷
- ∷ - ∷ - ∷ [] )
xturn : {gs : Vec Square 9} -> (n : ℕ) -> Game x gs -> n < (#ofMoves gs) -> Game o (makeMove gs x n )
oturn : {gs : Vec Square 9} -> (n : ℕ) -> Game o gs -> n < (#ofMoves gs) -> Game x (makeMove gs o n )
Which will hold a valid game path.
Here #ofMoves gs would return the number of empty Squares,
n < (#ofMoves gs) would prove that the nth move is valid,
and makeMove gs x n replaces the nth empty Square in the game state vector.
After a few stimulating games against myself, I decided to shoot for something more awesome. The goal was to create a function that would take an x player and an o player and pit them against each other in an epic battle to the death.
--two programs enter, one program leaves
gameMaster : {p : Player } -> {gs : Vec Square 9} --FOR ALL
-> ({gs : Vec Square 9} -> Game x gs -> (0 < (#ofMoves gs)) -> Game o (makeMove gs x _ )) --take an x player
-> ({gs : Vec Square 9} -> Game o gs -> (0 < (#ofMoves gs)) -> Game x (makeMove gs o _ )) --take an o player
-> ( Game p gs) --Take an initial configuration
-> GameCondition --return a winner
gameMaster {_} {gs} _ _ game with (gameCondition gs)
... | xWin = xWin
... | oWin = oWin
... | draw = draw
... | ongoing with #ofMoves gs
... | 0 = draw --TODO: really just prove this state doesn't exist, it will always be covered by gameCondition gs = draw
gameMaster {x} {gs} fx fo game | ongoing | suc nn = gameMaster (fx) (fo) (fx game (s≤s z≤n)) -- x move
gameMaster {o} {gs} fx fo game | ongoing | suc nn = gameMaster (fx) (fo) (fo game (s≤s z≤n)) -- o move
Here (0 < (#ofMoves gs)) is "shorthand" for a proof that the game is ongoing,
gameCondition gs will return the game state as you would expect (one of xWin, oWin, draw, or ongoing)
I want to prove that there are valid moves (the s≤s z≤n part). This should be possible since suc nn <= #ofMoves gs. I have no idea how to make this work in Agda.
I'll try to answer some of your questions, but I don't think you're approaching this from the right angle. While you certainly can work with bounded numbers using explicit proofs, you'll most likely be more successful with a dedicated data type instead.
For your makeMove (I've renamed it to move in the rest of the answer), you need a number bounded by the available free squares. That is, when you have 4 free squares, you want to be able to call move only with 0, 1, 2 and 3. There's one very nice way to achieve that.
Looking at Data.Fin, we find this interesting data type:
data Fin : ℕ → Set where
zero : {n : ℕ} → Fin (suc n)
suc : {n : ℕ} (i : Fin n) → Fin (suc n)
Fin 0 is empty (both zero and suc construct Fin n for n greater than or equal to 1). Fin 1 only has zero, Fin 2 has zero and suc zero, and so on. This represents exactly what we need: a number bounded by n. You might have seen this used in the implementation of safe vector lookup:
lookup : ∀ {a n} {A : Set a} → Fin n → Vec A n → A
lookup zero (x ∷ xs) = x
lookup (suc i) (x ∷ xs) = lookup i xs
The lookup _ [] case is impossible, because Fin 0 has no elements!
How to apply this nicely to your problem? Firstly, we'll have to track how many empty squares we have. This allows us to prove that gameMaster is indeed a terminating function (the number of empty squares is always decreasing). Let's write a variant of Vec which tracks not only length, but also the empty squares:
data Player : Set where
x o : Player
data SquareVec : (len : ℕ) (empty : ℕ) → Set where
[] : SquareVec 0 0
-∷_ : ∀ {l e} → SquareVec l e → SquareVec (suc l) (suc e)
_∷_ : ∀ {l e} (p : Player) → SquareVec l e → SquareVec (suc l) e
Notice that I got rid of the Square data type; instead, the empty square is baked directly into the -∷_ constructor. Instead of - ∷ rest we have -∷ rest.
We can now write the move function. What should its type be? Well, it'll take a SquareVec with at least one empty square, a Fin e (where e is the number of empty squares) and a Player. The Fin e guarantees us that this function can always find the appropriate empty square:
move : ∀ {l e} → Player → SquareVec l (suc e) → Fin (suc e) → SquareVec l e
move p ( -∷ sqs) zero = p ∷ sqs
move {e = zero} p ( -∷ sqs) (suc ())
move {e = suc e} p ( -∷ sqs) (suc fe) = -∷ move p sqs fe
move p (p′ ∷ sqs) fe = p′ ∷ move p sqs fe
Notice that this function gives us a SquareVec with exactly one empty square filled; the type makes it impossible to fill more than one empty square, or none at all!
We walk down the vector looking for an empty square; once we find it, the Fin argument tells us whether it's the square we want to fill in. If it's zero, we fill in the player; if it isn't, we continue searching the rest of the vector but with a smaller number.
Now, the game representation. Thanks to the extra work we did earlier, we can simplify the Game data type. The move-p constructor just tells us where the move happened and that's it! I got rid of the Player index for simplicity, but it would work just fine with it.
data Game : ∀ {e} → SquareVec 9 e → Set where
start : Game empty
move-p : ∀ {e} {gs} p (fe : Fin (suc e)) → Game gs → Game (move p gs fe)
Oh, what's empty? It's a shortcut for your - ∷ - ∷ ...:
empty : ∀ {n} → SquareVec n n
empty {zero} = []
empty {suc _} = -∷ empty
Now, the states. I separated the states into a state of a possibly running game and a state of an ended game. Again, you can use your original GameCondition:
data State : Set where
win : Player → State
draw : State
going : State
data FinalState : Set where
win : Player → FinalState
draw : FinalState
For the following code, we'll need these imports:
open import Data.Empty
open import Data.Product
open import Relation.Binary.PropositionalEquality
And a function to determine the game state. Fill in your actual implementation; this one just lets players play until the board is completely full.
-- Dummy implementation.
state : ∀ {e} {gs : SquareVec 9 e} → Game gs → State
state {zero} gs = draw
state {suc _} gs = going
Next, we need to prove that the State cannot be going when there are no empty squares:
zero-no-going : ∀ {gs : SquareVec 9 0} (g : Game gs) → state g ≢ going
zero-no-going g ()
Again, this is the proof for the dummy state, the proof for your actual implementation will be very different.
Now, we have all the tools we need to implement gameMaster. Firstly, we'll have to decide what its type is. Much like your version, we'll take two functions that represent the AI, one for x and the other for o. Then we'll take the game state and produce a FinalState. In this version, I'm actually returning the final board as well, so we can see how the game progressed.
Now, the AI functions will return just the move they want to make instead of returning a whole new game state. This is easier to work with.
Brace yourself, here's the type signature I conjured up:
AI : Set
AI = ∀ {e} {sqv : SquareVec 9 (suc e)} → Game sqv → Fin (suc e)
gameMaster : ∀ {e} {sqv : SquareVec 9 e} (sp : Player)
(x-move o-move : AI) → Game sqv →
FinalState × (Σ[ e′ ∈ ℕ ] Σ[ sqv′ ∈ SquareVec 9 e′ ] Game sqv′)
Notice that the AI functions take a game state with at least one empty square and return a move. Now for the implementation.
gameMaster sp xm om g with state g
... | win p = win p , _ , _ , g
... | draw = draw , _ , _ , g
... | going = ?
So, if the current state is win or draw, we'll return the corresponding FinalState and the current board. Now, we have to deal with the going case. We'll pattern match on e (the number of empty squares) to figure out whether the game is at the end or not:
gameMaster {zero} sp xm om g | going = ?
gameMaster {suc e} x xm om g | going = ?
gameMaster {suc e} o xm om g | going = ?
The zero case cannot happen, we proved earlier that state cannot return going when the number of empty squares is zero. How to apply that proof here?
We have pattern matched on state g and we now know that state g ≡ going; but sadly Agda already forgot this information. This is what Dominique Devriese was hinting at: the inspect machinery allows us to retain the proof!
Instead of pattern matching on just state g, we'll also pattern match on inspect state g:
gameMaster sp xm om g with state g | inspect state g
... | win p | _ = win p , _ , _ , g
... | draw | _ = draw , _ , _ , g
gameMaster {zero} sp xm om g | going | [ pf ] = ?
gameMaster {suc e} x xm om g | going | _ = ?
gameMaster {suc e} o xm om g | going | _ = ?
pf is now the proof that state g ≡ going, which we can feed to zero-no-going:
gameMaster {zero} sp xm om g | going | [ pf ]
= ⊥-elim (zero-no-going g pf)
The other two cases are easy: we just apply the AI function and recursively apply gameMaster to the result:
gameMaster {suc e} x xm om g | going | _
= gameMaster o xm om (move-p x (xm g) g)
gameMaster {suc e} o xm om g | going | _
= gameMaster x xm om (move-p o (om g) g)
I wrote some dumb AIs: the first one fills the first available empty square; the second one fills the last one.
player-lowest : AI
player-lowest _ = zero
max : ∀ {e} → Fin (suc e)
max {zero} = zero
max {suc e} = suc max
player-highest : AI
player-highest _ = max
Now, let's match player-lowest against player-lowest! In Emacs, type C-c C-n, enter gameMaster x player-lowest player-lowest start and hit <RET>:
draw ,
0 ,
x ∷ (o ∷ (x ∷ (o ∷ (x ∷ (o ∷ (x ∷ (o ∷ (x ∷ [])))))))) ,
move-p x zero
(move-p o zero
(move-p x zero
(move-p o zero
(move-p x zero
(move-p o zero
(move-p x zero
(move-p o zero
(move-p x zero start))))))))
We can see that all squares are filled and they alternate between x and o. Matching player-lowest with player-highest gives us:
draw ,
0 ,
x ∷ (x ∷ (x ∷ (x ∷ (x ∷ (o ∷ (o ∷ (o ∷ (o ∷ [])))))))) ,
move-p x zero
(move-p o (suc zero)
(move-p x zero
(move-p o (suc (suc (suc zero)))
(move-p x zero
(move-p o (suc (suc (suc (suc (suc zero)))))
(move-p x zero
(move-p o (suc (suc (suc (suc (suc (suc (suc zero)))))))
(move-p x zero start))))))))
If you really want to work with the proofs, then I suggest the following representation of Fin:
Fin₂ : ℕ → Set
Fin₂ n = ∃ λ m → m < n
fin⟶fin₂ : ∀ {n} → Fin n → Fin₂ n
fin⟶fin₂ zero = zero , s≤s z≤n
fin⟶fin₂ (suc n) = map suc s≤s (fin⟶fin₂ n)
fin₂⟶fin : ∀ {n} → Fin₂ n → Fin n
fin₂⟶fin {zero} (_ , ())
fin₂⟶fin {suc _} (zero , _) = zero
fin₂⟶fin {suc _} (suc _ , s≤s p) = suc (fin₂⟶fin (_ , p))
Not strictly related to the question, but inspect uses a rather interesting trick which might be worth mentioning. To understand it, we'll have to take a look at how with works.
When you use with on an expression expr, Agda goes through the types of all arguments and replaces any occurrence of expr with a fresh variable, let's call it w. For example:
test : (n : ℕ) → Vec ℕ (n + 0) → ℕ
test n v = ?
Here, the type of v is Vec ℕ (n + 0), as you would expect.
test : (n : ℕ) → Vec ℕ (n + 0) → ℕ
test n v with n + 0
... | w = ?
However, once we abstract over n + 0, the type of v suddenly changes to Vec ℕ w. If you later want to use something which contains n + 0 in its type, the substitution won't take place again - it's a one-time deal.
In the gameMaster function, we applied with to state g and pattern matched to find out it's going. By the time we use zero-no-going, state g and going are two separate things which have no relation as far as Agda is concerned.
How do we preserve this information? We somehow need to get state g ≡ state g and have the with replace only state g on either side - this would give us the needed state g ≡ going.
What inspect does is hide the function application state g. We have to write a function hide in such a way that Agda cannot see that hide state g and state g are in fact the same thing.
One possible way to hide something is to use the fact that for any type A, A and ⊤ → A are isomorphic - that is, we can freely go from one representation to the other without losing any information.
However, we cannot use the ⊤ as defined in the standard library. In a moment I'll show why, but for now, we'll define a new type:
data Unit : Set where
unit : Unit
And what it means for a value to be hidden:
Hidden : Set → Set
Hidden A = Unit → A
We can easily reveal the hidden value by applying it to unit:
reveal : {A : Set} → Hidden A → A
reveal f = f unit
The last step we need to take is the hide function:
hide : {A : Set} {B : A → Set} →
((x : A) → B x) → ((x : A) → Hidden (B x))
hide f x unit = f x
Why wouldn't this work with ⊤? If you declare ⊤ as a record, Agda can figure out on its own that tt is its only value. So, when faced with hide f x, Agda won't stop at the third argument (because it already knows what it must look like) and will automatically reduce it to λ _ → f x. Data types defined with the data keyword don't have these special rules, so hide f x stays that way until someone reveals it, and the type checker cannot see that there's an f x subexpression inside hide f x.
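For contrast, the standard library's ⊤ is a record, roughly of this shape:

```agda
record ⊤ : Set where
  constructor tt
-- records enjoy η-rules: any value of type ⊤ is definitionally tt,
-- which is exactly what would let Agda reduce `hide f x` to `λ _ → f x`.
```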
The rest is just arranging stuff so we can get the proof later:
data Reveal_is_ {A : Set} (x : Hidden A) (y : A) : Set where
[_] : (eq : reveal x ≡ y) → Reveal x is y
inspect : {A : Set} {B : A → Set}
(f : (x : A) → B x) (x : A) → Reveal (hide f x) is (f x)
inspect f x = [ refl ]
So, there you have it:
inspect state g : Reveal (hide state g) is (state g)
-- pattern match on (state g)
inspect state g : Reveal (hide state g) is going
When you then reveal hide state g, you'll get state g and finally the proof that state g ≡ going.
I think you are looking for an Agda technique known by the name "inspect" or "inspect on steroids". It allows you to obtain an equality proof for the knowledge learned from a with pattern match. I recommend you read the code in the following mail and try to understand how it works. Focus on how the function foo at the bottom needs to remember that "f x = z" and does so by with-matching on "inspect (hide f x)" together with "f x":
https://lists.chalmers.se/pipermail/agda/2011/003286.html
To use this in actual code, I recommend you import Relation.Binary.PropositionalEquality from the Agda standard library and use its version of inspect (which is superficially different from the code above). It has the following example code:
f x y with g x | inspect g x
f x y | c z | [ eq ] = ...
Note: "Inspect on steroids" is an updated version of an older approach to the inspect idiom.
I hope this helps...