Definition of a type doesn’t work in Agda - proof

This definition of modular arithmetic doesn’t compile in Agda:
data mod (n : nat) : n → Set where
  zeroM : mod n
  S : mod n → mod n
  equMod : {x : nat} → (x ≡ n) → (x ≡ zeroM)
Error: nat should be a sort, but isn’t
Can someone help me?

When you write n -> Set, you need n to be a type, but here it is a natural number. I guess you just want to write data mod (n : nat) : Set, which means that mod : nat -> Set.
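A minimal sketch of that fix, keeping the first two constructors from the question (equMod is left out here, since it equates a nat with a value of mod n and would have to be rethought separately):
data mod (n : nat) : Set where
  zeroM : mod n
  S     : mod n → mod n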


The difference between dependent type signatures and proofs

With dependent types, you can either capture function properties in the type signature, like in concatenation with length-indexed lists
(++) : Vect m a -> Vect n a -> Vect (m + n) a
or you can leave dependent types out of the signature, as with concatenation of standard lists
(++) : List a -> List a -> List a
and write proofs about (++)
appendAddsLength : (xs, ys : List a) -> length (xs ++ ys) = length xs + length ys
lengthNil : length [] = 0
lengthCons : (x : a) -> (xs : List a) -> length (x :: xs) = length xs + 1
Is there any difference between these approaches beyond ergonomics?
The most obvious difference is that, with (++) on Vects, the length is statically known: you can operate on it at compile time. Moreover, you don't need to write any additional proofs in order to ensure that (++) has the expected behavior on Vects, whereas you need to do it for Lists.
That is, (++) on Vect is correct by construction. The compiler will always enforce the desired properties, whether you like it or not, and without the user taking any additional action.
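For reference, here is a sketch of how such a correct-by-construction concatenation is usually written (named appendVect here to avoid clashing with the library's (++)):
import Data.Vect

-- The length index of the result is determined clause by clause,
-- so no separate proof about lengths is needed.
appendVect : Vect m a -> Vect n a -> Vect (m + n) a
appendVect []        ys = ys
appendVect (x :: xs) ys = x :: appendVect xs ys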
It's important to note that length xs is not really interchangeable with the statically-known size in general. On these data types, length is a function which actually re-computes the length of a List or Vect by walking through it and incrementing a counter:
libs/prelude/Prelude/Types.idr:L403-L407
namespace List
  ||| Returns the length of the list.
  public export
  length : List a -> Nat
  length [] = Z
  length (x :: xs) = S (length xs)
libs/base/Data/Vect.idr:25-28
public export
length : (xs : Vect len elem) -> Nat
length [] = 0
length (_::xs) = 1 + length xs
Even with Vect, the length is built into the type by construction, but the result of applying the length function to a List or Vect is not fundamental at all. In fact, Data.Vect contains a proof that applying Data.Vect.length to a Vect n t always returns n:
libs/base/Data/Vect.idr:30-34
||| Show that the length function on vectors in fact calculates the length
export
lengthCorrect : (xs : Vect len elem) -> length xs = len
lengthCorrect [] = Refl
lengthCorrect (_ :: xs) = rewrite lengthCorrect xs in Refl
Using the above proof, we can assert statically, without actually executing length, that the result of length is propositionally equal to the statically-known length of the Vect. No such assurance is available for List, and it is much more cumbersome to work with in general, likely requiring with ... proof and rewrite far more often than simply using the correct-by-construction type.
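For comparison, here is roughly what one of the extra proofs from the question looks like for List, assuming the Prelude length shown above (a sketch, not library code):
appendAddsLength : (xs, ys : List a) -> length (xs ++ ys) = length xs + length ys
appendAddsLength []        ys = Refl
appendAddsLength (x :: xs) ys = cong S (appendAddsLength xs ys)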

Define a function inside of a proof in Lean

In Lean, we can define a function like this
def f (n : ℕ) : ℕ := n + 1
However, inside a proof this is no longer possible. The following code is invalid:
theorem exmpl (x : ℕ) : false :=
begin
def f (n : ℕ) : ℕ := n + 1,
end
I would assume that it is possible with have instead, but attempts like
theorem exmpl (x : ℕ) : false :=
begin
have f (n : ℕ) : n := n + 1,
have f : ℕ → ℕ := --some definition,
end
did not work for me. Is it possible to define a function inside of a proof in Lean, and how would you achieve that?
(In the example above, it would be possible to define it before the proof, but you could also imagine a function like f (n : ℕ) : ℕ := n + x, which can only be defined after x is introduced.)
Inside a tactic proof, you have the have and let tactics for new definitions. The have tactic immediately forgets everything but the type of the new definition, and it is generally used just for propositions. The let tactic in contrast remembers the value for the definition.
These tactics don't have the syntax for including arguments to the left of the colon, but you can make do with lambda expressions:
theorem exmpl (x : ℕ) : false :=
begin
let f : ℕ → ℕ := λ n, n + 1,
end
(Try changing that let to have to see how the context changes.)
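Roughly, the difference in the resulting local context looks like this (a sketch of what Lean displays, not exact output):
-- with `let`:  f : ℕ → ℕ := λ (n : ℕ), n + 1   -- the body is remembered
-- with `have`: f : ℕ → ℕ                        -- only the type survives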
Another way to do it is to use let expressions outside the tactic proof. These expressions do have syntax for arguments before the colon. For example,
theorem exmpl (x : ℕ) : false :=
let f (n : ℕ) : ℕ := n + x
in begin
end

Map a list of texts to their letter counts in Unison

I wonder why
use .base
use .List
> foo = ["abc", "abcdef", "a", "zzz"]
> map (x -> size x) foo
The last line errors. ucm says:
I'm not sure what size means at line 5, columns 13-17
5 | > map (x -> size x) foo
Whatever it is, it has a type that conforms to a ->{𝕖} b.
I found some terms in scope that have matching names and types. Maybe you meant one of these:
- .base.Bytes.size : base.Bytes -> base.Nat
- .base.Heap.size : base.Heap k v -> base.Nat
- .base.List.Nonempty.size : base.List.Nonempty a -> base.Nat
- .base.List.size : [a] -> base.Nat
- .base.Map.size : base.Map k v -> base.Nat
- .base.Set.size : base.Set k -> base.Nat
- .base.Text.size : base.Text -> base.Nat
It's the bottom one I'm aiming for in the mapping function.
I've tried:
> map (x -> size "foobar") foo
> size "johndoe"
And both of those give me what I would expect.
Unison cannot resolve which size you mean on its own, so you need to qualify which size function you want:
map (x -> Text.size x) foo
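Since the lambda just passes its argument along, passing the function directly should work as well:
map Text.size foo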

Appending nil to a dependently typed length indexed vector in Lean

Assume the following definition:
def app {α : Type} : Π{m n : ℕ}, vector α m → vector α n → vector α (n + m)
| 0 _ [] v := by simp [add_zero]; assumption
| (nat.succ _) _ (h :: t) v' := begin apply vector.cons,
exact h,
apply app t v'
end
Do note that (n + m) is flipped in the definition, so as to avoid plugging add_symm into it. Also, remember that + on ℕ in Lean recurses on its right-hand argument. vector is a hand-rolled length-indexed list with nil / cons constructors.
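The exact vector definition isn't shown; a plausible hand-rolled shape consistent with the description (constructor names assumed; the [] / :: / ++ notation declarations are omitted) is:
inductive vector (α : Type) : ℕ → Type
| nil  : vector 0
| cons : Π {n : ℕ}, α → vector n → vector (nat.succ n)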
So anyway, first we have a lemma that follows from definition:
theorem nil_app_v {α : Type} : ∀{n : ℕ} (v : vector α n),
v = [] ++ v := assume n v, rfl
Now we have a lemma that doesn't follow from the definition; since the two sides have different types (vector α n versus vector α (0 + n)), I use eq.rec to formulate it.
theorem app_nil_v {α : Type} : ∀{n : ℕ} (v : vector α n),
v = eq.rec (v ++ []) (zero_add n)
Note that eq.rec is just C y → Π {a : X}, y = a → C a.
The idea of the proof is simple: induction on v. The base case follows immediately from the definition, and the recursive case should follow immediately from the inductive hypothesis and the definition, but I can't convince Lean of this.
begin
intros n v,
induction v,
-- base case
refl,
-- inductive case
end
The inductive hypothesis I get from Lean is a_1 = eq.rec (a_1 ++ vector.nil) (zero_add n_1).
How do I use it with conclusion a :: a_1 = eq.rec (a :: a_1 ++ vector.nil) (zero_add (nat.succ n_1))? I can unfold app to reduce the term a :: a_1 ++ vector.nil to a :: (a_1 ++ vector.nil), and now I am stuck.

Using the value of a computed function for a proof in Agda

I'm still trying to wrap my head around Agda, so I wrote a little tic-tac-toe game type:
data Game : Player -> Vec Square 9 -> Set where
  start : Game x ( - ∷ - ∷ - ∷
                   - ∷ - ∷ - ∷
                   - ∷ - ∷ - ∷ [] )
  xturn : {gs : Vec Square 9} -> (n : ℕ) -> Game x gs -> n < (#ofMoves gs) -> Game o (makeMove gs x n)
  oturn : {gs : Vec Square 9} -> (n : ℕ) -> Game o gs -> n < (#ofMoves gs) -> Game x (makeMove gs o n)
Which will hold a valid game path.
Here #ofMoves gs would return the number of empty Squares,
n < (#ofMoves gs) would prove that the nth move is valid,
and makeMove gs x n replaces the nth empty Square in the game state vector.
After a few stimulating games against myself, I decided to shoot for something more awesome. The goal was to create a function that would take an x player and an o player and pit them against each other in an epic battle to the death.
--two programs enter, one program leaves
gameMaster : {p : Player} -> {gs : Vec Square 9}  --FOR ALL
             -> ({gs : Vec Square 9} -> Game x gs -> (0 < (#ofMoves gs)) -> Game o (makeMove gs x _))  --take an x player
             -> ({gs : Vec Square 9} -> Game o gs -> (0 < (#ofMoves gs)) -> Game x (makeMove gs o _))  --take an o player
             -> (Game p gs)      --Take an initial configuration
             -> GameCondition    --return a winner
gameMaster {_} {gs} _ _ game with (gameCondition gs)
... | xWin = xWin
... | oWin = oWin
... | draw = draw
... | ongoing with #ofMoves gs
... | 0 = draw --TODO: really just prove this state doesn't exist, it will always be covered by gameCondition gs = draw
gameMaster {x} {gs} fx fo game | ongoing | suc nn = gameMaster (fx) (fo) (fx game (s≤s z≤n)) -- x move
gameMaster {o} {gs} fx fo game | ongoing | suc nn = gameMaster (fx) (fo) (fo game (s≤s z≤n)) -- o move
Here (0 < (#ofMoves gs)) is "shorthand" for a proof that the game is ongoing,
gameCondition gs will return the game state as you would expect (one of xWin, oWin, draw, or ongoing).
I want to prove that there are valid moves (the s≤s z≤n part). This should be possible since suc nn <= #ofMoves gs. I have no idea how to make this work in Agda.
I'll try to answer some of your questions, but I don't think you're approaching this from the right angle. While you certainly can work with bounded numbers using explicit proofs, you'll most likely be more successful with the right data type instead.
For your makeMove (I've renamed it to move in the rest of the answer), you need a number bounded by the available free squares. That is, when you have 4 free squares, you want to be able to call move only with 0, 1, 2 and 3. There's one very nice way to achieve that.
Looking at Data.Fin, we find this interesting data type:
data Fin : ℕ → Set where
  zero : {n : ℕ} → Fin (suc n)
  suc  : {n : ℕ} (i : Fin n) → Fin (suc n)
Fin 0 is empty (both zero and suc construct Fin n for n greater than or equal to 1). Fin 1 only has zero, Fin 2 has zero and suc zero, and so on. This represents exactly what we need: a number bounded by n. You might have seen this used in the implementation of safe vector lookup:
lookup : ∀ {a n} {A : Set a} → Fin n → Vec A n → A
lookup zero (x ∷ xs) = x
lookup (suc i) (x ∷ xs) = lookup i xs
The lookup _ [] case is impossible, because Fin 0 has no elements!
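A tiny usage sketch, with the argument order shown above and the usual standard-library imports assumed (newer standard-library versions flip the argument order):
lookup-example : lookup (suc zero) (10 ∷ 20 ∷ 30 ∷ []) ≡ 20
lookup-example = refl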
How to apply this nicely to your problem? Firstly, we'll have to track how many empty squares we have. This allows us to show that gameMaster is indeed a terminating function (the number of empty squares is always decreasing). Let's write a variant of Vec which tracks not only the length, but also the number of empty squares:
data Player : Set where
  x o : Player

data SquareVec : (len : ℕ) (empty : ℕ) → Set where
  []  : SquareVec 0 0
  -∷_ : ∀ {l e} → SquareVec l e → SquareVec (suc l) (suc e)
  _∷_ : ∀ {l e} (p : Player) → SquareVec l e → SquareVec (suc l) e
Notice that I got rid of the Square data type; instead, the empty square is baked directly into the -∷_ constructor. Instead of - ∷ rest we have -∷ rest.
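For example, a three-square board whose middle square is taken by o, and which therefore still has two empty squares, looks like this (a made-up example, not from the original answer):
board₃ : SquareVec 3 2
board₃ = -∷ (o ∷ ( -∷ []))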
We can now write the move function. What should its type be? Well, it'll take a SquareVec with at least one empty square, a Fin e (where e is the number of empty squares) and a Player. The Fin e guarantees us that this function can always find the appropriate empty square:
move : ∀ {l e} → Player → SquareVec l (suc e) → Fin (suc e) → SquareVec l e
move p ( -∷ sqs) zero = p ∷ sqs
move {e = zero} p ( -∷ sqs) (suc ())
move {e = suc e} p ( -∷ sqs) (suc fe) = -∷ move p sqs fe
move p (p′ ∷ sqs) fe = p′ ∷ move p sqs fe
Notice that this function gives us a SquareVec with exactly one empty square filled in. The type rules out filling more than one empty square, or none at all!
We walk down the vector looking for an empty square; once we find it, the Fin argument tells us whether it's the square we want to fill in. If it's zero, we fill in the player; if it isn't, we continue searching the rest of the vector but with a smaller number.
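Using the board₃ example from above, asking for the second empty square (suc zero) skips the first empty square and the occupied one, and fills the last square (a small check, assuming the definitions above plus the usual propositional-equality import):
move-example : move x board₃ (suc zero) ≡ -∷ (o ∷ (x ∷ []))
move-example = refl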
Now, the game representation. Thanks to the extra work we did earlier, we can simplify the Game data type. The move-p constructor just tells us where the move happened and that's it! I got rid of the Player index for simplicity; but it would work just fine with it.
data Game : ∀ {e} → SquareVec 9 e → Set where
  start  : Game empty
  move-p : ∀ {e} {gs} p (fe : Fin (suc e)) → Game gs → Game (move p gs fe)
Oh, and what's empty? It's a shortcut for your - ∷ - ∷ ...:
empty : ∀ {n} → SquareVec n n
empty {zero} = []
empty {suc _} = -∷ empty
Now, the states. I separated the states into a state of a possibly running game and a state of an ended game. Again, you can use your original GameCondition:
data State : Set where
  win   : Player → State
  draw  : State
  going : State

data FinalState : Set where
  win  : Player → FinalState
  draw : FinalState
For the following code, we'll need these imports:
open import Data.Empty
open import Data.Product
open import Relation.Binary.PropositionalEquality
And a function to determine the game state. Fill in your actual implementation; this one just lets the players play until the board is completely full.
-- Dummy implementation.
state : ∀ {e} {gs : SquareVec 9 e} → Game gs → State
state {zero} gs = draw
state {suc _} gs = going
Next, we need to prove that the State cannot be going when there are no empty squares:
zero-no-going : ∀ {gs : SquareVec 9 0} (g : Game gs) → state g ≢ going
zero-no-going g ()
Again, this is the proof for the dummy state, the proof for your actual implementation will be very different.
Now, we have all the tools we need to implement gameMaster. Firstly, we'll have to decide what its type is. Much like your version, it takes two functions that represent the AIs, one for o and one for x. Then it takes the game state and produces a FinalState. In this version, I'm actually returning the final board as well, so we can see how the game progressed.
Now, the AI functions will return just the move they want to make instead of returning a whole new game state. This is easier to work with.
Brace yourself, here's the type signature I conjured up:
AI : Set
AI = ∀ {e} {sqv : SquareVec 9 (suc e)} → Game sqv → Fin (suc e)
gameMaster : ∀ {e} {sqv : SquareVec 9 e} (sp : Player)
             (x-move o-move : AI) → Game sqv →
             FinalState × (Σ[ e′ ∈ ℕ ] Σ[ sqv′ ∈ SquareVec 9 e′ ] Game sqv′)
Notice that the AI functions take a game state with at least one empty square and return a move. Now for the implementation.
gameMaster sp xm om g with state g
... | win p = win p , _ , _ , g
... | draw = draw , _ , _ , g
... | going = ?
So, if the current state is win or draw, we'll return the corresponding FinalState and the current board. Now, we have to deal with the going case. We'll pattern match on e (the number of empty squares) to figure out whether the game is at the end or not:
gameMaster {zero} sp xm om g | going = ?
gameMaster {suc e} x xm om g | going = ?
gameMaster {suc e} o xm om g | going = ?
The zero case cannot happen: we proved earlier that state cannot return going when the number of empty squares is zero. How do we apply that proof here?
We have pattern matched on state g and we now know that state g ≡ going; but sadly Agda already forgot this information. This is what Dominique Devriese was hinting at: the inspect machinery allows us to retain the proof!
Instead of pattern matching on just state g, we'll also pattern match on inspect state g:
gameMaster sp xm om g with state g | inspect state g
... | win p | _ = win p , _ , _ , g
... | draw | _ = draw , _ , _ , g
gameMaster {zero} sp xm om g | going | [ pf ] = ?
gameMaster {suc e} x xm om g | going | _ = ?
gameMaster {suc e} o xm om g | going | _ = ?
pf is now the proof that state g ≡ going, which we can feed to zero-no-going:
gameMaster {zero} sp xm om g | going | [ pf ]
  = ⊥-elim (zero-no-going g pf)
The other two cases are easy: we just apply the AI function and recursively apply gameMaster to the result:
gameMaster {suc e} x xm om g | going | _
  = gameMaster o xm om (move-p x (xm g) g)
gameMaster {suc e} o xm om g | going | _
  = gameMaster x xm om (move-p o (om g) g)
I wrote some dumb AIs: the first one fills the first available empty square; the second one fills the last one.
player-lowest : AI
player-lowest _ = zero
max : ∀ {e} → Fin (suc e)
max {zero} = zero
max {suc e} = suc max
player-highest : AI
player-highest _ = max
Now, let's match player-lowest against player-lowest! In Emacs, type C-c C-n gameMaster x player-lowest player-lowest start <RET>:
draw ,
0 ,
x ∷ (o ∷ (x ∷ (o ∷ (x ∷ (o ∷ (x ∷ (o ∷ (x ∷ [])))))))) ,
move-p x zero
(move-p o zero
(move-p x zero
(move-p o zero
(move-p x zero
(move-p o zero
(move-p x zero
(move-p o zero
(move-p x zero start))))))))
We can see that all squares are filled and they alternate between x and o. Matching player-lowest with player-highest gives us:
draw ,
0 ,
x ∷ (x ∷ (x ∷ (x ∷ (x ∷ (o ∷ (o ∷ (o ∷ (o ∷ [])))))))) ,
move-p x zero
(move-p o (suc zero)
(move-p x zero
(move-p o (suc (suc (suc zero)))
(move-p x zero
(move-p o (suc (suc (suc (suc (suc zero)))))
(move-p x zero
(move-p o (suc (suc (suc (suc (suc (suc (suc zero)))))))
(move-p x zero start))))))))
If you really want to work with the proofs, then I suggest the following representation of Fin:
Fin₂ : ℕ → Set
Fin₂ n = ∃ λ m → m < n
fin⟶fin₂ : ∀ {n} → Fin n → Fin₂ n
fin⟶fin₂ zero = zero , s≤s z≤n
fin⟶fin₂ (suc n) = map suc s≤s (fin⟶fin₂ n)
fin₂⟶fin : ∀ {n} → Fin₂ n → Fin n
fin₂⟶fin {zero} (_ , ())
fin₂⟶fin {suc _} (zero , _) = zero
fin₂⟶fin {suc _} (suc _ , s≤s p) = suc (fin₂⟶fin (_ , p))
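For instance, the kind of proof the question passes around, such as s≤s z≤n, is exactly the second component of such a pair (a made-up example value):
one-lt-three : Fin₂ 3
one-lt-three = 1 , s≤s (s≤s z≤n)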
Not strictly related to the question, but inspect uses a rather interesting trick which might be worth mentioning. To understand this trick, we'll have to take a look at how with works.
When you use with on an expression expr, Agda goes through the types of all arguments and replaces any occurrence of expr with a fresh variable; let's call it w. For example:
test : (n : ℕ) → Vec ℕ (n + 0) → ℕ
test n v = ?
Here, the type of v is Vec ℕ (n + 0), as you would expect.
test : (n : ℕ) → Vec ℕ (n + 0) → ℕ
test n v with n + 0
... | w = ?
However, once we abstract over n + 0, the type of v suddenly changes to Vec ℕ w. If you later want to use something which contains n + 0 in its type, the substitution won't take place again - it's a one time deal.
In the gameMaster function, we applied with to state g and pattern matched to find out it's going. By the time we use zero-no-going, state g and going are two separate things which have no relation as far as Agda is concerned.
How do we preserve this information? We somehow need to get state g ≡ state g and have the with replace state g on only one side - this would give us the needed state g ≡ going.
What inspect does is hide the function application state g. We have to write a function hide in such a way that Agda cannot see that hide state g and state g are in fact the same thing.
One possible way to hide something is to use the fact that for any type A, A and ⊤ → A are isomorphic - that is, we can freely go from one representation to the other without losing any information.
However, we cannot use the ⊤ as defined in the standard library. In a moment I'll show why, but for now, we'll define a new type:
data Unit : Set where
  unit : Unit
And what it means for a value to be hidden:
Hidden : Set → Set
Hidden A = Unit → A
We can easily reveal the hidden value by applying unit to it:
reveal : {A : Set} → Hidden A → A
reveal f = f unit
The last step we need to take is the hide function:
hide : {A : Set} {B : A → Set} →
       ((x : A) → B x) → ((x : A) → Hidden (B x))
hide f x unit = f x
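To see that hiding loses nothing, revealing a hidden application gives back the original application by the defining clause (a small check, assuming the definitions above):
reveal-hide : {A : Set} {B : A → Set} (f : (x : A) → B x) (x : A) →
              reveal (hide f x) ≡ f x
reveal-hide f x = refl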
Why wouldn't this work with ⊤? If you declare ⊤ as a record, Agda can figure out on its own that tt is the only value. So, when faced with hide f x, Agda won't stop at the third argument (because it already knows what it must look like) and will automatically reduce it to λ _ → f x. Data types defined with the data keyword don't have these special rules, so hide f x stays that way until someone reveals it, and the type checker cannot see that there's an f x subexpression inside hide f x.
The rest is just arranging stuff so we can get the proof later:
data Reveal_is_ {A : Set} (x : Hidden A) (y : A) : Set where
  [_] : (eq : reveal x ≡ y) → Reveal x is y

inspect : {A : Set} {B : A → Set}
          (f : (x : A) → B x) (x : A) → Reveal (hide f x) is (f x)
inspect f x = [ refl ]
So, there you have it:
inspect state g : Reveal (hide state g) is (state g)
-- pattern match on (state g)
inspect state g : Reveal (hide state g) is going
When you then reveal hide state g, you'll get state g and finally the proof that state g ≡ going.
I think you are looking for an Agda technique known by the name "inspect" or "inspect on steroids". It allows you to obtain an equality proof for the knowledge learned from a with pattern match. I recommend you read the code in the following mail and try to understand how it works. Focus on how the function foo at the bottom needs to remember that "f x = z" and does so by with-matching on "inspect (hide f x)" together with "f x":
https://lists.chalmers.se/pipermail/agda/2011/003286.html
To use this in actual code, I recommend you import Relation.Binary.PropositionalEquality from the Agda standard library and use its version of inspect (which is superficially different from the code above). It has the following example code:
f x y with g x | inspect g x
f x y | c z | [ eq ] = ...
Note: "Inspect on steroids" is an updated version of an older approach to the inspect idiom.
I hope this helps...