How can I prove this lemma about a graph record type in Isabelle?

There is a record type definition for graphs:
record ('a::linorder, 'b::linorder) pre_digraph =
  verts :: "'a set"
  arcs :: "'b set"
  tail :: "'b ⇒ 'a"
  head :: "'b ⇒ 'a"

locale pre_digraph =
  fixes G :: "('a, 'b) pre_digraph" (structure)
There are also some auxiliary definitions:
definition arc_to_ends :: "('a::linorder, 'b::linorder) pre_digraph ⇒ 'b ⇒ 'a × 'a" where
  "arc_to_ends G e ≡ (tail G e, head G e)"

definition arcs_ends :: "('a::linorder, 'b::linorder) pre_digraph ⇒ ('a × 'a) set" where
  "arcs_ends G ≡ arc_to_ends G ` arcs G"

definition set_to_list :: "('a::linorder × 'a) set ⇒ ('a × 'a) list" where
  "set_to_list A = sorted_list_of_set A"

primrec nexts :: "[('a × 'a) list, 'a] ⇒ 'a list" where
  "nexts [] n = []"
| "nexts (e#es) n = (if fst e = n then snd e # nexts es n else nexts es n)"

definition nodes_of :: "('a::linorder, 'b::linorder) pre_digraph ⇒ 'a set" where
  "nodes_of G = set (map fst (set_to_list (arcs_ends G)) @ map snd (set_to_list (arcs_ends G)))"
(* note: @ (append) rather than the original # (cons), which does not type-check here *)
Now how do we prove the following lemma?
lemma [simp]: "x ∉ nodes_of G ⟹ nexts (set_to_list (arcs_ends G)) x = []" (* ???? *)
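One way the proof could go (a sketch, not machine-checked here, and assuming the @ version of nodes_of above): first show by induction on the list that nexts returns [] whenever its argument does not occur among the first components, then reduce the main lemma to that by unfolding nodes_of_def:

lemma nexts_nil_if_notin_fsts:
  "x ∉ fst ` set es ⟹ nexts es x = []"
  by (induction es) auto

lemma [simp]:
  "x ∉ nodes_of G ⟹ nexts (set_to_list (arcs_ends G)) x = []"
  unfolding nodes_of_def
  by (simp add: nexts_nil_if_notin_fsts)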

Related

In Haskell, how to "apply" functions in a nested context to a value in a context?

nestedApply :: (Applicative f, Applicative g) => g (f (a -> b)) -> f a -> g (f b)
As the type indicates, how to get that (a->b) applied to that a in the context f?
Thanks for the help.
This is one of those cases where it's helpful to focus on the types. I will try to keep it simple and explain the reasoning.
Let's start with describing the task. We have gfab :: g(f(a->b)) and fa :: f a, and we want to have g(f b).
gfab :: g (f (a -> b))
fa   :: f a
??1  :: g (f b)
Since g is a functor, to obtain a value of type g T we can start with a value ??2 of type g U and use fmap to map a function ??3 :: U -> T over it. In our case, we have T = f b, so we are looking for:
gfab :: g (f (a -> b))
fa   :: f a
??2  :: g U
??3  :: U -> f b
??1  = fmap ??3 ??2 :: g (f b)
Now, it looks like we should pick ??2 = gfab. After all, that's the only value of type g Something we have. We obtain U = f (a -> b).
gfab :: g (f (a -> b))
fa   :: f a
??3  :: f (a -> b) -> f b
??1  = fmap ??3 gfab :: g (f b)
Let's make ??3 into a lambda, \ (x :: f (a->b)) -> ??4 with ??4 :: f b. (The type annotation on x can be omitted, but I decided to add it to explain what's going on.)
gfab :: g (f (a -> b))
fa   :: f a
??4  :: f b
??1  = fmap (\ (x :: f (a->b)) -> ??4) gfab :: g (f b)
How do we craft ??4? Well, we have values of types f (a->b) and f a, so we can <*> those to get f b. We finally obtain:
gfab :: g (f (a -> b))
fa   :: f a
??1  = fmap (\ (x :: f (a->b)) -> x <*> fa) gfab :: g (f b)
We can simplify that into:
nestedApply gfab fa = fmap (<*> fa) gfab
Now, this is not the most elegant way to do it, but understanding the process is important.
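As a self-contained version of the result (a minimal sketch; the module name and the example instantiation are mine):

module NestedApply where

nestedApply :: (Applicative f, Applicative g) => g (f (a -> b)) -> f a -> g (f b)
nestedApply gfab fa = fmap (<*> fa) gfab

-- Example with g = [] and f = Maybe:
-- >>> nestedApply [Just (+ 1), Nothing] (Just 41)
-- [Just 42,Nothing]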
With
nestedApply :: (Applicative f, Applicative g)
            => g (f (a -> b))
            -> f a
            -> g (f b       )
to get that (a->b) applied to that a in the context f, we need to operate in the context g.
And that's just fmap.
It's clearer with the flipped signature, focusing on its last part
flip nestedApply :: (Applicative f, Applicative g)
                 => f a
                 -> g (f (a -> b))   --- from here
                 -> g (f b       )   --- to here
So what we have here is
nestedApply gffun fx = fmap (bar fx) gffun
with bar fx being applied under the g wraps by fmap for us. Which is
bar fx :: f (a -> b)
       -> f b
i.e.
bar :: f a
    -> f (a -> b)
    -> f b
and this is just <*>, isn't it, again flipped. Thus we get the answer,
nestedApply gffun fx = fmap (<*> fx) gffun
As we can see, only the fmap capability of g is used, so we only need
nestedApply :: (Applicative f, Functor g) => ...
in the type signature.
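Spelled out in full (combining the signature above with the earlier definition):

nestedApply :: (Applicative f, Functor g) => g (f (a -> b)) -> f a -> g (f b)
nestedApply gffun fx = fmap (<*> fx) gffun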
It's easy when writing it on a sheet of paper, in 2D. Which we imitate here with the wild indentation to get that vertical alignment.
Yes, we humans learned to write first, on paper, and to type, on a typewriter, much later. The last generation or two have been forced into linear typing by contemporary devices from a young age, but now scribbling and talking (and gesturing and pointing) will hopefully be taking over yet again. Inventive input modes will eventually include 3D workflows, and that will be a definite advancement. 1D bad, 2D good, 3D even better. For instance, many category theory diagrams are much easier to follow (or at least imagine) when drawn in 3D. The rule of thumb is: it should be easy, not hard. If it's too busy, it probably needs another dimension.
Just playing connect the wires under the wraps. A few self-evident diagrams, and it's done.
Here's some type mandalas for you (again, flipped):
-- <$>             -- <*>               -- =<<
   f a                f a                  f a
     (a -> b)       f (a -> b)               (a -> f b)
   f       b        f        b             f (     f b)   -- fmapped, and
                                           f         b    -- joined
and of course the mother of all applications,
-- $
   a
   a -> b
        b
a.k.a. Modus Ponens (yes, also flipped).

Haskell function composition - two one-dimensional functions to a two-dimensional function

Let's assume I have a function f in Haskell that takes a Double and returns a Double, and a function g that also takes a Double and returns a Double.
Now, I can compose f with g like this: f . g.
Now, let's take a higher-dimensional function f that takes two Doubles and outputs one:
f :: Double -> Double -> Double
or
f :: (Double, Double) -> Double
And I have two g functions as well:
g1 :: Double -> Double, g2 :: Double -> Double
Now, I want to compose the functions to get something like:
composition x = f (g1 x) (g2 x)
Can this be achieved just by using the dot (.) operator?
You can make use of liftA2 :: Applicative f => (a -> b -> c) -> f a -> f b -> f c for this:
composition = liftA2 f g1 g2
Since a function is an applicative [src]:
instance Applicative ((->) r) where
    pure = const
    (<*>) f g x = f x (g x)
    liftA2 q f g x = q (f x) (g x)
and liftA2 is implemented as [src]:
liftA2 :: (a -> b -> c) -> f a -> f b -> f c
liftA2 f x = (<*>) (fmap f x)
this will thus be resolved to:
liftA2 f g1 g2 x = (<*>) (fmap f g1) g2 x
                 = (fmap f g1 <*> g2) x
                 = (f . g1 <*> g2) x
                 = (\fa ga xa -> fa xa (ga xa)) (f . g1) g2 x
                 = (f . g1) x (g2 x)
                 = f (g1 x) (g2 x)
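To see it run, here is a concrete instantiation (the bodies of f, g1, and g2 are arbitrary examples of mine):

import Control.Applicative (liftA2)

f :: Double -> Double -> Double
f a b = a + b

g1, g2 :: Double -> Double
g1 = (* 2)
g2 = (+ 1)

composition :: Double -> Double
composition = liftA2 f g1 g2

-- >>> composition 3
-- 10.0   -- i.e. f (g1 3) (g2 3) = 6.0 + 4.0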
You can try this:
import Control.Arrow
composition = f . (g1 &&& g2)
(&&&) turns g1 :: a -> b and g2 :: a -> c into g1 &&& g2 :: a -> (b, c). Then you can apply normal composition; note that this uses the tupled variant f :: (Double, Double) -> Double, while with the curried variant you would write uncurry f . (g1 &&& g2).
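A runnable version under that tupled assumption (again with arbitrary example bodies):

import Control.Arrow ((&&&))

f :: (Double, Double) -> Double
f (a, b) = a + b

composition :: Double -> Double
composition = f . ((* 2) &&& (+ 1))

-- >>> composition 3
-- 10.0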

Can I extract a proof of bounds from an enumeration expression?

Consider this trivial program:
module Study

g : Nat -> Nat -> Nat
g x y = x - y

f : Nat -> List Nat
f x = map (g x) [1, 2 .. x]
It gives an obvious error:
  |
4 | g x y = x - y
  |         ^
When checking right hand side of g with expected type
        Nat

When checking argument smaller to function Prelude.Nat.-:
        Can't find a value of type
                LTE y x
— Saying I should offer some proof that this subtraction is safe to perform.
Surely, in the given context, g is always invoked safely. This follows from the way enumerations behave. How can I extract a proof of that fact so that I can give it to the invocation of g?
I know that I can use isLTE to obtain the proof:
g : Nat -> Nat -> Nat
g x y = case y `isLTE` x of
    (Yes prf) => x - y
    (No contra) => ?s_2
This is actually the only way I know of, and it seems to me that in a situation such as we have here, where x ≥ y by construction, there should be a way to avoid a superfluous case statement. Is there?
For map (\y => x - y) [1, 2 .. x] there needs to be a proof \y => LTE y x for every element of [1, 2 .. x]. There is Data.List.Quantifiers.All for this: All (\y => LTE y x) [1, 2 .. x].
But constructing and applying this proof is not so straightforward. You could either build a proof about the range function, lteRange : (x : Nat) -> All (\y => LTE y x) (natRange x), or define a function that returns a range together with its proof, lteRange : (x : Nat) -> (xs : List Nat ** All (\y => LTE y x) xs). For simplicity, I'll show an example with the second type.
import Data.List.Quantifiers

(++) : All p xs -> All p ys -> All p (xs ++ ys)
(++) [] ys = ys
(++) (x :: xs) ys = x :: (xs ++ ys)

lteRange : (x : Nat) -> (xs : List Nat ** All (\y => LTE y x) xs)
lteRange Z = ([] ** [])
lteRange (S k) = let (xs ** ps) = lteRange k in
                 (xs ++ [S k] ** weakenRange ps ++ [lteRefl])
  where
    weakenRange : All (\y => LTE y x) xs -> All (\y => LTE y (S x)) xs
    weakenRange [] = []
    weakenRange (y :: z) = lteSuccRight y :: weakenRange z
Also, map only applies one argument, but (-) needs the proof, too. So with a little helper function …
all_map : (xs : List a) -> All p xs -> (f : (x : a) -> p x -> b) -> List b
all_map [] [] f = []
all_map (x :: xs) (p :: ps) f = f x p :: all_map xs ps f
We can roughly do what you wanted without checking for LTE at run time:
f : Nat -> List Nat
f x = let (xs ** prfs) = lteRange x in all_map xs prfs (\y, p => x - y)
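For example, f 3 should now evaluate to [2, 1, 0], with no LTE check remaining at run time.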

Haskell: arrow precedence with function arguments

I'm a relatively experienced programmer but have only a few hours of Haskell experience, so the answer might be obvious.
After watching A taste of Haskell, I got lost when Simon explained how the append (++) function really works with its arguments.
So, here's the part where he talks about this.
First, he says that (++) :: [a] -> [a] -> [a] can be understood as a function which takes two lists as arguments and returns a list (the type after the last arrow). However, he adds that actually something like this happens: (++) :: [a] -> ([a] -> [a]), i.e. the function takes only one argument and returns a function.
I'm not sure I understand how the returned function (a closure) gets hold of the first list, since it expects only one argument as well.
On the next slide of the presentation, we have the following implementation:
(++) :: [a] -> [a] -> [a]
[] ++ ys = ys
(x:xs) ++ ys = x : (xs ++ ys)
If I think of (++) as receiving two arguments and returning a list, this piece of code along with the recursion is clear enough.
If we consider that (++) receives only one argument and returns a function, where does ys come from? Where is the returned function?
The trick to understanding this is that all Haskell functions really take only one argument; the implicit parentheses in the type signature and the syntactic sugar just make it appear as if there are more. To use ++ as an example, the following definitions are all equivalent:
xs ++ ys = ...
(++) xs ys = ...
(++) xs = \ys -> ...
(++) = \xs -> (\ys -> ...)
(++) = \xs ys -> ...
Another quick example:
doubleList :: [Int] -> [Int]
doubleList = map (*2)
Here we have a function of one argument doubleList without any explicit arguments. It would have been equivalent to write
doubleList x = map (*2) x
Or any of the following
doubleList = \x -> map (*2) x
doubleList = \x -> map (\y -> y * 2) x
doubleList x = map (\y -> y * 2) x
doubleList = map (\y -> y * 2)
The first definition of doubleList is written in what is commonly called point-free notation, so called because in the mathematical theory backing it the arguments are referred to as "points", so point-free is "without arguments".
A more complex example:
func = \x y z -> x * y + z
func = \x -> \y z -> x * y + z
func x = \y z -> x * y + z
func x = \y -> \z -> x * y + z
func x y = \z -> x * y + z
func x y z = x * y + z
Now, if we want to completely remove all references to the arguments, we can make use of the . operator, which performs function composition:
func x y z = (+) (x * y) z        -- Make the + prefix
func x y   = (+) (x * y)          -- Now z becomes implicit
func x y   = (+) ((*) x y)        -- Make the * prefix
func x y   = ((+) . ((*) x)) y    -- Rewrite using composition
func x     = (+) . ((*) x)        -- Now y becomes implicit
func x     = (.) (+) ((*) x)      -- Make the . prefix
func x     = ((.) (+)) ((*) x)    -- Make implicit parens explicit
func x     = (((.) (+)) . (*)) x  -- Rewrite using composition
func       = ((.) (+)) . (*)      -- Now x becomes implicit
func       = (.) ((.) (+)) (*)    -- Make the . prefix
So, as you can see, there are lots of different ways to write a particular function with a varying number of explicit "arguments", some of which are very readable (e.g. func x y z = x * y + z) and some of which are just a jumble of symbols with little meaning (e.g. func = (.) ((.) (+)) (*)).
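If you want machine confirmation that the readable and point-free extremes agree, a quick property test does it (a sketch, assuming the QuickCheck library is available):

import Test.QuickCheck (quickCheck)

funcExplicit, funcPointFree :: Int -> Int -> Int -> Int
funcExplicit x y z = x * y + z
funcPointFree = (.) ((.) (+)) (*)

-- Checks that both definitions agree on random inputs.
main :: IO ()
main = quickCheck (\x y z -> funcExplicit x y z == funcPointFree x y z)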
Maybe this will help. First, let's write it without operator notation, which might be confusing.
append :: [a] -> [a] -> [a]
append [] ys = ys
append (x:xs) ys = x : append xs ys
We can apply one argument at a time:
appendEmpty :: [a] -> [a]
appendEmpty = append []
we could equivalently have written that
appendEmpty ys = ys
from the first equation.
If we apply a non-empty first argument:
-- Since 1 is an Int, the type gets specialized.
appendOne :: [Int] -> [Int]
appendOne = append (1:[])
we could equivalently have written that
appendOne ys = 1 : append [] ys
from the second equation.
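Illustratively, in GHCi (session sketched, not copied from a real run):
ghci> appendEmpty [4, 5]
[4,5]
ghci> appendOne [2, 3]
[1,2,3]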
You are confused about how function currying works.
Consider the following function definitions of (++).
Takes two arguments, produces one list:
(++) :: [a] -> [a] -> [a]
[] ++ ys = ys
(x:xs) ++ ys = x : (xs ++ ys)
Takes one argument, produces a function taking one list and producing a list:
(++) :: [a] -> ([a] -> [a])
(++) [] = id
(++) (x:xs) = (x :) . (xs ++)
If you look closely, these two definitions always produce the same output. By removing the second parameter, we have changed the return type from [a] to [a] -> [a].
If we supply two parameters to (++) we get a result of type [a]
If we supply only one parameter we get a result of type [a] -> [a]
This is called function currying. We don't need to provide all the arguments to a function of multiple arguments. If we supply fewer than the total number of arguments, instead of getting a "concrete" result ([a]), we get a function as a result, which can take the remaining parameters ([a] -> [a]).
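For instance, supplying only the first argument of (++) yields a new function we can pass around (a small illustration; prefixOneTwo is a name I made up):

prefixOneTwo :: [Int] -> [Int]
prefixOneTwo = ([1, 2] ++)

-- >>> prefixOneTwo [3]
-- [1,2,3]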

Idris proof by definition

I can write the function
powApply : Nat -> (a -> a) -> a -> a
powApply Z f = id
powApply (S k) f = f . powApply k f
and prove trivially:
powApplyZero : (f : _) -> (x : _) -> powApply Z f x = x
powApplyZero f x = Refl
So far, so good. Now, I try to generalize this function to work with negative exponents. Of course, an inverse must be provided:
import Data.ZZ

-- Two functions, f and g, with a proof that g is an inverse of f
data Invertible : Type -> Type -> Type where
  MkInvertible : (f : a -> b) -> (g : b -> a) ->
                 ((x : _) -> g (f x) = x) -> Invertible a b

powApplyI : ZZ -> Invertible a a -> a -> a
powApplyI (Pos Z) (MkInvertible f g x) = id
powApplyI (Pos (S k)) (MkInvertible f g x) =
  f . powApplyI (Pos k) (MkInvertible f g x)
powApplyI (NegS Z) (MkInvertible f g x) = g
powApplyI (NegS (S k)) (MkInvertible f g x) =
  g . powApplyI (NegS k) (MkInvertible f g x)
I then try to prove a similar statement:
powApplyIZero : (i : _) -> (x : _) -> powApplyI (Pos Z) i x = x
powApplyIZero i x = ?powApplyIZero_rhs
However, Idris refuses to evaluate the application of powApplyI, leaving the type of ?powApplyIZero_rhs as powApplyI (Pos 0) i x = x (yes, Z is changed to 0). I've tried writing powApplyI in a non-point-free style, and defining my own ZZ with the %elim modifier (which I don't understand), but neither of these worked. Why isn't the proof handled by inspecting the first case of powApplyI?
Idris version: 0.9.15.1
Here are some things I tried:
powApplyNI : Nat -> Invertible a a -> a -> a
powApplyNI Z (MkInvertible f g x) = id
powApplyNI (S k) (MkInvertible f g x) = f . powApplyNI k (MkInvertible f g x)

powApplyNIZero : (i : _) -> (x : _) -> powApplyNI 0 i x = x
powApplyNIZero (MkInvertible f g y) x = Refl

powApplyZF : ZZ -> (a -> a) -> a -> a
powApplyZF (Pos Z) f = id
powApplyZF (Pos (S k)) f = f . powApplyZF (Pos k) f
powApplyZF (NegS Z) f = f
powApplyZF (NegS (S k)) f = f . powApplyZF (NegS k) f

powApplyZFZero : (f : _) -> (x : _) -> powApplyZF 0 f x = x
powApplyZFZero f x = ?powApplyZFZero_rhs
The first proof went fine, but ?powApplyZFZero_rhs stubbornly keeps the type powApplyZF (Pos 0) f x = x. Clearly, there's some problem with ZZ (or my use of it).
The problem: powApplyI was not provably total, according to Idris. Idris' totality checker relies on being able to reduce parameters to structurally smaller forms, and with raw ZZs, this doesn't work.
The answer is to delegate the recursion to plain old powApply (which is proven total):
total
powApplyI : ZZ -> Invertible a a -> a -> a
powApplyI (Pos k) (MkInvertible f g x) = powApply k f
powApplyI (NegS k) (MkInvertible f g x) = powApply (S k) g
Then, with a case split on i, powApplyIZero is proven trivially.
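Spelled out, that case split is roughly (untested here):

powApplyIZero : (i : _) -> (x : _) -> powApplyI (Pos Z) i x = x
powApplyIZero (MkInvertible f g p) x = Refl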
Thanks to Melvar from the #idris IRC channel.
powApplyI (Pos Z) i x doesn't reduce further because i is not in weak head normal form.
I don't have an Idris compiler, so I rewrote your code in Agda. It's pretty similar:
open import Function
open import Relation.Binary.PropositionalEquality
open import Data.Nat
open import Data.Integer
data Invertible : Set -> Set -> Set where
  MkInvertible : {a b : Set} (f : a -> b) -> (g : b -> a) ->
                 (∀ x -> g (f x) ≡ x) -> Invertible a b

powApplyI : {a : Set} -> ℤ -> Invertible a a -> a -> a
powApplyI ( + 0 )      (MkInvertible f g x) = id
powApplyI ( + suc k )  (MkInvertible f g x) = f ∘ powApplyI ( + k ) (MkInvertible f g x)
powApplyI -[1+ 0 ]     (MkInvertible f g x) = g
powApplyI -[1+ suc k ] (MkInvertible f g x) = g ∘ powApplyI -[1+ k ] (MkInvertible f g x)
Now you can define your powApplyIZero as
powApplyIZero : {a : Set} (i : Invertible a a) -> ∀ x -> powApplyI (+ 0) i x ≡ x
powApplyIZero (MkInvertible _ _ _) _ = refl
Pattern-matching on i induces unification, and powApplyI (+ 0) i x becomes powApplyI (+ 0) (MkInvertible _ _ _) x, so powApplyI can reduce further.
Or you could write this explicitly:
powApplyIZero : {a : Set} (f : a -> a) (g : a -> a) (p : ∀ x -> g (f x) ≡ x)
              -> ∀ x -> powApplyI (+ 0) (MkInvertible f g p) x ≡ x
powApplyIZero _ _ _ _ = refl