Keeping track of "state" when writing equality proofs that are long chains of transitively linked steps

I was writing the following proof in Idris:
n : Nat
n = S (k + k)
lemma : n * n = ((k * n) + k) + (1 + (((k * n) + k) + 0))
lemma = sym $
    rewrite plusZeroRightNeutral ((k * n) + k) in
    rewrite plusAssociative ((k * n) + k) 1 ((k * n) + k) in
    rewrite plusCommutative ((k * n) + k) 1 in
    rewrite mult2 ((k * n) + k) in
    rewrite multDistributesOverPlusRight 2 (k * n) k in
    rewrite multAssociative 2 k n in
    rewrite sym (mult2 k) in
    rewrite plusCommutative ((k + k) * n) (k + k) in
    Refl
But of course that's not really what I wrote. What I wrote instead is this:
lemma : n * n = ((k * n) + k) + (1 + (((k * n) + k) + 0))
lemma = sym $
    -- ((k * n) + k) + (1 + (((k * n) + k) + 0)) =
    rewrite plusZeroRightNeutral ((k * n) + k) in
    -- ((k * n) + k) + (1 + ((k * n) + k)) =
    rewrite plusAssociative ((k * n) + k) 1 ((k * n) + k) in
    -- (((k * n) + k) + 1) + ((k * n) + k) =
    rewrite plusCommutative ((k * n) + k) 1 in
    -- (1 + ((k * n) + k)) + ((k * n) + k) =
    rewrite mult2 ((k * n) + k) in
    -- 1 + 2 * ((k * n) + k) =
    rewrite multDistributesOverPlusRight 2 (k * n) k in
    -- 1 + 2 * (k * n) + 2 * k =
    rewrite multAssociative 2 k n in
    -- 1 + (2 * k) * n + 2 * k =
    rewrite sym (mult2 k) in
    -- 1 + (k + k) * n + (k + k) =
    rewrite plusCommutative ((k + k) * n) (k + k) in
    -- (k + k) * n + (1 + k + k) =
    -- (k + k) * n + n =
    -- (1 + k + k) * n =
    -- n * n
    Refl
If I were writing this in Agda, I could use the ≡-Reasoning module to keep track of where I am; for example, the above can be done like this (omitting the actual proof steps, since they'd be exactly the same):
lemma : ((k * n) + k) + (1 + (((k * n) + k) + 0)) ≡ n * n
lemma =
  begin
    ((k * n) + k) + (1 + (((k * n) + k) + 0)) ≡⟨ {!!} ⟩
    ((k * n) + k) + (1 + (((k * n) + k))) ≡⟨ {!!} ⟩
    ((k * n) + k) + 1 + ((k * n) + k) ≡⟨ {!!} ⟩
    1 + ((k * n) + k) + ((k * n) + k) ≡⟨ {!!} ⟩
    1 + 2 * ((k * n) + k) ≡⟨ {!!} ⟩
    1 + 2 * (k * n) + 2 * k ≡⟨ {!!} ⟩
    1 + (2 * k) * n + 2 * k ≡⟨ {!!} ⟩
    1 + (k + k) * n + (k + k) ≡⟨ {!!} ⟩
    (k + k) * n + (1 + k + k) ≡⟨⟩
    (k + k) * n + n ≡⟨ {!!} ⟩
    n + (k + k) * n ≡⟨⟩
    (1 + k + k) * n ≡⟨⟩
    n * n
  ∎
  where
    open ≡-Reasoning
Is there a way to do similarly in Idris?
(Note: of course, in Agda I wouldn't hand-prove this: I'd just use the semiring solver and be done with it; but the Idris semiring solver at https://github.com/FranckS/RingIdris seems to be targeting Idris 0.11 and I'm using 1.1.1...)
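(For concreteness, the Agda one-liner I have in mind would look roughly like the sketch below. This is from memory and the names should be treated as approximate: it assumes the SemiringSolver that older standard library versions re-export from Data.Nat.Properties (newer ones ship it as Data.Nat.Solver), and it inlines n = 1 + (k + k) so that k is the only solver variable.)
open import Data.Nat
open import Data.Nat.Properties
open SemiringSolver  -- ring solver instantiated at ℕ; name may differ by library version
open import Relation.Binary.PropositionalEquality

-- Same statement as lemma, but with n unfolded to 1 + (k + k),
-- so that k is the only variable the solver has to deal with.
lemma' : ∀ k → let n = 1 + (k + k) in
         n * n ≡ ((k * n) + k) + (1 + (((k * n) + k) + 0))
lemma' = solve 1
  (λ k → let n = con 1 :+ (k :+ k) in
         n :* n := ((k :* n) :+ k) :+ (con 1 :+ (((k :* n) :+ k) :+ con 0)))
  refl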

The the function is your friend, and it avoids the need for any comments. Also use let, so that the proof can proceed in a forwards direction.
I couldn't easily rewrite your example (because I didn't have all the lemmas available), so here is my own code example, which successfully compiles (with two holes because I've left out the proofs of plus_assoc and plus_comm):
%default total
plus_assoc : (x : Nat) -> (y : Nat) -> (z : Nat) -> (x + y) + z = x + (y + z)
plus_comm : (x : Nat) -> (y : Nat) -> x + y = y + x
abcd_to_acbd_lemma : (a : Nat) -> (b : Nat) -> (c : Nat) -> (d : Nat) ->
                     (a + b) + (c + d) = (a + c) + (b + d)
abcd_to_acbd_lemma a b c d =
  let e1 = the ((a + b) + (c + d) = ((a + b) + c) + d) $ sym (plus_assoc (a + b) c d)
      e2 = the (((a + b) + c) + d = (a + (b + c)) + d) $ rewrite (plus_assoc a b c) in Refl
      e3 = the ((a + (b + c)) + d = (a + (c + b)) + d) $ rewrite (plus_comm b c) in Refl
      e4 = the ((a + (c + b)) + d = ((a + c) + b) + d) $ rewrite (plus_assoc a c b) in Refl
      e5 = the ((((a + c) + b) + d) = (a + c) + (b + d)) $ plus_assoc (a + c) b d
  in trans e1 $ trans e2 $ trans e3 $ trans e4 e5

Related

Getting better inference for agda proofs

Suppose I have the following Agda code, which compiles (where +-assoc and +-comm are taken from the plf course). It seems laborious to have to spell out the explicit subexpressions in each step; however, removing any one of the explicit expressions causes the code to fail to compile.
How can I get better type inference when writing proofs?
+-swap : ∀ (m n p : ℕ) -> m + (n + p) ≡ n + (m + p)
+-swap zero n p = refl
+-swap (suc m) n p =
begin
suc m + (n + p)
≡⟨ +-comm (suc m) (n + p) ⟩
(n + p) + (suc m)
≡⟨ +-assoc n p (suc m) ⟩
n + (p + suc m)
≡⟨ cong (n +_) (+-comm p (suc m)) ⟩
n + (suc m + p)
∎
If my understanding of your requirement is correct, you would like to be able to omit intermediate expressions such as suc m + (n + p).
The whole point of the ≡-Reasoning module, which provides the user with operators such as _∎, _≡⟨_⟩_ and begin_, is to make these quantities explicit, so that the proof is more readable and each reasoning step can be seen and understood clearly.
This means that if you want to omit these quantities, you should not use this module, and should instead use one of the following methods to prove such equalities.
For the sake of self-containment, here are the required imports:
module EqProofs where
open import Data.Nat
open import Data.Nat.Properties
open import Relation.Binary.PropositionalEquality
open ≡-Reasoning
A first possibility is to use the equality reasoning module, as you did:
+-swap₁ : ∀ (m n p : ℕ) → m + (n + p) ≡ n + (m + p)
+-swap₁ zero n p = refl
+-swap₁ (suc m) n p = begin
suc m + (n + p) ≡⟨ +-comm (suc m) (n + p) ⟩
(n + p) + (suc m) ≡⟨ +-assoc n p (suc m) ⟩
n + (p + suc m) ≡⟨ cong (n +_) (+-comm p (suc m)) ⟩
n + (suc m + p) ∎
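As an aside, there is no magic in these combinators: each step is just an application of trans, with the intermediate expression passed explicitly so that it appears in the source. Here is a minimal sketch of how they could be defined (only an illustration, not the actual standard library code, which is more general):
module MiniReasoning where
  open import Relation.Binary.PropositionalEquality

  infix  1 begin_
  infixr 2 _≡⟨_⟩_
  infix  3 _∎

  -- begin_ only marks the start of a chain; it is the identity on proofs.
  begin_ : ∀ {A : Set} {x y : A} → x ≡ y → x ≡ y
  begin_ p = p

  -- Each step names its left endpoint explicitly and chains the proofs with trans.
  _≡⟨_⟩_ : ∀ {A : Set} (x : A) {y z : A} → x ≡ y → y ≡ z → x ≡ z
  _ ≡⟨ x≡y ⟩ y≡z = trans x≡y y≡z

  -- _∎ closes the chain with reflexivity.
  _∎ : ∀ {A : Set} (x : A) → x ≡ x
  _ ∎ = refl
This also explains why the next two possibilities work: they simply use trans (or rewrite) directly, without naming the intermediate expressions.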
But you can also give the term explicitly using the transitivity of the equality:
+-swap₂ : ∀ (m n p : ℕ) → m + (n + p) ≡ n + (m + p)
+-swap₂ zero _ _ = refl
+-swap₂ (suc m) n p =
trans
(+-comm (suc m) (n + p))
(trans
(+-assoc n p (suc m))
(cong (n +_) (+-comm p (suc m))))
Or you can use rewrite to simplify the goal using equality proofs:
+-swap₃ : ∀ (m n p : ℕ) → m + (n + p) ≡ n + (m + p)
+-swap₃ zero _ _ = refl
+-swap₃ (suc m) n p
rewrite +-comm (suc m) (n + p)
| +-assoc n p (suc m)
| cong (n +_) (+-comm p (suc m)) = refl
In the last two possibilities, the intermediate quantities are hidden (and easily inferred by Agda), as you wanted.
Edit:
You can omit some of the required parameters by replacing them with underscores (not all of them, though, since the type checker needs enough information to solve its constraints). This can be done in all cases except when using rewrite, as follows:
+-swap₁ : ∀ (m n p : ℕ) → m + (n + p) ≡ n + (m + p)
+-swap₁ zero n p = refl
+-swap₁ (suc m) n p = begin
suc m + (n + p) ≡⟨ +-comm _ (n + p) ⟩
(n + p) + (suc m) ≡⟨ +-assoc n p _ ⟩
n + (p + suc m) ≡⟨ cong (n +_) (+-comm p _) ⟩
n + (suc m + p) ∎
+-swap₂ : ∀ (m n p : ℕ) → m + (n + p) ≡ n + (m + p)
+-swap₂ zero _ _ = refl
+-swap₂ (suc m) n p =
trans
(+-comm _ (n + p))
(trans
(+-assoc n p _)
(cong (n +_) (+-comm p _)))
+-swap₃ : ∀ (m n p : ℕ) → m + (n + p) ≡ n + (m + p)
+-swap₃ zero _ _ = refl
+-swap₃ (suc m) n p
rewrite +-comm (suc m) (n + p)
| +-assoc n p (suc m)
| cong (n +_) (+-comm p (suc m)) = refl
Edit n°2:
As a side note, you can prove this property without pattern matching on any of its arguments. Here is that proof, using chained equality. Notice that only a few parameters have to be provided explicitly, while the others can be replaced by underscores:
+-swap₄ : ∀ (m n p : ℕ) → m + (n + p) ≡ n + (m + p)
+-swap₄ m n p = begin
m + (n + p) ≡⟨ sym (+-assoc m _ _) ⟩
(m + n) + p ≡⟨ cong (_+ p) (+-comm m _) ⟩
(n + m) + p ≡⟨ +-assoc n _ _ ⟩
n + (m + p) ∎

Octave goes in Waiting... when solve() function is used

I've installed and loaded the symbolic package (which becomes available from the optim package) to get the syms function, like in MATLAB, but when I use the solve() function the command window just sits in a waiting state, like
Waiting..........
My code is given below:
syms s T K D1 D2 D3 theta1 theta2 theta3 J1 J2 J3
eq1 = (s * D1 + K + J1 * s ^ 2)* theta1 - K * theta2 == T;
eq2 = -K * theta1 + (J2 * s ^ 2 + K + D2 * s) * theta2 - D2 * s * theta3 == 0;
eq3 = -D2 * s * theta2 + (D3 * s + J3 * s ^ 2 + D2 * s) * theta3 == 0;
S = solve(eq1, eq2, eq3)
but if I solve it manually by the matrix-inverse method, it gives the answer instantly. Kindly help me resolve this.

Prove that n times an even number yields an even number in Agda

I am trying to define the sum of 1..n ∈ ℕ as n * (n + 1) / 2 in Agda
and need a proof that n*(n + 1) is even for that.
The proof is pretty simple, but there seems to be a concept I don't understand, as I am new to Agda (though not to maths or Haskell) and learned it from http://www.cse.chalmers.se/~ulfn/papers/afp08/tutorial.pdf
(pointers to more advanced tutorials more than welcome!).
open import Data.Nat
open import Relation.Binary.PropositionalEquality
open import Data.Sum
-- A natural number is even if there is a k ∈ ℕ with k * 2 = n.
data IsEven : ℕ → Set where
  even : (k : ℕ) → IsEven (k * 2)
-- A product is even, if one of the factors is even.
even-product : {n m : ℕ} → IsEven n ⊎ IsEven m → IsEven (m * n)
even-product {n} {m} (inj₁ (even k)) = even (m * k)
even-product {n} {m} (inj₂ (even k)) = even (n * k)
The code returns
m != 2 of type ℕ
when checking that the expression even (k * m) has type
IsEven (k * 2 * m)
I already tried using with patterns to convince the compiler that k * 2 is in fact n, but to no avail. Switching m * k to k * m gives
k * m != m of type ℕ
when checking that the expression even (k * m) has type
IsEven (m * (k * 2))
You can find out what the problem is by putting {! !} markers around your attempted solution and using the C-c C-. shortcut.
even-product : {n m : ℕ} → IsEven n ⊎ IsEven m → IsEven (m * n)
even-product {n} {m} (inj₁ (even k)) = {!even (m * k)!}
even-product {n} {m} (inj₂ (even k)) = {!even (n * k)!}
Reloading the file and then pressing C-c C-. with your cursor in the first hole gives the following response:
Goal: IsEven (m * (k * 2))
Have: IsEven (m * k * 2)
————————————————————————————————————————————————————————————
n : ℕ
m : ℕ
k : ℕ
Now the problem is clear: the goal is to prove that (m * (k * 2)) is even, but you have a proof that (m * k * 2) is even.
To fix this problem, you have to use the fact that * is associative. I'll postulate it here by way of example, but obviously you'd want to give it an actual proof later.
postulate
  *-assoc : (k l m : ℕ) → k * (l * m) ≡ (k * l) * m
Now we can use the rewrite keyword with *-assoc to fix the first case:
even-product : {n m : ℕ} → IsEven n ⊎ IsEven m → IsEven (m * n)
even-product {n} {m} (inj₁ (even k)) rewrite *-assoc m k 2 = even (m * k)
even-product {n} {m} (inj₂ (even k)) = {!even (n * k)!}
In the second case, C-c C-. gives the following response:
Goal: IsEven (k * 2 * n)
Have: IsEven (n * k * 2)
————————————————————————————————————————————————————————————
m : ℕ
n : ℕ
k : ℕ
So now you need to use commutativity of * as well as associativity. I'll leave the full solution as an exercise to the reader.
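(In case you get stuck, here is one way the second case could go. This is only a sketch: it assumes a *-comm postulated in the same style as *-assoc above, and it repeats the first case unchanged so that the function is complete.)
postulate
  *-comm : (m n : ℕ) → m * n ≡ n * m

even-product : {n m : ℕ} → IsEven n ⊎ IsEven m → IsEven (m * n)
even-product {n} {m} (inj₁ (even k)) rewrite *-assoc m k 2 = even (m * k)
even-product {n} {m} (inj₂ (even k))
  -- goal is IsEven (k * 2 * n); turn it into IsEven (n * k * 2) in two rewrites
  rewrite *-comm (k * 2) n | *-assoc n k 2 = even (n * k)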
Isn't it easier to prove that 2 * sum(1..n) = n * (n+1), which shows that n * (n+1) is even?

Flattened matrix vs 2D matrix lookup equivalence (proof) - seeking more elegance

I have a proof of the (obvious) statement that looking up an element in a flattened representation of a matrix, as a vector of length m * n, is the same as looking it up in a vector-of-vectors representation. But my proof feels clunky. [I won't give my proof here, as doing so would bias the search!] To make this question self-contained, below is an Agda module with a few helpful lemmas. [Some of these lemmas should probably be in the standard library, but are not.]
Basically, I am looking for an elegant way to fill the hole at the bottom, the proof of lookup∘concat. If you can make my lemmas more elegant as well, do feel free!
module NNN where
open import Data.Nat
open import Data.Nat.Properties.Simple
open import Data.Nat.Properties
open import Data.Vec
open import Data.Fin using (Fin; inject≤; fromℕ; toℕ)
open import Data.Fin.Properties using (bounded)
open import Data.Product using (_×_; _,_)
open import Relation.Binary.PropositionalEquality
-- some useful lemmas
cong+r≤ : ∀ {i j} → i ≤ j → (k : ℕ) → i + k ≤ j + k
cong+r≤ {0} {j} z≤n k = n≤m+n j k
cong+r≤ {suc i} {0} () k -- absurd
cong+r≤ {suc i} {suc j} (s≤s i≤j) k = s≤s (cong+r≤ {i} {j} i≤j k)
cong+l≤ : ∀ {i j} → i ≤ j → (k : ℕ) → k + i ≤ k + j
cong+l≤ {i} {j} i≤j k =
begin (k + i
≡⟨ +-comm k i ⟩
i + k
≤⟨ cong+r≤ i≤j k ⟩
j + k
≡⟨ +-comm j k ⟩
k + j ∎)
where open ≤-Reasoning
cong*r≤ : ∀ {i j} → i ≤ j → (k : ℕ) → i * k ≤ j * k
cong*r≤ {0} {j} z≤n k = z≤n
cong*r≤ {suc i} {0} () k -- absurd
cong*r≤ {suc i} {suc j} (s≤s i≤j) k = cong+l≤ (cong*r≤ i≤j k) k
sinj≤ : ∀ {i j} → suc i ≤ suc j → i ≤ j
sinj≤ {0} {j} _ = z≤n
sinj≤ {suc i} {0} (s≤s ()) -- absurd
sinj≤ {suc i} {suc j} (s≤s p) = p
i*n+k≤m*n : ∀ {m n} → (i : Fin m) → (k : Fin n) →
(suc (toℕ i * n + toℕ k) ≤ m * n)
i*n+k≤m*n {0} {_} () _
i*n+k≤m*n {_} {0} _ ()
i*n+k≤m*n {suc m} {suc n} i k =
begin (suc (toℕ i * suc n + toℕ k)
≡⟨ cong suc (+-comm (toℕ i * suc n) (toℕ k)) ⟩
suc (toℕ k + toℕ i * suc n)
≡⟨ refl ⟩
suc (toℕ k) + (toℕ i * suc n)
≤⟨ cong+r≤ (bounded k) (toℕ i * suc n) ⟩
suc n + (toℕ i * suc n)
≤⟨ cong+l≤ (cong*r≤ (sinj≤ (bounded i)) (suc n)) (suc n) ⟩
suc n + (m * suc n)
≡⟨ refl ⟩
suc m * suc n ∎)
where open ≤-Reasoning
fwd : {m n : ℕ} → (Fin m × Fin n) → Fin (m * n)
fwd {m} {n} (i , k) = inject≤ (fromℕ (toℕ i * n + toℕ k)) (i*n+k≤m*n i k)
lookup∘concat : ∀ {m n} {A : Set} (i : Fin m) (j : Fin n)
(xss : Vec (Vec A n) m) →
lookup (fwd (i , j)) (concat xss) ≡ lookup j (lookup i xss)
lookup∘concat i j xss = {!!}
It's better to define fwd by induction; then the rest follows.
open import Data.Nat.Base
open import Data.Fin hiding (_+_)
open import Data.Vec
open import Data.Vec.Properties
open import Relation.Binary.PropositionalEquality
fwd : ∀ {m n} -> Fin m -> Fin n -> Fin (m * n)
fwd {suc m} {n} zero j = inject+ (m * n) j
fwd {n = n} (suc i) j = raise n (fwd i j)
-- This should be in the standard library.
lookup-++-raise : ∀ {m n} {A : Set} (j : Fin n) (xs : Vec A m) (ys : Vec A n)
-> lookup (raise m j) (xs ++ ys) ≡ lookup j ys
lookup-++-raise j [] ys = refl
lookup-++-raise j (x ∷ xs) ys = lookup-++-raise j xs ys
lookup∘concat : ∀ {m n} {A : Set} i j (xss : Vec (Vec A n) m)
-> lookup (fwd i j) (concat xss) ≡ lookup j (lookup i xss)
lookup∘concat zero j (xs ∷ xss) = lookup-++-inject+ xs (concat xss) j
lookup∘concat (suc i) j (xs ∷ xss)
rewrite lookup-++-raise (fwd i j) xs (concat xss) = lookup∘concat i j xss
The soundness proof for fwd:
module Soundness where
open import Data.Nat.Properties.Simple
open import Data.Fin.Properties
soundness : ∀ {m n} (i : Fin m) (j : Fin n) -> toℕ (fwd i j) ≡ toℕ i * n + toℕ j
soundness {suc m} {n} zero j = sym (inject+-lemma (m * n) j)
soundness {n = n} (suc i) j rewrite toℕ-raise n (fwd i j)
| soundness i j
= sym (+-assoc n (toℕ i * n) (toℕ j))
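As a quick sanity check of fwd (a small test of my own, not part of the question): row 1, column 1 of a 2 × 3 matrix should land at flat index 1 * 3 + 1 = 4, and this is decided by computation alone:
fwd-test : toℕ (fwd {2} {3} (suc zero) (suc zero)) ≡ 4
fwd-test = refl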

Performance issue with CUDA code, using one variable takes 100 times longer to execute

I have this (it is a mess because I'm changing every bit of it to see what's wrong and what the compiler is doing):
__device__ inline int f(int i, int j, int value)
{
int x;
int y;
int delta;
int* p = p_new_solution;
int pitch = p_new_solution_pitch;
int* p_row_i = (int*)((char*)p + i * pitch);
int p_i = p_row_i[threadIdx.x + blockIdx.x * blockDim.x];
int* p_row_j = (int*)((char*)p + j * pitch);
int p_j = p_row_j[threadIdx.x + blockIdx.x * blockDim.x];
delta = (tex2D(A_matrix, i, i) - tex2D(A_matrix, j, j)) * (tex2D(B_matrix, p_j, p_j) - tex2D(B_matrix, p_i, p_i));
delta += (tex2D(A_matrix, i, j) - tex2D(A_matrix, j, i)) * (tex2D(B_matrix, p_j, p_i) - tex2D(B_matrix, p_i, p_j));
for(int k = 0 ; k < n ; k++)
{
int* p_row = (int*)((char*)p + k * pitch);
int p_k = p_row[threadIdx.x + blockIdx.x * blockDim.x];
int A_ki = tex2D(A_matrix, k, i);
int A_kj = tex2D(A_matrix, k, j);
int A_ik = tex2D(A_matrix, i, k);
int A_jk = tex2D(A_matrix, j, k);
int B_pkpj = tex2D(B_matrix, p_k, p_j);
int B_pkpi = tex2D(B_matrix, p_k, p_i);
int B_pjpk = tex2D(B_matrix, p_j, p_k);
int B_pipk = tex2D(B_matrix, p_i, p_k);
x = (A_ki - A_kj);
x *= (B_pkpj - B_pkpi);
y = (A_ik - A_jk);
y *= (B_pjpk - B_pipk);
x += y;
}
x -= ( (tex2D(A_matrix, i, i) - tex2D(A_matrix, i, j)) * (tex2D(B_matrix, p_i, p_j) - tex2D(B_matrix, p_i, p_i)) ) +
( (tex2D(A_matrix, i, i) - tex2D(A_matrix, j, i)) * (tex2D(B_matrix, p_j, p_i) - tex2D(B_matrix, p_j, p_i)) );
x -= ( (tex2D(A_matrix, j, i) - tex2D(A_matrix, j, j)) * (tex2D(B_matrix, p_j, p_j) - tex2D(B_matrix, p_j, p_i)) ) +
( (tex2D(A_matrix, i, j) - tex2D(A_matrix, j, j)) * (tex2D(B_matrix, p_j, p_j) - tex2D(B_matrix, p_j, p_j)) );
x += delta;
x *= 2;
return value;
//return x;
}
The problem is with those two return statements: if I return value, the whole kernel takes about 300 ms; if I return x, it takes approximately 33000 ms. What is the problem here? I've tried some __syncthreads(), but I still get the same bad time.
Those return statements aren't the final code; I need an if-else statement to choose the return value (it will be either value or value + x), and that if-else statement is taking too long as well.
Thanks for now.
The time you are measuring is not the time it takes to return a variable; it's the time it takes to compute x. NVCC detects that you have a lot of code that does absolutely nothing, since its results are never used if you don't return x. It removes that dead code, making the function faster.
return value just returns one of the arguments of the function, making the entire function a no-op. I would guess it is entirely optimized away. When you return x, it does the actual work and takes 33 s.