Proving correctness of algorithm - proof

I was wondering if anyone could help me answer this question. It is from a previous exam paper and I could do with knowing the answer before this year's exam.
This question seems so simple that I am getting completely lost; what exactly is it asking for?
Is the following algorithm to find the maximum value correct?
{P: x≥0 ∧ y≥0 ∧ z≥0 }
if (x > y && x > z)
max = x;
else if (y > x && y > z)
max = y;
else
max = z;
{Q: max≥x ∧ max≥y ∧ max≥z ∧ ( max=x ∨ max=y ∨ max=z )}
The answer must be based on calculation of the weakest precondition for the algorithm.
How do you verify this? It seems too simple.
Thanks.

This question seems so simple that I am getting completely lost; what exactly is it asking for?
The question is asking you to formally prove that the program behaves as specified, by the rigorous application of a set of rules decided on in advance (as opposed to reading the program and saying that it obviously works).
How do you verify this?
The program is as follows:
if (x > y && x > z)
max = x;
else P1
with P1 a shorthand for if (y > x && y > z) max = y; else max = z;
So the program is basically an if-then-else. Hoare logic provides a rule for the if-then-else construct:
{B ∧ P} S {Q} , {¬B ∧ P } T {Q}
----------------------------------
{P} if B then S else T {Q}
Instantiating the general if-then-else rule for the program at hand:
{???} max = x; {Q} , {???} P1 {Q}
-------------------------------------------------------------------------------------
{true} if (x > y && x > z) max = x; else P1 {Q: max≥x ∧ max≥y ∧ max≥z ∧ ( max=x ∨ max=y ∨ max=z)}
Can you complete the ??? placeholders?
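As a hint for the first placeholder: Hoare logic's assignment axiom says {Q[v := E]} v = E {Q}, i.e. the weakest precondition of an assignment is the postcondition with the assigned expression substituted for the variable. Applied to the branch max = x; (a sketch, writing wp for the weakest precondition):
wp(max = x, Q) = Q[max := x]
= x≥x ∧ x≥y ∧ x≥z ∧ (x=x ∨ x=y ∨ x=z)
≡ x≥y ∧ x≥z
It then remains to check that the branch guard (together with the precondition) implies this weakest precondition, and to handle the inner if-then-else P1 with the same two rules.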

If statement doesn't evaluate

I know that questions like these seem to be looked down upon, but I haven't been able to find answers on the internet. I have the following function:
fun count :: "'a ⇒ 'a list ⇒ nat" where
"count a [] = 0"
| "count a (b # xs) = (count a xs) + (if a = b then 1 else 0)"
It counts the number of elements in a list that match the given item. Simple enough; however, when I do the following:
value "count x [x,x,y,x,y]"
I get this as the output:
"(if x = y then 1 else 0) + 1 + (if x = y then 1 else 0) + 1 + 1" :: "nat"
So you can see that there are hanging "if" statements and unevaluated additions in the output. Is it possible to make Isabelle simplify this?
I don't think so. The value command is more of a purely diagnostic tool and it is mostly meant for evaluation of ground terms (i.e. no free variables). The reason why you get a result at all is that it falls back from its standard method (compiling to ML, running the ML code, and converting the result back to HOL terms) to NBE (normalisation by evaluation, which is much slower and, at least in my experience, not that useful most of the time).
One trick that I do sometimes is to set up a lemma
lemma "count x [x, x, y, x, y] = myresult"
where the myresult on the right-hand side is just a dummy variable. Then I do
apply simp
and look at the resulting proof state (in case you don't see anything: try switching on "Editor Output State" in the options):
proof (prove)
goal (1 subgoal):
1. (x = y ⟶ Suc (Suc (Suc (Suc (Suc 0)))) = myresult) ∧
(x ≠ y ⟶ Suc (Suc (Suc 0)) = myresult)
It's a bit messy, but you can read off the result fairly well: if x = y, then the result is 5, otherwise it's 3. A simple hack to get rid of the Suc in the output is to cast to int, i.e. lemma "int (count x [x, x, y, x, y]) = myresult". Then you get:
(x = y ⟶ myresult = 5) ∧ (x ≠ y ⟶ myresult = 3)

How To simplify boolean equations

Note that this is an example question meant to represent all similar questions; please don't answer only the question below, but address the general problem of optimizing boolean expressions.
I have this boolean equation: [boolean equation], e.g. (!B && A) || A
Is there any better way to do this?
A boolean equation follows simple calculation rules, known as Boolean algebra.
With those rules, you can simplify any boolean equation, although doing so by hand takes some work:
Associativity of ∨ : x ∨ ( y ∨ z ) = ( x ∨ y ) ∨ z
Associativity of ∧ : x ∧ ( y ∧ z ) = ( x ∧ y ) ∧ z
Commutativity of ∨ : x ∨ y = y ∨ x
Commutativity of ∧ : x ∧ y = y ∧ x
Distributivity of ∧ over ∨ : x ∧ ( y ∨ z ) = ( x ∧ y ) ∨ ( x ∧ z )
Identity for ∨ : x ∨ 0 = x
Identity for ∧ : x ∧ 1 = x
Annihilator for ∧ : x ∧ 0 = 0
The following laws hold in Boolean Algebra, but not in ordinary algebra:
Annihilator for ∨ : x ∨ 1 = 1
Idempotence of ∨ : x ∨ x = x
Idempotence of ∧ : x ∧ x = x
Absorption 1: x ∧ ( x ∨ y ) = x
Absorption 2: x ∨ ( x ∧ y ) = x
Distributivity of ∨ over ∧ : x ∨ ( y ∧ z ) = ( x ∨ y ) ∧ ( x ∨ z )
Complementation 1 : x ∧ ¬x = 0
Complementation 2 : x ∨ ¬x = 1
Double negation : ¬(¬x) = x
De Morgan 1 : ¬x ∧ ¬y = ¬(x ∨ y)
De Morgan 2 : ¬x ∨ ¬y = ¬(x ∧ y)
Note that
∨ represents OR (||)
∧ represents AND (&&)
¬ represents NOT (!)
= represents EQUALS (==)
But as your equation gets more complex, simplifying it by hand becomes almost impossible. The first step towards a systematic solution is the truth table.
You can write the truth table by hand, or create it online, for example with this tool.
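If you prefer to generate the truth table programmatically, a few lines of Python are enough; this sketch simply enumerates every input combination of the example expression (!B && A) || A:
from itertools import product

# Truth table of the example expression (!B && A) || A.
expr = lambda A, B: (not B and A) or A
print("A B | result")
for A, B in product([False, True], repeat=2):
    print(int(A), int(B), "|", int(expr(A, B)))
The result column equals A on every row, which already hints at the simplified form.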
From the truth table, you can create a KV map (Karnaugh map).
There are also online tools to create KV maps (I recommend this one).
How to fill in those maps according to your truth table is not the topic here.
How to get a boolean equation back out of the KV map is also not the topic here, but the recommended tool calculates it for you.
In conclusion: if you want to optimize your boolean equations, create a truth table for your equation, fill in a KV map, and replace your equation with the shortest form it yields.
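For the original example, commutativity of ∨ and Absorption 2 already give (¬B ∧ A) ∨ A = A ∨ (A ∧ ¬B) = A. Whatever minimized form you end up with, you can confirm that it is equivalent to the original by brute force over the truth table; here is a small Python sketch for this example:
from itertools import product

# The original expression and its minimized form as Python functions.
original  = lambda A, B: (not B and A) or A
minimized = lambda A, B: A

# Equivalent iff they agree on every row of the truth table.
print(all(original(A, B) == minimized(A, B)
          for A, B in product([False, True], repeat=2)))   # prints True
The same exhaustive check works for any manageable number of variables.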
Supplement: the equation calculated with the KV map is the shortest form possible. There are still some transformations you can do with Boolean algebra, but they will not make the equations any simpler.

Proving if n = m and m = o, then n + m = m + o in Idris?

I am trying to improve my Idris skills by looking at some of the exercises in Software Foundations (originally for Coq, but I am hoping the translation to Idris is not too bad). I am having trouble with the "Exercise: 1 star (plus_id_exercise)", which reads:
Remove "Admitted." and fill in the proof.
Theorem plus_id_exercise : ∀ n m o : nat,
n = m → m = o → n + m = m + o.
Proof.
(* FILL IN HERE *) Admitted.
I have translated to the following problem in Idris:
plusIdExercise : (n : Nat) ->
(m : Nat) ->
(o : Nat) ->
(n == m) = True ->
(m == o) = True ->
(n + m == m + o) = True
I am trying to perform a case by case analysis and I am having a lot of issues. The first case:
plusIdExercise Z Z Z n_eq_m n_eq_o = Refl
seems to work, but then I want to say for instance:
plusIdExercise (S n) Z Z n_eq_m n_eq_o = absurd
But this doesn't work and gives:
When checking right hand side of plusIdExercise with expected type
S n + 0 == 0 + 0 = True
Type mismatch between
t -> a (Type of absurd)
and
False = True (Expected type)
Specifically:
Type mismatch between
\uv => t -> uv
and
(=) False
Unification failure
I am trying to say this case can never happen because n == m, but Z (= m) is never the successor of any number (n). Is there anything I can do to fix this? Am I approaching this correctly? I am somewhat confused.
I would argue that the translation is not entirely correct. The lemma stated in Coq does not use boolean equality on natural numbers; it uses the so-called propositional equality. In Coq you can ask the system to give you more information about things:
Coq < About "=".
eq : forall A : Type, A -> A -> Prop
The above means that = (which is syntactic sugar for the eq type) takes two arguments of some type A and produces a proposition, not a boolean value.
That means that a direct translation would be the following snippet
plusIdExercise : (n = m) -> (m = o) -> (n + m = m + o)
plusIdExercise Refl Refl = Refl
And when you pattern-match on values of the equality type, Idris essentially rewrites terms according to the corresponding equation (it's roughly equivalent to Coq's rewrite tactic).
By the way, you might find the Software Foundations in Idris project useful.

Project and restrict in relational algebra

The definition of the SELECT and PROJECT operators used below may be found in Chapter 6 of "Relational Database Design and Implementation", 4th Edition, Harrington, Jan L.
The SQL equivalent of the PROJECT (resp. RESTRICT) is SELECT (resp. WHERE with a predicate to reduce the number of elements in a relation). In the notation you propose (thank you for doing that), let us use b_pred for the application of the predicate "pred" to a relation to reduce its elements. Then a(b_pred(relation)) = b_pred(a(relation)) iff a does not eliminate the column(s) supporting the predicate "pred". Furthermore, if b_pred uses a column which is removed by a, the RHS expression is incorrect.
Question: is the result always correct when the RESTRICT operation is performed first? It would be great to have a formal proof of that statement.
Follow-up: why would we ever be interested at all in considering the opposite order of the operations? I would guess that performance is the only possible reason, but I am not sure.
Thanks for your responses!
The two rules that can be applied to change the order of restrictions and projections while maintaining the semantics of the expression are the following:
πY(σΦX(E)) = σΦX(πY(E)), if X ⊆ Y
otherwise, if the condition concerns attributes X ⊈ Y:
πY(σΦX(E)) = πY(σΦX(πX∪Y(E)))
where E is any relational expression producing a relation with a set of attributes that includes X and Y, πX(E) is the projection of E over the set of attributes X and σΦX(E) is the restriction over E with a condition ΦX over the set of attributes X.
These two rules are equivalence rules, so they can be applied in both directions. In general the optimizer tries to apply the restrictions before any other operation, if possible, and then to apply the projections before the joins.
Added
The first rule says that, if you have a relation with attributes Z = Y ∪ W, performing a restriction over a subset of the attributes of Y, and then projecting the result on Y, is equivalent to performing first the projection, and then the restriction.
This equivalence can be proved in the following way.
Given E a relation with attributes Z = Y ∪ W, the definition of restriction is:
σΦX(E) = { t | t ∈ E ∧ X ⊆ Z ∧ ΦX(t) }
that is, the set of all the tuples of E such that ΦX(t) is true.
The definition of projection is:
πY(E) = { t1 | t ∈ E ∧ Y ⊆ Z ∧ t1 = t[Y] }
that is, the set of tuples obtained by considering, for each tuple t of E, a (sub)tuple containing only the attributes Y of t.
So,
πY(σΦX(E)) = πY(E') =
{ t1 | t ∈ E' ∧ Y ⊆ Z ∧ t1 = t[Y] }
where E' = σΦX(E) = { t | t ∈ E ∧ X ⊆ Z ∧ ΦX(t) }
Combining these two formulas, we get:
πY(σΦX(E)) = { t1 | t ∈ E ∧ X ⊆ Z ∧ ΦX(t) ∧ Y ⊆ Z ∧ t1 = t[Y] }
But since we know that X ⊆ Y, we can rewrite the formula as:
πY(σΦX(E)) = { t1 | t ∈ E ∧ X ⊆ Y ⊆ Z ∧ ΦX(t) ∧ t1 = t[Y] } [1]
Starting from the other term,
σΦX(πY(E)) = σΦX(E'') = { t | t ∈ E'' ∧ X ⊆ Z ∧ ΦX(t) }
where E'' = πY(E) = { t1 | t ∈ E ∧ Y ⊆ Z ∧ t1 = t[Y] }
Again, combining these two formulas and noting that X ⊆ Y, we get:
σΦX(πY(E)) = { t1 | t ∈ E ∧ X ⊆ Y ⊆ Z ∧ ΦX(t1) ∧ t1 = t[Y] } [2]
[1] = [2] if we can show that ΦX(t) = ΦX(t[Y]), and this is true since both conditions are true or false at the same time, given that the condition concerns only the attributes X, which are present both in t and in t[Y] (since X ⊆ Y).
The second rule says that, if you have a relation with attributes Z = X ∪ Y ∪ W, with X - Y ≠ ∅, performing a restriction over the attributes of X, and then projecting the result on Y, is equivalent to performing first a projection over the attributes X ∪ Y, then the restriction, and finally a new projection over the attributes Y.
Also in this case a formal proof can be given, by reasoning in an analogous way to the above proof, but it is omitted here for brevity.
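To see the first rule in action on a concrete example, here is a small sketch in Python, modelling a relation as a list of attribute-to-value mappings; the attribute names, sample tuples and condition below are invented purely for illustration:
# A toy relation E with attributes Z = {id, name, qty}.
E = [
    {"id": 1, "name": "a", "qty": 10},
    {"id": 2, "name": "b", "qty": 5},
    {"id": 3, "name": "c", "qty": 10},
]

def project(rel, attrs):
    # pi_attrs(rel): keep only the listed attributes, dropping duplicate tuples.
    result = []
    for t in rel:
        sub = {a: t[a] for a in attrs}
        if sub not in result:
            result.append(sub)
    return result

def restrict(rel, phi):
    # sigma_phi(rel): keep only the tuples satisfying the condition phi.
    return [t for t in rel if phi(t)]

Y = ["id", "qty"]                 # projection attributes Y
phi = lambda t: t["qty"] == 10    # condition over X = {qty}, so X ⊆ Y

lhs = project(restrict(E, phi), Y)   # πY(σΦX(E))
rhs = restrict(project(E, Y), phi)   # σΦX(πY(E))
print(lhs == rhs)                    # True, as the first rule predicts
If Y did not contain qty, the right-hand side would fail because the condition would refer to an attribute that the projection has already removed; that is exactly the situation the second rule handles by first projecting on X ∪ Y.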

Proof on less than and less or equal on nat

Assuming the following definitions (the first two are taken from http://www.cis.upenn.edu/~bcpierce/sf/Basics.html):
Fixpoint beq_nat (n m : nat) : bool :=
  match n with
  | O => match m with
         | O => true
         | S m' => false
         end
  | S n' => match m with
            | O => false
            | S m' => beq_nat n' m'
            end
  end.

Fixpoint ble_nat (n m : nat) : bool :=
  match n with
  | O => true
  | S n' => match m with
            | O => false
            | S m' => ble_nat n' m'
            end
  end.

Definition blt_nat (n m : nat) : bool :=
  if andb (ble_nat n m) (negb (beq_nat n m)) then true else false.
I would like to prove the following:
Lemma blt_nat_flip0 : forall (x y : nat),
blt_nat x y = false -> ble_nat y x = true.
Lemma blt_nat_flip : forall (x y : nat),
blt_nat x y = false -> beq_nat x y = false -> blt_nat y x = true.
The furthest I was able to get is proving blt_nat_flip assuming blt_nat_flip0. I spent a lot of time and I am almost there, but overall it seems more complex than it should be. Does anybody have a better idea of how to prove the two lemmas?
Here is my attempt so far:
Lemma beq_nat_symmetric : forall (x y : nat),
beq_nat x y = beq_nat y x.
Proof.
intros x. induction x.
intros y. simpl. destruct y.
reflexivity. reflexivity.
intros y. simpl. destruct y.
reflexivity.
simpl. apply IHx.
Qed.
Lemma and_negb_false : forall (b1 b2 : bool),
b2 = false -> andb b1 (negb b2) = b1.
Proof.
intros. rewrite -> H. unfold negb. destruct b1.
simpl. reflexivity.
simpl. reflexivity.
Qed.
Lemma blt_nat_flip0 : forall (x y : nat),
blt_nat x y = false -> ble_nat y x = true.
Proof.
intros x.
induction x.
intros. destruct y.
simpl. reflexivity.
simpl. inversion H.
intros. destruct y. simpl. reflexivity.
simpl. rewrite -> IHx. reflexivity.
(* I am giving up for now at this point ... *)
Admitted.
Lemma blt_nat_flip : forall (x y : nat),
blt_nat x y = false -> beq_nat x y = false ->
blt_nat y x = true.
Proof.
intros.
unfold blt_nat.
rewrite -> beq_nat_symmetric. rewrite -> H0.
rewrite -> and_negb_false.
replace (ble_nat y x) with true.
reflexivity.
rewrite -> blt_nat_flip0. reflexivity. apply H. reflexivity.
Qed.
Coq seems to have trouble doing an inversion on H in the last case of your induction, but if you unfold blt_nat first, it seems to work as intended.