I have these three tables:
Customer
Rent
Book
The cardinality between the Customer and the Rent table is (1,6) and the cardinality between the Rent and the Book table is (1,infinity).
Using relational calculus syntax, I would define a (0,1) cardinality like this:
∀x∀y∀z(rent(x,y) ∧ rent(x,z) → y = z)
But how can I define a (1,6) cardinality?
You could express it (in predicate calculus, as you have expressed your question) in this way:
∀ x (x ∈ Customers → ∃ y rent(x,y))
∧
∀ x (x ∈ Customers → cardinality ({ y | rent(x,y)}) ≤ 6)
If you prefer, you can write the condition cardinality(set) ≤ n with a complex logical expression of the form:
∀y1∀y2 ... ∀yn (rent(x,y1) ∧ rent(x,y2) ∧ ... ∧ rent(x,yn)
∧ y1 ≠ y2 ∧ ... (all the possible pairs) ...
→ ∄ ys (rent(x,ys) ∧ ys ≠ y1 ∧ ys ≠ y2 ∧ ... ∧ ys ≠ yn))
or in a more concise way (see the comment by @philipxy):
∀y0∀y1 ... ∀yn (rent(x,y0) ∧ rent(x,y1) ∧ ... ∧ rent(x,yn) → y0 = y1 ∨ y0 = y2 ∨ ... (some pair among y0 … yn is equal))
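Outside of pure logic, the same (1,6) constraint can be sanity-checked procedurally. A minimal Python sketch (the table contents and names are made up for illustration):

```python
# Hypothetical data: rent is a set of (customer, book) pairs.
customers = {"alice", "bob"}
rent = {("alice", "b1"), ("alice", "b2"), ("bob", "b3")}

def satisfies_1_6(customers, rent):
    """Every customer rents at least 1 and at most 6 books."""
    for c in customers:
        n = sum(1 for (x, _) in rent if x == c)
        if not (1 <= n <= 6):
            return False
    return True

print(satisfies_1_6(customers, rent))  # True: both customers rent 1-2 books
```

The lower bound 1 corresponds to the ∃ y rent(x,y) conjunct, the upper bound 6 to the cardinality condition.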
I have the following relation tables:
DRINKS (PERSON,COFFEE)
HAS_BEAN (COFFEE,BEAN_NAME)
BEAN (BEAN_NAME, FROM_LOCATION)
Using Relational Calculus to represent the queries:
a) Which locations contribute a bean to the coffee named “Black”?
My Solution: {B.Location | B ∈ Beans ∧ HB ∈ Has_Bean ∧ HB.Coffee="Black" ∧ HB.Bean_Name=B.Bean_Name }
b) Who has never tried a coffee containing a bean from the location “Canada”?
My Solution: {D.Drinker | D ∈ Drinker ∧ B ∈ Beans ∧ HB ∈ Has_Bean ∧ B.From_Location="Canada" ∧ HB.Bean_Name = B.Bean_Name ∧ D.Coffee != HB.Coffee }
c) Which people drink all of the coffees containing the bean “Mocha”?
My Solution: { D.Drinker | D ∈ Drinker ∧ HB ∈ Has_Bean ∧ B.Bean_Name="Mocha" ∧ D.Coffee = HB.Coffee}
Using Relational Algebra to represent the queries:
a) Which beans are in the coffee named “Mocha”?
My Solution: ΠBean_Name (σCoffee="Mocha" (HAS_BEAN))
b) Which people drink a coffee that contains the “Bourbon” coffee bean?
My Solution: ΠPersons (σD.Coffee=HB.Coffee ∧ HB.Bean_Name="Bourbon" (DRINKER*HAS_BEAN))
c) Who has never drunk a coffee containing a bean from the location “Colombia”?
My Solution:
RB1 <- ΠHB.Coffee ( σHB.Bean=B.Bean ∧ B.Location="Colombia" (HAS_BEAN*BEAN))
ΠPersons (σD.Coffee != RB1.Coffee (DRINKER*RB1))
Am I correct ? or am I missing any cases ?
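One way to gain confidence in (or find counterexamples to) such expressions is to run them against small sample relations. A Python sketch for query (a) of the calculus part; the sample data is invented for illustration:

```python
# Hypothetical sample data mirroring the three relations.
drinks = {("ann", "Black"), ("ben", "Mocha")}              # (PERSON, COFFEE)
has_bean = {("Black", "arabica"), ("Mocha", "robusta")}    # (COFFEE, BEAN_NAME)
bean = {("arabica", "Brazil"), ("robusta", "Vietnam")}     # (BEAN_NAME, FROM_LOCATION)

# a) Which locations contribute a bean to the coffee named "Black"?
# Direct transcription of the calculus expression as a set comprehension.
locations = {loc for (bn, loc) in bean
             for (coffee, bn2) in has_bean
             if coffee == "Black" and bn2 == bn}
print(locations)  # {'Brazil'}
```

The same technique, with data chosen to exercise the "never" and "all" cases, is a good way to test queries (b) and (c) for missing quantifiers.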
Note that this is an example question standing in for all similar questions; please don't answer only the question below, but address the general problem of optimizing boolean expressions.
I have this boolean equation: [boolean equation], e.g. (!B && A) || A
Is there any better way to write this?
A boolean equation follows simple calculation rules, known as Boolean algebra.
With those rules, you can simplify any boolean equation with some hard work:
Associativity of ∨ : x ∨ ( y ∨ z ) = ( x ∨ y ) ∨ z
Associativity of ∧ : x ∧ ( y ∧ z ) = ( x ∧ y ) ∧ z
Commutativity of ∨ : x ∨ y = y ∨ x
Commutativity of ∧ : x ∧ y = y ∧ x
Distributivity of ∧ over ∨ : x ∧ ( y ∨ z ) = ( x ∧ y ) ∨ ( x ∧ z )
Identity for ∨ : x ∨ 0 = x
Identity for ∧ : x ∧ 1 = x
Annihilator for ∧ : x ∧ 0 = 0
The following laws hold in Boolean Algebra, but not in ordinary algebra:
Annihilator for ∨ : x ∨ 1 = 1
Idempotence of ∨ : x ∨ x = x
Idempotence of ∧ : x ∧ x = x
Absorption 1: x ∧ ( x ∨ y ) = x
Absorption 2: x ∨ ( x ∧ y ) = x
Distributivity of ∨ over ∧ : x ∨ ( y ∧ z ) = ( x ∨ y ) ∧ ( x ∨ z )
Complementation 1 : x ∧ ¬x = 0
Complementation 2 : x ∨ ¬x = 1
Double negation : ¬(¬x) = x
De Morgan 1 : ¬x ∧ ¬y = ¬(x ∨ y)
De Morgan 2 : ¬x ∨ ¬y = ¬(x ∧ y)
Note that
∨ represents OR (||)
∧ represents AND (&&)
¬ represents NOT (!)
= represents EQUALS (==)
But as your equation gets more complex, doing this by hand becomes almost impossible. The first step towards a solution is the truth table. You can also create truth tables online, for example with this tool.
From the truth table, you can create a KV-map (Karnaugh map). There are also online tools to create KV-maps (I recommend this one). How to fill in the map according to your truth table, and how to read a boolean equation back off the map, are not the topic here; the recommended tool calculates it for you.
In conclusion: if you want to optimize your boolean equations, create a truth table for your equation, fill in a KV-map, and replace your equation with the calculated shortest possible form.
Supplement: the equation calculated with the KV-map is a minimal sum-of-products form. There are still some transformations you can apply with Boolean algebra, but they will not make the equation any easier to read.
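For the example from the question, the absorption law already gives (!B && A) || A = A. A brute-force truth-table check in Python (a small sketch) confirms the simplification:

```python
from itertools import product

def equivalent(f, g, nvars):
    """Check that two boolean functions agree on every input combination."""
    return all(f(*v) == g(*v) for v in product([False, True], repeat=nvars))

original = lambda a, b: (not b and a) or a
simplified = lambda a, b: a  # absorption: (!B && A) || A == A

print(equivalent(original, simplified, 2))  # True
```

This is exactly what a truth-table comparison does, just automated; for n variables it checks all 2^n rows.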
The definition of the SELECT and PROJECT operators used below may be found in Chapter 6 of "Relational Database Design and Implementation", 4th Edition, Harrington, Jan L.
The SQL equivalent of PROJECT is SELECT, and the equivalent of RESTRICT is WHERE (with a predicate that reduces the number of rows in a relation). In the notation you propose (thank you for doing that), let us use b_pred for the application of the predicate "pred" to a relation to reduce its rows. Then a(b_pred(relation)) = b_pred(a(relation)) iff a does not eliminate the column(s) supporting the predicate "pred". Conversely, if b_pred uses a column which is removed by a, the RHS expression is incorrect.
Question: is the result always correct when the RESTRICT operation is performed first? It would be great to have a formal proof of that statement.
Follow-up: why would we ever be interested at all in considering the opposite order of the operations? I would guess that performance is the only possible reason, but I am not sure.
Thanks for your responses!
The two rules that can be applied to change the order of restrictions and projections maintaining the semantics of the expression are the following:
πY(σΦX(E)) = σΦX(πY(E)), if X ⊆ Y
otherwise, if the condition concerns attributes X ⊈ Y:
πY(σΦX(E)) = πY(σΦX(πX∪Y(E)))
where E is any relational expression producing a relation with a set of attributes that includes X and Y, πX(E) is the projection of E over the set of attributes X and σΦX(E) is the restriction over E with a condition ΦX over the set of attributes X.
These two rules are equivalence rules, so they can be applied in both directions. In general the optimizer tries to apply the restrictions before any other operation, if possible, and then to apply the projections before the joins.
Added
The first rule says that if you have a relation with attributes Z = Y ∪ W, performing a restriction over a subset of the attributes of Y, and then projecting the result on Y, is equivalent to performing first the projection, and then the restriction.
This equivalence can be proved in the following way.
Given E a relation with attributes Z = Y ∪ W, the definition of restriction is:
σΦX(E) = { t | t ∈ E ∧ X ⊆ Z ∧ ΦX(t) }
that is, the set of all the tuples of E such that ΦX(t) is true.
The definition of projection is:
πY(E) = { t1 | t ∈ E ∧ Y ⊆ Z ∧ t1 = t[Y] }
that is the set of tuples obtained by considering, for each tuple t of E, a (sub)tuple containing only the attributes Y of t.
So,
πY(σΦX(E)) = πY(E') =
{ t1 | t ∈ E' ∧ Y ⊆ Z ∧ t1 = t[Y] }
where E' = σΦX(E) = { t | t ∈ E ∧ X ⊆ Z ∧ ΦX(t) }
Combining these two formulas, we get:
πY(σΦX(E)) = { t1 | t ∈ E ∧ X ⊆ Z ∧ ΦX(t) ∧ Y ⊆ Z ∧ t1 = t[Y] }
But since we know that X ⊆ Y, we can rewrite the formula as:
πY(σΦX(E)) = { t1 | t ∈ E ∧ X ⊆ Y ⊆ Z ∧ ΦX(t) ∧ t1 = t[Y] } [1]
Starting from the other term,
σΦX(πY(E)) = σΦX(E'') = { t | t ∈ E'' ∧ X ⊆ Z ∧ ΦX(t) }
where E'' = πY(E) = { t1 | t ∈ E ∧ Y ⊆ Z ∧ t1 = t[Y] }
Again, combining these two formulas and noting that X ⊆ Y, we get:
σΦX(πY(E)) = { t1 | t ∈ E ∧ X ⊆ Y ⊆ Z ∧ ΦX(t1) ∧ t1 = t[Y] } [2]
[1] = [2] if we can show that ΦX(t) = ΦX(t[Y]), and this is true since both conditions are true or false at the same time, given that the condition concerns only the attributes X, which are present both in t and in t[Y] (since X ⊆ Y).
The second rule says that, if you have a relation with attributes Z = X ∪ Y ∪ W, with X - Y ≠ ∅, performing a restriction over the attributes X, and then projecting the result on Y, is equivalent to performing first a projection over the attributes X ∪ Y, then performing the restriction, and finally performing a new projection over the attributes Y.
Also in this case a formal proof can be given, by reasoning in an analogous way to the above proof, but it is omitted here for brevity.
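Both rules can also be checked mechanically on small relations. A Python sketch of the first rule, with relations as sets of attribute/value tuples (the relation contents and attribute names are invented):

```python
def project(rel, attrs):
    """π: keep only the given attributes (duplicates collapse, as in sets)."""
    return {tuple((a, dict(t)[a]) for a in attrs) for t in rel}

def restrict(rel, pred):
    """σ: keep only the tuples satisfying the predicate."""
    return {t for t in rel if pred(dict(t))}

# E has attributes Z = {a, b, c}; we restrict on X = {a} and project on Y = {a, b}.
E = {(("a", 1), ("b", 10), ("c", 100)),
     (("a", 2), ("b", 20), ("c", 200))}
pred = lambda t: t["a"] == 1          # condition ΦX over X = {a}, and X ⊆ Y
Y = ("a", "b")

lhs = project(restrict(E, pred), Y)   # πY(σΦX(E))
rhs = restrict(project(E, Y), pred)   # σΦX(πY(E))
print(lhs == rhs)  # True
```

Such a check over sample data is not a proof, of course, but it is a quick way to catch a rule that violates the X ⊆ Y side condition.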
I'm trying to prove the following:
1-pow : ∀ {n : ℕ} → 1 pow n ≡ 1
1-pow {zero} = refl
1-pow {suc x} = {!!}
I'm brand new to Agda and don't even really know where to start. Any suggestions or guidance? It's obviously very easy to prove on paper, but I am unsure of what to tell Agda.
I defined my pow function as follows:
_pow_ : ℕ → ℕ → ℕ
x pow zero = 1
x pow (suc zero) = x
x pow (suc y) = x * (x pow y)
When you pattern match on n in 1-pow and find out it is zero, Agda will take a look at the definition of _pow_ and check if one of the function clauses matches. The first one does, so it will apply that definition and 1 pow zero becomes 1. 1 is obviously equal to 1, so refl will work for the proof.
What about the case when n was suc x? Here's the problem: Agda cannot commit to the second clause (because x could be zero) nor the third clause (because x could be suc y for some y). So you have to go one step further to make sure Agda applies the definition of _pow_:
1-pow : ∀ {n : ℕ} → 1 pow n ≡ 1
1-pow {zero} = refl
1-pow {suc zero} = {!!}
1-pow {suc (suc x)} = {!!}
Let's check out the type of the first hole. Agda tells us it is 1 ≡ 1, so we can use refl again. The last one is a bit trickier: we are supposed to produce something of type 1 * 1 pow (suc x) ≡ 1. Assuming you are using the standard definition of _*_ (i.e. recursion on the left argument and repeated addition, such as the one in the standard library), this should reduce to 1 pow (suc x) + 0 ≡ 1. The induction hypothesis (that is, 1-pow applied to suc x) tells us that 1 pow (suc x) ≡ 1.
So we are almost there, but we don't know that n + 0 ≡ n (that's because addition is defined by recursion on the left argument, so we can't simplify this expression). One option is to prove this fact, which I leave as an exercise. Here's a hint, though: you might find this function useful.
cong : ∀ {a b} {A : Set a} {B : Set b}
(f : A → B) {x y} → x ≡ y → f x ≡ f y
cong f refl = refl
It's already part of the Relation.Binary.PropositionalEquality module, so you don't need to define it yourself.
So, to recap: we know that n + 0 ≡ n and 1 pow (suc x) ≡ 1 and we need 1 pow (suc x) + 0 ≡ 1. These two facts fit together quite nicely - the equality is transitive, so we should be able to merge 1 pow (suc x) + 0 ≡ 1 pow (suc x) and 1 pow (suc x) ≡ 1 into one proof and indeed, this is the case:
1-pow {suc (suc x)} = trans (+0 (1 pow suc x)) (1-pow {suc x})
And that's it!
Let me mention a few other approaches.
The whole proof could also be done using a proof that 1 * x ≡ x, though this is hardly different from what we did before.
You could simplify _pow_ to:
_pow_ : ℕ → ℕ → ℕ
x pow zero = 1
x pow (suc y) = x * (x pow y)
This is slightly more convenient to work with. The proof would be changed accordingly (i.e. it wouldn't have the second clause of the original proof).
And lastly, you could do this:
1-pow : ∀ {n : ℕ} → 1 pow n ≡ 1
1-pow {zero} = refl
1-pow {suc zero} = refl
1-pow {suc (suc x)} = cong (λ x → x + 0) (1-pow {suc x})
Try to figure out why that works! If you have any problems, let me know in the comments and I'll help you.
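As a quick sanity check outside Agda, the (simplified, two-clause) recursive definition of _pow_ can be mirrored in Python and the statement tested on sample values; a sketch only, which of course proves nothing:

```python
def pow_(x, n):
    """Mirror of the simplified Agda _pow_: recursion on the exponent.
    x pow zero = 1; x pow (suc y) = x * (x pow y)."""
    if n == 0:
        return 1
    return x * pow_(x, n - 1)

# The statement 1-pow, checked for the first 100 exponents.
print(all(pow_(1, n) == 1 for n in range(100)))  # True
```

Testing a statement numerically before attempting the proof is a cheap way to avoid spending time on a false lemma.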
Assuming the following definitions (the first two are taken from http://www.cis.upenn.edu/~bcpierce/sf/Basics.html):
Fixpoint beq_nat (n m : nat) : bool :=
match n with
| O => match m with
| O => true
| S m' => false
end
| S n' => match m with
| O => false
| S m' => beq_nat n' m'
end
end.
Fixpoint ble_nat (n m : nat) : bool :=
match n with
| O => true
| S n' =>
match m with
| O => false
| S m' => ble_nat n' m'
end
end.
Definition blt_nat (n m : nat) : bool :=
if andb (ble_nat n m) (negb (beq_nat n m)) then true else false.
I would like to prove the following:
Lemma blt_nat_flip0 : forall (x y : nat),
blt_nat x y = false -> ble_nat y x = true.
Lemma blt_nat_flip : forall (x y : nat),
blt_nat x y = false -> beq_nat x y = false -> blt_nat y x = true.
The furthest I was able to get is to prove blt_nat_flip assuming blt_nat_flip0. I spent a lot of time, and I am almost there, but overall it seems more complex than it should be. Does anybody have a better idea of how to prove the two lemmas?
Here is my attempt so far:
Lemma beq_nat_symmetric : forall (x y : nat),
beq_nat x y = beq_nat y x.
Proof.
intros x. induction x.
intros y. simpl. destruct y.
reflexivity. reflexivity.
intros y. simpl. destruct y.
reflexivity.
simpl. apply IHx.
Qed.
Lemma and_negb_false : forall (b1 b2 : bool),
b2 = false -> andb b1 (negb b2) = b1.
Proof.
intros. rewrite -> H. unfold negb. destruct b1.
simpl. reflexivity.
simpl. reflexivity.
Qed.
Lemma blt_nat_flip0 : forall (x y : nat),
blt_nat x y = false -> ble_nat y x = true.
Proof.
intros x.
induction x.
intros. destruct y.
simpl. reflexivity.
simpl. inversion H.
intros. destruct y. simpl. reflexivity.
simpl. rewrite -> IHx. reflexivity.
(* I am giving up for now at this point ... *)
Admitted.
Lemma blt_nat_flip : forall (x y : nat),
blt_nat x y = false -> beq_nat x y = false ->
blt_nat y x = true.
Proof.
intros.
unfold blt_nat.
rewrite -> beq_nat_symmetric. rewrite -> H0.
rewrite -> and_negb_false.
replace (ble_nat y x) with true.
reflexivity.
rewrite -> blt_nat_flip0. reflexivity. apply H. reflexivity.
Qed.
Coq seems to have trouble doing an inversion on H in the last case of your induction, but if you unfold blt_nat first, it works as intended.
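As a sanity check before (or while) fighting the tactics, the three boolean functions and both lemmas can be mirrored in Python and tested by brute force over small naturals. A sketch; it does not replace the Coq proof:

```python
def beq_nat(n, m):
    """Structural equality on naturals, mirroring the Coq Fixpoint."""
    if n == 0:
        return m == 0
    return m != 0 and beq_nat(n - 1, m - 1)

def ble_nat(n, m):
    """n <= m, mirroring the Coq Fixpoint."""
    if n == 0:
        return True
    return m != 0 and ble_nat(n - 1, m - 1)

def blt_nat(n, m):
    """n < m: ble_nat n m andb negb (beq_nat n m)."""
    return ble_nat(n, m) and not beq_nat(n, m)

# blt_nat_flip0: blt_nat x y = false -> ble_nat y x = true
# blt_nat_flip:  blt_nat x y = false -> beq_nat x y = false -> blt_nat y x = true
ok = all(
    (blt_nat(x, y) or ble_nat(y, x))                      # flip0 as a disjunction
    and (blt_nat(x, y) or beq_nat(x, y) or blt_nat(y, x)) # flip as a disjunction
    for x in range(20) for y in range(20)
)
print(ok)  # True
```

Each implication P -> Q is encoded as the equivalent (not P) or Q; a counterexample here would mean the lemma is simply false, saving a doomed proof attempt.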