I have the following relation schemas:
DRINKS (PERSON,COFFEE)
HAS_BEAN (COFFEE,BEAN_NAME)
BEAN (BEAN_NAME, FROM_LOCATION)
Using Relational Calculus to represent the queries:
a) Which locations contribute a bean to the coffee named "Black"?
My Solution: {B.Location | B ∈ Beans ∧ HB ∈ Has_Bean ∧ HB.Coffee="Black" ∧ HB.Bean_Name=B.Bean_Name }
b) Who has never tried a coffee containing a bean from the location "Canada"?
My Solution: {D.Drinker | D ∈ Drinker ∧ B ∈ Beans ∧ HB ∈ Has_Bean ∧ B.From_Location="Canada" ∧ HB.Bean_Name = B.Bean_Name ∧ D.Coffee != HB.Coffee }
c) Which people drink all of the coffees containing the bean "Mocha"?
My Solution: { D.Drinker | D ∈ Drinker ∧ HB ∈ Has_Bean ∧ B.Bean_Name="Mocha" ∧ D.Coffee = HB.Coffee}
Using Relational Algebra to represent the queries:
a) Which beans are in the coffee named "Mocha"?
My Solution: ΠBean_Name (σCoffee="Mocha" (HAS_BEAN))
b) Which people drink a coffee that contains the "Bourbon" coffee bean?
My Solution: ΠPersons (σD.Coffee=HB.Coffee ∧ HB.Bean_Name="Bourbon" (DRINKER*HAS_BEAN))
c) Who has never drunk a coffee containing a bean from the location "Colombia"?
My Solution:
RB1 <- ΠHB.Coffee (σHB.Bean=B.Bean ∧ B.Location="Colombia" (HAS_BEAN*BEAN))
ΠPersons (σD.Coffee != RB1.Coffee (DRINKER*RB1))
Am I correct, or am I missing any cases?
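For comparison, part (a) of the relational calculus exercise is often written with the quantifier made explicit (a sketch only, using the relation and attribute names exactly as declared in the schema above):

{ B.FROM_LOCATION | B ∈ BEAN ∧ ∃HB ∈ HAS_BEAN (HB.COFFEE = "Black" ∧ HB.BEAN_NAME = B.BEAN_NAME) }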
I'm having difficulty in converting the below propositional sentence to CNF.
(p ∨ q) → r ∧ (r → p)
I tried using Implication law and De Morgan's law for the first few steps, and I got this result:
(~p ∧ ~q) ∨ (r ∧ (~r ∨ p))
It would be best if you could list the steps along with the law applied at each one.
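For reference, a sketch of the remaining steps, assuming the intended parse is (p ∨ q) → (r ∧ (r → p)) (standard precedence, with ∧ binding tighter than →):

1. Implication law (applied twice): ~(p ∨ q) ∨ (r ∧ (~r ∨ p))
2. De Morgan's law: (~p ∧ ~q) ∨ (r ∧ (~r ∨ p)) (the point reached above)
3. Distributivity of ∨ over ∧: ((~p ∧ ~q) ∨ r) ∧ ((~p ∧ ~q) ∨ ~r ∨ p)
4. Distributivity again, inside each conjunct: (~p ∨ r) ∧ (~q ∨ r) ∧ (~p ∨ ~r ∨ p) ∧ (~q ∨ ~r ∨ p)
5. Complementation: (~p ∨ ~r ∨ p) is a tautology and can be dropped, leaving the CNF (~p ∨ r) ∧ (~q ∨ r) ∧ (p ∨ ~q ∨ ~r)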
Note that this is an example question standing in for all similar questions; please don't answer only the question below, but address the general problem of optimizing boolean expressions.
I have this boolean equation: [boolean equation], e.g. (!B && A) || A
Is there a better way to write this?
A boolean equation follows simple calculation rules, known as Boolean algebra.
With those rules, you can simplify any boolean equation by hand, with some hard work (a worked example for the expression above follows the operator key):
Associativity of ∨ : x ∨ ( y ∨ z ) = ( x ∨ y ) ∨ z
Associativity of ∧ : x ∧ ( y ∧ z ) = ( x ∧ y ) ∧ z
Commutativity of ∨ : x ∨ y = y ∨ x
Commutativity of ∧ : x ∧ y = y ∧ x
Distributivity of ∧ over ∨ : x ∧ ( y ∨ z ) = ( x ∧ y ) ∨ ( x ∧ z )
Identity for ∨ : x ∨ 0 = x
Identity for ∧ : x ∧ 1 = x
Annihilator for ∧ : x ∧ 0 = 0
The following laws hold in Boolean Algebra, but not in ordinary algebra:
Annihilator for ∨ : x ∨ 1 = 1
Idempotence of ∨ : x ∨ x = x
Idempotence of ∧ : x ∧ x = x
Absorption 1: x ∧ ( x ∨ y ) = x
Absorption 2: x ∨ ( x ∧ y ) = x
Distributivity of ∨ over ∧ : x ∨ ( y ∧ z ) = ( x ∨ y ) ∧ ( x ∨ z )
Complementation 1 : x ∧ ¬x = 0
Complementation 2 : x ∨ ¬x = 1
Double negation : ¬(¬x) = x
De Morgan 1 : ¬x ∧ ¬y = ¬(x ∨ y)
De Morgan 2 : ¬x ∨ ¬y = ¬(x ∧ y)
Note that
∨ represents OR (||)
∧ represents AND (&&)
¬ represents NOT (!)
= represents EQUALS (==)
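Applied to the expression from the question, absorption (together with commutativity) already does the whole job:

(!B && A) || A
= A ∨ (A ∧ ¬B)        (rewritten in the symbols above, using commutativity of ∨ and ∧)
= A                   (Absorption 2)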
But as your equation grows more complex, doing this by hand becomes almost impossible. The first step towards a systematic solution is the truth table of the expression.
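For instance, here is a minimal Haskell sketch (the function names are my own, not part of the original answer) that prints the truth table of the question's expression next to its simplified form A:

-- Truth table for (!B && A) || A, printed next to the simplified form A.
original :: Bool -> Bool -> Bool
original a b = (not b && a) || a

simplified :: Bool -> Bool -> Bool
simplified a _ = a

main :: IO ()
main = mapM_ row [(a, b) | a <- [False, True], b <- [False, True]]
  where
    row (a, b) = putStrLn $
      "A=" ++ show a ++ " B=" ++ show b ++
      "  (!B && A) || A = " ++ show (original a b) ++
      "  A = " ++ show (simplified a b)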
You can also create truth tables with an online truth-table generator.
From the truth table, you can create a KV-map (Karnaugh map). There are also online tools for creating KV-maps.
How to fill in those maps according to your truth table is not the topic here.
How to derive a boolean equation from the KV-map is also not the topic here, but the online tools can calculate it for you.
In conclusion: if you want to optimize a boolean equation, create a truth table for it, fill in a KV-map from the table, and replace your equation with the minimal form read off the map.
Supplement: the equation read off the KV-map is a minimal sum-of-products form. You can still apply further Boolean-algebra transformations, but they will not make the equation any easier to read.
I have these three tables:
Customer
Rent
Book
The cardinality between the Customer and the Rent table is (1,6) and the cardinality between the Rent and the Book table is (1,infinity).
Using relational calculus's syntax, I would define a (0,1) cardinality like this:
∀x∀y∀z(rent(x,y)∧rent(x,z) → y =z)
But how can I define a (1,6) cardinality?
You could express it (in predicate calculus, as you have expressed your question) in this way:
∀ x (x ∈ Customers → ∃ y rent(x,y))
∧
∀ x (x ∈ Customers → cardinality ({ y | rent(x,y)}) ≤ 6)
If you prefer, you can write the condition cardinality(set) ≤ n with a complex logical expression of the form:
∀y1 ∀y2 ... ∀yn (rent(x,y1) ∧ rent(x,y2) ∧ ... ∧ rent(x,yn)
∧ y1 ≠ y2 ∧ ... (all the possible pairs) ...
→ ∄ ys (rent(x,ys) ∧ ys ≠ y1 ∧ ys ≠ y2 ∧ ... ∧ ys ≠ yn))
or in a more concise way (see the note of #philipxy):
∀y0 ∀y1 ... ∀yn (rent(x,y0) ∧ rent(x,y1) ∧ ... ∧ rent(x,yn) → y0 = y1 ∨ y0 = y2 ∨ ... (all the possible pairs) ... )
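Putting the two parts together for the concrete (1,6) case (only a sketch; with the concise form the upper bound needs seven quantified variables y0, ..., y6):

∀x (x ∈ Customers → ∃y rent(x,y))
∧
∀x ∀y0 ∀y1 ... ∀y6 (x ∈ Customers ∧ rent(x,y0) ∧ rent(x,y1) ∧ ... ∧ rent(x,y6)
    → y0 = y1 ∨ y0 = y2 ∨ ... ∨ y5 = y6)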
The definition of the SELECT and PROJECT operators used below may be found in Chapter 6 of "Relational Database Design and Implementation", 4th Edition, Harrington, Jan L.
The SQL equivalent of PROJECT (resp. RESTRICT) is the SELECT clause (resp. the WHERE clause, with a predicate that reduces the number of tuples in a relation). In the notation you propose (thank you for doing that), let us write b_pred for the application of the predicate "pred" to a relation to reduce its tuples, and a for the projection. Then a(b_pred(relation)) = b_pred(a(relation)) iff a does not eliminate the column(s) supporting the predicate "pred". Furthermore, if b_pred uses a column which is removed by a, the RHS expression is incorrect.
Question: is the result always correct when the RESTRICT operation is performed first? It would be great to have a formal proof of that statement.
Follow-up: why would we ever be interested at all in considering the opposite order of the operations? I would guess that performance is the only possible reason, but I am not sure.
Thanks for your responses!
The two rules that can be applied to change the order of restrictions and projections maintaining the semantics of the expression are the following:
πY(σΦX(E)) = σΦX(πY(E)), if X ⊆ Y
otherwise, if the condition concerns attributes X ⊈ Y:
πY(σΦX(E)) = πY(σΦX(πX∪Y(E)))
where E is any relational expression producing a relation with a set of attributes that includes X and Y, πX(E) is the projection of E over the set of attributes X, and σΦX(E) is the restriction over E with a condition ΦX over the set of attributes X.
These two rules are equivalence rules, so they can be applied in both directions. In general the optimizer tries to apply the restrictions before any other operation, if possible, and then to apply the projections before the joins.
Added
The first rule says that if you have a relation with attributes Z = Y ∪ W, performing a restriction over a subset of the attributes of Y, and then projecting the result on Y, is equivalent to performing the projection first, and then the restriction.
This equivalence can be proved in the following way.
Given E a relation with attributes Z = Y ∪ W, the definition of restriction is:
σΦX(E) = { t | t ∈ E ∧ X ⊆ Z ∧ ΦX(t) }
that is, the set of all the tuples of E such that ΦX(t) is true.
The definition of projection is:
πY(E) = { t1 | t ∈ E ∧ Y ⊆ Z ∧ t1 = t[Y] }
that is, the set of tuples obtained by considering, for each tuple t of E, the (sub)tuple containing only the attributes Y of t.
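For instance, if E = { (1, 2), (3, 2) } is a made-up relation over the attributes Z = {A, B}, then σA=1(E) = { (1, 2) } and πB(E) = { (2) }; note that the projection also removes the duplicate value of B.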
So,
πY(σΦX(E)) = πY(E') =
{ t1 | t ∈ E' ∧ Y ⊆ Z ∧ t1 = t[Y] }
where E' = σΦX(E) = { t | t ∈ E ∧ X ⊆ Z ∧ ΦX(t) }
Combining these two formulas, we get:
πY(σΦX(E)) = { t1 | t ∈ E ∧ X ⊆ Z ∧ ΦX(t) ∧ Y ⊆ Z ∧ t1 = t[Y] }
But since we know that X ⊆ Y, we can rewrite the formula as:
πY(σΦX(E)) = { t1 | t ∈ E ∧ X ⊆ Y ⊆ Z ∧ ΦX(t) ∧ t1 = t[Y] } [1]
Starting from the other term,
σΦX(πY(E)) = σΦX(E'') = { t | t ∈ E'' ∧ X ⊆ Z ∧ ΦX(t) }
where E'' = πY(E) = { t1 | t ∈ E ∧ Y ⊆ Z ∧ t1 = t[Y] }
Again, combining these two formulas and noting that X ⊆ Y, we get:
σΦX(πY(E)) = { t1 | t ∈ E ∧ X ⊆ Y ⊆ Z ∧ ΦX(t1) ∧ t1 = t[Y] } [2]
[1] = [2] if we can show that ΦX(t) = ΦX(t[Y]), and this is true since both conditions are true or false at the same time, given that the condition concerns only the attributes X, which are present both in t and in t[Y] (since X ⊆ Y).
The second rule says that, if you have a relation with attributes Z = X ∪ Y ∪ W, with X - Y ≠ ∅, performing a restriction over the attributes of X, and then projecting the result on Y, is equivalent to performing first a projection over the attributes X ∪ Y, then the restriction, and finally a new projection over the attributes Y.
Also in this case a formal proof can be given, by reasoning in an analogous way to the above proof, but it is omitted here for brevity.
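To see the first rule at work on actual data, here is a minimal Haskell sketch (the relation, attribute names, and predicate are invented for illustration): the restriction tests only the attribute coffee, the projection keeps person and coffee, so X ⊆ Y and both orders of the operations give the same relation.

import Data.List (nub, sort)

-- A made-up relation E with attributes (person, coffee, year), so Z = {person, coffee, year}.
e :: [(String, String, Int)]
e = [("Ann", "Mocha", 2020), ("Bob", "Black", 2021), ("Ann", "Black", 2021)]

-- Restriction σΦX, where ΦX tests only the attribute X = {coffee}.
sigma :: [(String, String, Int)] -> [(String, String, Int)]
sigma = filter (\(_, coffee, _) -> coffee == "Black")

-- Projection πY with Y = {person, coffee}; note X ⊆ Y.
piY :: [(String, String, Int)] -> [(String, String)]
piY = nub . map (\(person, coffee, _) -> (person, coffee))

main :: IO ()
main = do
  let lhs = piY (sigma e)                              -- πY(σΦX(E))
      rhs = filter (\(_, c) -> c == "Black") (piY e)   -- σΦX(πY(E))
  print (sort lhs == sort rhs)                         -- prints True: the first rule holds here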
I'm doing a past Function Programming exam paper and have this question:
Here are two ways of writing essentially the same expression:
f (g(x,y),z,h(t))
f (g x y) z (h t)
(a) Illustrate the different structures of the two expressions by drawing them as two different kinds of tree.
(b) Define Haskell data types Bush a and Tree a to capture the two different structures.
I'm kind of stuck because I've never done anything like this in my course. It's pretty obvious from a later part that the first expression should be represented by Tree a and the second by Bush a, but I don't really know where to go from here. I guessed something like:
data Tree a = Leaf a | Node (Tree a) (Tree a)
data Bush a = Node a [Bush a]
But I don't think the Binary tree type is the right one to use. Could someone point me in the right direction?
Actually, the first expression is represented by Bush and the second by Tree.
In Haskell, g x y means that g x is applied to y; in C, g(x, y) means that g is applied to a collection of arguments — {x, y}. Therefore, in C:
f(g(x,y),z,h(t)) = Bush f [Bush g [Bush x [], Bush y []], Bush z [], Bush h [Bush t []]]
f
+--g
|  +--x
|  +--y
|
+--z
|
+--h
   +--t
And in Haskell:
f (g x y) z (h t) = App (App (App f (App (App g x) y)) z) (App h t)
              +
             / \
            /   \
           +     +
          / \   / \
         +   z h   t
        / \
       f   +
          / \
         +   y
        / \
       g   x
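For completeness, one possible way to write the two data types so that their constructors match the values used above (the constructor names, and the use of plain strings as labels, are just this sketch's choices; the exam may expect slightly different ones):

-- Rose tree: a label plus any number of children (the C-style call structure).
data Bush a = Bush a [Bush a]
  deriving Show

-- Binary application tree: an atom, or one tree applied to another (the Haskell-style structure).
data Tree a = Leaf a
            | App (Tree a) (Tree a)
  deriving Show

-- The two example expressions, with strings standing in for f, g, h, x, y, z, t.
cExpr :: Bush String
cExpr = Bush "f" [ Bush "g" [Bush "x" [], Bush "y" []]
                 , Bush "z" []
                 , Bush "h" [Bush "t" []] ]

haskellExpr :: Tree String
haskellExpr =
  App
    (App (App (Leaf "f")
              (App (App (Leaf "g") (Leaf "x")) (Leaf "y")))
         (Leaf "z"))
    (App (Leaf "h") (Leaf "t"))

main :: IO ()
main = do
  print cExpr
  print haskellExpr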