How can I prove that {F,→} is functionally complete?
I am trying to write p∧q using only those symbols but I really have no idea how to solve it.
Any ideas?
Look at the truth table of implication:

    P  Q | P → Q
    -----+------
    F  F |   T
    F  T |   T
    T  F |   F
    T  T |   T
If you fix input Q to F (false), the output is the inverse of input P.
Therefore, implication and the constant F can be combined into an inverter: ¬P is just P → F.
P → Q can be written as ¬P ∨ Q; both have the same truth table.
This shows that implication is equivalent to a disjunction with one inverted input. Undoing that inversion with the inverter shown above gives a plain disjunction (inclusive or): P ∨ Q ≡ (P → F) → Q.
Apply De Morgan's laws to ¬P ∨ Q to see that P → Q is also equivalent to ¬(P ∧ ¬Q). Negating both sides gives P ∧ ¬Q ≡ ¬(P → Q), so P ∧ Q ≡ ¬(P → ¬Q) ≡ (P → (Q → F)) → F. This shows that we can turn an implication into a conjunction, which gives you p∧q.
Disjunction plus negation, as well as conjunction plus negation, is functionally complete. Hence implication combined with the constant false is also functionally complete. Look here for a formal proof.
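If it helps to see the constructions mechanically checked, here is a small Python sketch (the helper names NOT/OR/AND are mine, purely for illustration) that verifies them by truth table:

    from itertools import product

    F = False                         # the constant "false"

    def imp(p, q):                    # material implication P -> Q
        return (not p) or q

    # The constructions from the answer above:
    def NOT(p):    return imp(p, F)                    # P -> F
    def OR(p, q):  return imp(NOT(p), q)               # (P -> F) -> Q
    def AND(p, q): return NOT(imp(p, NOT(q)))          # ((P -> (Q -> F)) -> F)

    # Verify against the built-in connectives on every input combination.
    for p, q in product([False, True], repeat=2):
        assert NOT(p)    == (not p)
        assert OR(p, q)  == (p or q)
        assert AND(p, q) == (p and q)
    print("NOT, OR and AND all check out")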
While using relation algebra (DBMS), what is the order of evaluation of the predicate?
For eg.
σA = B ^ D > 5(r)
Is A = B evaluated first (left to right) or D > 5 (right to left)?
Also, is there a precedence table in relation algebra?
Philip is correct to ask which version/which definition of RA you are using.
In Codd's original 1972 RA you couldn't combine conditions with an AND (you've used ^) like that. You'd have to write that restriction as
σA = B(r) ∩ σD > 5(r)
If you're asking these questions because you think RA is some sort of execution engine for SQL: it isn't; in fact the semantics of RA differ from SQL's in several important respects.
So if you're really asking about how a SQL query gets evaluated, I would look at the execution plan your SQL engine produces.
There is no single RA (relational algebra) or associated algebra-oriented query language. What is your textbook/tool name & edition/version? What are its relevant definitions?
Binding tells you how to parse into arguments. When there are no state changes & no undefined names, it doesn't make sense to talk about "evaluated first" or "left to right" between arguments. When names can be undefined, then there might be operators that allow defined results despite undefined arguments. E.g. C's && (aka CAND aka conditional AND), which is like AND but only evaluates its second argument if the first is true.
An algebra is a collection of values & operators. Many presentations of relational algebras confuse & conflate their relational algebra with their language/notation for writing nested calls to relational algebra operators. Sometimes the language/notation has assignment, which has nothing to do with relational algebra per se.
In relational query languages a restriction/selection formula evaluates to a mapping from a tuple to a boolean. It doesn't evaluate to a boolean. We can reasonably say that the mapping is applied to each tuple in a relation to get a boolean from it.
The standard convention in logic for formulas is that binding gets weaker from parentheses to function calls to AND to OR to IMPLIES.
Many relational algebra-oriented query languages reduce ambiguity by forcing parentheses to be present around relations in calls of some relational operators. E.g. the parentheses around r in notation like σ A = B AND D > 5 (r) are typically obligatory.
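To make the "formula is a mapping from tuple to boolean" point concrete, here is a small Python sketch (relation contents and attribute values are made up for illustration); note that the whole formula is applied to each tuple, so there is no meaningful "left to right" evaluation order between A = B and D > 5:

    # A toy relation as a list of tuples, each tuple modelled as a dict.
    r = [
        {"A": 1, "B": 1, "D": 7},
        {"A": 1, "B": 2, "D": 9},
        {"A": 3, "B": 3, "D": 2},
    ]

    # The restriction formula "A = B AND D > 5" as a mapping from tuple to boolean.
    formula = lambda t: t["A"] == t["B"] and t["D"] > 5

    # sigma[A = B AND D > 5](r): apply the mapping to every tuple, keep those mapped to true.
    print([t for t in r if formula(t)])    # [{'A': 1, 'B': 1, 'D': 7}]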
I am trying to prove ~s=>~p (not s implies not p) given the following 2 premises.
r=>s [r implies s]
(p|q)=>(r|s) [(p or q) implies (r or s)]
I have tried several ways, trying to use OR elimination or Negation Introduction, but I can't even visualize which assumptions I will need to use. Would appreciate any and all help that can be provided.
Maybe you're missing that you can combine the two givens before anything else to eliminate the r term. I don't think you need negation introduction; taking the contrapositive of a statement is sufficient.
(p|q)=>(r|s)
(p|q)=>(s|s) //r=>s
(p|q)=>s //simplify
~s=>~(p|q) //by contraposition
~s=>~p and ~s=>~q //De Morgan: ~(p|q) is ~p&~q
~s=>~p
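If you want a sanity check on that chain, here is a quick brute-force truth-table check in Python (not a natural-deduction proof, just confirmation that the premises entail the conclusion):

    from itertools import product

    implies = lambda a, b: (not a) or b

    # On every valuation, r=>s together with (p|q)=>(r|s) must force ~s=>~p.
    for p, q, r, s in product([False, True], repeat=4):
        premises = implies(r, s) and implies(p or q, r or s)
        conclusion = implies(not s, not p)
        assert (not premises) or conclusion
    print("the premises entail ~s => ~p")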
I will prove this by contradiction.
~S=>~P is logically equivalent to P=>S.
P=>S is logically equivalent to ~PvS.
Let v mean "or" and & mean "and".
Suppose ~PvS is false.
Therefore, ~(~PvS) is true. (This just means that the negation of it will be true.)
~(~PvS) = P&~S (De Morgan's Law) -----------(1)
So, if our assumption is correct, then all three statements that we have, P&~S, R=>S, and (PvQ)=>(RvS), should all be true.
(PvQ)=>(RvS) is logically equivalent to ~(PvQ)v(RvS).
Which is equivalent to (~P&~Q)v(RvS).-------------------(2)
The other premise R=>S is equivalent to ~RvS. ----------(3)
If (1) is true from our assumption, then both P and ~S have to be true. This is because of the nature of the & logical connective.
~S is true, so S must be false. Now we substitute P=True and S=False into (2).
On the left-hand side: if P is true, then ~P must be false. Because of the nature of the & connective, (~P&~Q) must be false regardless of what ~Q is.
So on the right-hand side, (RvS) must be true if (2) is to be true. Since S is false, R must be true.
We have now deduced that: S is false, R is true, P is true.
Now we can substitute these truth values into (3). Since S is false, ~R must be true for (3) to hold, and so R is false.
However, this contradicts the fact that R is true. So our original assumption, that ~S=>~P is false, was wrong. Therefore, ~S=>~P is true.
At the end of the day, the logical equivalences that were mentioned previously can be verified by using a truth table. But it is good to memorize them. Cheers.
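For example, a short Python truth-table check of the equivalences used above (implication as ~AvB, contraposition, and De Morgan):

    from itertools import product

    implies = lambda a, b: (not a) or b

    for p, q, r, s in product([False, True], repeat=4):
        # ~S=>~P  ==  P=>S  ==  ~PvS
        assert implies(not s, not p) == implies(p, s) == ((not p) or s)
        # De Morgan: ~(~PvS)  ==  P&~S   (equivalence (1))
        assert (not ((not p) or s)) == (p and not s)
        # (PvQ)=>(RvS)  ==  (~P&~Q)v(RvS)   (equivalence (2))
        assert implies(p or q, r or s) == ((not p and not q) or (r or s))
        # R=>S  ==  ~RvS   (equivalence (3))
        assert implies(r, s) == ((not r) or s)
    print("all equivalences hold on every valuation")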
I've gone through internet and books and still have some difficulties on how to determine the normal form of this relation
R(a, b, c, d, e, f, g, h, i)
FDs =
B→G
BI→CD
EH→AG
G→DE
So far I've got that the only candidate key is BHI (or BFHI, if F has to be counted), since the attribute F is not used at all; it is totally independent of the given FDs.
What am I supposed to do with the attribute F then?
How to determine the highest normal form for the relation R?
What am I supposed to do with the attribute F then?
You could observe that the only FD in which F gets mentioned is the trivial one F->F. It's not explicitly mentioned precisely because it is trivial. Nonetheless, all of Armstrong's axioms apply to trivial FDs equally well. So you can use this trivial one, e.g. applying augmentation, to go from B->G to BF->GF.
How to determine the highest normal form for the relation R?
First, test the condition of first normal form. If satisfied, the NF is at least 1. Check the condition of second normal form. If satisfied, the NF is at least 2. Check the condition of third normal form. If satisfied, the NF is at least 3.
Note :
"checking the condition of first normal form", is a bit of a weird thing to do in a formal process, because there exists no such thing as a formal definition of that condition, unless you go by Date's, but I have little doubt that your course does not follow that definition.
Hint :
Given that the sole key is BFHI, which is the first clause of "the key, the whole key, and nothing but the key" that gets violated by, say, B->G ?
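If it helps, here is a small Python helper (my own sketch, not from any particular textbook) that computes attribute closures; that is enough to confirm that BFHI is a key and to see why B->G matters for the hint above:

    FDS = [({"B"}, {"G"}), ({"B", "I"}, {"C", "D"}),
           ({"E", "H"}, {"A", "G"}), ({"G"}, {"D", "E"})]
    ATTRS = set("ABCDEFGHI")

    def closure(attrs, fds=FDS):
        """Attribute closure: keep adding the RHS of any FD whose LHS is already covered."""
        result = set(attrs)
        changed = True
        while changed:
            changed = False
            for lhs, rhs in fds:
                if lhs <= result and not rhs <= result:
                    result |= rhs
                    changed = True
        return result

    print(closure({"B", "F", "H", "I"}) == ATTRS)   # True: BFHI determines all attributes
    print(closure({"B", "H", "I"}))                  # misses F, so BHI alone is not a key
    print(closure({"B"}))                            # {'B','G','D','E'}: a proper part of the key
                                                     # already determines non-prime attributes (B->G),
                                                     # which is exactly the "whole key" clause at issue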
Pattern matching (as found in e.g. Prolog, the ML family languages and various expert system shells) normally operates by matching a query against data element by element in strict order.
In domains like automated theorem proving, however, there is a requirement to take into account that some operators are associative and commutative. Suppose we have data
A or B or C
and query
C or $X
Going by surface syntax this doesn't match, but logically it should match with $X bound to A or B because or is associative and commutative.
Is there any existing system, in any language, that does this sort of thing?
Associative-Commutative pattern matching has been around since 1981 and earlier, and is still a hot topic today.
There are lots of systems that implement this idea and make it useful; it means you can avoid writing complicated pattern matches when associativity or commutativity could be used to make the pattern match. Yes, it can be expensive; better that the pattern matcher does this automatically than that you do it badly by hand.
You can see an example in a rewrite system for algebra and simple calculus implemented using our program transformation system. In this example, the symbolic language to be processed is defined by grammar rules, and those rules that have A-C properties are marked. Rewrites on trees produced by parsing the symbolic language are automatically extended to match.
The Maude term rewriter implements associative and commutative pattern matching.
http://maude.cs.uiuc.edu/
I've never encountered such a thing, and I just had a more detailed look.
There is a sound computational reason for not implementing this by default: one essentially has to generate all combinations of the input before pattern matching, or generate the full cross-product's worth of match clauses.
I suspect that the usual way to implement this would be to simply write both patterns (in the binary case), i.e., have patterns for both C or $X and $X or C.
Depending on the underlying organisation of data (it's usually tuples), this pattern matching would involve rearranging the order of tuple elements, which would be weird (particularly in a strongly typed environment!). If it's lists instead, then you're on even shakier ground.
Incidentally, I suspect that the operation you fundamentally want is disjoint union patterns on sets, e.g.:
foo (Or ({C} disjointUnion {X})) = ...
The only programming environment I've seen that deals with sets in any detail would be Isabelle/HOL, and I'm still not sure that you can construct pattern matches over them.
EDIT: It looks like Isabelle's function functionality (rather than fun) will let you define complex non-constructor patterns, except then you have to prove that they are used consistently, and you can't use the code generator anymore.
EDIT 2: The way I implemented similar functionality over n commutative, associative and transitive operators was this:
My terms were of the form A | B | C | D, while queries were of the form B | C | $X, where $X was permitted to match zero or more things. I pre-sorted these using lexicographic ordering, so that variables always occurred in the last position.
First, you construct all pairwise matches, ignoring variables for now, and recording those that match according to your rules.
{ (B,B), (C,C) }
If you treat this as a bipartite graph, then you are essentially solving a perfect-matching ("marriage") problem. There exist fast algorithms for finding these.
Assuming you find one, then you gather up everything that does not appear on the left-hand side of your relation (in this example, A and D), and you stuff them into the variable $X, and your match is complete. Obviously you can fail at any stage here, but this will mostly happen if there is no variable free on the RHS, or if there exists a constructor on the LHS that is not matched by anything (preventing you from finding a perfect match).
Sorry if this is a bit muddled. It's been a while since I wrote this code, but I hope this helps you, even a little bit!
For the record, this might not be a good approach in all cases. I had very complex notions of 'match' on subterms (i.e., not simple equality), and so building sets or anything would not have worked. Maybe that'll work in your case though and you can compute disjoint unions directly.
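For comparison, here is a minimal Python sketch of the multiset flavour of this idea (my own illustration, far simpler than the subterm matching described above): the arguments of an AC operator are treated as a multiset, the literal parts of the query are subtracted, and whatever is left over binds to the variable.

    from collections import Counter

    def ac_match(data_args, query_args, var="$X"):
        """Match the flattened argument list of an AC operator (e.g. 'or') against a
        query whose arguments may include one variable; order and grouping are ignored
        by treating both argument lists as multisets."""
        data = Counter(data_args)
        literals = Counter(a for a in query_args if a != var)
        if literals - data:                      # some literal in the query has no partner
            return None
        rest = list((data - literals).elements())
        if var in query_args:
            return {var: rest}                   # bind the variable to the leftovers
        return {} if not rest else None          # no variable: data must be fully consumed

    # "A or B or C" matched against "C or $X": $X binds to the rest, i.e. A or B.
    print(ac_match(["A", "B", "C"], ["C", "$X"]))   # {'$X': ['A', 'B']}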
I've got to the section on operators in The Ruby Programming Language, and it's made me think about operator associativity. This isn't a Ruby question by the way - it applies to all languages.
I know that operators have to associate one way or the other, and I can see why in some cases one way would be preferable to the other, but I'm struggling to see the bigger picture. Are there some criteria that language designers use to decide what should be left-to-right and what should be right-to-left? Are there some cases where it "just makes sense" for it to be one way over the others, and other cases where it's just an arbitrary decision? Or is there some grand design behind all of this?
Typically it's so the syntax is "natural":
Consider x - y + z. You want that to be left-to-right, so that you get (x - y) + z rather than x - (y + z).
Consider a = b = c. You want that to be right-to-left, so that you get a = (b = c), rather than (a = b) = c.
I can't think of an example of where the choice appears to have been made "arbitrarily".
Disclaimer: I don't know Ruby, so my examples above are based on C syntax. But I'm sure the same principles apply in Ruby.
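A quick way to see the first point numerically (Python here purely for illustration; as noted, the principle is not language-specific):

    x, y, z = 10, 4, 3

    # '-' and '+' share a precedence level and are left-associative,
    # so x - y + z is grouped as (x - y) + z.
    print(x - y + z)       # 9
    print((x - y) + z)     # 9, the grouping the parser actually uses
    print(x - (y + z))     # 3, what right-to-left grouping would have given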
Imagine writing everything with brackets for a century or two.
You would develop a feel for which operator most often binds its values together first, and which binds last.
If you can define the precedence and associativity of those operators, then you want to define them in a way that minimizes the brackets while keeping formulas easy to read. E.g. (*) binds before (+), and (-) should be left-associative.
By the way, left/right-associative corresponds to left/right-recursive: the word "associative" is the mathematical perspective, "recursive" the algorithmic one. (See "tail-recursive", and look at where you write the most brackets.)
Most operator associativities in comp sci are nicked directly from maths, specifically symbolic logic and algebra.