Prove ~s=>~p given (r=>s) and (p|q)=>(r|s) - proof

I am trying to prove ~s=>~p (not s implies not p) given the following 2 premises.
r=>s [r implies s]
(p|q)=>(r|s) [(p or q) implies (r or s)]
I have tried several ways, trying to use OR elimination or Negation Introduction, but I can't even visualize which assumptions I will need to use. Would appreciate any and all help that can be provided.

Maybe you're missing that you can combine the two givens before anything else, to eliminate the r term. I don't think you need negation introduction; taking the contrapositive is sufficient.

(p|q)=>(r|s)
(p|q)=>(s|s) //replace r with s, since r=>s
(p|q)=>s //s|s simplifies to s
~s=>~(p|q) //by contraposition
~s=>(~p&~q) //by De Morgan, giving both ~s=>~p and ~s=>~q
~s=>~p
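
For anyone who wants to double-check the derivation mechanically, here is a quick brute-force sketch in Python that enumerates all 16 truth assignments:

    from itertools import product

    def implies(a, b):
        return (not a) or b

    # check that the premises entail ~s => ~p in every assignment
    for p, q, r, s in product([False, True], repeat=4):
        premises = implies(r, s) and implies(p or q, r or s)
        conclusion = implies(not s, not p)
        assert not premises or conclusion
    print("premises entail ~s=>~p in every case")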

I will prove this by contradiction. Let v mean "or" and & mean "and".
~S=>~P is logically equivalent to P=>S.
P=>S is logically equivalent to ~PvS.
Suppose ~PvS is false.
Therefore, ~(~PvS) is true. (This just means that its negation will be true.)
~(~PvS) = P&~S (De Morgan's Law) -----------(1)
So, if our assumption is correct, then all three statements that we have, P&~S, R=>S, and (PvQ)=>(RvS), should all be true.
(PvQ)=>(RvS) is logically equivalent to ~(PvQ)v(RvS).
Which is equivalent to (~P&~Q)v(RvS).-------------------(2)
The other premise R=>S is equivalent to ~RvS. ----------(3)
If (1) is true from our assumption, then both P and ~S have to be true. This is because of the nature of the & logical connective.
~S is true, so S must be false. Now we substitute P=True and S=False into (2).
On the Left hand side: If P is true, then ~P must be false. Because of the nature of the & connective, (~P&~Q) must be false regardless of what ~Q is.
So now the Right hand side: (RvS) must be true if we need (2) to be true. Since S is false, then R must be true.
We have now deduced that: S is false, R is true, P is true.
Now we can substitute these truth values into (3). Since S is false, ~R must be true for (3) to hold; hence R is false.
However, this contradicts the fact that R is true. So our original assumption that ~S=>~P is false was wrong. Therefore, ~S=>~P is true.
At the end of the day, the logical equivalences that were mentioned previously can be verified by using a truth table. But it is good to memorize them. Cheers.
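
As a footnote, that truth-table verification can be scripted; a small Python sketch checking the equivalences used above:

    from itertools import product

    implies = lambda a, b: (not a) or b

    for p, s in product([False, True], repeat=2):
        assert implies(not s, not p) == implies(p, s)   # contraposition
        assert implies(p, s) == ((not p) or s)          # material implication
    for p, q in product([False, True], repeat=2):
        assert (not (p or q)) == ((not p) and (not q))  # De Morgan
    print("all three equivalences hold")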

Related

Prove that {F,→} is functionally complete

How can I prove that {F,→} is functionally complete?
I am trying to write p∧q using only those symbols but I really have no idea how to solve it.
Any ideas?
Look at the truth table of implication:
P Q P→Q
F F T
F T T
T F F
T T T
If you fix input Q to F (false), the output is the inverse of input P. Therefore, implication and F can be combined into an inverter: ~P = P→F.
P implies Q can be written as Q or not P. Both have the same truth tables.
This demonstrates that implication is equivalent to a disjunction with one inverted input. Using the inverter shown above, we get a disjunction (inclusive or): P∨Q = (P→F)→Q.
Apply De Morgan's laws to see that P implies Q is also equivalent to not (P and not Q). This shows that we can turn an implication into a conjunction: P∧Q = not (P→not Q).
Disjunction plus negation, as well as conjunction combined with negation, are functionally complete. Hence, implication combined with a false constant is also functionally complete. Look here for a formal proof.
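
To make the construction concrete, here is a small Python sketch that builds NOT, OR, and AND out of nothing but an implication function and the constant False, then checks them against the built-in connectives:

    from itertools import product

    def imp(a, b):           # the only primitive besides the constant False
        return (not a) or b

    def NOT(p):              # P -> F acts as an inverter
        return imp(p, False)

    def OR(p, q):            # P v Q  ==  (P -> F) -> Q
        return imp(NOT(p), q)

    def AND(p, q):           # P & Q  ==  not (P -> not Q), by De Morgan
        return NOT(imp(p, NOT(q)))

    for p, q in product([False, True], repeat=2):
        assert NOT(p) == (not p)
        assert OR(p, q) == (p or q)
        assert AND(p, q) == (p and q)
    print("NOT, OR and AND recovered from {F, ->}")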

How to find the highest normal form for a given relation

I've gone through the internet and books and still have some difficulties determining the normal form of this relation:
R(a, b, c, d, e, f, g, h, i)
FDs =
B→G
BI→CD
EH→AG
G→DE
So far I've got that the only candidate key is BHI (if I should include F, then BFHI).
The attribute F is not used at all; it is totally independent of the given FDs.
What am I supposed to do with the attribute F then?
How to determine the highest normal form for the relation R?
What am I supposed to do with the attribute F then?
You could observe the fact that the only FD in which F gets mentioned is the trivial one F->F. It's not explicitly mentioned precisely because it is trivial. Nonetheless, all of Armstrong's axioms apply to trivial FDs equally well. So you can use this trivial one, e.g. applying augmentation, to go from B->G to BF->GF.
How to determine the highest normal form for the relation R?
First, test the condition of first normal form. If satisfied, the NF is at least 1. Check the condition of second normal form. If satisfied, the NF is at least 2. Check the condition of third normal form. If satisfied, the NF is at least 3.
Note :
"checking the condition of first normal form", is a bit of a weird thing to do in a formal process, because there exists no such thing as a formal definition of that condition, unless you go by Date's, but I have little doubt that your course does not follow that definition.
Hint :
Given that the sole key is BFHI, which clause of "the key, the whole key, and nothing but the key" is the first to get violated by, say, B->G?
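
As a sanity check on the key claim, attribute closures can be computed mechanically; a minimal Python sketch (function and variable names are my own):

    FDS = [({'B'}, {'G'}), ({'B', 'I'}, {'C', 'D'}),
           ({'E', 'H'}, {'A', 'G'}), ({'G'}, {'D', 'E'})]
    R = set('ABCDEFGHI')

    def closure(attrs, fds):
        result = set(attrs)
        changed = True
        while changed:
            changed = False
            for lhs, rhs in fds:
                if lhs <= result and not rhs <= result:
                    result |= rhs
                    changed = True
        return result

    print(closure({'B', 'F', 'H', 'I'}, FDS) == R)  # True: BFHI is a superkey
    print(closure({'B', 'H', 'I'}, FDS) == R)       # False: F is never derived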

Boolean simplification with some known term combinations

I am doing boolean simplification using Quine-McCluskey which works well.
However, I now need to perform the simplification with some known term combinations.
For example, I want to simplify:
(A+B)C
If I know that:
A+B == true
then this simplifies to:
C
Or if I know that:
BC == false
then it simplifies to
AC
Is there an algorithm that can simplify boolean expressions given a list of known terms?
I've discovered a nice solution to this problem.
Quine-McCluskey is able to handle a truth-table where some of the terms are marked as "don't care", which means that the term will never occur, so the minimized expression can return true or false.
For example:
A B result
0 0 0
0 1 don't care
1 0 don't care
1 1 don't care
It can clearly be seen that the above function can be minimized to just return 'false', as that is the only result that we care about.
So to deal with known terms, all that has to be done is set the result to "don't care" for any terms in the truth table where a known term evaluates to false. The Quine-McCluskey algorithm then generates the minimized function taking the known terms into account.
For example, if we have a function of A and B, and we know that A == false, then any line on the truth-table where A is true can be marked as "don't care" because we know it will never occur.
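
For illustration, SymPy's SOPform (which runs Quine-McCluskey under the hood and accepts a list of don't-cares) handles the (A+B)C example with the known term BC == false like this (a sketch):

    from sympy import symbols
    from sympy.logic import SOPform

    A, B, C = symbols('A B C')
    minterms  = [[1, 0, 1]]               # the only reachable row where (A+B)C is 1
    dontcares = [[0, 1, 1], [1, 1, 1]]    # rows with B=C=1 never occur, since BC == false
    print(SOPform([A, B, C], minterms, dontcares))   # prints A & C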

Hoare logic: how does a strictly decreasing loop variant by itself prove termination?

Referring to the while rule for total correctness, Wikipedia seems to tell me that just finding a loop variant that strictly decreases is enough to prove termination. I can't accept that, either because I'm missing something or because the rule is wrong. Consider
int i = 1000;
while(true) i--;
in which the value of variable i is a strictly decreasing loop variant, but the loop certainly doesn't terminate.
Surely the rule needs to have an additional precondition, something like i<0 → ¬B (where B is the loop condition in the axiom schema) so that the loop condition eventually 'catches' the loop variant and exits.
Or have I missed something?
The loop-variant must be a natural number. A natural number cannot decrease past zero. Using big words, the loop variant is a value that is monotonically decreasing with respect to a well-founded relation. It's the well-foundedness that's missing from your reasoning.
As noted in the Wikipedia article:
[...] the condition B must imply that t is not a minimal element of its range, for otherwise the premise of this rule would be false.
In the case at hand, B is true and t is i. The condition true does not imply that i is not minimal (over the naturals, i can reach the minimal element 0), so the premise of the rule is not met.
The usual ordering "<" is well-founded on the natural numbers, but not on the integers. In order for a relation to be well-founded, every non-empty subset of its domain must have a minimal element. Since it can be shown that there is no infinite descending chain with respect to a well-founded relation, it follows that a loop with a variant must terminate.
Of course the condition of the loop must be false in the case of a minimal element!
A variant need not be restricted to the natural numbers, however. Transfinite ordinals are also well-ordered.
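
To contrast with the non-terminating example in the question, here is a sketch of a loop where the rule's premises actually hold:

    i = 1000
    # variant t = i ranges over the naturals; the minimal element is 0
    # the loop condition (i > 0) implies that t is not minimal
    while i > 0:
        i -= 1   # t strictly decreases on every iteration
    # no infinite descending chain of naturals exists, so the loop terminates
    print(i)     # 0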

Boolean Implication

I need some help with this Boolean Implication.
Can someone explain how this works in simple terms:
A implies B = B + A' (if A then B). Also equivalent to A <= B.
Boolean implication A implies B simply means "if A is true, then B must be true". This implies (pun intended) that if A isn't true, then B can be anything. Thus:
False implies False -> True
False implies True -> True
True implies False -> False
True implies True -> True
This can also be read as (not A) or B - i.e. "either A is false, or B must be true".
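
This can be checked mechanically; in Python, False < True, so implication coincides with the <= ordering on truth values (a quick sketch):

    from itertools import product

    for a, b in product([False, True], repeat=2):
        implied = (not a) or b
        assert implied == (a <= b)   # implication is the <= ordering
        print(a, b, implied)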
Here's how I think about it:
if (A)
    return B;
else
    return true;
If A is true, then B is relevant and should be checked; otherwise, ignore B and return true.
I think I see where Serge is coming from, and I'll try to explain the difference. This is too long for a comment, so I'll post it as an answer.
Serge seems to be approaching this from the perspective of questioning whether or not the implication applies. This is somewhat like a scientist trying to determine the relationship between two events. Consider the following story:
A scientist visits four different countries on four different days. In each country she wants to determine if rain implies that people will use umbrellas. She generates the following truth table:
Did it rain?  Did people use umbrellas?  Does rain => umbrellas?  Comment
No            No                         ??                       It didn't rain, so I didn't get to observe
No            Yes                        ??                       People were shielding themselves from the hot sun; I don't know what they would do in the rain
Yes           No                         No                       Perhaps the local government banned umbrellas and nobody can use them. There is definitely no implication here.
Yes           Yes                        ??                       Perhaps these people use umbrellas no matter what weather it is
In the above, the scientist doesn't know the relationship between rain and umbrellas and she is trying to determine what it is. Only on one of the days in one of the countries can she definitively say that implies is not the correct relationship.
Similarly, it seems that Serge is trying to test whether A=>B, and is only able to determine it in one case.
However, when we are evaluating boolean logic we know the relationship ahead of time, and want to test whether the relationship was adhered to. Another story:
A mother tells her son, "If you get dirty, take a bath" (dirty=>bath). On four separate days, when the mother comes home from work, she checks to see if the rule was followed. She generates the following truth table:
Get dirty?  Take a bath?  Follow rule?  Comment
No          No            Yes           Son didn't get dirty, so didn't need to take a bath. Give him a cookie.
No          Yes           Yes           Son didn't need to take a bath, but wanted to anyway. Extra clean! Give him a cookie.
Yes         No            No            Son didn't follow the rule. No cookie and no TV tonight.
Yes         Yes           Yes           He took a bath to clean up after getting dirty. Give him a cookie.
The mother has set the rule ahead of time. She knows what the relationship between dirt and baths are, and she wants to make sure that the rule is followed.
When we work with boolean logic, we are like the mother: we know the operators ahead of time, and we want to work with the statement in that form. Perhaps we want to transform the statement into a different form (as in the original question, where the asker wanted to know if two statements are equivalent). In computer programming we often want to plug a set of variables into the statement and see if the entire statement evaluates to true or false.
It's not a matter of knowing whether implies applies - it wouldn't have been written there if it shouldn't be. Truth tables are not about determining whether a rule applies, they are about determining whether a rule was adhered to.
I like to use the example: If it is raining, then it is cloudy.
Raining => Cloudy
Contrary to what many beginners might think, this in no way suggests that rain causes cloudiness, or that cloudiness causes rain. (EDIT: It means only that, at the moment, it is not both raining and not cloudy. See my recent blog posting on material implication here. There I develop, among other things, a rationale for the usual "definition" for material implication. The reader will require some familiarity with basic methods of proof, e.g. direct proof and proof by contradiction.)
~[Raining & ~Cloudy]
Judging from the truth tables, it is possible to infer the value of a=>b only for a=1 and b=0. In this case the value of a=>b is 0. For the remaining values of (a,b), the value of a=>b is undefined: both (a=>b)=0 ("a doesn't imply b") and (a=>b)=1 ("a implies b") are possible:
a b a=>b comment
0 0 ?    it is not possible to infer whether a implies b because a=0
0 1 ?    --"--
1 0 0    b is 0 when a is 1, so it is possible to conclude that a does not imply b
1 1 ?    whether a implies b is undefined, because it is not known whether b can be 0 when a=1
For a to imply b it is necessary and sufficient that b=1 always when a=1, so that there is no counterexample with a=1 and b=0. For rows 1, 2 and 4 in the truth table it is not known whether there is a counterexample: these rows do not contradict (a=>b)=1, but they also do not prove it. In contrast, row 3 immediately disproves (a=>b)=1 because it provides a counterexample with a=1 and b=0.
I guess I may shock some readers with these explanations, but it seems there are severe errors somewhere in the basics of the logic we are taught, and that is one of the reasons why problems such as Boolean Satisfiability have not been solved yet.
The best contribution on this question is given by Serge Rogatch.
Boolean logic applies only where the result of quantifying (or evaluating) is either true or false, and the relationship between Boolean logic propositions is based on this fact.
So there must exist a relationship or connection between the propositions.
In higher-order logic, the relationship is not just a case of on/off, 1/0 or +voltage/-voltage; the evaluation of a worded proposition is more complex. If no relationship exists between the worded propositions, then implication for worded propositions is not equivalent to Boolean logic propositions.
While the implication truth table always yields correct results for binary propositions, this is not the case with worded propositions, which may not be related in any way at all.
~A V B truth table:
A B Result/Evaluation
1 1 1
1 0 0
0 1 1
0 0 1
Worded proposition A: The moon is made of sour cream.
Worded proposition B: Tomorrow I will win the lotto.
A B Result/Evaluation
1 ? ?
As you can see, in this case, you can't even determine the state of B, which would decide the result. Does this make sense now?
In this truth table, proposition ~A always evaluates to 1; therefore, the last two rows don't apply. However, the last two rows always apply in Boolean logic.
http://thenewcalculus.weebly.com
Here's a compact statement:
Suppose we have two statements, A and B, each of which could either be true or false. Without any further information, there are 2 x 2 = 4 possibilities: "A and not B", "B and not A", "neither A nor B", and "both A and B".
Now impose the additional restriction that "if A, then also B". After imposing this restriction, the expression "x -> y", where -> is the "implication" operator, denotes whether it is still possible for A == x and B == y. The only outcome that is no longer possible after this restriction is A == 1 and B == 0, since that contradicts the restriction itself. Hence 1 -> 0 is 0, and every other pair is 1.
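
That restriction-based reading is easy to spell out by enumeration, striking out the pairs the rule forbids (a sketch):

    from itertools import product

    # "if A then B" forbids exactly the pair A == 1, B == 0
    for a, b in product([0, 1], repeat=2):
        possible = not (a == 1 and b == 0)
        print(f"{a} -> {b} is {int(possible)}")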