What does Exclusive in XOR really mean? - language-agnostic

Maybe this is just obvious to everyone, but can someone explain where XOR (Exclusive OR) got its name? What does the word "Exclusive" really mean here? Not that it matters, but it's been stuck in my head since this morning.
OR:
A B | A OR B
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 1
XOR:
A B | A XOR B
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 0
Is it "exclusively 0 for inputs 1,1", "special version of OR" or something else?

It's what children understand as OR:
"You can have chocolate OR you can have ice cream."
But a programmer would regard this as having both!
Q: "Would you like tea or coffee?"
Annoying programmer answer: "Yes."

XOR is an "exclusive OR" because it only returns a "true" value of 1 if the two values are exclusive, i.e. they are both different.

According to Knuth in Vol. 4A of TAOCP, George Boole "...wrote x+y to stand for disjunction, but he took pains to never use this notation unless x and y were mutually exclusive (not both 1). If necessary, he wrote x+(1-x)y to ensure that the result of a disjunction would never be 2."
XOR is addition with the carries being lost (addition modulo 2 in each bit position).
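A minimal Python sketch of that view (the add_via_xor helper is purely illustrative): XOR produces the sum bits of binary addition, AND produces the carries, and re-injecting the carries recovers ordinary addition.
def add_via_xor(a, b):
    """Add two non-negative ints using only XOR and AND."""
    while b:
        # XOR = sum without carries; AND shifted left = the carries
        a, b = a ^ b, (a & b) << 1
    return a

print(6 ^ 3)              # 5 -> 6 + 3 with the carry out of bit 1 lost
print(add_via_xor(6, 3))  # 9 -> carries re-inserted until none remain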

It's exclusive in the sense that the two operands must be mutually exclusive (in other words, different).

This comes from set theory. Consider that you have two sets A and B, and an element which may or may not be in those sets. The first boolean input is true if the element is in set A. The second boolean input is true if the element is in set B.
If the element is "exclusive" to one set (as in "not shared" with the other), then the XOR operator will return true. In set terms this is the symmetric difference of A and B, which is what the Wikipedia illustration for XOR depicts.
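A small Python sketch of this set view: Python's ^ on sets is the symmetric difference, which matches element-wise XOR on membership.
A = {1, 2, 3}
B = {3, 4}
print(A ^ B)  # {1, 2, 4} - elements in exactly one of the two sets
print({x for x in A | B if (x in A) != (x in B)})  # same set via boolean XOR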

Exclusive in XOR means exactly what it says - one of the two has to be excluded. That is, either one or the other. Neither both, nor none - only one. At least that's how I've understood it :)

It's exclusive as in "only one." In other words, it's "one of the two, but not both."

I read a nice 'plain English' example today:
Consider, for example, the English sentence, "You pay me by Tuesday or I'll sue." If that "or" were the logical connective, then the sentence is true when either you do pay me by Tuesday or I sue you; so you could pay me on Monday and I still might sue you. But this particular use of "or" would normally be taken to mean that either you pay me by Tuesday and I don't sue you, or you do not pay me by Tuesday and I do sue you - the so-called "exclusive or".
Hugh Darwen, "An Introduction to Relational Database Theory", p76.

Related

Boolean algebra when used

I am new to Boolean algebra and want to know when it is used; I am getting confused by it. Can anyone clear this up for me? Is Boolean algebra used to simplify mathematical calculations during the execution of a user program, or is it already built into the designed circuit board of the computer?
Boolean algebra is used for elementary digital circuits composed of logical gates (AND, OR, NOT, XOR, ...). But it also is applied in many other fields.
Examples include philosophy, literature/search retrieval, software engineering, software verification, logic synthesis, automatic test pattern generation, artificial intelligence, logic in general, discrete mathematics, combinatorics, discrete optimization, constraint programming, game theory, information theory, coding theory ... to name a few.
An overview of Boolean algebra applications is listed in Wikipedia.
The short answer is: both.
Boolean algebra is a science named after George Boole, a mathematician, logician and philosopher who laid the foundations of two-valued logic.
Algebra is the study of mathematical symbols and the rules of manipulating those symbols via formulas.
Therefore, Boolean Algebra is the field of study that focuses on mathematical symbols and the rules of manipulating those symbols via formulas on the domain of 2-valued logical expressions whose foundations were laid down by George Boole.
The most atomic principle of Boolean Algebra is that a statement may be true or false and nothing else. True is represented by 1 and false is represented by 0. Let's look at a few expressions.
A and B = A * B
that is, A and B is true if and only if both values are true: 1 * 1 = 1, but 1 * 0 <> 1, 0 * 1 <> 1 and 0 * 0 <> 1.
A or B = A + B - A * B
Since we are in the domain of two values, A or B has to be either 0 or 1. So we subtract A * B from the result of the addition to cover the case when both A and B are true, which would otherwise be evaluated as 1 + 1 = 2; with the correction it evaluates to 1 + 1 - 1 = 1.
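Here is a quick exhaustive check of both identities, sketched in Python with 1 as true and 0 as false:
for A in (0, 1):
    for B in (0, 1):
        assert (A and B) == A * B          # AND as multiplication
        assert (A or B) == A + B - A * B   # OR as addition minus the double-counted case
print("both identities hold for all four input pairs")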
These logical formulas and many others are implemented in electronic circuits via logic gates, but you also apply this field of study every day of your life, even if you do not necessarily realize it.
Whenever you assume that a statement is fully true or fully false, you are in line with Boolean algebra. When you draw a conclusion, you are operating with the logical operator called implication.
Basically, more often than not, when you use logic, you partially or fully apply Boolean standards (hopefully).
So, Boolean Algebra is the field of study that operates with statements. And since science and everyday life are all about thoughts/statements, Boolean Algebra is everywhere; you cannot escape it.
The more interesting question is: if Boolean Algebra is so prevalent, then what is the context of non-Boolean logic?
And the answer is simple. Whenever we operate with the probability of the unknown or partial truths.
If you toss a coin, you know for sure that it will be either a head or a tail, but, before tossing the coin you do not know which one, so you compute the probability, which is 0.5 for each case.
If you throw a dice, then you know that it will either be a six or something else. You do not know in advance whether it will be a six or something else, but you know that there is a 1/6 probability that it will be a six.
Probability is a value between 0 and 1. You can convert the raw probability value that is between 0 and 1 to the more popular % value by multiplying it by 100. So, the probability of heads as the result of a coin toss would be (0.5 * 100)% = 50%.
So, probability calculation deals with the unknown: even though the statements will eventually become boolean statements, before the event they cannot be fully evaluated, only predicted to some extent. This is the field of probability calculation.
Statistics is the field that assumes that the frequency of events in a large input pattern known from the past can predict the future. It is a case of applied probability. This is also a non-boolean approach to statements.
Fuzzy logic (no, this is not a joke, there is actually a field of study that goes by this name) deals with partial truths. If you are in the middle of eating your lunch, then it is not true that you ate your lunch, nor that you did not eat it, because you have eaten part of it and completed it to a certain extent. So, fuzzy logic is a multi-valued field of study which deals with partial truths.
Conclusion
Boolean algebra is used everywhere, including the areas you have been wondering about and much more. So, Boolean algebra is used both in the circuits and in mathematical computations. The very bits that comprise any information (except for the esoteric qubits, but in order not to confuse you I will not delve into that area here) are statements on their own. A 0 means the lack of current and a 1 means the presence of current. So, if we have a number represented on 8 bits (for the sake of simplicity), that's a set of 8 statements that ultimately results in the representation of a number.
For example, a 15 is represented (on 8 bits) as
00001111
because
15
= 0 * 2^7 + 0 * 2^6 + 0 * 2^5 + 0 * 2^4 + 1 * 2^3 + 1 * 2^2 + 1 * 2^1 + 1 * 2^0
= 0 * 128 + 0 * 64 + 0 * 32 + 0 * 16 + 1 * 8 + 1 * 4 + 1 * 2 + 1 * 1
So even the numbers you see in front of you on the computer are evaluated with Boolean algebra.
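The same expansion as a short Python sketch:
bits = "00001111"
value = sum(int(b) * 2 ** i for i, b in enumerate(reversed(bits)))
print(value)    # 15
print(bin(15))  # 0b1111 - Python's own rendering of the same set of bit-statements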

When two variables are logically compared, which logic gate tests for equivalence? (Using logic gates)

When two variables are logically compared, which logic gate tests for equivalence?
If XOR, please explain why.
If XNOR, please explain why.
The answer is XNOR. As for why, just look at the truth table for 2 inputs:
A B | A XNOR B
0 0 | 1
0 1 | 0
1 0 | 0
1 1 | 1
You can see that it returns 1 if either both inputs are 1 or both inputs are 0 - in other words, when the inputs have the same value. This can be described as checking for equivalence.
This can also be seen by looking at what XNOR means: the negation of an "exclusive OR". An exclusive OR checks whether exactly one input is 1 (either input may be 1, but not both), so its negation checks whether none or both of the inputs are 1, i.e. whether both inputs are 1 or both inputs are 0, i.e. whether the inputs are equivalent.
(It can also be called NXOR, which in my opinion is clearer, because an exclusive OR of two inverted values would give the same result as without the negation, whereas this gate is the inversion of an exclusive OR.)
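A brief Python sketch of the equivalence claim (the xnor helper is just for illustration):
def xnor(a, b):
    return not (a ^ b)  # the negation of exclusive OR

for a in (False, True):
    for b in (False, True):
        assert xnor(a, b) == (a == b)  # XNOR coincides with equality on booleans
print("XNOR tests equivalence")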

In 0-based indexing system, do people call the element at index 0 the "first" or the "zeroth" element?

In Java/C++, for example, do you casually say that 'a' is the first character of "abc", or the zeroth?
Do people say both and it's always going to be ambiguous, or is there an actual convention?
A quote from the Wikipedia article on Zeroth:
In computer science, array references also often start at 0, so computer programmers might use zeroth in situations where others might use first, and so forth.
This would seem to support the hypothesis that it's always going to be ambiguous.
Thanks to Alexandros Gezerlis (see his answer below) for finding this quote, from How to Think Like a Computer Scientist: Learning with Python by Allen B. Downey, Jeffrey Elkner and Chris Meyers, chapter 7:
The first letter of "banana" is not a. Unless you are a computer scientist. For perverse reasons, computer scientists always start counting from zero. The 0th letter (zero-eth) of "banana" is b. The 1th letter (one-eth) is a, and the 2th (two-eth) letter is n.
This seems to suggest that we as computer scientists should reject the natural semantics of "first", "second", etc when dealing with 0-based indexing systems.
This quote suggests that perhaps there ARE official rulings for certain languages, so I've made this question [language-agnostic].
It is the first character or element in the array, but it is at index zero.
The term "first" has nothing to do with the absolute index of the array, but simply it's relative position as the lowest indexed element. Turbo Pascal, for example, allows arbitrary indexes in arrays (say from 5 to 15). The element located at array[5] would still be referred to as the first element.
To quote from this wikipedia article:
While the term "zeroth" is not itself
ambiguous, it creates ambiguity for
all subsequent elements of the
sequence when lacking context, since
they are called "first", "second",
etc. in conflict with the normal
everyday meanings of those words.
So I would say "first".
Probably subjective but I call it the first element or element zero. It is the first and, Isaac Asimov's laws of robotics aside, I'm not even confident that zeroth is a real word :-)
By definition, anything preceding the first becomes the first, and pushes everything else out by one.
Definitely first, never heard zeroth until today!
I would agree with most answers here, which say first character which is at zero index, but just for the record, the following is from Allen Downey's book "Python for Software Design":
So b is the 0th letter ("zero-eth") of 'banana', a is the 1th letter ("one-eth"), and n is the 2th ("two-eth") letter.
Thus, he removes the ambiguity by either using:
a) a number and then "th", or
b) a word and then "-eth".
The C and C++ standards say "initial element" and "first element", meaning the same thing. If I remember to be unambiguous, I say "initial", "zeroth", or "first counting from zero". But normally I say "first". That banana stuff is either a humorous exaggeration or a bit bonkers (I suspect the former - it's just a way to explain 0-indexing). I don't think I know anyone who would actually say "first" to mean "index 1 of a 0-indexed array" unless they had first said "zeroth" in the same paragraph in order to make it clear what they mean.
It depends of whether or not you are a fan of Isaac Asimov's robot series.

2's complement example, why not carry?

I'm watching some great lectures from David Malan (here) that go over binary. He talked about signed/unsigned, 1's complement, and 2's complement representations. There was an addition of 4 + (-3) which lined up like this:
0100
1101 (flip 0011 to 1100, then add 1)
----
0001
But he waved his magical hands and threw away the last carry. I did some Wikipedia research but didn't quite get it. Can someone explain to me why that particular carry (from the 8's column into the 16's column) was dropped, while he kept the one just prior to it?
Thanks!
The last carry was dropped because it does not fit in the target space. It would be the fifth bit.
If he had carried out the same addition but with, for example, 8-bit storage, it would have looked like this:
00000100
11111101
--------
00000001
In this situation we would also be stuck with an "unused" carry.
We have to treat carries this way to make addition with two's complement work properly, but that's all good, because this is the easiest way of treating carries when you have limited storage. Anyway, we get the correct result, right? :)
x86 processors store such an additional carry in the carry flag (CF), which can be tested with certain instructions.
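Here is a minimal Python sketch of the same idea, where masking to 4 bits plays the role of the fixed-size storage (BITS and MASK are illustrative names):
BITS = 4
MASK = (1 << BITS) - 1  # 0b1111

def twos_complement(x):
    return x & MASK  # -3 -> 0b1101

raw = twos_complement(4) + twos_complement(-3)
print(bin(raw))         # 0b10001 - five bits; the top 1 is the unused carry
print(bin(raw & MASK))  # 0b1 - the 4-bit result with the carry dropped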
A carry is not the same as an overflow
In the example you do have a carry out of the MSB. By definition, this carry ends up on the floor. (If there was someplace for it to go, then it would not have been out of the MSB.)
But adding two numbers with different signs cannot overflow. An overflow can only happen when two numbers with the same sign produce a result with a different sign.
If you extend the left-hand side by adding more digit positions, you'll see that the carry rolls over into an infinite number of bit positions towards the left, so you never really get a final carry of 1. So the answer is positive.
 ...000100
+...111101
----------
 ...000001
At some point you have to set the number of bits to represent the numbers. He chose 4 bits. Any carry into the 5th bit is lost. But that's OK because he decided to represent the number in just 4 bits.
If he decided to use 5 bits to represent the numbers he would have gotten the same result.
That's the beauty of it... Your result will be the same size as the terms you are adding. So the fifth bit is thrown out
In 2's complement you use the carry bit to signal if there was an overflow in the last operation.
You must look at the LAST two carry bits to see if there was overflow. In your example, the last two carry bits were 11 meaning that there was no overflow.
If the last two carry bits are 11 or 00 then no overflow occurred. If the last two carry bits are 10 or 01 then there was overflow. That is why he sometimes cared about the carry bit and other times he ignored it.
The first row below is the carry row. The left-most bits in this row are used to determine if there was overflow.
1100
0100
1101
----
0001
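A sketch of that rule in Python for 4-bit values (add_with_flags is an illustrative helper): it compares the carry into the sign bit with the carry out of it, which is equivalent to checking the last two carry bits.
BITS = 4
MASK = (1 << BITS) - 1

def add_with_flags(a, b):
    a &= MASK
    b &= MASK
    total = a + b
    carry_out = (total >> BITS) & 1  # carry out of the MSB
    # carry into the MSB: add everything below the sign bit and look at bit 3
    carry_in = (((a & (MASK >> 1)) + (b & (MASK >> 1))) >> (BITS - 1)) & 1
    return total & MASK, carry_out, carry_in != carry_out  # result, carry, overflow

print(add_with_flags(4, -3))  # (1, 1, False) - carries are "11", no overflow
print(add_with_flags(5, 4))   # (9, 0, True)  - carries are "01", overflow: 9 > 7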
Looks like you're only using 4 bits, so there is no 16's column.
If you were using more than 4 bits then the -3 representation would be different, and the carry of the math would still be thrown out the end. For example, with 6 bits you'd have:
000100
111101
------
1000001
and since the carry is outside the bit range of your representation it's gone, and you only have 000001
Consider 25 + 15:
5 + 5 = 10: we keep the 0 and carry the 1 to the tens column. Then it's 2 + 1 (+ 1) = 4. Hence the result is 40 :)
It's the same thing with binary: 0 + 1 = 1, 0 + 0 = 0, 1 + 1 = 10 => send the 1 to the 8's column, 0 + 1 (+ 1) = 10 => send the 1 to the next column - here's the overflow, and why we just throw the 1 away.
This is why 2's complement is so great. It allows you to add/subtract just like you do in base 10, because you (ab)use the fact that the sign bit is the MSB, which cascades operations all the way to overflow when necessary.
Hope I made myself understood. It's quite hard to explain this when English is not your native tongue :)
When performing 2's complement addition, the only time that a carry indicates a problem is when there's an overflow condition - that can't happen if the 2 operands have a different sign.
If they have the same sign, then the overflow condition is when the sign bit changes from the 2 operands, ie., there's a carry into the most significant bit.
If I remember my computer architecture learnin' this is often detected at the hardware level by a flag that's set when the carry into the most significant bit is different than the carry out of the most significant bit. Which is not the case in your example (there's a carry into the msb as well as out of the msb).
One simple way to think of it is as "the sign not changing". If the carry into the msb is different than the carry out, then the sign has improperly changed.
The carry was dropped because there wasn't anything that could be done with it. If it's important to the result, it means that the operation overflowed the range of values that could be stored in the result. In assembler, there's usually an instruction that can test for the carry beyond the end of the result, and you can explicitly deal with it there - for example, carrying it into the next higher part of a multiple precision value.
Because you are talking about 4-bit representations. That's unusual compared to an actual machine, but if we take for granted for a moment that a computer has 4 bits in each byte, then we have the following property: a byte wraps from 7 to -8 (the 4-bit two's complement range). Anything outside that range cannot be stored. Besides, what would you do with an extra 5th bit beyond the sign bit anyway?
Now, given that, we can see from everyday math that 4 + (-3) = 1, which is exactly what you got.

Boolean Implication

I need some help with this Boolean Implication.
Can someone explain how this works in simple terms:
A implies B = B + A' (if A, then B). This is also equivalent to A <= B (with false as 0 and true as 1).
Boolean implication A implies B simply means "if A is true, then B must be true". This implies (pun intended) that if A isn't true, then B can be anything. Thus:
False implies False -> True
False implies True -> True
True implies False -> False
True implies True -> True
This can also be read as (not A) or B - i.e. "either A is false, or B must be true".
Here's how I think about it:
if (A)
    return B;
else
    return True;
If A is true, then B is relevant and should be checked; otherwise, ignore B and return true.
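The same idea as a runnable truth-table check, sketched in Python (implies is just an illustrative helper):
def implies(a, b):
    return (not a) or b

for a in (False, True):
    for b in (False, True):
        print(a, "implies", b, "->", implies(a, b))
# Only True implies False prints False; every other combination is True.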
I think I see where Serge is coming from, and I'll try to explain the difference. This is too long for a comment, so I'll post it as an answer.
Serge seems to be approaching this from the perspective of questioning whether or not the implication applies. This is somewhat like a scientist trying to determine the relationship between two events. Consider the following story:
A scientist visits four different countries on four different days. In each country she wants to determine if rain implies that people will use umbrellas. She generates the following truth table:
Did it rain?  Used umbrellas?  Does rain => umbrellas?  Comment
No            No               ??                       It didn't rain, so I didn't get to observe
No            Yes              ??                       People were shielding themselves from the hot sun; I don't know what they would do in the rain
Yes           No               No                       Perhaps the local government banned umbrellas and nobody can use them. There is definitely no implication here.
Yes           Yes              ??                       Perhaps these people use umbrellas no matter what weather it is
In the above, the scientist doesn't know the relationship between rain and umbrellas and she is trying to determine what it is. Only on one of the days in one of the countries can she definitively say that implies is not the correct relationship.
Similarly, it seems that Serge is trying to test whether A=>B, and is only able to determine it in one case.
However, when we are evaluating boolean logic we know the relationship ahead of time, and want to test whether the relationship was adhered to. Another story:
A mother tells her son, "If you get dirty, take a bath" (dirty=>bath). On four separate days, when the mother comes home from work, she checks to see if the rule was followed. She generates the following truth table:
Get dirty?  Take a bath?  Follow rule?  Comment
No          No            Yes           Son didn't get dirty, so didn't need to take a bath. Give him a cookie.
No          Yes           Yes           Son didn't need to take a bath, but wanted to anyway. Extra clean! Give him a cookie.
Yes         No            No            Son didn't follow the rule. No cookie and no TV tonight.
Yes         Yes           Yes           He took a bath to clean up after getting dirty. Give him a cookie.
The mother has set the rule ahead of time. She knows what the relationship between dirt and baths are, and she wants to make sure that the rule is followed.
When we work with boolean logic, we are like the mother: we know the operators ahead of time, and we want to work with the statement in that form. Perhaps we want to transform the statement into a different form (as was the original question, he or she wanted to know if two statements are equivalent). In computer programming we often want to plug a set of variables into the statement and see if the entire statement evaluates to true or false.
It's not a matter of knowing whether implies applies - it wouldn't have been written there if it shouldn't be. Truth tables are not about determining whether a rule applies, they are about determining whether a rule was adhered to.
I like to use the example: If it is raining, then it is cloudy.
Raining => Cloudy
Contrary to what many beginners might think, this in no way suggests that rain causes cloudiness, or that cloudiness causes rain. (EDIT: It means only that, at the moment, it is not both raining and not cloudy. See my recent blog posting on material implication here. There I develop, among other things, a rationale for the usual "definition" for material implication. The reader will require some familiarity with basic methods of proof, e.g. direct proof and proof by contradiction.)
~[Raining & ~Cloudy]
Judging from the truth tables, it is possible to infer the value of a=>b only for a=1 and b=0. In this case the value of a=>b is 0. For the rest of values (a,b), the value of a=>b is undefined: both (a=>b)=0 ("a doesn't imply b") and (a=>b)=1 ("a implies b") are possible:
a b  a=>b  comment
0 0  ?     it is not possible to infer whether a implies b, because a=0
0 1  ?     --"--
1 0  0     b is 0 when a is 1, so it is possible to conclude that a does not imply b
1 1  ?     whether a implies b is undefined, because it is not known whether b can be 0 when a=1
For a to imply b it is necessary and sufficient that b=1 whenever a=1, so that there is no counterexample with a=1 and b=0. For rows 1, 2 and 4 of the truth table it is not known whether there is a counterexample: these rows do not contradict (a=>b)=1, but they also do not prove it. In contrast, row 3 immediately disproves (a=>b)=1 because it provides a counterexample with a=1 and b=0.
I guess I may shock some readers with these explanations, but it seems there are severe errors somewhere in the basics of the logic we are taught, and that is one of the reasons why problems such as Boolean Satisfiability have not been solved yet.
The best contribution on this question is given by Serge Rogatch.
Boolean logic applies only where the result of quantifying (or evaluating) is either true or false, and the relationship between boolean logic propositions is based on this fact.
So there must exist a relationship or connection between the propositions.
In higher order logic, the relationship is not just a case of on/off, 1/0 or +voltage/-voltage, the evaluation of a worded proposition is more complex. If no relationship exists between the worded propositions, then implication for worded propositions is not equivalent to boolean logic propositions.
While the implication truth table always yields correct results for binary propositions, this is not the case with worded propositions which may not be related in any way at all.
~A V B truth table:
A B Result/Evaluation
1 1 1
1 0 0
0 1 1
0 0 1
Worded proposition A: The moon is made of sour cream.
Worded proposition B: Tomorrow I will win the lotto.
A B Result/Evaluation
1 ? ?
As you can see, in this case, you can't even determine the state of B which will decide the result. Does this make sense now?
In this truth table, proposition A always evaluates to 0 (so ~A is always 1); therefore the first two rows never apply. In boolean logic, however, every row of the table applies.
Here's a compact statement:
Suppose we have two statements, A and B, each of which could either be true or false. Without any further information, there are 2 x 2 = 4 possibilities: "A and not B", "B and not A", "neither A nor B", and "both A and B".
Now impose the additional restriction that "if A, then also B". After imposing this restriction, the expression "x -> y", where -> is the "implication" operator, denotes whether it is still possible for A == x and B == y. The only outcome that is no longer possible after this additional restriction is A == 1 and B == 0, since that contradicts the restriction itself. Hence, we have 1 -> 0 is zero, and every other pair is 1.