Are there any errors in this Hamming code: 10101011110 (binary)?

Suppose we are working with an error-correcting code that will allow all single-bit errors to be
corrected for memory words of length 7. We have already calculated that we need 4 check bits,
and the length of all code words will be 11. Code words are created according to the Hamming
algorithm presented in the text. We now receive the following code word:
1 0 1 0 1 0 1 1 1 1 0
Assuming even parity, is this a legal code word? If not, according to our error-correcting code,
where is the error?
P.S. I need a bit of help with this Hamming code problem; it's a book question.
Thanks in advance :)

Code words are created according to the Hamming algorithm presented in the text.
This is an important piece of information, don't you think? :-)
Without the algorithm, we can't be sure of validity, so I'll give you a general approach.
Each check bit will generally apply to some subset of bits. Let's say the seven bits are b6 through b0. The four check bits are calculated to give even parity across the following subsets:
1 0 1 0 1 0 1
ca: b6 b5 b3 b2 b1 b0 -> 1+0+0+1+0+1 = 3, odd, so ca = 1
cb: b6 b4 b2 b0 -> 1+1+1+1 = 4, even, so cb = 0
cc: b6 b3 b0 -> 1+0+1 = 2, even, so cc = 0
cd: b5 b1 b0 -> 0+0+1 = 1, odd, so cd = 1
Now, if you didn't get a check bit sequence matching the data, you would ideally be able to work out which single data or check bit would need to change in order to repair it. This would only work if you could ensure that each check bit calculation was carefully crafted to add maximal extra information.
That's probably a failing of my above calculation, since I just pulled it out of thin air. But my intent was just to explain the concept in the absence of your algorithm, so that shouldn't matter.
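If your text happens to use the common construction with check bits at positions 1, 2, 4 and 8 (each check bit covering every position whose number has that bit set), the error position drops out directly as a syndrome. Here's a minimal C sketch under that assumption - note that whether position 1 is the leftmost or the rightmost bit also varies between texts, which changes the reported position, so check yours:

#include <stdio.h>

int main(void)
{
    /* Received word; word[0] is position 1. This sketch ASSUMES the
       common layout: check bits at positions 1, 2, 4, 8, and position 1
       at the LEFT end of the word. Your text may number from the right. */
    const int word[11] = {1,0,1,0,1,0,1,1,1,1,0};
    int syndrome = 0;

    for (int p = 1; p <= 8; p <<= 1) {       /* check bits 1, 2, 4, 8 */
        int parity = 0;
        for (int pos = 1; pos <= 11; pos++)
            if (pos & p)                     /* p covers positions with bit p set */
                parity ^= word[pos - 1];
        if (parity != 0)                     /* even parity violated */
            syndrome += p;
    }

    if (syndrome == 0)
        printf("legal code word\n");
    else
        printf("single-bit error at position %d\n", syndrome);
    return 0;
}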
One way to ensure that an algorithm works is to brute force it (a C sketch follows the list below):
Get a list of all possible eleven-bit values. There are only 2048 of these, so it's not onerous. For now, discard the ones that are already okay (we're only interested in the invalid ones).
In turn, toggle each bit (of the 11) and see if the item becomes valid.
If no bit toggles made it valid, this is not a single-bit error so can safely be ignored.
If one bit toggle made it valid, then you have enough information to fix this case.
If more than one bit toggle made it valid, you don't have enough information to fix this case.
Bottom line, if every one-bit error possibility can be made valid with a single bit toggle (the second last bullet point above), the correction code is perfect.
You should also check to ensure that every valid one will become invalid if a single bit is toggled, but I'll leave that as an exercise since you should now have enough information on how to do this.
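Here's that brute force as a C sketch. It uses the standard power-of-two Hamming layout as the validity test purely as a stand-in - substitute whatever test your text's algorithm implies:

#include <stdio.h>

/* Stand-in validity test: syndrome of the standard Hamming layout.
   Bit 0 of w is position 1. Swap in your text's own rule here. */
static int is_valid(unsigned w)
{
    unsigned syndrome = 0;
    for (unsigned p = 1; p <= 8; p <<= 1) {
        unsigned parity = 0;
        for (unsigned pos = 1; pos <= 11; pos++)
            if (pos & p)
                parity ^= (w >> (pos - 1)) & 1;
        if (parity)
            syndrome |= p;
    }
    return syndrome == 0;
}

int main(void)
{
    int correctable = 0, ambiguous = 0, not_single_bit = 0;

    for (unsigned w = 0; w < 2048; w++) {      /* all eleven-bit values */
        if (is_valid(w))
            continue;                          /* discard the okay ones */
        int fixes = 0;
        for (int b = 0; b < 11; b++)           /* toggle each bit in turn */
            if (is_valid(w ^ (1u << b)))
                fixes++;
        if (fixes == 0)      not_single_bit++; /* not a single-bit error: ignore */
        else if (fixes == 1) correctable++;    /* enough information to fix */
        else                 ambiguous++;      /* not enough information */
    }

    printf("correctable: %d, not single-bit: %d, ambiguous: %d\n",
           correctable, not_single_bit, ambiguous);
    return 0;
}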

Related

Need some assistance understanding binary addition/subtraction using 2's complement

If A = 01110011, B = 10010100, how would I add these?
I did this:
i.e.: 01110011 + 10010100 = 100000111
Though isn't it essentially 115 + (-108) = 7, whereas I'm getting -249?
Edit: I see that removing the highest order bit (overflow) I get 7 which is what I'm looking for but I'm not getting why you wouldn't have the extra bit.
Edit 2: OK, I figured it out. There was no overflow (as I had assumed there was) because 7 is within [-128, 127] (8 bits). Instead, as Omar hinted, I was supposed to drop the "extra" 1 from the addition.
Your calculation is correct and the result is correct.
You stated that the second number is -108, so both your numbers are interpreted as signed 8-bit values. Thus, you should also interpret your result as an 8-bit signed value; this is why the 9th bit must be dropped, and so the result is 7 (00000111).
On real hardware, such as an 8-bit CPU where all the registers are 8 bits wide, you are only able to store the lowest 8 bits of the result, which here is 7 (00000111).
In some cases, the 9th bit may also be put inside a carry/overflow flag so it's not completely "dropped".
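You can watch the drop happen in C by doing the arithmetic in an 8-bit type; a quick sketch with the values from the question:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t a = 0x73;          /* 01110011 = 115             */
    uint8_t b = 0x94;          /* 10010100 = -108 as signed  */
    uint8_t sum = a + b;       /* the 9th bit doesn't fit and is dropped */

    printf("%u\n", (unsigned)sum);       /* 7  (00000111)    */
    printf("%d\n", (int)(int8_t)sum);    /* 7 as signed, too */
    return 0;
}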

Turing machine for addition and comparison of binary numbers

Good Day everyone!
I am trying to solve this exercise for learning purposes. Can someone guide me in solving these 3 questions?
I tried the first question, the addition of two binary numbers separated by '+'. I tried adding two numbers by representing each one in unary, e.g. 5 = 1 1 1 1 1 (or 0 0 0 0 0), and then adding them, with the result in the same representation. But I have no clue how to represent two numbers in binary and separate them by '+'. Will the head of the Turing machine move from the left, reach the plus sign, and then move to the left and right of the '+' sign? But how will the addition be performed? As far as my limited knowledge goes, a TM cannot simply add binaries; we have to build up some logic for the binary representation, as in the case of the simple addition of two numbers. The same goes for the comparison of two binaries.
Regards
The following program, inspired by the edX / MITx course Paradox and Infinity, shows how to perform binary addition with a Turing machine, where the numbers to be added are input to the Turing machine and are separated by a blank.
The Turing Machine
uses the second number as a counter
decrements the second number by one
increments the first number by one
till the second number becomes 0.
An animation of the simulation of the Turing machine shows how 13 (binary 1101) and 5 (binary 101) are added to yield 18 (binary 10010).
I'll start with problems 2 and 3 since they are actually easier than problem 1.
We'll assume we have valid input (non-empty binary strings on both sides with no leading zeroes), so we don't need to do any input validation. To check whether the numbers are equal, we can simply bounce back and forth across the = symbol and cross off one digit at a time. If we find a mismatch at any point, we reject. If we have a digit remaining on the left and can't find one on the right, we reject. If we run out of digits on the left and still have some on the right, we reject. Otherwise, we accept.
Q    T    Q'   T'   D
q0   0    q1   X    right   // read the next (or first) symbol
q0   1    q2   X    right   // of the first binary number, or
q0   =    q7   =    right   // recognize no next is available
q1   0    q1   0    right   // skip ahead to the = symbol while
q1   1    q1   1    right   // using state to remember which
q1   =    q3   =    right   // symbol we need to look for
q2   0    q2   0    right
q2   1    q2   1    right
q2   =    q4   =    right
q3   X    q3   X    right   // skip any crossed-out symbols
q3   0    q5   X    left    // in the second binary number
q3   1,b  rej  1,b  left    // then, make sure the next
q4   X    q4   X    right   // available digit exists and
q4   0,b  rej  0,b  left    // matches the one remembered
q4   1    q5   X    left    // otherwise, reject
q5   X    q5   X    left    // find the = while ignoring
q5   =    q6   =    left    // any crossed-out symbols
q6   0    q6   0    left    // find the last crossed-out
q6   1    q6   1    left    // symbol in the first binary
q6   X    q0   X    right   // number, then move right
                            // and start over
q7   X    q7   X    right   // we ran out of symbols
q7   b    acc  b    left    // in the first binary number,
q7   0,1  rej  0,1  left    // make sure we already ran out
                            // in the second as well
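If you want to sanity-check the table, it's quick to simulate. Here's a minimal C sketch of the machine above (the tape is padded with a few blanks, and a missing rule is treated as a reject):

#include <stdio.h>

/* Minimal simulator for the transition table above. Tape symbols are
   '0', '1', '=', 'X' and 'b' (the blank); states 0..7 are q0..q7. */
enum { ACC = 8, REJ = 9 };

typedef struct { int q; char read; int q2; char write; int dir; } Rule;

static const Rule rules[] = {
    {0,'0',1,'X',+1}, {0,'1',2,'X',+1}, {0,'=',7,'=',+1},
    {1,'0',1,'0',+1}, {1,'1',1,'1',+1}, {1,'=',3,'=',+1},
    {2,'0',2,'0',+1}, {2,'1',2,'1',+1}, {2,'=',4,'=',+1},
    {3,'X',3,'X',+1}, {3,'0',5,'X',-1}, {3,'1',REJ,'1',-1}, {3,'b',REJ,'b',-1},
    {4,'X',4,'X',+1}, {4,'1',5,'X',-1}, {4,'0',REJ,'0',-1}, {4,'b',REJ,'b',-1},
    {5,'X',5,'X',-1}, {5,'=',6,'=',-1},
    {6,'0',6,'0',-1}, {6,'1',6,'1',-1}, {6,'X',0,'X',+1},
    {7,'X',7,'X',+1}, {7,'b',ACC,'b',-1}, {7,'0',REJ,'0',-1}, {7,'1',REJ,'1',-1},
};

static int run(const char *input)
{
    char tape[64];
    int q = 0, head = 0;

    snprintf(tape, sizeof tape, "%sbbbb", input);   /* pad with blanks */
    while (q != ACC && q != REJ) {
        int matched = 0;
        for (unsigned i = 0; i < sizeof rules / sizeof rules[0]; i++)
            if (rules[i].q == q && rules[i].read == tape[head]) {
                tape[head] = rules[i].write;
                q = rules[i].q2;
                head += rules[i].dir;
                matched = 1;
                break;
            }
        if (!matched)
            return 0;                               /* no rule: reject */
    }
    return q == ACC;
}

int main(void)
{
    printf("%d\n", run("101=101"));   /* equal   -> 1 */
    printf("%d\n", run("101=100"));   /* unequal -> 0 */
    return 0;
}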
This TM could first sanitize input by ensuring both binary strings are non-empty and contain no leading zeroes (crossing off any it finds).
To do "greater than", you could easily do the following:
1. Check to see if the length of the first binary number (after removing leading zeroes) is greater than, equal to, or less than the length of the second binary number (after removing leading zeroes). If the first one is longer than the second, accept. If the first one is shorter than the second, reject. Otherwise, continue to step 2.
2. Check for equality as in the other problem, but accept if at any point you have a 1 in the first number and find a 0 in the second. This works because we know there are no leading zeroes, the numbers have the same number of digits, and we are checking digits in descending order of significance. Reject if you find the other mismatch or if you determine the numbers are equal.
To add numbers, the problem says to increment and decrement, but I feel like just adding with carry is not going to be significantly harder. An outline of the procedure is this:
Begin with carry = 0.
Go to least significant digit of first number. Go to state (dig=X, carry=0)
Go to least significant digit of second number. Go to state (sum=(X+Y+carry)%2, carry=(X+Y+carry)/2)
Go after the second number and write down the sum digit.
Go back and continue the process until one of the numbers runs out of digits.
Then, continue with whatever number still has digits, adding just those digits and the carry.
Finally, erase the original input and copy the sum backwards to the beginning of the tape.
An example of the distinct steps the tape might go through:
#1011+101#
#101X+101#
#101X+10X#
#101X+10X=#
#101X+10X=0#
#10XX+10X=0#
#10XX+1XX=0#
#10XX+1XX=00#
#1XXX+1XX=00#
#1XXX+XXX=00#
#1XXX+XXX=000#
#XXXX+XXX=000#
#XXXX+XXX=0000#
#XXXX+XXX=00001#
#XXXX+XXX=0000#
#1XXX+XXX=0000#
#1XXX+XXX=000#
#10XX+XXX=000#
#10XX+XXX=00#
#100X+XXX=00#
#100X+XXX=0#
#1000+XXX=0#
#1000+XXX=#
#10000XXX=#
#10000XXX#
#10000XX#
#10000X#
#10000#
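For comparison, here is the same carry bookkeeping in ordinary C - a sketch that scans both strings from the right and writes the sum least-significant-digit first before reversing it, just like the trace above:

#include <stdio.h>
#include <string.h>

/* Add two MSB-first binary strings digit by digit with a carry. */
void add_binary(const char *a, const char *b, char *out /* size >= max len + 2 */)
{
    int i = strlen(a) - 1, j = strlen(b) - 1, k = 0, carry = 0;
    char rev[128];

    while (i >= 0 || j >= 0 || carry) {
        int sum = carry;
        if (i >= 0) sum += a[i--] - '0';
        if (j >= 0) sum += b[j--] - '0';
        rev[k++] = '0' + sum % 2;   /* sum digit, LSB first - like the     */
        carry = sum / 2;            /* scratch area after '=' in the trace */
    }
    for (int n = 0; n < k; n++)     /* copy the sum back, reversed */
        out[n] = rev[k - 1 - n];
    out[k] = '\0';
}

int main(void)
{
    char result[128];
    add_binary("1011", "101", result);
    printf("%s\n", result);         /* prints 10000 (11 + 5 = 16) */
    return 0;
}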
There are two ways to solve the addition problem. Assume your input tape is in the form ^a+b$, where ^ and $ are symbols telling you you've reached the front and back of the input.
You can increment b and decrement a by 1 each step until a is 0, at which point b will be your answer. This is assuming you're comfortable writing a TM that can increment and decrement.
You can implement a full adding TM, using carries as you would if you were adding binary numbers on paper.
For either option, you need code to find the least significant bit of both a and b. The problem specifies that the most significant bit is first, so you'll want to start at + for a and $ for b.
For example, let's say we want to increment 1011$. The algorithm we'll use is: find the least significant digit; while it's a 1, replace it with a 0 and move left; when you find a 0, replace it with a 1.
Start by finding $, moving the read head there. Move the read head to the left.
You see a 1. Write 0. Move the read head to the left.
You see a 1. Write 0. Move the read head to the left.
You see a 0. Write 1.
Return the read head to $. The binary number is now 1100$.
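The same rule in ordinary C, as a quick sketch (flip trailing 1s to 0s, then flip the first 0, growing by a digit if the string was all 1s):

#include <stdio.h>
#include <string.h>

/* Increment an MSB-first binary string in place.
   Assumes the buffer has room for one extra digit. */
void increment(char *s)
{
    int i = strlen(s) - 1;
    while (i >= 0 && s[i] == '1')
        s[i--] = '0';               /* 1 -> 0, keep moving left */
    if (i >= 0)
        s[i] = '1';                 /* first 0 from the right   */
    else {
        memmove(s + 1, s, strlen(s) + 1);
        s[0] = '1';                 /* all 1s: grow by one digit */
    }
}

int main(void)
{
    char n[16] = "1011";
    increment(n);
    printf("%s\n", n);              /* prints 1100 */
    return 0;
}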
To compare two numbers, you need to keep track of which values you've already looked at. This is done by extending the alphabet with "marked" characters. 0 could be marked as X, 1 as Y, for example. X means "there's a 0 here, but I've seen it already."
So, for equality, we can start at ^ for a and = for b. (Assuming the input looks like ^a=b$.) The algorithm is to find the start of a and b, comparing the first unmarked bit of each. The first time you get to a different value, halt and reject. If you reach = and $ together without finding a mismatch, halt and accept.
Let's look at input ^11=10$:
Read head starts at ^.
Move the head right until we find an unmarked bit.
Read a 1. Write Y. Tape reads ^Y1=10$. We're in a state that represents having read a 1.
Move the head right until we find =.
Move the head right until we find an unmarked bit.
Read a 1. This matches the bit we read before. Write a Y.
Move the head left until we find ^.
Go to step 2.
This time, we'll read a 1 in a and read the 0 in b. We'll halt and reject.
Hope this helps to get you started.

Reverse engineering ambiguous syntax

What I often see online, when the topic is reversing, is this syntax
*(_WORD *)(a1 + 6) = *(_WORD *)(a2 + 2);
I think this code is from an IDA plugin (right?), but I can't understand it... can someone explain it to me a little, or point me to something where I can study this kind of code?
Thanks in advance =)
This code copies 2 bytes from the address pointed to by a2 + 2 into the address pointed to by a1 + 6.
In more detail, the code does the following:
advance 2 bytes from a2.
treat the result as a WORD pointer, i.e. a pointer to a value made up of two bytes. This is the (_WORD *) part on the right.
read the 2 bytes referenced by the above pointer. This is the * at the very left of the expression on the right.
We now have a 16-bit value. Now we:
advance 6 bytes from a1.
treat the result as a WORD pointer. Again, this is the (_WORD *) part.
write the 2 bytes we read in the first part into the address pointed to by the pointer that we have.
If you've never seen such code before, you may think that it's superfluous to use the (_WORD *) cast on both sides of the expression - but it is not. For example, we could read a 16-bit value and write it into a 32-bit value (e.g. by sign-extending it), in which case the two casts would differ.
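For reference, here is the statement as compilable C. _WORD is assumed here to be a 16-bit unsigned typedef (the usual decompiler convention), and the wrapper function and parameter types are just an illustration:

#include <stdint.h>
#include <string.h>

typedef uint16_t _WORD;   /* assumption: the decompiler's _WORD is 16 bits */

/* Illustrative stand-in for the decompiled statement. */
void copy_field(uint8_t *a1, const uint8_t *a2)
{
    *(_WORD *)(a1 + 6) = *(const _WORD *)(a2 + 2);

    /* memcpy(a1 + 6, a2 + 2, sizeof(_WORD)); would be an equivalent
       that sidesteps alignment and strict-aliasing pitfalls. */
}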
I suggest that you also look at the assembly code where you will see the steps making up this assignment. If you don't have it available then just write a C program on your own that does such manipulation and then decompile it.

What are "magic numbers" in computer programming?

When people talk about the use of "magic numbers" in computer programming, what do they mean?
A magic number is any number in your code that isn't immediately obvious to someone with very little knowledge of the code.
For example, the following piece of code:
sz = sz + 729;
has a magic number in it and would be far better written as:
sz = sz + CAPACITY_INCREMENT;
Some extreme views state that you should never have any numbers in your code except -1, 0 and 1 but I prefer a somewhat less dogmatic view since I would instantly recognise 24, 1440, 86400, 3.1415, 2.71828 and 1.414 - it all depends on your knowledge.
However, even though I know there are 1440 minutes in a day, I would probably still use a MINS_PER_DAY identifier, since it makes searching for them that much easier. Who's to say that the capacity increment mentioned above wouldn't also be 1440, and you end up changing the wrong value? This is especially true for the low numbers: the chance of dual use of 37197 is relatively low, but the chance of using 5 for multiple things is pretty high.
Use of an identifier means that you wouldn't have to go through all your 700 source files and change 729 to 730 when the capacity increment changed. You could just change the one line:
#define CAPACITY_INCREMENT 729
to:
#define CAPACITY_INCREMENT 730
and recompile the lot.
Contrast this with the magic constants that result from naive people thinking that just because they remove the actual numbers from their code, it becomes better - so they change:
x = x + 4;
to:
#define FOUR 4
x = x + FOUR;
That adds absolutely zero extra information to your code and is a total waste of time.
"magic numbers" are numbers that appear in statements like
if days == 365
Assuming you didn't know there were 365 days in a year, you'd find this statement meaningless. Thus, it's good practice to assign all "magic" numbers (numbers that have some kind of significance in your program) to a constant,
DAYS_IN_A_YEAR = 365
And from then on, compare to that instead. It's easier to read, and if the earth ever gets knocked out of alignment, and we gain an extra day... you can easily change it (other numbers might be more likely to change).
There's more than one meaning. The one given by most answers already (an arbitrary unnamed number) is a very common one, and the only thing I'll say about that is that some people go to the extreme of defining...
#define ZERO 0
#define ONE 1
If you do this, I will hunt you down and show no mercy.
Another kind of magic number, though, is used in file formats. It's just a value included as typically the first thing in the file which helps identify the file format, the version of the file format and/or the endian-ness of the particular file.
For example, you might have a magic number of 0x12345678. If you see that magic number, it's a fair guess you're seeing a file of the correct format. If you see, on the other hand, 0x78563412, it's a fair guess that you're seeing an endian-swapped version of the same file format.
The term "magic number" gets abused a bit, though, referring to almost anything that identifies a file format - including quite long ASCII strings in the header.
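A minimal sketch of such a check, using the made-up magic value from the example above:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAGIC         0x12345678u   /* the made-up magic from above        */
#define MAGIC_SWAPPED 0x78563412u   /* same bytes read in the other order  */

/* Returns 1 for native byte order, -1 for byte-swapped, 0 for unknown. */
static int check_magic(const unsigned char *header)
{
    uint32_t m;
    memcpy(&m, header, sizeof m);   /* first 4 bytes of the file */
    if (m == MAGIC)         return 1;
    if (m == MAGIC_SWAPPED) return -1;
    return 0;
}

int main(void)
{
    const unsigned char native[4]  = {0x78, 0x56, 0x34, 0x12};
    const unsigned char swapped[4] = {0x12, 0x34, 0x56, 0x78};

    printf("%d\n", check_magic(native));   /* 1 on a little-endian machine  */
    printf("%d\n", check_magic(swapped));  /* -1 on a little-endian machine */
    return 0;
}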
http://en.wikipedia.org/wiki/File_format#Magic_number
Wikipedia is your friend (Magic Number article)
Most of the answers so far have described a magic number as a constant that isn't self-describing. Being a little bit of an "old-school" programmer myself, I can say that back in the day we described a magic number as any constant that is assigned some special purpose that influences the behaviour of the code. For example, the number 999999, or MAX_INT, or something else completely arbitrary.
The big problem with magic numbers is that their purpose can easily be forgotten, or the value used in another perfectly reasonable context.
As a crude and terribly contrived example:
int i = 0;
while (i != 99999)
{
    DoSomeCleverCalculationBasedOnTheValueOf(i);
    if (escapeConditionReached)
    {
        i = 99999;
    }
}
Whether the constant is named or not isn't really the issue. In the case of my awful example, the value influences behaviour, but what if we need to change the value of "i" while looping?
Clearly in the example above, you don't NEED a magic number to exit the loop. You could replace it with a break statement, and that is the real issue with magic numbers, that they are a lazy approach to coding, and without fail can always be replaced by something less prone to either failure, or to losing meaning over time.
Anything that doesn't have a readily apparent meaning to anyone but the application itself.
if (foo == 3) {
// do something
} else if (foo == 4) {
// delete all users
}
Magic numbers are special values of certain variables which cause the program to behave in a special manner.
For example, a communication library might take a Timeout parameter and it can define the magic number "-1" for indicating infinite timeout.
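A small sketch of that pattern (all the names here are made up):

#include <stdio.h>

#define TIMEOUT_INFINITE (-1)   /* the documented magic value */

/* Hypothetical library call: -1 means "wait forever". */
static void wait_for_reply(int timeout_ms)
{
    if (timeout_ms == TIMEOUT_INFINITE)
        printf("blocking until a reply arrives\n");
    else
        printf("waiting at most %d ms\n", timeout_ms);
}

int main(void)
{
    wait_for_reply(500);
    wait_for_reply(TIMEOUT_INFINITE);
    return 0;
}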
The term magic number is usually used to describe some numeric constant in code. The number appears without any further description and thus its meaning is esoteric.
The use of magic numbers can be avoided by using named constants.
Using numbers in calculations other than 0 or 1 that aren't defined by some identifier or variable (which not only makes the number easy to change in several places by changing it in one place, but also makes it clear to the reader what the number is for).
In a different, simpler sense, a magic number is a three-digit number where the sum of the squares of the first two digits equals the square of the third one.
For example, 202,
since 2*2 + 0*0 = 2*2.
Now, write a program in Java to accept an integer and print whether or not it is a magic number.
It may seem a bit banal, but there IS at least one real magic number in every programming language.
0
I argue that it is THE magic wand to rule them all in virtually every programmer's quiver of magic wands.
FALSE is inevitably 0
TRUE is not(FALSE), but not necessarily 1! Could be -1 (0xFFFF)
NULL is inevitably 0 (the pointer)
And most compilers allow it unless their typechecking is utterly rabid.
0 is the base index of array elements, except in languages that are so antiquated that the base index is '1'. One can then conveniently code for(i = 0; i < 32; i++), and expect that 'i' will start at the base (0), and increment to, and stop at 32-1... the 32nd member of an array, or whatever.
0 is the end of many programming language strings. The "stop here" value.
0 is likewise built into the X86 instructions to 'move strings efficiently'. Saves many microseconds.
0 is often used by programmers to indicate that "nothing went wrong" in a routine's execution. It is the "not-an-exception" code value. One can use it to indicate the lack of thrown exceptions.
Zero is the answer most often given by programmers to the amount of work it would take to do something completely trivial, like change the color of the active cell to purple instead of bright pink. "Zero, man, just like zero!"
0 is the count of bugs in a program that we aspire to achieve. 0 exceptions unaccounted for, 0 loops unterminated, 0 recursion pathways that cannot be actually taken. 0 is the asymptote that we're trying to achieve in programming labor, girlfriend (or boyfriend) "issues", lousy restaurant experiences and general idiosyncrasies of one's car.
Yes, 0 is a magic number indeed. FAR more magic than any other value. Nothing ... ahem, comes close.

2's complement example, why not carry?

I'm watching some great lectures from David Malan (here) that go over binary. He talked about signed/unsigned, 1's complement, and 2's complement representations. There was an addition done of 4 + (-3) which lined up like this:
0100
1101 (flip 0011 to 1100, then add 1, giving 1101)
----
0001
But he waved his magical hands and threw away the last carry. I did some Wikipedia research but didn't quite get it. Can someone explain to me why that particular carry (from the 8's column into the 16's) was dropped, but he kept the one just prior to it?
Thanks!
The last carry was dropped because it does not fit in the target space. It would be the fifth bit.
If he had carried out the same addition, but with for example 8 bit storage, it would have looked like this:
00000100
11111101
--------
00000001
In this situation we would also be stuck with an "unused" carry.
We have to treat carries this way to make addition with two's complement work properly, but that's all good, because this is the easiest way of treating carries when you have limited storage. Anyway, we get the correct result, right :)
x86-processors store such an additional carry in the carry flag (CF), which is possible to test with certain instructions.
A carry is not the same as an overflow
In the example you do have a carry out of the MSB. By definition, this carry ends up on the floor. (If there was someplace for it to go, then it would not have been out of the MSB.)
But adding two numbers with different signs cannot overflow. An overflow can only happen when two numbers with the same sign produce a result with a different sign.
If you extend the left-hand side by adding more digit positions, you'll see that the carry rolls over into an infinite number of bit positions towards the left, so you never really get a final carry of 1. So the answer is positive.
 ...000100
+...111101
----------
 ...000001
At some point you have to set the number of bits to represent the numbers. He chose 4 bits. Any carry into the 5th bit is lost. But that's OK because he decided to represent the number in just 4 bits.
If he decided to use 5 bits to represent the numbers he would have gotten the same result.
That's the beauty of it... your result will be the same size as the terms you are adding, so the fifth bit is thrown out.
In 2's complement you use the carry bit to signal if there was an overflow in the last operation.
You must look at the LAST two carry bits to see if there was overflow. In your example, the last two carry bits were 11 meaning that there was no overflow.
If the last two carry bits are 11 or 00 then no overflow occurred. If the last two carry bits are 10 or 01 then there was overflow. That is why he sometimes cared about the carry bit and other times he ignored it.
The first row below is the carry row. The left-most bits in this row are used to determine if there was overflow.
1100
0100
1101
----
0001
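The same test written out in C for 4-bit values held in wider ints (a sketch; checking "the operands share a sign but the result doesn't" is equivalent to comparing the carry into and out of the MSB as described above):

#include <stdio.h>

/* Overflow test for 4-bit two's complement addition. Overflow occurs
   exactly when both operands have the same sign and the result's sign
   differs - equivalently, when the carry into the sign bit differs
   from the carry out of it. */
static int add4_overflows(int a, int b)
{
    int sum = (a + b) & 0xF;        /* keep 4 bits; the carry-out is dropped */
    int sa = a & 0x8, sb = b & 0x8, ss = sum & 0x8;
    return (sa == sb) && (ss != sa);
}

int main(void)
{
    printf("%d\n", add4_overflows(0x4, 0xD)); /* 4 + (-3): no overflow -> 0 */
    printf("%d\n", add4_overflows(0x4, 0x5)); /* 4 + 5 = 9 > 7: overflow -> 1 */
    return 0;
}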
Looks like you're only using 4 bits, so there is no 16's column.
If you were using more than 4 bits then the -3 representation would be different, and the carry of the math would still be thrown out the end. For example, with 6 bits you'd have:
000100
111101
------
1000001
and since the carry is outside the bit range of your representation it's gone, and you only have 000001
Consider 25 + 15:
5+5 = 10, we keep the 0 and let the 1 go to the tens-column. Then it's 2 + 1 (+ 1) = 4. Hence the result is 40 :)
It's the same thing with binaries. 0 + 1 = 1, 0 + 0 = 0, 1 + 1 = 10 => send the 1 to the 8's column, 0 + 1 (+ 1) = 10 => send the 1 to the next column - here's the overflow and why we just throw the 1 away.
This is why 2's complement is so great. It allows you to add / subtract just like you do in base 10, because you (ab)use the fact that the sign bit is the MSB, which will cascade operations all the way to overflow when necessary.
Hope I made myself understood. It's quite hard to explain this when English is not your native tongue :)
When performing 2's complement addition, the only time that a carry indicates a problem is when there's an overflow condition - that can't happen if the 2 operands have a different sign.
If they have the same sign, then the overflow condition is when the sign bit changes from the 2 operands, ie., there's a carry into the most significant bit.
If I remember my computer architecture learnin' this is often detected at the hardware level by a flag that's set when the carry into the most significant bit is different than the carry out of the most significant bit. Which is not the case in your example (there's a carry into the msb as well as out of the msb).
One simple way to think of it is as "the sign not changing". If the carry into the msb is different than the carry out, then the sign has improperly changed.
The carry was dropped because there wasn't anything that could be done with it. If it's important to the result, it means that the operation overflowed the range of values that could be stored in the result. In assembler, there's usually an instruction that can test for the carry beyond the end of the result, and you can explicitly deal with it there - for example, carrying it into the next higher part of a multiple precision value.
Because you are talking about 4-bit representations. It's unusual compared to an actual machine, but if we take for granted for a moment that a computer has 4 bits in each word, then we have the following properties: a signed value runs from -8 to 7 and wraps around beyond that. Anything outside that range cannot be stored. Besides, what would you do with an extra 5th bit beyond the sign bit anyway?
Now, given that, we can see from everyday math that 4 + (-3) = 1, which is exactly what you got.