Bit pattern 1: 10010
Bit pattern 2: 11001
Can anyone show me or explain to me how to solve this 5-bit addition?
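For reference, binary addition works column by column with a carry, exactly like decimal addition. Here is a sketch in Python of adding the two patterns above (assuming they are unsigned 5-bit values):

```python
a = 0b10010   # bit pattern 1
b = 0b11001   # bit pattern 2

# Ripple-carry addition, one bit at a time, as you would do on paper.
carry = 0
result_bits = []
for i in range(5):
    bit_a = (a >> i) & 1
    bit_b = (b >> i) & 1
    total = bit_a + bit_b + carry
    result_bits.append(str(total & 1))   # the sum bit for this column
    carry = total >> 1                   # the carry into the next column

print(''.join(reversed(result_bits)), 'carry out:', carry)  # 01011 carry out: 1
```

Note the carry out of the top bit: 18 + 25 = 43 does not fit in 5 bits, so whether the answer is 01011 (with overflow) or 101011 depends on whether a sixth bit is available.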
Consider the following unsigned integer, represented in binary using 16 bits:
abcdefghijklmnop
where each letter represents one bit.
Write a single expression for each of the following integers:
a) ghij111nop00abcd
b) mnl1cde01hijlccb
How in the world can I solve this?
I know you won't solve it for me, but can you please tell me how to tackle it and how to think about it?
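Not a full solution, but the usual tools are shift (>>, <<), mask (&), and OR (|): pull each field out of its old position, move it to its new position, and OR in any constant bits. As a sketch for part (a) in Python (the shift amounts and masks here follow from counting bit positions with a = bit 15 and p = bit 0 — check them yourself):

```python
x = 0b1010011001011001   # abcdefghijklmnop with a=1, b=0, c=1, d=0, ...

# Part (a): build ghij 111 nop 00 abcd by extracting each field,
# shifting it to its new position, and OR-ing in the constant bits.
result = (
      (((x >> 6) & 0xF) << 12)   # ghij -> bits 15..12
    | 0b0000111000000000         # the literal 111 -> bits 11..9
    | ((x & 0x7) << 6)           # nop  -> bits 8..6
    | ((x >> 12) & 0xF)          # abcd -> bits 3..0 (00 at bits 5..4 comes free)
)
print(format(result, '016b'))    # 1001111001001010 = ghij 111 nop 00 abcd
```

Part (b) works the same way, just with more (and smaller) fields.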
For some backstory: I'm writing a program that does arithmetic on ones' complement numbers. To do this I convert a binary string into a BigInteger, perform the math on the BigIntegers, and then convert the result back into a binary string. The only problem occurs when the result goes below -127 or above +127, because I don't know how to correct it, due to the nature of ones' complement numbers. I was hoping I could instead convert them like unsigned numbers and do what this answer says to do.
There are also a couple of other questions that came up while reading the linked question. I put them in block quotes below; I'm just asking what they mean and for an explanation.
Firstly
I know that the (r−1)'s complement for a base-r number should do an end-around carry if the highest bit produces a carry.
Secondly
End-around carry is actually rather simple: it changes the modulus of the addition operation from r^n to r^n − 1.
And lastly
Again, let's keep the carry bit where it is. If you look at the numbers as unsigned integers, we're computing 13 + 11 = 24. However, due to the wrap-around carry, addition is done modulo 15, so we end up with 9, which represents -6 (the correct result).
If someone can explain these quotes to me and provide some web pages for me to read, I would greatly appreciate it! :)
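The end-around carry described in the quotes can be sketched like this (a minimal sketch, assuming the operands come in as raw unsigned bit patterns; the default of 8 bits matches the ±127 range mentioned above):

```python
def ones_complement_add(a, b, bits=8):
    """Add two ones' complement values given as raw unsigned bit patterns."""
    mask = (1 << bits) - 1
    s = a + b
    # End-around carry: if the sum overflows the word, wrap the carry
    # bit back into the low end -- this is what reduces the addition
    # modulo 2**bits - 1 instead of 2**bits.
    if s > mask:
        s = (s & mask) + 1
    return s & mask

# The quoted example: 13 + 11 as 4-bit patterns; modulo 15 this wraps
# to 9, which is the 4-bit ones' complement pattern for -6.
print(ones_complement_add(0b1101, 0b1011, bits=4))  # 9
```

This is why no separate "correction" step is needed: doing the wrap-around on every overflow keeps every intermediate result a valid ones' complement pattern.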
I have a quick question about a problem I'm trying to solve. For this problem, I have to convert (0.0A)16 into the IEEE 754 half-precision floating point format. I converted it to binary (0000.0000 1010), normalized it (1.010 * 2^-5), and encoded the exponent (which came out to be 01010), but now I'm lost on how to put it into the actual form. What should I do with the fractional part? The answer comes out to be 0 01010 01 0000 0000.
I know there's something about an omitted leading 1, but I'm not entirely sure where that happens either.
Any help is appreciated!
The 1 you have to omit is the leading 1 of the mantissa, since the significand always starts with 1 (this way, IEEE 754 gains one bit of precision). The mantissa is 1.010, so you store only "010", padded with zeros.
The solution 0 01010 0100000000 means:
0 is the sign;
01010 is the exponent;
0100000000 is the mantissa, with the leading 1 omitted and zeros padded on the right to fill the 10-bit field.
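This encoding can be checked directly in Python, since the struct module supports the half-precision format code 'e' (Python 3.6+):

```python
import struct

value = 0x0A / 0x100   # (0.0A)_16 = 10/256 = 0.0390625

# Pack as an IEEE 754 half-precision float, then read the raw 16 bits back.
bits, = struct.unpack('<H', struct.pack('<e', value))
print(format(bits, '016b'))  # 0010100100000000 = 0 | 01010 | 0100000000
```

The printed bits split exactly into the sign (0), exponent (01010), and mantissa (0100000000) fields from the solution above.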
I'm studying in French, so I'll translate the terms as best I can; sorry if anything is unclear.
I have to find the coded (CRC) message for this 10-bit message: 0011111111, with x^2 + x as the polynomial divisor.
I don't have much knowledge of binary and CRC yet, but I do know how to calculate one. This one is a bit trickier for me, though, since the polynomial divisor is not a usual one.
There are a lot of examples with divisors such as x^5 + x^4 + 1, but I have yet to find an example with something that resembles this one (x^2 + x).
Here's what I did, but I'm pretty sure it isn't right at all:
001111111100 | 110
  110
------------
000011111100
    110
------------
000000111100
      110
------------
000000001100
        110
------------
000000000000
Do you guys have any idea what I'm doing wrong here?
Thanks a lot!
You can try this: a computation of the CRC with 110 on 0011111111, which shows the result is 2, with a good step-by-step explanation.
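For checking such a division by hand, here is a small mod-2 division sketch (a hypothetical helper, not from either post). Note that for the augmented message it yields a remainder of 00, matching the 0000 at the end of the work above; CRC tools differ in conventions (augmentation, initial values, reflection), which can lead to a different reported result:

```python
def crc_remainder(message, divisor):
    """Mod-2 (XOR) long division on bit strings; returns the remainder
    as a bit string of length len(divisor) - 1."""
    msg = int(message, 2)
    div = int(divisor, 2)
    deg = len(divisor) - 1          # degree of the divisor polynomial
    # Align the divisor under the current leading 1 and XOR it away,
    # until what remains is shorter than the divisor.
    while msg.bit_length() >= len(divisor):
        msg ^= div << (msg.bit_length() - len(divisor))
    return format(msg, '0%db' % deg)

# The 10-bit message with deg(x^2 + x) = 2 zero bits appended:
print(crc_remainder('001111111100', '110'))  # 00
```

The loop is exactly the paper procedure: find the leading 1, line the divisor up under it, XOR, repeat.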
I have been working on these three lab questions for about 5 hours. I'm stuck on the last question.
Consider the following floating point number representation, which
stores a floating point number in 16 bits. You have a sign bit, a
six-bit (excess-32) exponent, and a nine-bit mantissa.
Explain how the 9-bit mantissa might get you into trouble.
Here is the preceding question; I'm not sure if it will help with the analysis.
What is the range of exponents it supports?
000000 to 111111, or 0 to 63, where exponent values less than 32 are
negative and exponent values greater than 32 are positive.
I have a pretty good foundation for floating points and converting between decimals and floating points. Any guidance would be greatly appreciated.
To me, the ratio of mantissa to exponent is a bit off. Even if we assume there is a hidden bit, effectively making this a 10-bit mantissa (with the top bit always set), you can represent roughly ±2^31, but near the top of that range the spacing between representable values is 2^31/2^10 = 2^21 (i.e. steps of 2,097,152).
I'd rather use an 11-bit mantissa and a 5-bit exponent, making the spacing near the top 2^15/2^11 = 2^4, i.e. steps of 16.
So for me the trouble would be that 9+1 bits is simply too imprecise, compared to the relatively large exponent.
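The spacing argument can be checked with a small rounding sketch (a hypothetical helper, assuming a hidden bit so the significand has 10 bits in total, and ignoring the exponent range entirely):

```python
import math

def round_to_significand(x, bits):
    """Round x to `bits` significand bits (hidden bit included) --
    a toy model of the precision limit, ignoring exponent range."""
    if x == 0:
        return 0.0
    e = math.floor(math.log2(abs(x)))       # exponent of the leading bit
    scale = 2.0 ** (e - (bits - 1))          # value of the lowest kept bit
    return round(x / scale) * scale

# With the 9(+1)-bit significand, adding a million to 2**31 is lost entirely:
print(round_to_significand(2**31 + 10**6, 10) == 2**31)  # True
# With an 11(+1)-bit significand near 2**15, the step is only 16:
print(round_to_significand(2**15 + 10, 12) - 2**15)      # 16.0
```

In other words, near the top of the range a whole million simply vanishes into the gap between adjacent representable values.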
My guess is that a nine-bit mantissa simply provides far too little precision, so that any operation, apart from trivial ones, will make the calculation far too inexact to be useful.
I admit this answer is a little far-fetched, but apart from this, I can't see a problem with the representation.