How do I write 0xFA in signed mantissa. I converted it to binary = 1111_1010. Not sure where to go from here.
The question is "If the register file has 8 bits width total, write the following in signed mantissa."
Also, an explanation of signed mantissa would be great!
So what you have to work with is a byte of data with an unknown type, apparently.
In order to write a number in signed mantissa (see Significand) one would expect that you're dealing with a floating point type such as single or double. However, you've only got a single byte.
A single is 4 bytes, so surely it can't be that, and a double is double trouble at 8 bytes. Even a half requires 16 bits. The only logical alternative type would be SByte, but in that case you will never get any numbers that have any mantissa (significant digits) after the decimal point. In fact, there is no decimal point. So perhaps this is a trick question?
If you go on the assumption of SByte, you get -6 × 10^0.
Just in case you want proof, or if you're curious how this looks during debugging:
private void SByte2Dec()
{
    // Parse the hex string as a signed byte: 0xFA is -6 in two's complement.
    sbyte convertsHexToSByte = Convert.ToSByte("0xFA", 16);
    // Widen to a float; the value is still -6.
    Single yourAnswer = Convert.ToSingle(convertsHexToSByte);
    label1.Text = Convert.ToString(yourAnswer);
}
In this example I had a Windows Form with nothing but label1 on it.
Then I put SByte2Dec(); right under InitializeComponent();
The solution is -122. Not sure how to get there...any ideas?
Working backwards from the answer, it's simple to see what your professor has done. He is assuming the MSB is the sign bit and treating the rest as a 7-bit magnitude: 0xFA is 1111 1010, the sign bit is 1, and the remaining 111 1010 is 122, giving -122. There is a precedent for this, called "signed magnitude representation," but it's not used in modern computing. These days pretty much everyone is using two's complement.
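If you want to see both readings side by side, here's a minimal C# sketch (the variable names are mine) that decodes the byte as signed magnitude and as two's complement:

using System;

byte raw = 0xFA;                                   // 1111 1010
int magnitude = raw & 0x7F;                        // low 7 bits: 111 1010 = 122
int signMagnitude = (raw & 0x80) != 0 ? -magnitude : magnitude;
Console.WriteLine(signMagnitude);                  // -122 (your professor's reading)
Console.WriteLine(unchecked((sbyte)raw));          // -6 (two's complement reading)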
I take it this is a beginners' course, and rather than go through all the trouble of explaining two's complement and data types, your professor is mainly trying to drive home the point of the MSB being a sign bit. If you got the whole sign-bit thing and don't know anything else about the way modern computer hardware performs calculations, then you would probably arrive at the same answer.
My guess is that your professor also worded the question in a strange way so as to throw you off the path if you tried to Google the answer. If you want to get him back, ask him what the difference between "1000 0000" and "0000 0000" is. Also, if you or anyone else in the class answered -6 and he counted it wrong, he should be fired. Those students should be awarded bonus points for teaching themselves about two's complement.
Why would the signed mantissa be -6? I see that the two's complement value is -6, but isn't signed mantissa different?
Have you read the wiki article I linked to on "Significand"?
The important thing to realize is that "signed mantissa" is not a data type. However, there are (were?) many machine-specific data types that implemented their own ways of storing floating point numbers before the IEEE standard became widely adopted. These early data types were often referred to as DFP, or decimal floating point numbers, as opposed to binary floating point. Read this paper for a more in-depth understanding; this paper also covers the topic quite well.
As I stated earlier, your professor most likely used the terminology "signed mantissa" to throw you off if you went searching the internet for the answer. Apparently you were expected to read between the lines and know that what he was really asking for was a form of decimal floating point, or Signed Magnitude Representation.
"Signed Mantissa" ≠ Two's Compliment
"Signed Mantissa" is to be interpreted as some form of Decimal Floating Point
Where as, Two's Compliment is a form of Binary Floating Point
For some backstory, I'm making a program that can do arithmetic on ones' complement numbers. To do this I'm converting a binary string into a BigInteger, performing the math on the BigIntegers, and then converting the result back into a binary string. The only problem occurs when the end result goes below -127 or above +127, because I don't know how to correct it due to the nature of ones' complement numbers. I was hoping I could instead convert them like unsigned numbers and do what this answer says to do.
There are also a couple of other questions that I got from reading the linked question. I put them in block quotes below; I'm just asking for an explanation of what they mean.
Firstly
I know that the (r-1)'s complement for a base-r number should do an end-around carry if the highest bit has a carry out.
Secondly
End-around carry is actually rather simple: it changes the modulus of the addition operation from r^n to r^n - 1.
And lastly
Again, let's keep the carry bit where it is. If you look at the numbers as unsigned integers, we're computing 13 + 11 = 24. However, due to the wrap-around carry, addition is done modulo 15, so we end up with 9, which represents -6 (the correct result).
If someone can explain these quotes to me and provide some web pages for me to read I would greatly appreciate it! :)
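For concreteness, the arithmetic in that last quote can be sketched in a few lines of C# (the helper name is mine): 4-bit ones' complement addition with the end-around carry, which is the same as working modulo 2^4 - 1 = 15.

using System;

// Add two 4-bit ones' complement values with an end-around carry.
static int AddOnesComplement4(int a, int b)
{
    int sum = a + b;
    if (sum > 0xF)                 // carry out of the top bit?
        sum = (sum & 0xF) + 1;     // wrap the carry back around
    return sum & 0xF;
}

Console.WriteLine(AddOnesComplement4(0b1101, 0b1011)); // 13 + 11 -> 9
Console.WriteLine((13 + 11) % 15);                     // also 9: same modulus
// In 4-bit ones' complement, 9 (1001) represents -6, the correct result.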
I already know the concept of negative binary numbers: a 0 in the most significant bit means the number is positive, and a 1 in the most significant bit means it is negative.
But the problem that drove me to ask a question on Stack Overflow is this:
What about the times when we want to represent a huge number whose representation happens to have a 1 in the MSB?
Let me explain it this way: considering the above rule for making negative counterparts of our binary numbers, we could say that
in an 8-bit system, for example, a value of positive 12 (decimal) would be written as 00001100 in binary, but negative 12 (decimal) would be written as 10001100. What confuses me a bit is that 10001100 could also be interpreted as 140 in decimal, while we know that it's the negative form of 12 in binary using this method of conversion.
I just want to know how to deal with this tricky, two-faced way of interpreting a binary number, just like the example I gave above (it seems to be negative... oh, but wait, it might not be!).
It depends on the type you use. If you're using an 8-bit representation which is signed, then the largest number you can store is 0111 1111 (i.e. the first bit is set aside as the sign).
In our example, that would convert to an integer value of 127.
If you use an unsigned type, then you have the extra bit available, allowing you to store 11111111, or 255 expressed as an integer.
A strongly typed language with a good compiler should catch you trying to assign, say, 134 to a signed 8 bit integer and vomit errors all over you.
If you're doing something strange fiddling around with bits yourself, you're on your own! There's no way of reconstructing, post hoc, whether it was intended to be a negative or a large positive, so you'll have to choose a system and stick with it.
The general convention nowadays is to stick with signed representations, although I have seen code (usually for extreme compute tasks like astrophysical calculations) use unsigned values simply to save memory. And of course image data usually uses unsigned values by convention.
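To make the "same bits, different readings" point concrete, here's a quick C# sketch using the 1000 1100 pattern from the question:

using System;

byte bits = 0b1000_1100;                     // 140 when read as unsigned
Console.WriteLine(bits);                     // 140
Console.WriteLine(unchecked((sbyte)bits));   // -116 when read as two's complement
// Signed magnitude reading: sign bit set, magnitude 000 1100 = 12, so -12.
int signMag = (bits & 0x80) != 0 ? -(bits & 0x7F) : bits;
Console.WriteLine(signMag);                  // -12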
In two's-complement notation, there's always an odd-man-out value to compensate for the 0/origin value that is conceptually neither positive nor negative. We treat 0 as positive for the sake of pragmatism, and we treat its counterpart, which is a 1 in the top bit and 0 in the rest, as negative, but conceptually, they are both special values that have no sign, because in both cases, -v==v.
For instance, in a signed 32-bit value, this number might be represented in any of these forms:
0b10000000000000000000000000000000
0x80000000
-2147483648
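A quick C# illustration of the -v == v property (negation wraps around in an unchecked context):

using System;

int v = int.MinValue;                  // 0x80000000, i.e. -2147483648
Console.WriteLine($"{v:X8}");          // 80000000
Console.WriteLine(unchecked(-v) == v); // True: negating it gives it back, just like 0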
I've personally been using my own term for this odd value for a while, which I will share below as my own answer, and let you all decide whether it's worthy, but I wouldn't be surprised if there's already an accepted name for it.
I leave the rest to you...
Edit: On further research, I did find some sites claiming that "it is sometimes called the weird number," but these blurbs are consistently copied verbatim from a Wikipedia entry on two's complement notation. That entry itself references only a 2006 college research paper that's unavailable at the given location (I found it here), and there the term is only used in passing. Wikipedia also references a single book, but that book's usage appears to be based on the text of the Wikipedia entry, which existed before the book was written. I'm not convinced that anyone other than one University of Tokyo student ever called it "the weird number" in practice.
Depending on context, I might refer to it neutrally as the dead value or, if I'm feeling like anthropomorphizing it, I call it Death. I think of that lone top bit as a scythe of sorts.
I call it this for two reasons:
On the ring that is two's-complement notation, its counterpart is 0, which we commonly refer to as the origin. One antonym for origin is death.
This particular value, being ambiguous as it is, tends to catch out a lot of programmers. It is literally the death of a lot of unsuspecting algorithms.
When writing terse assembly, I tend to abbreviate it as just "D". For instance, if I had a condition that was satisfied by all values greater than zero, and Death, I might call the flag "GZD".
I simply call that the minimum integer or minimum value, since that is indeed what it is in two's complement.
It's also described that way in the C standard limits.h (and C++ equivalent) header, such as with SCHAR_MIN, INT_MIN, LONG_MIN and so on.
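For reference, the C# analogues are the MinValue constants on the integral types:

using System;

Console.WriteLine(sbyte.MinValue); // -128
Console.WriteLine(short.MinValue); // -32768
Console.WriteLine(int.MinValue);   // -2147483648
Console.WriteLine(long.MinValue);  // -9223372036854775808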
I have been designing a Delta-Sigma DAC and have run into confusion and despair over the handling of signed numbers in my (sigma)counters and (delta and Vref) comparators.
I have tried to employ signed two's complement, but the EDA compiler doesn't seem to notice when I do it; it's most likely my own mistake!
So basically my question is, how (in Verilog) do I represent negative numbers in a way that they can be used in counters (which can therefore count up and down)? I am aware that a counter register that will hold signed numbers must be declared reg signed [:0]
Thanks!
Gavin
Well, I am not that clear on your question. Some compilers may not be able to handle signed registers directly, since, if I recall correctly, they are a Verilog 2001 feature. But generally, if you use digital logic that works with signed numbers, you shouldn't have an issue. For example, if you use an adder IP, just specify that the inputs are signed numbers. As for the simulator, you can select the type of data you need, generally by selecting the register/value, right-clicking on the waveform window, and changing the type.
Finally, if you have to create the logic yourself, just use sign extension. So let's say you are working with 4-bit values extended to 8 bits:
-5 would be 11111011 and 5 would be 00000101. You can see that for negative numbers the MSB is 1 and for positive numbers it's 0. Using this you can interpret the numbers in your code; just make sure that the size is bigger than what you want to use so no overflow occurs.
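The Verilog declaration details depend on your toolchain, but the sign-extension arithmetic itself is easy to check. Here's an illustrative C# sketch (the helper name is mine):

using System;

// Sign-extend a 4-bit value to 8 bits by replicating the MSB upward.
static int SignExtend4To8(int nibble)
{
    return (nibble & 0x8) != 0 ? (nibble | 0xF0) & 0xFF : nibble & 0x0F;
}

Console.WriteLine(Convert.ToString(SignExtend4To8(0b1011), 2).PadLeft(8, '0')); // 11111011 (-5)
Console.WriteLine(Convert.ToString(SignExtend4To8(0b0101), 2).PadLeft(8, '0')); // 00000101 (+5)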
I'm in a basic engineering class and we're going through binary conversions. I can figure out the base-10 to binary or hex conversions really well; however, the 8-bit floating point conversions are kicking my ass, and I can't find anything online that breaks it down at a n00b level and shows the steps. Wondering if any gurus have found anything online that would be helpful for this situation.
I have questions like 00101010(8bfp) = what number in base 10
Whenever I want to remember how floating point works, I refer back to the wikipedia page on 32 bit floats. I think it lays out the concepts pretty well.
http://en.wikipedia.org/wiki/Single_precision_floating-point_format
Note that wikipedia doesn't know what 8 bit floats are, I think your professor may have invented them ;)
Binary floating point formats are usually broken down into three fields: sign bit, exponent, and mantissa.
The sign bit is simply set to 1 if the entire number should be negative, and 0 if the number is positive.
The exponent is usually an unsigned int with an offset, where 2 to the 0th power (1) is in the middle of the range. It's simpler in hardware and software to compare sizes this way.
The mantissa works similarly to the mantissa in regular scientific notation, with the following caveat: the most significant bit is hidden. This is due to the requirement of normalizing scientific notation to have one significant digit above the decimal point. Remember when your math teacher in elementary school would whack your knuckles with a ruler for writing 35.648 x 10^6 or 0.35648 x 10^8 instead of the correct 3.5648 x 10^7? Since binary only has two states, the required digit above the decimal point is always a one, and eliminating it allows another bit of accuracy at the low end of the mantissa.
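Since 8-bit floats aren't standardized, here's a decoding sketch under an assumed layout: 1 sign bit, a 4-bit exponent with a bias of 7, and a 3-bit mantissa with a hidden leading 1. Your class's format may split the bits or choose the bias differently, so treat the widths below as placeholders:

using System;

// Decode an 8-bit float with an ASSUMED [sign:1][exponent:4, bias 7][mantissa:3] layout.
// Ignores special cases (zero, denormals, infinity) for simplicity.
static double Decode8BitFloat(byte bits)
{
    int sign     = (bits >> 7) & 0x1;
    int exponent = (bits >> 3) & 0xF;
    int mantissa = bits & 0x7;
    double significand = 1.0 + mantissa / 8.0;       // hidden bit plus 3 fraction bits
    double value = significand * Math.Pow(2, exponent - 7);
    return sign == 1 ? -value : value;
}

// 0010 1010: sign 0, exponent 0101 (5 - 7 = -2), mantissa 010 (1 + 2/8 = 1.25)
Console.WriteLine(Decode8BitFloat(0b0010_1010));     // 1.25 * 2^-2 = 0.3125

Under that assumed layout, 00101010 comes out to 0.3125; if your course uses different field widths, only the shift amounts and the bias change.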