Floating point: hex, octal, binary

I am working on a calculator that lets you perform calculations past the decimal point in octal, hexadecimal, binary, and of course decimal. I am having trouble, though, finding a way to convert floating-point decimal numbers to floating-point hexadecimal, octal, and binary, and vice versa.
The plan is to do all the math in decimal and then convert the result into the appropriate number system. Any help, ideas or examples would be appreciated.
Thanks!

Hmm... this was a homework assignment in my university's CS "weed-out" course.
The operations for binary are described in Schaum's Outline Series: Essential Computer Mathematics by Seymour Lipschutz. For some reason it is still on my bookshelf 23 years later.
As a hint, convert octal and hex to binary, perform the operations, then convert back from binary to octal/hex.
Or you can do the math in decimal and perform the conversions to octal/hex/binary afterward; arithmetic works essentially the same way in every positional number system.
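To illustrate that second approach, here is a minimal Python sketch (the helper name is my own) of converting the fractional part by repeated multiplication, which is the textbook method for any base:

```python
def frac_to_base(frac, base, max_digits=16):
    """Convert a fraction 0 <= frac < 1 to digits in `base` by
    repeatedly multiplying by the base and peeling off the integer part."""
    digits = []
    while frac and len(digits) < max_digits:
        frac *= base
        d = int(frac)          # next digit after the radix point
        digits.append("0123456789abcdef"[d])
        frac -= d
    return "0." + ("".join(digits) or "0")

print(frac_to_base(0.8125, 2))   # 0.1101
print(frac_to_base(0.8125, 8))   # 0.64
print(frac_to_base(0.8125, 16))  # 0.d
```

Going the other way (e.g. binary back to decimal) is just summing digit times base to the negative position.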

What Programming Language are you using?
It is definitely a good idea to convert everything into binary, do the math on the binary, then convert back. Multiplying an (unsigned) binary number by 2 is the same as a bit shift left (<< 1 in C), and dividing by 2 is the same as a bit shift right (>> 1 in C).
Addition and subtraction work the same way you learned in elementary school.
Also, remember that casting a float to an int truncates it: (int)10.5 == 10.
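Those identities are easy to sanity-check; a quick sketch in Python (the poster's language isn't stated, so this is just illustrative):

```python
x = 10
assert x << 1 == x * 2    # shift left by 1 == multiply by 2
assert x >> 1 == x // 2   # shift right by 1 == floor-divide by 2
assert int(10.5) == 10    # float -> int conversion truncates
```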

I had this same problem a few days ago. I found http://www.binaryconvert.com, which allows you to convert between floating point decimal, binary, octal, and hexadecimal in any order you want.

Related

What is the precision of the MySQL RAND() function? And how to generate huge random numbers?

What is the precision of the MySQL RAND() function?
I can't find it on the official page: the MySQL RAND() function is documented to return a floating-point number, but unfortunately its precision is not stated clearly. It could be single-precision floating-point data, double-precision, or some other kind entirely.
What I would like to know exactly is: what is the maximum integer range [0, N] in which I can generate random integers with FLOOR(RAND()*N) such that there are no "skips" and every number from 0 to N can be generated?
Another thing which I would like to know:
How do I generate numbers bigger than N in MySQL?
As written in the MySQL docs, the precision is system dependent, so there is no single answer to your question.
https://dev.mysql.com/doc/internals/en/floating-point-types.html
Since MySQL uses the machine-dependent binary representation of float and double to store values in the database, we have to care about these. Today, most systems use the IEEE 754 standard for binary floating-point arithmetic. It describes a representation for single precision numbers as 1 bit for sign, 8 bits for biased exponent, and 23 bits for fraction, and for double precision numbers as a 1-bit sign, 11-bit biased exponent, and 52-bit fraction. However, we cannot rely on every system using this representation. Luckily, the ISO C standard requires the standard C library to have a header, float.h, that describes some details of the floating point representation on a machine; the page linked above describes the value DBL_DIG, and there is an equivalent value FLT_DIG for the C data type float.
In the end, though, I have no clue why the precision of a random number would matter; I cannot see a use case.
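One concrete bound worth knowing, assuming RAND() yields an IEEE 754 double (the usual case): with a 53-bit significand, every integer up to 2^53 is exactly representable, so FLOOR(RAND()*N) necessarily skips values once N exceeds that; below it, coverage depends on how RAND() fills the fraction bits. A Python sketch of that boundary:

```python
import sys

# Details of the platform's double, analogous to C's float.h values.
print(sys.float_info.mant_dig)  # 53 significand bits on IEEE 754 systems
print(sys.float_info.dig)       # ~15, the DBL_DIG-style decimal precision

# 2**53 is where consecutive integers stop being distinct doubles:
print(float(2**53) == float(2**53 + 1))  # True: 2**53 + 1 rounds to 2**53
print(float(2**53 - 1) == float(2**53))  # False: still distinct below it
```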

How to perform Arithmetic on Ones Complement Numbers and correct overflow?

For some backstory, I'm making a program that can do arithmetic on ones' complement numbers. To do this I convert a binary string into a BigInteger, perform the math on those BigIntegers, and then convert the result back into a binary string. The only problem occurs when the result goes below -127 or above +127, because I don't know how to correct it due to the nature of ones' complement numbers. I was hoping I could somehow convert them like unsigned numbers instead and do what this answer says.
There are also a couple of other things I got from reading the linked question; I've put them in block quotes. I'm just asking for an explanation of what they mean.
Firstly
I know that the (r-1)'s complement for a base-r number should do an end-around carry if the highest bit has a carry.
Secondly
End-around carry is actually rather simple: it changes the modulus of the addition operation from r^n to r^n - 1.
And lastly
Again, let's keep the carry bit where it is. If you look at the numbers as unsigned integers, we're computing 13 + 11 = 24. However, due to the wrap-around carry, addition is done modulo 15, so we end up with 9, which represents -6 (the correct result).
If someone can explain these quotes to me and provide some web pages for me to read I would greatly appreciate it! :)
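The quoted mechanics are easier to see in code. Here is a minimal Python sketch of 8-bit ones' complement addition with end-around carry (helper names are mine), matching the modulo 2^n - 1 description above:

```python
BITS = 8
MASK = (1 << BITS) - 1   # 255; ones' complement addition works modulo 2**n - 1

def ones_comp_add(a, b):
    """Add two n-bit ones' complement patterns with end-around carry."""
    s = a + b
    if s > MASK:               # carry out of the highest bit...
        s = (s & MASK) + 1     # ...wraps around and is added back in
    return s & MASK

def to_signed(x):
    """Interpret an n-bit ones' complement pattern as a signed value."""
    return x - MASK if x & (1 << (BITS - 1)) else x

# -2 is 11111101, -4 is 11111011; their sum should be -6 (11111001)
print(to_signed(ones_comp_add(0b11111101, 0b11111011)))  # -6
```

The 4-bit example in the last quote is the same mechanism: 1101 + 1011 = 11000, and wrapping the carry gives 1000 + 1 = 1001, i.e. 9, which is -6 in 4-bit ones' complement.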

How do I round this binary number to the nearest even?

I have this binary representation of 0.1:
0.00011001100110011001100110011001100110011001100110011001100110
I need to round it to the nearest even to be able to store it in the double precision floating point. I can't seem to understand how to do that. Most tutorials talk about guard, round and sticky bits - where are they in this representation?
Also I've found the following explanation:
Let’s see what 0.1 looks like in double-precision. First, let’s write
it in binary, truncated to 57 significant bits:
0.000110011001100110011001100110011001100110011001100110011001…
Bits 54 and beyond total to greater than half the value of bit
position 53, so this rounds up to
0.0001100110011001100110011001100110011001100110011001101
This one doesn't talk about GRS bits, why? Aren't they always required?
The text you quote is from my article Why 0.1 Does Not Exist In Floating-Point. In that article I am showing how to do the conversion by hand, and the "GRS" bits are an IEEE implementation detail. Even if you are using a computer to do the conversion, you don't have to use IEEE arithmetic (and you shouldn't if you want to do it correctly), so the GRS bits won't come into play there either. In any case, the GRS bits apply to calculations, not really to the conceptual idea of conversion.
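To make the quoted steps concrete, here is a hand-rolled Python sketch (my own helper, not IEEE machinery) that rounds a binary fraction to a given number of significant bits with round-half-to-even; it ignores the carry-out edge case where all kept bits are 1:

```python
def round_frac_bits(bits, sig):
    """Round a binary fraction (the digits after '0.') to `sig` significant
    bits, round-half-to-even. Significant bits start at the first 1."""
    first = bits.index("1")
    keep, rest = bits[:first + sig], bits[first + sig:]
    kept = int(keep, 2)
    if rest and rest[0] == "1":
        if set(rest[1:]) <= {"0"}:   # exactly halfway between neighbors
            kept += kept & 1         # ties round to the even neighbor
        else:                        # more than halfway: round up
            kept += 1
    return format(kept, f"0{len(keep)}b")

bits = "0001" + "1001" * 14          # 0.1 in binary, 57 significant bits
print("0." + round_frac_bits(bits, 53))
# 0.00011001100110011001100110011001100110011001100110011010
# (the quoted rounded result, written here with its trailing zero)
```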

Huffman Coding: handling negative ambiguity with zero

I've written a simple text file compressor that uses Huffman coding. I encode the text and write the binary resulting from Huffman to a file. To decode, I read in the binary and step through the Huffman tree.
That part is straightforward. The problem arises with 0 and negative numbers. For practice/fun/learning, I decided to write my own binary conversion methods (from a Java byte to a string and vice versa), and I decided to represent negative numbers by flipping the last bit to a 1.
E.g., -2 = 00000101; 2 = 00000100 (the extra 0's are padding, since even the unnecessary 0's matter in Huffman... it's irrelevant here, though).
However, 0 = 00000000 = 00000001
This may not seem like a problem, but those two binary strings map to two different characters in the huffman tree.
Is there a better way handle negatives in binary that will get around this?
I'm not sure this will help you, but I will try.
First of all, there are different kinds of binary representation. Pure binary doesn't allow negatives; it starts at 0 and counts up.
You can use sign-and-magnitude, another kind of binary, which allows negative numbers; the - or + sign is represented by the most significant bit of the number. For example, with 4 bits:
0100 = 2
1100 = -2
(1 bit for the sign, the most significant, leftmost one, and the other 3 for the number)
You can also use two's complement, but it's harder: you need to get the number in binary and then translate it into that representation.
I hope I could help you, and sorry for the mistakes in my English!
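To add a sketch to that last suggestion: two's complement has a single representation for zero, so the 00000000 / 00000001 ambiguity above can't arise. A minimal Python version (8-bit width assumed, helper names are mine):

```python
BITS = 8

def encode(n):
    """Signed integer -> 8-bit two's complement bit string."""
    return format(n & ((1 << BITS) - 1), f"0{BITS}b")

def decode(s):
    """8-bit two's complement bit string -> signed integer."""
    n = int(s, 2)
    return n - (1 << BITS) if s[0] == "1" else n

print(encode(2))           # 00000010
print(encode(-2))          # 11111110
print(encode(0))           # 00000000  (the only zero)
print(decode("11111110"))  # -2
```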

Help Understanding 8bit Floating Point Conversions with Decimals and Binary

I'm in a basic engineering class and we're going through binary conversions. I can figure out the base-10 to binary or hex conversions really well; however, the 8-bit floating point conversions are kicking my ass, and I can't find anything online that breaks it down at a n00b level and shows the steps. Have any gurus found anything online that would be helpful for this situation?
I have questions like 00101010(8bfp) = what number in base 10
Whenever I want to remember how floating point works, I refer back to the wikipedia page on 32 bit floats. I think it lays out the concepts pretty well.
http://en.wikipedia.org/wiki/Single_precision_floating-point_format
Note that Wikipedia doesn't know what 8-bit floats are; I think your professor may have invented them ;)
Binary floating point formats are usually broken down into 3 fields: sign bit, exponent, and mantissa.
The sign bit is simply set to 1 if the entire number should be negative, and 0 if the number is positive.
The exponent is usually an unsigned int with an offset, where 2 to the 0th power (1) is in the middle of the range. It's simpler in hardware and software to compare sizes this way.
The mantissa works similarly to the mantissa in regular scientific notation, with the following caveat: the most significant bit is hidden. This is due to the requirement of normalizing scientific notation to have one significant digit above the decimal point. Remember when your math teacher in elementary school would whack your knuckles with a ruler for writing 35.648 x 10^6 or 0.35648 x 10^8 instead of the correct 3.5648 x 10^7? Since binary only has two states, the required digit above the decimal point is always one, and eliminating it allows another bit of accuracy at the low end of the mantissa.
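Putting that together for the asker's example: assuming a 1-bit sign, 4-bit exponent with bias 7, and 3-bit mantissa (a common classroom layout; the course's actual format may differ), a small decoder in Python:

```python
def decode_8bfp(bits, exp_bits=4, bias=7):
    """Decode an 8-bit float laid out as sign | exponent | mantissa.
    Assumes a normalized value with a hidden leading 1 (no specials)."""
    man_bits = 7 - exp_bits
    sign = -1 if bits[0] == "1" else 1
    exponent = int(bits[1:1 + exp_bits], 2) - bias
    mantissa = 1 + int(bits[1 + exp_bits:], 2) / (1 << man_bits)  # hidden 1
    return sign * mantissa * 2.0 ** exponent

# 0 0101 010 -> sign +, exponent 5 - 7 = -2, mantissa 1.010 (1.25)
print(decode_8bfp("00101010"))   # 0.3125, under these layout assumptions
```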