Binary representation of decimal values such as 0.33

How do I get the binary representation of any decimal value, say for example 0.33, 0.6, or 0.5?
Can someone explain the concept? Once the binary representation of the decimal value is done, I need to understand the floating-point representation.

Here is the solution:
0.33*2=0.66--------0
0.66*2=1.32--------1
0.32*2=0.64--------0
0.64*2=1.28--------1
0.28*2=0.56--------0
0.56*2=1.12--------1
0.12*2=0.24--------0
0.24*2=0.48--------0
0.48*2=0.96--------0
0.96*2=1.92--------1
0.92*2=1.84--------1
.................
This process continues until the fractional part becomes zero, or until you have as many bits as you need (for 0.33 it never terminates, since 0.33 has no finite binary expansion). The integer parts, read from top to bottom, form the digits after the point, e.g. 0.01010100011...
Hope it may be useful to you.
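
For anyone who wants to play with it, here is a minimal C sketch of this repeated-doubling method (frac_to_binary is just an illustrative name, not a library function):

#include <stdio.h>

/* Print the first `bits` binary digits of a fraction 0 <= x < 1,
   using the repeated-doubling method shown above. */
void frac_to_binary(double x, int bits) {
    printf("0.");
    for (int i = 0; i < bits && x > 0.0; i++) {
        x *= 2.0;                                  /* shift next bit left of the point */
        if (x >= 1.0) { putchar('1'); x -= 1.0; }  /* integer part 1: emit 1, drop it */
        else          { putchar('0'); }            /* integer part 0: emit 0 */
    }
    putchar('\n');
}

int main(void) {
    frac_to_binary(0.33, 12);  /* 0.010101000111 (never terminates) */
    frac_to_binary(0.6, 12);   /* 0.100110011001 (repeats 1001) */
    frac_to_binary(0.5, 12);   /* 0.1 (terminates immediately) */
    return 0;
}

Note that x is itself a double here, so for very long outputs the digits eventually reflect the binary rounding of the input literal rather than the exact decimal fraction.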

Related

Encoding Numbers into IEEE754 half precision

I have a quick question about a problem I'm trying to solve. For this problem, I have to convert (0.0A)16 into the IEEE 754 half-precision floating point standard. I converted it to binary (0000.0000 1010), normalized it (1.010 * 2^-5), and encoded the exponent (which came out to be 01010), but now I'm lost on how to put it into the actual form. What should I do with the fractional part? The answer comes out to be 0 01010 01 0000 0000.
I know there's something to do with an omitted leading 1, but I'm not entirely sure where that happens either.
Any help is appreciated!
The 1 you have to omit is the first one of the mantissa, since we know the significant part always starts with 1 (this way, IEEE-754 gains one bit of space). The mantissa is 1.010, so you will represent only "010".
The solution 0 01010 0100000000 means:
0 is the sign;
01010 is the exponent;
0100000000 is the ten-bit mantissa, omitting the leading 1.
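
If it helps to see the packing mechanically, here is a small C sketch (half_pack is a hypothetical helper, not a standard function) that assembles the three fields into the 16-bit pattern:

#include <stdio.h>
#include <stdint.h>

/* Assemble a half-precision bit pattern from its three fields:
   sign (1 bit), biased exponent (5 bits), mantissa (10 bits). */
uint16_t half_pack(unsigned sign, unsigned exponent, unsigned mantissa) {
    return (uint16_t)((sign << 15) | ((exponent & 0x1F) << 10) | (mantissa & 0x3FF));
}

int main(void) {
    /* (0.0A)16 = 1.010 * 2^-5: sign 0, biased exponent -5 + 15 = 10 (01010),
       mantissa "010" padded to ten bits = 0100000000 = 0x100. */
    uint16_t h = half_pack(0, 10, 0x100);
    printf("0x%04X\n", h);  /* prints 0x2900, i.e. 0 01010 0100000000 */
    return 0;
}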

Explain why there's a different result when converting a binary number to decimal, and when converting its two's complement to decimal

For example, we're given the number -1.5(10).
Converting it to signed binary we get 11.1000(2).
Its two's complement is 00.1000(2), which is 0.5(10) when converted to decimal.
Which is self-explanatory, because it's a different binary number.
What else is there to explain?
You are mixing apples (signed-binary) and oranges (two's complement).
You took a negative value in one representation (signed-binary), negated it using the technique for a different representation (2's complement), and (unsurprisingly) ended up with trash as a result.
If you had negated 11.1000(2) as appropriate for signed-binary, you'd end up with 01.1000(2) -- the correct answer.
If you had started with the 2's complement representation of -1.5, 10.1000(2), and took the 2's complement of that, you'd end up with 01.1000(2) -- also correct.
Note that none of this involves converting anything to decimal.
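
To make the distinction concrete, here is a small C sketch of all three negations, treating the patterns as 6-bit fixed-point values with 4 fraction bits (the hex constants are just the bit patterns from above):

#include <stdio.h>

int main(void) {
    const unsigned MASK = 0x3F;          /* keep 6 bits (xx.xxxx) */
    unsigned sign_mag = 0x38;            /* 11.1000: -1.5 in signed-binary */
    unsigned twos     = 0x28;            /* 10.1000: -1.5 in two's complement */

    /* Negating signed-binary: flip the sign bit. */
    unsigned neg_sm = sign_mag ^ 0x20;           /* 01.1000 */

    /* Negating two's complement: invert all bits, add 1. */
    unsigned neg_tc = (~twos + 1) & MASK;        /* 01.1000 */

    /* The mix-up from the question: applying the two's-complement
       rule to a signed-binary pattern. */
    unsigned mixed = (~sign_mag + 1) & MASK;     /* 00.1000 */

    printf("%.4f\n", neg_sm / 16.0);   /* 1.5000 */
    printf("%.4f\n", neg_tc / 16.0);   /* 1.5000 */
    printf("%.4f\n", mixed / 16.0);    /* 0.5000 -- the "trash" result */
    return 0;
}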

MySQL JSON stores a different floating point value

How are floating point values in JSON data columns rounded in MySQL (5.7)?
I am having trouble finding a good resource to know how to solve my issue.
Here's what happens:
CREATE TABLE someTable (jdoc JSON);
INSERT INTO someTable VALUES('{"data":14970.911769838869}');
Then select the rows:
SELECT * from someTable;
I get data back with a different final digit:
'{"data": 14970.911769838867}'
Any idea why this happens? Can I adjust the data in a way to prevent this or is there a rounding precision issue?
Double precision floating point has about 16 decimal digits of precision. Your number has 17 digits, so it can't be represented exactly in floating point, and you get round-off error in the last digit.
See How many significant digits have floats and doubles in java?
The question is about Java, but just about everything uses the same IEEE 754 floating point format, so the answer applies pretty generally.
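You can reproduce the effect outside MySQL; here is a minimal C check (this assumes MySQL stores JSON numbers as IEEE 754 doubles, which the answer above implies):

#include <stdio.h>

int main(void) {
    /* The literal from the INSERT statement. */
    double d = 14970.911769838869;
    /* 17 significant digits show the nearest representable double. */
    printf("%.17g\n", d);  /* prints 14970.911769838867 */
    return 0;
}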

MySQL: converting a float to decimal produces more decimal digits than the value stored in the backup .sql file

I want to understand this:
I have a dump of a table (an SQL script file) from a database that uses FLOAT(9,2) as the default type for numbers.
In the backup file I have a value like '4172.08'.
I restore this file into a new database and convert the float to DECIMAL(20,5).
Now the value in the field is 4172.08008.
...where does the 008 come from??
Thanks to all!
Where does the 008 come from?
Short answer:
In order to avoid the float's inherent precision error, cast first to DECIMAL(9,2), then to DECIMAL(20,5).
Long answer:
Floating point numbers are prone to rounding errors in digital computers. It is a little hard to explain without throwing up a lot of math, but let's try: the same way 1/3 represented in decimal requires an infinite number of digits (it is 0.3333333...), some numbers that are "round" in decimal notation have an infinite number of digits in binary. Because this format is stored in binary and has finite precision, there is an implicit rounding error, and you may experience funny things like getting 0.30000000000000004 as the result of 0.1 + 0.2.
This is the difference between float and decimal. Float is a binary type, and can't represent that value exactly. So when you convert it to decimal (as expected, a base-ten type), it's not exactly the original value.
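Here is a short C illustration of the same effect, using float to stand in for MySQL's FLOAT and printing with five decimals the way DECIMAL(20,5) would:

#include <stdio.h>

int main(void) {
    float f = 4172.08f;          /* nearest float is 4172.080078125 */
    printf("%.2f\n", f);         /* 4172.08    -- rounds back, looks exact */
    printf("%.5f\n", f);         /* 4172.08008 -- the extra digits appear */
    printf("%.5f\n", 4172.08);   /* 4172.08000 -- double is close enough here */
    return 0;
}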
See http://floating-point-gui.de/ for some more information.

Floating point hex, octal, binary

I am working on a calculator that allows you to perform calculations past the decimal point in octal, hexadecimal, binary, and of course decimal. However, I am having trouble finding a way to convert floating point decimal numbers to floating point hexadecimal, octal, and binary, and vice versa.
The plan is to do all the math in decimal and then convert the result into the appropriate number system. Any help, ideas or examples would be appreciated.
Thanks!
Hmm... this was a homework assignment in my university's CS "weed-out" course.
The operations for binary are described in Schaum's Outline Series: Essential Computer Mathematics by Seymour Lipschutz. For some reason it is still on my bookshelf 23 years later.
As a hint, convert octal and hex to binary, perform the operations in binary, then convert the result back to octal or hex.
Or you can perform decimal operations and perform the conversions to octal/hex/binary afterward. The process is essentially the same for all positional number systems arithmetic.
What Programming Language are you using?
It is definitely a good idea to change everything into binary, do the math in binary, then convert back. If you multiply an (unsigned) binary number by 2, it is the same as a bit shift left (<< 1 in C); a division by 2 is the same as a bit shift right (>> 1 in C).
Addition and subtraction work the same way you would do them in elementary school.
Also, remember that if you cast a float to an int, it will be truncated: (int)10.5 == 10.
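
Building on that, here is a small C sketch generalizing the repeated-multiplication method from the first answer above to any base (frac_to_base is just an illustrative name):

#include <stdio.h>

/* Print the fractional part of x (0 <= x < 1) in the given base,
   up to `digits` places, by repeated multiplication. */
void frac_to_base(double x, int base, int digits) {
    const char *sym = "0123456789ABCDEF";
    printf("0.");
    for (int i = 0; i < digits && x > 0.0; i++) {
        x *= base;
        int d = (int)x;   /* the cast truncates, as noted above */
        putchar(sym[d]);
        x -= d;
    }
    putchar('\n');
}

int main(void) {
    frac_to_base(0.5, 2, 8);    /* 0.1 */
    frac_to_base(0.33, 8, 6);   /* 0.250753 */
    frac_to_base(0.33, 16, 6);  /* 0.547AE1 */
    return 0;
}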
I had this same problem a few days ago. I found http://www.binaryconvert.com, which allows you to convert between floating point decimal, binary, octal, and hexadecimal in any order you want.