How would I write this in IEEE standards? - binary

I would like to know how to write 5/32 in the IEEE 754 standard. Is there a shortcut for doing the fraction part?
The answer is 0 01111100 01000000000000000000000. But there has to be an easier way to write 5/32 in this format than converting it to binary first.

I found out that you can get the binary digits a lot faster: 5 (base 10) = 101 (binary) and 1/32 = 2^-5, so 5/32 is just 101 × 2^-5 = 1.01 × 2^-3. That normalized form gives the stored exponent (-3 + 127 = 124 = 01111100) and the mantissa bits (01) directly.
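As a sanity check (my addition, not part of the original question), here is a minimal C sketch that prints the actual IEEE 754 fields of a float by copying its bytes into a 32-bit integer:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float f = 5.0f / 32.0f;           /* 0.15625 */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);   /* reinterpret the float's bytes */

    printf("%u ", (unsigned)(bits >> 31));        /* sign bit */
    for (int i = 30; i >= 23; i--)                /* 8 exponent bits */
        putchar('0' + (int)((bits >> i) & 1));
    putchar(' ');
    for (int i = 22; i >= 0; i--)                 /* 23 mantissa bits */
        putchar('0' + (int)((bits >> i) & 1));
    putchar('\n');   /* prints 0 01111100 01000000000000000000000 */
    return 0;
}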

Related

Converting a decimal number x > 1 with n decimals to binary (for example: 70.5)

For homework I am to convert a decimal number to binary. This is usually pretty easy, but I have no idea how to do it with a number like 70.5.
I know that there is the multiplication algorithm for x < 1, but here x > 1. I was thinking about maybe writing 70.5 as a sum of numbers that are < 1, then finding the binary expressions of these and taking the sum. But I'm not sure this is the right approach.
Any ideas?
I found out how to do it! Just find the binary of 70 (1000110) and the binary of .5 (.1) and put them together: 1000110.1 :D
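To put that trick into code, here is a small illustrative C sketch (my addition, not from the thread): the integer part uses repeated division by 2, and the fractional part uses the multiplication algorithm mentioned in the question:

#include <stdio.h>

int main(void) {
    double x = 70.5;
    unsigned long ipart = (unsigned long)x;   /* integer part: 70 */
    double fpart = x - (double)ipart;         /* fractional part: 0.5 */

    /* integer part: repeated division by 2, remainders collected in reverse */
    char buf[64];
    int n = 0;
    do {
        buf[n++] = (char)('0' + ipart % 2);
        ipart /= 2;
    } while (ipart > 0);
    while (n > 0)
        putchar(buf[--n]);

    /* fractional part: repeated multiplication by 2 (at most 16 digits here) */
    putchar('.');
    for (int i = 0; i < 16 && fpart > 0.0; i++) {
        fpart *= 2.0;
        int bit = (int)fpart;
        putchar('0' + bit);
        fpart -= bit;
    }
    putchar('\n');   /* prints 1000110.1 */
    return 0;
}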

How to Convert (0.ABBA)16 hexadecimal into octal?

How do I convert this? I'm not getting it.
I tried converting it to decimal (10 × 16^-1 + 11 × 16^-2 + ...) and got (0.6708068848)10.
But that makes it a really complex task. Is there a shorter method?
I think you're over-complicating it; I find it easiest to first convert it to binary (base-2) and then to octal (base-8).
Binary (bits partitioned into groups of 3, because each octal digit corresponds to 3 bits):
0.101_010_111_011_101_000
Octal:
0.527350
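Here is a rough C sketch of that regrouping step (an illustration only, assuming the input uses uppercase hex digits): it expands each hex digit into 4 bits, pads to a multiple of 3, and reads off the octal digits:

#include <stdio.h>

int main(void) {
    const char *hex = "ABBA";            /* the digits after "0." in base 16 */
    char bits[128];
    int n = 0;

    /* expand each hex digit to 4 bits */
    for (const char *p = hex; *p; p++) {
        int v = (*p >= 'A') ? *p - 'A' + 10 : *p - '0';
        for (int i = 3; i >= 0; i--)
            bits[n++] = (char)('0' + ((v >> i) & 1));
    }
    while (n % 3 != 0)
        bits[n++] = '0';                 /* pad so every group of 3 is complete */

    printf("0.");
    for (int i = 0; i < n; i += 3) {     /* each group of 3 bits is one octal digit */
        int d = (bits[i] - '0') * 4 + (bits[i+1] - '0') * 2 + (bits[i+2] - '0');
        printf("%d", d);
    }
    printf("\n");                        /* prints 0.527350 */
    return 0;
}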

How computers convert decimal to binary integers

This is surely a duplicate, but I was not able to find an answer to the following question.
Let's consider the decimal integer 14. We can obtain its binary representation, 1110, using e.g. the divide-by-2 method (% represents the modulus operator):
14 % 2 = 0
7 % 2 = 1
3 % 2 = 1
1 % 2 = 1
but how do computers convert decimal to binary integers?
The above method would require the computer to perform arithmetic and, as far as I understand, because arithmetic is performed on binary numbers, it seems we would be back dealing with the same issue.
I suppose that any other algorithmic method would suffer from the same problem. How do computers convert decimal to binary integers?
Update: Following a discussion with Code-Apprentice (see the comments under his answer), here is a reformulation of the question in two cases of interest:
a) How is the conversion to binary performed when the user types integers on a keyboard?
b) Given a mathematical operation in a programming language, say 12 / 3, how is the conversion from decimal to binary done when the program runs, so that the computer can do the arithmetic?
There is only binary
The computer stores all data as binary. It does not convert from decimal to binary, since binary is its native language. When the computer displays a number, it converts from the binary representation to some other base, which by default is decimal.
A key concept to understand here is the difference between the computer's internal storage and the representation as characters on your monitor. If you want to display a number as binary, you can write an algorithm in code to do the exact steps that you performed by hand, and then print out the characters 1 and 0 as calculated by the algorithm.
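For instance, here is a minimal C sketch (my illustration) of such an algorithm: it performs the divide-by-2 steps from the question and prints the characters 1 and 0:

#include <stdio.h>

int main(void) {
    unsigned int n = 14;
    char digits[32];
    int count = 0;
    do {
        digits[count++] = (char)('0' + n % 2);  /* remainder = next bit */
        n /= 2;
    } while (n > 0);
    while (count > 0)
        putchar(digits[--count]);  /* remainders come out in reverse order */
    putchar('\n');                 /* prints 1110 */
    return 0;
}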
Indeed, as you mention in one of your comments, if the compiler has a small look-up table associating decimal digits with binary integers, then the conversion can be done with simple binary multiplications and additions.
The look-up table has to contain the binary values for the single decimal digits and for decimal ten, hundred, thousand, etc.
Decimal 14 can be transformed to binary by multiplying the binary for 1 by the binary for 10 (1010) and adding the binary for 4.
Decimal 149 would be the binary for 1 multiplied by the binary for 100, plus the binary for 4 multiplied by the binary for 10, plus the binary for 9 at the end.
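As a sketch of this idea in C (illustrative only, not how any particular compiler is implemented), the classic digit-by-digit parse below uses nothing but binary multiplications and additions:

#include <stdio.h>

int main(void) {
    const char *text = "149";     /* what the user typed: three ASCII characters */
    int value = 0;
    for (const char *p = text; *p >= '0' && *p <= '9'; p++)
        value = value * 10 + (*p - '0');   /* binary multiply and add only */
    printf("%d\n", value);        /* value now holds the bit pattern 10010101 */
    return 0;
}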
Decimal is misunderstood in a program
Let's take an example from the C language:
int x = 14;
Here 14 is not decimal: it is two characters, 1 and 4, written together as 14.
We know that characters are just representations of some binary value:
1 is 00110001
4 is 00110100
(the full ASCII table lists all the character codes)
So 14 in character form is actually stored as the binary 00110001 00110100.
00110001 00110100 => this binary is made to look like 14 on the computer screen (so we think of it as decimal).
We know the number 14 eventually should become 14 = 1110, or, padded with zeros,
14 = 00001110
For this to happen the computer/processor only needs to do a binary-to-binary conversion, i.e.
00110001 00110100 to 00001110
and we are all set.
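To see that the int really does hold 00001110, here is a small C sketch (my addition) that prints the low byte of x bit by bit:

#include <stdio.h>

int main(void) {
    int x = 14;
    for (int i = 7; i >= 0; i--)       /* low byte only, most significant bit first */
        putchar('0' + ((x >> i) & 1));
    putchar('\n');                     /* prints 00001110 */
    return 0;
}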

Number Systems-Hex vs Binary

Question with regards to the use of hexadecimal. Here's a statement I have a question about:
Hexadecimal is often the preferred means of representing data because it uses fewer digits than binary.
For example, the word vim can be represented by:
Hexadecimal: 76 69 6D
Decimal: 118 105 109
Binary: 01110110 01101001 01101101
Obviously, hex is shorter than binary in this example; however, won't the hex values eventually be converted to binary at the machine level, so that the end result for hex and binary is exactly the same?
This is a good question.
Yes, the hexadecimal values will be converted to binary at the machine level.
But you are looking at the question from the machine's point of view.
Hexadecimal notation was introduced because:
It's easier for humans to read and memorize than binary. For example, memory addresses are conventionally written in hexadecimal, which is far simpler to read than binary.
It's easier to convert between binary and hexadecimal than between binary and other bases (like our everyday base 10). For example, you can group binary digits into hex digits in your head, 4 bits per hex digit.
I suggest this article, which gives some easy example calculations to better illustrate the advantages of hexadecimal.
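To make the comparison concrete, here is a short illustrative C sketch (not from the original answer) that prints each byte of "vim" in hex, decimal, and binary side by side:

#include <stdio.h>

int main(void) {
    const char *word = "vim";
    for (const char *p = word; *p; p++) {
        unsigned char c = (unsigned char)*p;
        printf("%02X  %3d  ", c, c);       /* hex, then decimal */
        for (int i = 7; i >= 0; i--)       /* then the 8 bits */
            putchar('0' + ((c >> i) & 1));
        putchar('\n');
    }
    /* output:
       76  118  01110110
       69  105  01101001
       6D  109  01101101 */
    return 0;
}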

floating point hex octal binary

I am working on a calculator that allows you to perform calculations past the decimal point in octal, hexadecimal, binary, and of course decimal. I am having trouble, though, finding a way to convert floating point decimal numbers to floating point hexadecimal, octal, and binary, and vice versa.
The plan is to do all the math in decimal and then convert the result into the appropriate number system. Any help, ideas or examples would be appreciated.
Thanks!
Hmm... this was a homework assignment in my university's CS "weed-out" course.
The operations for binary are described in Schaum's Outline Series: Essential Computer Mathematics by Seymour Lipschutz. For some reason it is still on my bookshelf 23 years later.
As a hint: convert octal and hex to binary, perform the operations in binary, then convert the result back.
Or you can perform decimal operations and perform the conversions to octal/hex/binary afterward. The process is essentially the same for all positional number systems arithmetic.
What Programming Language are you using?
It is definitely a good idea to change everything into binary, do the math on the binary, then convert back. If you multiply an (unsigned) binary number by 2, it is the same as a bit shift left (<< 1 in C); a division by 2 is the same as a bit shift right (>> 1 in C).
Addition and subtraction are the same as you would do in elementary school.
Also, remember that if you cast a float to an int it will be truncated: (int)10.5 == 10.
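A quick illustrative C sketch of those three points:

#include <stdio.h>

int main(void) {
    unsigned int n = 10;
    printf("%u\n", n << 1);      /* 20: shift left multiplies by 2 */
    printf("%u\n", n >> 1);      /* 5: shift right divides by 2 (unsigned) */
    printf("%d\n", (int)10.5);   /* 10: the cast truncates the fraction */
    return 0;
}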
I had this same problem a few days ago. I found http://www.binaryconvert.com, which allows you to convert between floating point decimal, binary, octal, and hexadecimal in any order you want.