How does a computer convert a bit sequence into a digit sequence? - binary

I am making an n-bit integer datatype, so I'm not restricted to the standard 32 and 64 bit variants.
My binary calculations seem to be correct, I would now like to show them as base10 numbers.
Here is what I don't know: how does a computer know what specific characters to put on the console?
How does it know that 1111 1111 should appear as the character sequence 255 on the screen? What algorithm is used for this? I know how to convert 1111 1111 into 255 *mathematically*. But how do you do the same *visually*?
I don't really know how to even begin doing this.

Related

How computers convert decimal to binary integers

This is surely a duplicate, but I was not able to find an answer to the following question.
Let's consider the decimal integer 14. We can obtain its binary representation, 1110, using e.g. the divide-by-2 method (% represents the modulus operator):
14 % 2 = 0
7 % 2 = 1
3 % 2 = 1
1 % 2 = 1
but how do computers convert decimal to binary integers?
The above method would require the computer to perform arithmetic, and, as far as I understand, since arithmetic is itself performed on binary numbers, it seems we would be back to the same issue.
I suppose that any other algorithmic method would suffer the same problem. How do computers convert decimal to binary integers?
Update: Following a discussion with Code-Apprentice (see comments under his answer), here is a reformulation of the question in two cases of interest:
a) How is the conversion to binary performed when the user types integers on a keyboard?
b) Given a mathematical operation in a programming language, say 12 / 3, how is the conversion from decimal to binary done when the program runs, so that the computer can do the arithmetic?
There is only binary
The computer stores all data as binary. It does not convert from decimal to binary, since binary is its native language. When the computer displays a number, it converts from the binary representation to whatever base is requested, decimal by default.
A key concept to understand here is the difference between the computer's internal storage and the representation as characters on your monitor. If you want to display a number as binary, you can write an algorithm in code to do the exact steps that you performed by hand. You then print out the characters 1 and 0 as calculated by the algorithm.
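As a rough sketch of that in C (not the routine any particular library actually uses), repeated division by the base produces the digits, and each remainder is mapped to a character:

#include <stdio.h>

/* Repeated division: each remainder selects one digit character.
   Handles bases 2 through 10 with this digit table. */
void print_in_base(unsigned int n, unsigned int base) {
    const char digits[] = "0123456789";
    char buf[sizeof(n) * 8];          /* enough characters even for base 2 */
    int i = 0;
    do {
        buf[i++] = digits[n % base];
        n /= base;
    } while (n != 0);
    while (i--)                       /* remainders come out lowest digit first */
        putchar(buf[i]);
    putchar('\n');
}

int main(void) {
    print_in_base(255, 10);   /* prints 255 */
    print_in_base(255, 2);    /* prints 11111111 */
    return 0;
}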
Indeed, as you mention in one of your comments, if the compiler has a small look-up table associating decimal digits with binary values, the conversion can be done with simple binary multiplications and additions.
The look-up table has to contain binary values for the single decimal digits and for decimal ten, hundred, thousand, and so on.
Decimal 14 can then be converted to binary by multiplying the binary value of 1 by the binary value of ten and adding the binary value of 4.
Decimal 149 would be the binary value of 1 multiplied by the binary value of one hundred, plus the binary value of 4 multiplied by the binary value of ten, plus the binary value of 9.
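A minimal sketch of that look-up idea in C (the function name and table are mine, purely for illustration):

#include <stdio.h>

/* Combine the digit values with a table of powers of ten, using only binary
   multiplication and addition. Handles up to 5 decimal digits as written. */
int decimal_digits_to_binary(const char *digits, int len) {
    static const int power_of_ten[] = {1, 10, 100, 1000, 10000};
    int value = 0;
    for (int i = 0; i < len; i++) {
        int digit = digits[i] - '0';               /* character -> 0..9 */
        value += digit * power_of_ten[len - 1 - i];
    }
    return value;
}

int main(void) {
    printf("%d\n", decimal_digits_to_binary("14", 2));   /* prints 14  */
    printf("%d\n", decimal_digits_to_binary("149", 3));  /* prints 149 */
    return 0;
}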
Decimal numbers are misunderstood in a program
Let's take an example from the C language:
int x = 14;
Here 14 is not a decimal number; it is the two characters 1 and 4 written next to each other.
We know that characters are just representations of some binary value:
1 for 00110001
4 for 00110100
The full ASCII table for characters can be seen here.
So 14 in character form is actually written as the binary 00110001 00110100.
00110001 00110100 => this binary is made to look like 14 on the computer screen (so we think of it as decimal).
We know the number 14 eventually has to become 14 = 1110,
or, padded with zeros,
14 = 00001110
For this to happen the computer/processor only needs to do a binary-to-binary conversion, i.e.
00110001 00110100 to 00001110
and we are all set
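A small sketch of that binary-to-binary step in C (assuming ASCII, as the answer already does):

#include <stdio.h>

int main(void) {
    char text[] = {0x31, 0x34, 0};   /* the characters '1' and '4' */
    int value = 0;
    for (const char *p = text; *p != 0; p++)
        value = value * 10 + (*p - '0');   /* only binary arithmetic involved */
    printf("%d is stored as 0x%02X\n", value, value);   /* prints: 14 is stored as 0x0E */
    return 0;
}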

How to tell if an integer is signed or not?

How can I (or the computer) tell whether binary numbers are signed or unsigned integers?
E.g. the binary number 1000 0001 can be interpreted both as -127, if signed (two's complement), and as 129, if unsigned.
One advantage of using unsigned integers in languages like C (as I understand it) is that it lets you store larger values thanks to the extra bit gained by not reserving a sign. However, it seems to me that you need something, somewhere, that keeps track of whether the first bit represents a sign or is just part of the magnitude of the number.
In memory the computer will store the binary representation 10000001 whether it is unsigned or signed. Just by looking at the number in memory, it would be impossible to classify the binary number as signed or unsigned. We need instructions to tell us whether to treat this number as unsigned or signed. This is where the compiler comes in. As a programmer, you designate the number as signed or unsigned. The compiler translates the code you wrote and generates the appropriate instructions for that number. Note that depending on the programming language, there may be different ways of generating these instructions. The important part to remember is that there is no difference in the binary number in memory, only in how the programmer communicates to the compiler how the number should be treated.
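A short sketch of that point in C; the printed value for the signed case assumes a two's-complement machine, which is effectively universal:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t raw = 0x81;   /* the bit pattern 1000 0001, stored identically either way */
    printf("as unsigned: %u\n", (unsigned)raw);      /* prints 129 */
    printf("as signed:   %d\n", (int)(int8_t)raw);   /* prints -127 on two's-complement hardware */
    return 0;
}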
The computer doesn't need to know about the sign; that only matters when the number is printed. The arithmetic works the same whether the operands are signed or unsigned: once the result is truncated to the needed length, it is correct either way.
Example: multiplying 8-bit values:
// negative times negative
254 * 254 = 64516 // decimal unsigned - the low 8 bits are 4
((-2) * (-2)) = 4 // decimal signed
1111 1110 * 1111 1110 = 1111 1100 0000 0100 // binary - low byte 0000 0100
// negative times positive
254 * 2 = 508 // decimal unsigned - the low 8 bits are 252, i.e. (-4) when read as signed
-2 * 2 = -4 // decimal signed
1111 1110 * 0000 0010 = 0000 0001 1111 1100 // binary - low byte 1111 1100
So it's up to you how you interpret 1111 1100. If you are using a language like Java, it does not support unsigned number types at all.
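Here is a hedged sketch of that multiplication example in C; the signed readout again assumes two's complement:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t a = 254, b = 2;
    uint8_t product = (uint8_t)(a * b);   /* keep only the low 8 bits, as the 8-bit hardware would */
    printf("bits:        0x%02X\n", product);            /* 0xFC = 1111 1100 */
    printf("as unsigned: %u\n", (unsigned)product);      /* 252 */
    printf("as signed:   %d\n", (int)(int8_t)product);   /* -4 on two's-complement hardware */
    return 0;
}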
The variable type keeps track of whether it's signed or unsigned. The actual value in the register cannot tell you (you would need an extra bit to store that information). You can turn on warnings against unsigned-to-signed conversions, and then the compiler will yell at you if you accidentally assign an unsigned value to a signed one or vice versa.
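For instance, GCC and Clang accept -Wsign-conversion; a tiny illustration (the file name is made up):

/* Compile with: gcc -Wsign-conversion sign.c */
#include <stdio.h>

int main(void) {
    int negative = -1;
    unsigned int huge = negative;   /* the compiler warns that this conversion may change the sign */
    printf("%u\n", huge);           /* prints 4294967295 where unsigned int is 32 bits wide */
    return 0;
}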

How does the computer recognise that a given number is in its two's complement form?

I understand what two's complement is and what it is useful for. What I'd like to know is how the computer decides that a number is in its two's complement form.
How and when does it decide that 1111 1110 is -2 and not 254? Is it at the OS level of processing?
As far as I know, it depends on the programming language.
Let's say an integer occupies 1 byte of memory (to keep it simple).
If it is an UNSIGNED integer (non-negative numbers only) you can use any value from 0 to 255 (2^8 values in total, zero included).
00000000 would be 0, and
11111111 would be 255 decimal.
But if your integer is SIGNED (you can use both negative and positive numbers) you can use values from -128 to 127, zero included (again 2^8 values).
If your compiler comes across 11111111 as a SIGNED int value, it will not interpret it as 255, because a signed int allows only 0 to 127 for non-negative values, so it will take it as -1. The next one, -2, would be 11111110 (254 as unsigned decimal), and so on...
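A quick way to check those ranges and interpretations yourself is a sketch like this in C, using the fixed-width types from <stdint.h>:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    printf("unsigned 8-bit range: 0 .. %u\n", (unsigned)UINT8_MAX);             /* 0 .. 255    */
    printf("signed   8-bit range: %d .. %d\n", (int)INT8_MIN, (int)INT8_MAX);   /* -128 .. 127 */
    printf("1111 1111 as unsigned: %u\n", (unsigned)(uint8_t)0xFF);             /* 255 */
    printf("1111 1111 as signed:   %d\n", (int)(int8_t)(uint8_t)0xFF);          /* -1 on two's complement */
    return 0;
}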
The computer will already be expecting the data to be in (or not in) two's complement form (otherwise there wouldn't be a way of telling whether it is -2 or 254). And that is decided ahead of time, typically by the compiler and the instructions it generates rather than by the OS at run time.
You can relate this to the idea of declaring variable types in a high-level programming language: you set the type to, say, "decimal" or "integer", and the compiler then expects values to stick to that type.

How to know whether a binary value represents a positive/negative number or a plain unsigned number?

I'm learning about integer data formats in a computer science book, and as far as I understand, the binary representation of an integer marks whether it is positive or negative by having the leftmost bit (MSB) be either 0 for positive or 1 for negative. Let's say, on an 8-bit computer, how would I know whether 10000010 means 130 in base 10 or negative 2?
I might be wrong; if I am, please correct me.
If you were to just see the string 10000010 somewhere, I don't know... written on a wall or something, how would you know how to interpret it?
You might say, hey, that's ten million and ten (you thought it was base 10), or you might say, hey, that's -126 (you thought it was two's complement binary), or you might say that's positive 130 (you thought it was standard binary).
It is, in a theoretical sense, up to whatever is doing the interpreting how it is interpreted.
So, when a computer is holding 8 bits of data, it's up to it how it interprets it.
Now if you're programming, you can tell the computer how you want something interpreted. For example, in C++:
// char is 1 byte
unsigned char x = 130u;
Here I have told the compiler to put 130 unsigned into a byte, so the computer will store 10000010 and will interpret it as the value 130
Now consider
// signed char is 1 byte and guaranteed to be signed (plain char may be unsigned on some platforms)
signed char x = -126;
Here I have told the compiler to put -126 signed into a byte, so the computer will again store 10000010 but this time it will interpret it as the value -126.
Take a look at the answer posted to this question: Negative numbers are stored as 2's complement in memory, how does the CPU know if it's negative or positive?
The CPU uses something called an opcode to determine which operation to perform when manipulating a memory location (in this case, the value 10000010). It is that operation within the CPU that treats the value as either a negative or a positive number. The CPU itself has no idea whether the number is signed or unsigned; the opcode used when manipulating the number determines whether the operation is carried out as signed or unsigned.
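A small sketch of that idea in C: feeding the same bit pattern to an unsigned and a signed operation gives different results, because the compiler selects different instructions (the signed readout assumes two's complement):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t bits = 0x82;                                  /* 1000 0010 */
    printf("unsigned / 2: %u\n", (unsigned)(bits / 2u));  /* 65:  the bits are read as 130  */
    printf("signed   / 2: %d\n", (int8_t)bits / 2);       /* -63: the bits are read as -126 */
    return 0;
}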

Hex to Binary Angle Measurement (BAMS) in any language

I have a 32-bit hex value, for example 04FA4FA4, and I want to know how to convert it to BAMS in the form of a double. An example in any language would work fine; I am only concerned with learning the algorithm or formula to make the conversion. I know how to convert to BAMS when I have a form like 000:00.0000, but I can't figure out how to make the conversion from hex.
This link is the easiest-to-understand resource I found. The algorithm is simple:
(decimal value of the hex word) * 180 / 2^(n-1) // where n is the number of bits
The example in the reference is,
0000 0000 1010 0110
0 0 A 6 (hex 00A6 = decimal 166)
166 * 180 * 2^−15 = 0.9118 degrees
The code for this algorithm is so simple that I don't think I need to spell it out here. Let me know if anyone feels this is incorrect.
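For completeness, here is a minimal sketch of that formula in C. Whether the raw word should be read as signed or unsigned depends on the BAMS convention in use; unsigned is assumed here, and the function name is mine:

#include <stdio.h>
#include <stdint.h>
#include <math.h>

/* degrees = value * 180 / 2^(n-1), where n is the word width in bits */
double bams_to_degrees(uint32_t raw, int bits) {
    return (double)raw * 180.0 / pow(2.0, bits - 1);
}

int main(void) {
    printf("%.4f\n", bams_to_degrees(0x00A6u, 16));      /* about 0.9119, the 16-bit example above    */
    printf("%.4f\n", bams_to_degrees(0x04FA4FA4u, 32));  /* about 7.0000 for the value in the question */
    return 0;
}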