Binary to standard digit? - binary

I'm going to make a computer in Minecraft. I understand how to build a computer that can perform binary operations, but I want the outputs to be displayed as ordinary decimal digits. How do you "convert" the binary values into decimal digits? Is there a chart for that? The digits will be shown like on an old calculator, with 7 segments:
--
| |
--
| |
--

In electronics, what you need is called a "binary to binary coded decimal" converter. "Binary coded decimal" encodes each decimal digit in its own group of four bits; a BCD-to-7-segment decoder then turns each digit into the set of segments to light. Here's a PDF describing how one of these chips works. Page 3 of the PDF shows the truth table needed to do the conversion as well as a picture of all of the NAND gates that implement it in hardware. You can use the truth table to build the set of boolean expressions needed in your program.
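Instead of wiring up the gate equations, in software you can also just use a lookup table; here is a minimal C sketch (the bit order for the segments is my own assumption, not taken from the PDF):
/* Minimal sketch of a digit-to-segment lookup table. The bit order
   (bit0=a top, bit1=b, bit2=c, bit3=d bottom, bit4=e, bit5=f, bit6=g middle)
   is my own assumption; match it to however your display is wired. */
const unsigned char SEGMENTS[10] = {
    0x3F, /* 0: a b c d e f   */
    0x06, /* 1:   b c         */
    0x5B, /* 2: a b   d e   g */
    0x4F, /* 3: a b c d     g */
    0x66, /* 4:   b c     f g */
    0x6D, /* 5: a   c d   f g */
    0x7D, /* 6: a   c d e f g */
    0x07, /* 7: a b c         */
    0x7F, /* 8: a b c d e f g */
    0x6F  /* 9: a b c d   f g */
};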

0 = 0
1 = 1
10 = 2
11 = 3
100 = 4
101 = 5
110 = 6
111 = 7
...
Do you see the pattern? Here's the formula:
number = 2^0 * (rightmost digit)
       + 2^1 * (rightmost-but-1 digit)
       + 2^2 * (rightmost-but-2 digit) + ...
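If it helps, here is the same formula as a small C sketch (the function name is mine), assuming the input is a string of '0' and '1' characters:
#include <string.h>

/* Sketch: apply the formula above to a string of '0'/'1' characters,
   where bits[0] is the leftmost (most significant) digit. */
unsigned int binary_string_to_int(const char *bits)
{
    unsigned int value = 0;
    size_t n = strlen(bits);
    for (size_t i = 0; i < n; i++) {
        /* doubling at each step weights the rightmost digit by 2^0,
           the next one by 2^1, and so on */
        value = value * 2 + (bits[i] - '0');
    }
    return value;
}
/* binary_string_to_int("111") == 7, binary_string_to_int("100") == 4 */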

Maybe what you are looking for is called BCD, or Binary Coded Decimal. There is a chart and a Karnaugh map for it that have been used for decades. A quick Google search for it gave me this technical page:
http://circuitscan.homestead.com/files/digelec/bcdto7seg.htm
How are you trying to build the computer?
Maybe that key word can at least help you find what you need. :)

Your problem has two parts:
1. Convert a binary number into digits, that is, do a binary-to-BCD conversion (a sketch of one way to do this follows below).
2. Convert a digit into a set of segments to activate.
For the latter you can use a table that assigns the bitmap of active segments to each digit.
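For the binary-to-BCD part, one common hardware-friendly method is the shift-and-add-3 ("double dabble") algorithm; here is a rough C sketch of it for an 8-bit input (names are mine):
/* Rough C sketch of the shift-and-add-3 ("double dabble") binary-to-BCD
   algorithm for an 8-bit value; yields hundreds/tens/ones digits. */
void binary_to_bcd(unsigned char value, unsigned char digits[3])
{
    unsigned int scratch = 0;          /* holds the growing BCD result */
    for (int i = 7; i >= 0; i--) {
        /* add 3 to any BCD digit that is 5 or more before shifting */
        for (int d = 0; d < 3; d++) {
            if (((scratch >> (4 * d)) & 0xF) >= 5)
                scratch += 3u << (4 * d);
        }
        /* shift in the next binary bit, MSB first */
        scratch = (scratch << 1) | ((value >> i) & 1);
    }
    digits[0] = (scratch >> 8) & 0xF;  /* hundreds */
    digits[1] = (scratch >> 4) & 0xF;  /* tens */
    digits[2] = scratch & 0xF;         /* ones */
}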

I think that's two different questions.
There isn't a "binary string of 0/1" to integer conversion built in - you would normally just write your own loop over the string and add up each power of 2.
You can also write your own 7-segment LED display - it's a little tricky because it spans multiple lines, but it would be an interesting exercise (see the sketch below).
Alternatively, most GUIs have an LCD font; Qt certainly does.
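For what it's worth, here is a rough C sketch of the roll-your-own option for a single digit (the row layout, segment table, and names are my own choices):
#include <stdio.h>

/* Rough sketch: print one digit as a 3-line "7-segment" figure. */
void print_digit(int d)
{
    /* which segments are on for each digit: a b c d e f g */
    static const int seg[10][7] = {
        {1,1,1,1,1,1,0}, {0,1,1,0,0,0,0}, {1,1,0,1,1,0,1}, {1,1,1,1,0,0,1},
        {0,1,1,0,0,1,1}, {1,0,1,1,0,1,1}, {1,0,1,1,1,1,1}, {1,1,1,0,0,0,0},
        {1,1,1,1,1,1,1}, {1,1,1,1,0,1,1}
    };
    const int *s = seg[d];
    printf(" %s \n",  s[0] ? "_" : " ");                        /* a     */
    printf("%s%s%s\n", s[5] ? "|" : " ", s[6] ? "_" : " ",
                       s[1] ? "|" : " ");                       /* f g b */
    printf("%s%s%s\n", s[4] ? "|" : " ", s[3] ? "_" : " ",
                       s[2] ? "|" : " ");                       /* e d c */
}
To show several digits side by side you would print row by row across all of the digits rather than finishing one digit at a time.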

Related

Why is Binary (machine code) based off Boolean Algebra? What would happen if numbers went from 0-9 instead of 0 or 1?

I'm curious to know whether it would be possible to create a computer that counts from 0000 up to 9999 by keeping 1 and 0 for true and false, but adding the digits 2-9 to get more possibilities per position. Does binary code consist only of 0's and 1's for simplicity? Is it because, for some reason, computers can only understand true and false?
Binary code starts at 0 (0000) and increases to 1 (0001), then 2 (0010), and so on up to 10 (1010). Could it be possible for a computer to recognize 0's and 1's but then go on to 2's and other digits? For example, 0000 = 0, 0001 = 1, 0002 = 2, 0009 = 9, then 0010 = 10, and so on.
If this isn't possible somehow, please explain why, and give a general explanation of how computers work, because I'm interested and want to learn more. If this isn't used because it's inefficient, please explain what makes it inefficient and what makes 0's and 1's more efficient.
Thank you.
I expect that it would be possible to create a computer like this but I searched online and couldn't find out why binary code can't have numbers other than 0's and 1's.
Answer to myself for future reference:
Binary is based on Boolean algebra because it is a base-2 system, while decimal is a base-10 system that uses the digits 0-9 instead of just 0 or 1. Computers handle binary easily because it maps onto on and off states, with 0 being off and 1 being on. Computers use logic gates, which are built from large numbers of transistors that apply Boolean logic to store and process data. Binary makes the hardware simple and convenient. Other number systems are used for other purposes. For example, hexadecimal is used to write large numbers more compactly than decimal can: one million is 1000000 in decimal, 11110100001001000000 in binary, and F4240 in hexadecimal. This is why the binary number system is based on Boolean algebra and why computers use binary rather than other number systems.
It is based on how the data is stored. Each piece of data stored in your memory can have only two values. Think of your memory as a set of glasses that can each be either empty or full, which means the data is stored as a bunch of 1s and 0s. This is the result of moving from analog systems to digital: an analog value can be anywhere between 0 and 1, for example 0.25 or 0.7, but once computers became digital, the logic became binary.
It will be really beneficial to research the history of computers and learn how they evolved over time, if you are interested in this topic.

LC-3 algorithm for converting ASCII strings to Binary Values

Figure 10.4 provides an algorithm for converting ASCII strings to binary values. Suppose the decimal number is arbitrarily long. Rather than store a table of 10 values for the thousands-place digit, another table of 10 values for the ten-thousands-place digit, and so on, design an algorithm to do the conversion without resorting to any tables whatsoever.
I have attached pictures of figure 10.4. I am not looking for an answer to the problem, but rather can someone please explain this problem and perhaps give some direction on how to go about creating the algorithm?
Figure 10.4
Figure 10.4 second image
I am unsure as to what it means by tables and do not know where to start really.
The tables are those global, initialized arrays: one called Lookup10 holding 10, 20, 30, 40, ..., and another called Lookup100 holding 100, 200, 300, 400...
You can ignore the tables: as per the assignment instructions you're supposed to find a different way to accomplish this anyway.  Or, you can run that code in the simulator, or trace it mentally, to understand how it works.
The bottom line is that LC-3, while it can do anything (it is Turing complete), can't do much in any one instruction.  For arithmetic & logic, it can do ADD, NOT, AND.  That's pretty much it!  But that's enough. Note that modern hardware can do everything with only one kind of logic gate, namely NAND, which is a binary operator (so NAND is directly available; NOT by feeding NAND the same operand for both inputs; AND by doing NOT after NAND; OR by applying NOT to both inputs first and then NAND; etc.).
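Just to illustrate that point, here are the same constructions sketched for single 0/1 values in C (function names are mine; real hardware does this with gates, not function calls):
/* Building NOT, AND and OR out of NAND alone, one bit at a time. */
int nand(int a, int b) { return !(a && b); }
int not_(int a)        { return nand(a, a); }           /* NAND with itself   */
int and_(int a, int b) { return not_(nand(a, b)); }     /* NOT after NAND     */
int or_(int a, int b)  { return nand(not_(a), not_(b)); } /* NOT inputs, NAND */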
For example, LC-3 cannot multiply or divide or modulus or right shift directly — each of those operations is many instructions and in the general case, some looping construct.  Multiplication can be done by repetitive addition, and division/modulus by repetitive subtraction.  These are super inefficient for larger operands, and there are much more efficient algorithms that are also substantially more complex, so those greatly increase program complexity beyond that already with the repetitive operation approach.
That subroutine goes backwards through the user's input string.  It takes a string length count in R1 as a parameter supplied by the caller (not shown).  It looks at the last character in the input and converts it from an ASCII character to a binary number.
(We would commonly do that conversion from ASCII character to numeric value using subtraction: moving the character values from the ASCII digit range 0x30..0x39 to numeric values in the range 0..9. Here they do it with masking, which also works.  The subtraction approach integrates better with error detection (checking whether a character is a valid digit, which is not done here), whereas the masking approach is simpler for LC-3.)
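In C terms, the two conversions described there look roughly like this (a sketch, not the actual LC-3 code):
/* Sketch: two ways to turn an ASCII digit character into its value. */
int digit_by_subtraction(char c) { return c - '0'; }    /* 0x37 - 0x30 = 7 */
int digit_by_masking(char c)     { return c & 0x0F; }   /* 0x37 & 0x0F = 7 */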
The subroutine then obtains the 2nd-to-last digit (moving backwards through the user's input string), converting it to binary using the mask approach.  That yields a number between 0 and 9, which is used as an index into the first table, Lookup10.  The value obtained from the table at that index position is basically the index × 10, so this table is a ×10 table.  The same approach is used for the third digit (the first one, or the last going backwards), except it uses the 2nd table, which is a ×100 table.
The standard approach for string-to-binary conversion is called atoi (search for it), standing for ASCII to integer.  It moves forwards through the string, and for every new digit it multiplies the value computed so far by 10 before adding in the next digit's numeric value.
So, if the string is 456, first it obtains 4; then, because there is another digit, 4 × 10 = 40, then + 5 for 45, then × 10 for 450, then + 6 for 456.
The advantage of this approach is that it can handle any number of digits (up to overflow).  The disadvantage, of course, is that it requires multiplication, which is a complication for LC-3.
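A minimal C sketch of that forward-walking atoi idea (no sign handling or error checking; the function name is mine):
/* Walk forwards: multiply the running value by 10, then add each digit. */
int ascii_to_int(const char *s)
{
    int value = 0;
    while (*s >= '0' && *s <= '9') {
        value = value * 10 + (*s - '0');
        s++;
    }
    return value;
}
/* ascii_to_int("456") == 456 */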
Multiplication where one operand is the constant 10 is fairly easy even in LC-3's limited capabilities, and can be done with simple addition without looping.  Basically:
n × 10 = n + n + n + n + n + n + n + n + n + n
and LC-3 can do those 9 additions in just 9 instructions.  Still, we can also observe that:
n × 10 = n × 8 + n × 2
and also that:
n × 10 = (n × 4 + n) × 2     (which is n × 5 × 2)
which can be done in just 4 instructions on LC-3 (and none of these needs looping)!
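In C-like terms, a sketch of those two tricks (on LC-3 each ×2 or ×4 would itself be done with ADDs of a register to itself):
/* Two shift-and-add ways to compute n * 10 without a multiply instruction. */
int times10_a(int n) { return (n << 3) + (n << 1); }   /* n*8 + n*2     */
int times10_b(int n) { return ((n << 2) + n) << 1; }   /* (n*4 + n) * 2 */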
So, if you want to do this approach, you'll have to figure out how to go forwards through the string instead of backwards as the given table version does, and how to multiply by 10 (use any one of the above suggestions).
There are other approaches as well if you study atoi.  You could keep the backwards approach, but then you will have to multiply by 10, by 100, by 1000, a different factor for each successive digit.  That might be done by repetitive addition, or by keeping a count of how many times to multiply by 10 (e.g. n × 1000 = n × 10 × 10 × 10).

How computers convert decimal to binary integers

This is surely a duplicate, but I was not able to find an answer to the following question.
Let's consider the decimal integer 14. We can obtain its binary representation, 1110, using e.g. the divide-by-2 method (% represents the modulus operator):
14 % 2 = 0   (14 / 2 = 7)
7 % 2 = 1    (7 / 2 = 3)
3 % 2 = 1    (3 / 2 = 1)
1 % 2 = 1    (1 / 2 = 0)
but how do computers convert decimal to binary integers?
The above method would require the computer to perform arithmetic and, as far as I understand, because arithmetic is performed on binary numbers, it seems we would be back dealing with the same issue.
I suppose that any other algorithmic method would suffer the same problem. How do computers convert decimal to binary integers?
Update: Following a discussion with Code-Apprentice (see comments under his answer), here is a reformulation of the question in two cases of interest:
a) How is the conversion to binary performed when the user types integers on a keyboard?
b) Given a mathematical operation in a programming language, say 12 / 3, how is the conversion from decimal to binary done when running the program, so that the computer can do the arithmetic?
There is only binary
The computer stores all data as binary. It does not convert from decimal to binary since binary is its native language. When the computer displays a number it will convert from the binary representation to any base, which by default is decimal.
A key concept to understand here is the difference between the computer's internal storage and the representation as characters on your monitor. If you want to display a number as binary, you can write an algorithm in code to do the exact steps that you performed by hand. You then print out the characters 1 and 0 as calculated by the algorithm.
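As a sketch of doing those by-hand steps in code, here is one way it might look in C (the function name is mine):
#include <stdio.h>

/* Build the characters '0'/'1' of a value's binary representation
   by repeated division by 2, then print them. */
void print_binary(unsigned int n)
{
    char buf[33];
    int i = 32;
    buf[i] = '\0';
    do {
        buf[--i] = (n % 2) ? '1' : '0';   /* remainder of the divide-by-2 */
        n /= 2;
    } while (n != 0);
    printf("%s\n", &buf[i]);              /* e.g. 14 prints 1110 */
}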
Indeed, as you mention in one of your comments, if the compiler has a small look-up table associating decimal digits with binary values, the conversion can be done with simple binary multiplications and additions.
The look-up table has to contain the binary values for the single decimal digits and for decimal ten, hundred, thousand, etc.
Decimal 14 can be transformed to binary by multiplying the binary for 1 by the binary for 10 and adding the binary for 4.
Decimal 149 would be the binary for 1 multiplied by the binary for 100, plus the binary for 4 multiplied by the binary for 10, plus the binary for 9 at the end.
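A rough C sketch of that look-up idea, assuming the decimal digits arrive most-significant first (names and the table size are my own choices):
/* Sketch: the table holds the binary patterns for 1, 10, 100, ...
   and the decimal digits select and scale them.
   digits[0] is the most significant digit; count must be at most 5 here. */
unsigned int from_decimal_digits(const unsigned char *digits, int count)
{
    static const unsigned int power_of_ten[5] = { 1, 10, 100, 1000, 10000 };
    unsigned int value = 0;
    for (int i = 0; i < count; i++)
        value += digits[i] * power_of_ten[count - 1 - i];
    return value;
}
/* e.g. digits {1, 4, 9} with count 3 gives 1*100 + 4*10 + 9 = 149 */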
Decimal numbers are misunderstood in a program.
Let's take an example from the C language:
int x = 14;
Here 14 is not a decimal value in the source; it is the two characters 1 and 4 written next to each other.
We know that characters are just representations of some binary value:
1 is 00110001
4 is 00110100
(The full ASCII table for characters can be seen here.)
So "14" in character form is actually written as the binary 00110001 00110100.
00110001 00110100 => this binary is made to look like 14 on the computer screen (so we think of it as decimal).
We know the number 14 eventually should become 14 = 1110,
or we can pad it with zeros to be
14 = 00001110.
For this to happen, the computer/processor only needs to do a binary-to-binary conversion, i.e.
00110001 00110100 to 00001110
and we are all set.
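A tiny C sketch just to make that concrete, assuming ASCII input as above:
#include <stdio.h>

/* The whole journey in one expression: the characters '1' and '4'
   (binary 00110001 and 00110100) become the single value 14 (00001110). */
int main(void)
{
    char text[] = "14";
    int value = (text[0] - '0') * 10 + (text[1] - '0');
    printf("%d\n", value);   /* prints 14, held internally as binary 00001110 */
    return 0;
}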

Storing multiple item settings in database

I'm building a gallery. Each image is a row with name and src and order etc.
I want to have 3 more options per image: Featured, Visible, Disabled.
Should I have those as columns?
Or have just one settings column and store 3 binary digits?
For example:
111 = Featured, Visible, Disabled
110 = Featured, Visible
010 = Visible
001 = Disabled
Or I could even convert that to decimal and simply store 0 to 7 (CHMOD style).
What is the best way to do this?
Thanks!
Depends. Are only programmers with pocket protectors looking at the data and app code? I am all for complexity, but I would say spring for a little overhead. Also, despite valiant efforts, I have failed to do bit-wise searching in MySQL.
Problems with bit-wise searching, see this.
If you are asking how to store/manipulate this in MySQL, two options:
SET ('Featured', 'Visible', 'Disabled')
TINYINT UNSIGNED and do bit arithmetic: 1 << 2 gives you 4; 0x4 is a way to express the literal 4 in hex; a | b is bitwise OR; etc.
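For what it's worth, here is the bit arithmetic itself sketched in C; the flag names and which bit goes with which flag are my own choices, and the stored value would live in the TINYINT column:
#include <stdio.h>

/* Illustration of the bit arithmetic only; names and bit assignments
   are made up for this example. */
enum {
    FEATURED = 1 << 0,   /* 1 */
    VISIBLE  = 1 << 1,   /* 2 */
    DISABLED = 1 << 2    /* 4 */
};

int main(void)
{
    unsigned char settings = FEATURED | VISIBLE;     /* value 3 in the column */
    settings |= DISABLED;                            /* turn a flag on        */
    settings &= (unsigned char)~FEATURED;            /* turn a flag off       */
    if (settings & VISIBLE)
        printf("visible, stored as %u\n", settings); /* prints 6 */
    return 0;
}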

Converting 8 bit binary to BCD using integers

OK, hello all. What I am trying to do in VHDL is take an 8-bit binary value and represent it as BCD, but there's a catch: the value must be shown as a fraction of the maximum input, scaled so that the maximum reads as 9.
1. Convert the input to an integer, e.g. 1000 0000 -> 128.
2. Divide the integer by 255, then multiply by 90 (90 so that the ones digit and the first digit after the decimal point both end up in the integer part).
E.g. 128/255*90 = 45.17 (let this be signal_in).
3. Extract the two digits of 45 by dividing by 10 and taking the remainder, and store them as separate integers.
e.g. I would use something like:
LSB_int = signal_in mod 10
Then I would divide signal_in by 10, changing it to 4.517, and let that equal MSB_int (that would truncate the decimals and store 4, right?).
4. Convert both LSB_int and MSB_int to 4-bit BCD digits.
...and I would be fine from there. But sadly I ran into a lot of trouble with the different data types (signed, unsigned, std_logic_vector) and with division. So I just need help with any flaws in my thought process and things I should look out for when doing this.
I actually redid my code and thought I had saved this version, but I didn't. I still believe this solution can work, and I will post what I think my old code was as soon as I can remember it all.
Here is my other question with my new code (just to show I did do something):
Convert 8bit binary number to BCD in VHDL
If I understand correctly, what you need is to convert 8-bit data
0-255 → 0-9000
and represent it with 4 BCD digits.
For example, you want 0x80 → 4517 (BCD).
If so, I suggest a totally different idea:
1)
Convert the input range into the output range with a simple 8-bit * 8-bit -> 16-bit multiply:
(in_data * 141), keeping the 14 MSBs (0.1% error).
Let's say this 14-bit register is the TARGET.
2)
Build a 4-digit BCD up/down counter (your output).
Build a 14-bit binary up/down counter (the follower).
Supply both with the same inputs (reset, clk, UpDown),
making one the shadow of the other.
3)
Compare TARGET and the binary counter:
if (follower < target) then increment both counters
else if (follower > target) then decrement both counters
else do nothing
4)
end
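Not VHDL, but here is a rough C model of the arithmetic in that suggestion, just to sanity-check the numbers; the digit extraction below uses division and modulo purely for checking, whereas the counter pair above avoids division entirely:
#include <stdio.h>

/* Rough C model: scale 0..255 into roughly 0..9000, then peel off
   the four BCD digits for inspection. */
int main(void)
{
    unsigned int in_data = 0x80;                     /* 128 */
    unsigned int target  = (in_data * 141) >> 2;     /* keep the 14 MSBs of the
                                                        16-bit product: 4512,
                                                        vs. the exact 4517   */
    unsigned int bcd[4];
    for (int i = 3; i >= 0; i--) {                   /* fill from the ones digit up */
        bcd[i] = target % 10;
        target /= 10;
    }
    printf("%u%u%u%u\n", bcd[0], bcd[1], bcd[2], bcd[3]);   /* prints 4512 */
    return 0;
}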