OK, hello all. What I am trying to do in VHDL is take an 8-bit binary value and represent it as BCD, but there's a catch: the value must be scaled as a fraction of the maximum input, where the maximum maps to 9.
1. Convert the input to an integer, e.g. 1000 0000 -> 128.
2. Divide the integer by 255, then multiply by 90 (90 rather than 9, so that the ones digit and the first digit after the decimal point both end up in the integer part).
E.g. 128/255*90 = 45.17 (let this be signal_in).
3. Extract the two digits of 45 using division and modulus by 10, and store them as separate integers.
e.g. I would use something like:
LSB_int = signal_in mod 10
Then I would divide signal_in by 10, changing it to 4.517, and let that equal MSB_int (that would truncate the decimals and store 4, right?).
4. Convert both LSB_int and MSB_int to 4-bit BCD digits.
...and I would be set from there. But sadly I ran into so much trouble with the different data types (signed, unsigned, std_logic_vector) and with division. So I just need help with any flaws in my thought process and things I should look out for when doing this.
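For reference, here is a quick C++ sketch of the arithmetic I have in mind (integer-only, just to sanity-check the steps; signal_in, MSB_int and LSB_int match the names above, everything else is illustrative):

#include <cstdint>
#include <cstdio>

int main() {
    uint8_t input = 0x80;                       // 1000 0000 -> 128
    // Scale: input/255*90, done as (input*90)/255 so it stays in integers.
    unsigned signal_in = (input * 90u) / 255u;  // 128 -> 45 (i.e. 4.5)
    unsigned LSB_int = signal_in % 10;          // 5: first digit after the point
    unsigned MSB_int = signal_in / 10;          // 4: ones digit (decimals truncated)
    // Pack each digit into a 4-bit BCD nibble.
    unsigned bcd = (MSB_int << 4) | LSB_int;
    printf("%u.%u -> BCD 0x%02X\n", MSB_int, LSB_int, bcd);
}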
I actually redid my code and thought I had saved this version, but I didn't. I still believe this solution can work; I'll reply with what I think my old code was as soon as I can remember it all.
Here is my other question with my new code (just to show I did do something):
Convert 8-bit binary number to BCD in VHDL
If I understand well, what you need is to convert 8-bit data
0-255 → 0-9000
and represent it with 4 BCD digits.
For example, you want 0x80 → 4517 (BCD).
If so, I suggest a totally different idea:
1)
convert the input range into the output range with a simple 8-bit × 8-bit -> 16-bit multiplication
(in_data * 141) and keep the 14 MSBs (0.1% error)
And let's say this 14-bit register is the TARGET
2)
Build a 4-digit BCD Up/Down counter (your output)
Build a 14-bit binary Up/Down counter (follower)
Supply both with the same input (reset, clk, UpDown)
(making one the shadow of the other)
3)
Compare TARGET and the binary counter
if (follower < target) then increment the counters
else if (follower > target) then decrement the counters
else nothing
4)
end
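Not VHDL, but here is a rough C++ model of the idea, just to show the scaling and the follower behaviour; bcd_inc/bcd_dec and the loop body are only an illustration of what the two counters would do on each clock:

#include <cstdint>
#include <cstdio>

// Increment a 4-digit packed-BCD value (one nibble per digit), with carry.
static uint16_t bcd_inc(uint16_t bcd) {
    for (int sh = 0; sh < 16; sh += 4) {
        uint16_t digit = (bcd >> sh) & 0xF;
        if (digit < 9) return bcd + (uint16_t)(1u << sh);
        bcd &= (uint16_t)~(0xFu << sh);   // digit was 9: roll it over to 0, carry on
    }
    return 0;
}

// Decrement a 4-digit packed-BCD value, with borrow.
static uint16_t bcd_dec(uint16_t bcd) {
    for (int sh = 0; sh < 16; sh += 4) {
        uint16_t digit = (bcd >> sh) & 0xF;
        if (digit > 0) return bcd - (uint16_t)(1u << sh);
        bcd |= (uint16_t)(9u << sh);      // digit was 0: roll it under to 9, borrow on
    }
    return 0x9999;
}

int main() {
    uint8_t in_data = 0x80;
    uint16_t target = (uint16_t)((in_data * 141u) >> 2); // keep the 14 MSBs of the 16-bit product
    uint16_t follower = 0;                                // 14-bit binary counter
    uint16_t bcd = 0;                                     // 4-digit BCD counter (the output)
    while (follower != target) {                          // one iteration = one clock
        if (follower < target) { follower++; bcd = bcd_inc(bcd); }
        else                   { follower--; bcd = bcd_dec(bcd); }
    }
    printf("target=%u  bcd=%04X\n", (unsigned)target, (unsigned)bcd); // 0x80 -> 4512, 0x4512
}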
Figure 10.4 provides an algorithm for converting ASCII strings to binary values. Suppose the decimal number is arbitrarily long. Rather than store a table of 10 values for the thousands-place digit, another table of 10 values for the ten-thousands-place digit, and so on, design an algorithm to do the conversion without resorting to any tables whatsoever.
I have attached pictures of figure 10.4. I am not looking for an answer to the problem, but rather can someone please explain this problem and perhaps give some direction on how to go about creating the algorithm?
Figure 10.4
Figure 10.4 second image
I am unsure as to what it means by tables and do not know where to start really.
The tables are those global, initialized arrays: one called Lookup10 holding 10, 20, 30, 40, ..., and another called Lookup100 holding 100, 200, 300, 400...
You can ignore the tables: as per the assignment instructions you're supposed to find a different way to accomplish this anyway. Or, you can run that code in a simulator, or mentally, to understand how it works.
The bottom line is that LC-3, while it can do anything (it is Turing complete), can't do much in any one instruction. For arithmetic & logic, it can do add, not, and. That's pretty much it! But that's enough: note that modern hardware does everything with only one logic gate, namely NAND, which is a binary operator (so NAND is directly available; NOT by giving NAND the same operand for both inputs; AND by doing NOT after NAND; OR by using NOT on both inputs first and then NAND; etc.).
For example, LC-3 cannot multiply or divide or modulus or right shift directly; each of those operations takes many instructions and, in the general case, some looping construct. Multiplication can be done by repetitive addition, and division/modulus by repetitive subtraction. These are very inefficient for larger operands; there are much more efficient algorithms, but they are also substantially more complex, so they increase program complexity well beyond that of the simple repetitive approach.
That subroutine goes backwards through the user's input string. It takes a string length count in R1 as a parameter supplied by the caller (not shown). It looks at the last character in the input and converts it from an ASCII character to a binary number.
(We would commonly do that conversion from ASCII character to numeric value using subtraction: moving the character values from the ASCII character range 0x30..0x39 to numeric values in the range 0..9, but they do it with masking, which also works. The subtraction approach integrates better with error detection (checking whether the character is not a valid digit, which is not done here), whereas the masking approach is simpler for LC-3.)
The subroutine then obtains the 2nd-last digit (moving backwards through the user's input string), converting that to binary using the mask approach. That yields a number between 0 and 9, which is used as an index into the first table, Lookup10. The value obtained from the table at that index position is basically the index × 10, so this table is a × 10 table. The same approach is used for the third digit (the first in the string, or the last going backwards), except it uses the 2nd table, which is a × 100 table.
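In C-like terms, the routine computes something along these lines (a C++ sketch only; the function name and the fixed 3-digit input are my assumptions, the tables are as described):

// Sketch of the table-based conversion for a 3-character ASCII string such as "456".
static const int Lookup10[10]  = {0, 10, 20, 30, 40, 50, 60, 70, 80, 90};
static const int Lookup100[10] = {0, 100, 200, 300, 400, 500, 600, 700, 800, 900};

int ascii3_to_binary(const char *s) {
    int value = s[2] & 0x0F;          // last digit: mask the ASCII code down to 0..9
    value += Lookup10[s[1] & 0x0F];   // 2nd-last digit: index the x10 table
    value += Lookup100[s[0] & 0x0F];  // 3rd-last digit: index the x100 table
    return value;                     // "456" -> 456
}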
The standard approach for string-to-binary conversion is called atoi (search for it), standing for ASCII to integer. It moves forwards through the string, and for every new digit it multiplies the existing value, computed so far, by 10 before adding in the next digit's numeric value.
So, if the string is 456, first it obtains 4, then, because there is another digit, 4 × 10 = 40, then + 5 for 45, then × 10 for 450, then + 6 for 456, and so on.
The advantage of this approach is that it can handle any number of digits (up to overflow). The disadvantage, of course, is that it requires multiplication, which is a complication for LC-3.
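For reference, the forward pass looks roughly like this in C++ (a minimal sketch: digits only, no sign or overflow handling):

// Forward atoi-style conversion: multiply the running value by 10 for each
// new digit, then add that digit's numeric value.
int atoi_forward(const char *s) {
    int value = 0;
    while (*s >= '0' && *s <= '9') {
        value = value * 10 + (*s - '0');  // "456": 4 -> 45 -> 456
        s++;
    }
    return value;
}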
Multiplication where one operand is the constant 10 is fairly easy even in LC-3's limited capabilities, and can be done with simple addition without looping. Basically:
n × 10 = n + n + n + n + n + n + n + n + n + n
and LC-3 can do those 9 additions in just 9 instructions. Still, we can also observe that:
n × 10 = n × 8 + n × 2
and also that:
n × 10 = (n × 4 + n) × 2 (which is n × 5 × 2)
which can be done in just 4 instructions on LC-3 (and none of these needs looping)!
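In C++ terms that last identity is (each line below corresponds to one ADD on LC-3):

// n*10 computed as (n*4 + n)*2, i.e. n*5*2 -- four additions, no loop.
int times10(int n) {
    int n2 = n + n;    // n*2
    int n4 = n2 + n2;  // n*4
    int n5 = n4 + n;   // n*5
    return n5 + n5;    // n*10
}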
So, if you want to do this approach, you'll have to figure out how to go forwards through the string instead of backwards as the given table version does, and, how to multiply by 10 (use any one of the above suggestions).
There are other approaches as well if you study atoi. You could keep the backwards approach, but then you will have to multiply by 10, by 100, by 1000, a different factor for each successive digit. That might be done by repetitive addition, or by keeping a count of how many times to multiply by 10, e.g. n × 1000 = n × 10 × 10 × 10.
This is surely a duplicate, but I was not able to find an answer to the following question.
Let's consider the decimal integer 14. We can obtain its binary representation, 1110, using e.g. the divide-by-2 method (% represents the modulus operator):
14 % 2 = 0
7 % 2 = 1
3 % 2 = 1
1 % 2 = 1
but how do computers convert decimal to binary integers?
The above method would require the computer to perform arithmetic and, as far as I understand, because arithmetic is performed on binary numbers, it seems we would be back dealing with the same issue.
I suppose that any other algorithmic method would suffer the same problem. How do computers convert decimal to binary integers?
Update: Following a discussion with Code-Apprentice (see comments under his answer), here is a reformulation of the question in two cases of interest:
a) How is the conversion to binary performed when the user types integers on a keyboard?
b) Given a mathematical operation in a programming language, say 12 / 3, how is the conversion from decimal to binary done when the program runs, so that the computer can do the arithmetic?
There is only binary
The computer stores all data as binary. It does not convert from decimal to binary since binary is its native language. When the computer displays a number it will convert from the binary representation to any base, which by default is decimal.
A key concept to understand here is the difference between the computer's internal storage and the representation as characters on your monitor. If you want to display a number as binary, you can write an algorithm in code to do the exact steps that you performed by hand. You then print out the characters 1 and 0 as calculated by the algorithm.
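For example, a short C++ routine can perform exactly the divide-by-2 steps from the question and produce the characters 1 and 0 (a sketch; to_binary_string is just an illustrative name):

#include <string>

// Build the binary representation of n as a string of '0'/'1' characters,
// using the same divide-by-2 method done by hand in the question.
std::string to_binary_string(unsigned n) {
    if (n == 0) return "0";
    std::string s;
    while (n > 0) {
        s.insert(s.begin(), char('0' + (n % 2)));  // each remainder becomes the next bit, right to left
        n /= 2;
    }
    return s;  // 14 -> "1110"
}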
Indeed, as you mention in one of your comments, if the compiler has a small look-up table associating decimal digits with binary values, then the conversion can be done with simple binary multiplications and additions.
The look-up table has to contain binary values for the single decimal digits and for decimal ten, hundred, thousand, etc.
Decimal 14 can be transformed to binary by multiplying the binary value of 1 by the binary value of ten and adding the binary value of 4.
Decimal 149 would be the binary value of 1 multiplied by the binary value of one hundred, plus the binary value of 4 multiplied by the binary value of ten, plus the binary value of 9 added at the end.
Decimals are misunderstood in a program
Let's take an example from the C language:
int x = 14;
Here 14 is not a decimal number; it is two characters, 1 and 4, written together as 14.
We know that characters are just representations of some binary value:
1 for 00110001
4 for 00110100
The full ASCII table for characters can be seen here.
So 14 in character form is actually written as the binary 00110001 00110100.
00110001 00110100 => this binary is made to look like 14 on the computer screen (so we think of it as decimal).
We know the number 14 eventually should become 14 = 1110,
or we can pad it with zeros to be
14 = 00001110
For this to happen the computer/processor only needs to do a binary-to-binary conversion, i.e.
00110001 00110100 to 00001110
and we are all set
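As a tiny C++ illustration of that binary-to-binary step (masking each ASCII byte down to its digit value; purely a sketch):

#include <cstdio>

int main() {
    const char src[] = "14";                          // the two bytes 00110001 00110100
    int x = (src[0] & 0x0F) * 10 + (src[1] & 0x0F);   // -> 00001110
    printf("%d\n", x);                                // prints 14
}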
See this code:
var jsonString = '{"id":714341252076979033,"type":"FUZZY"}';
var jsonParsed = JSON.parse(jsonString);
console.log(jsonString, jsonParsed);
When I see my console in Firefox 3.5, the value of jsonParsed is the number rounded:
Object id=714341252076979100 type=FUZZY
Tried different values, the same outcome (number rounded).
I also don't get its rounding rules. 714341252076979136 is rounded to 714341252076979200, whereas 714341252076979135 is rounded to 714341252076979100.
Why is this happening?
You're overflowing the capacity of JavaScript's number type, see §8.5 of the spec for details. Those IDs will need to be strings.
IEEE-754 double-precision floating point (the kind of number JavaScript uses) can't precisely represent all numbers (of course). Famously, 0.1 + 0.2 == 0.3 is false. That can affect whole numbers just like it affects fractional numbers; it starts once you get above 9,007,199,254,740,991 (Number.MAX_SAFE_INTEGER).
Beyond Number.MAX_SAFE_INTEGER + 1 (9007199254740992), the IEEE-754 floating-point format can no longer represent every consecutive integer. 9007199254740991 + 1 is 9007199254740992, but 9007199254740992 + 1 is also 9007199254740992 because 9007199254740993 cannot be represented in the format. The next that can be is 9007199254740994. Then 9007199254740995 can't be, but 9007199254740996 can.
The reason is we've run out of bits, so we no longer have a 1s bit; the lowest-order bit now represents multiples of 2. Eventually, if we keep going, we lose that bit and only work in multiples of 4. And so on.
Your values are well above that threshold, and so they get rounded to the nearest representable value.
As of ES2020, you can use BigInt for integers that are arbitrarily large, but there is no JSON representation for them. You could use strings and a reviver function:
const jsonString = '{"id":"714341252076979033","type":"FUZZY"}';
// Note it's a string −−−−^−−−−−−−−−−−−−−−−−−^
const obj = JSON.parse(jsonString, (key, value) => {
if (key === "id" && typeof value === "string" && value.match(/^\d+$/)) {
return BigInt(value);
}
return value;
});
console.log(obj);
(Look in the real console, the snippets console doesn't understand BigInt.)
If you're curious about the bits, here's what happens: An IEEE-754 binary double-precision floating-point number has a sign bit, 11 bits of exponent (which defines the overall scale of the number, as a power of 2 [because this is a binary format]), and 52 bits of significand (but the format is so clever it gets 53 bits of precision out of those 52 bits). How the exponent is used is complicated (described here), but in very vague terms, if we add one to the exponent, the value of the significand is doubled, since the exponent is used for powers of 2 (again, caveat there, it's not direct, there's cleverness in there).
So let's look at the value 9007199254740991 (aka, Number.MAX_SAFE_INTEGER):
+−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− sign bit
/ +−−−−−−−+−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− exponent
/ / | +−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−+− significand
/ / | / |
0 10000110011 1111111111111111111111111111111111111111111111111111
= 9007199254740991 (Number.MAX_SAFE_INTEGER)
That exponent value, 10000110011, means that every time we add one to the significand, the number represented goes up by 1 (the whole number 1, we lost the ability to represent fractional numbers much earlier).
But now that significand is full. To go past that number, we have to increase the exponent, which means that if we add one to the significand, the value of the number represented goes up by 2, not 1 (because the exponent is applied to 2, the base of this binary floating point number):
+−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− sign bit
/ +−−−−−−−+−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− exponent
/ / | +−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−+− significand
/ / | / |
0 10000110100 0000000000000000000000000000000000000000000000000000
= 9007199254740992 (Number.MAX_SAFE_INTEGER + 1)
Well, that's okay, because 9007199254740991 + 1 is 9007199254740992 anyway. But! We can't represent 9007199254740993. We've run out of bits. If we add just 1 to the significand, it adds 2 to the value:
+−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− sign bit
/ +−−−−−−−+−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− exponent
/ / | +−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−+− significand
/ / | / |
0 10000110100 0000000000000000000000000000000000000000000000000001
= 9007199254740994 (Number.MAX_SAFE_INTEGER + 3)
The format just cannot represent odd numbers anymore as we increase the value; the exponent is too big.
Eventually, we run out of significand bits again and have to increase the exponent, so we end up only being able to represent multiples of 4. Then multiples of 8. Then multiples of 16. And so on.
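You can see the same effect outside JavaScript, because it is a property of the IEEE-754 binary64 format itself; for example, this small C++ sketch stores those integers in a double:

#include <cstdio>

int main() {
    double a = 9007199254740992.0;  // 2^53, i.e. Number.MAX_SAFE_INTEGER + 1
    double b = a + 1.0;             // 9007199254740993 is not representable...
    printf("%.0f\n", b);            // ...so this prints 9007199254740992
    printf("%.0f\n", a + 2.0);      // 9007199254740994 is representable: prints it exactly
    printf("%d\n", a == b);         // prints 1: the two doubles are identical
}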
What you're seeing here is actually the effect of two roundings. Numbers in ECMAScript are internally represented as double-precision floating point. When id is set to 714341252076979033 (0x9e9d9958274c359 in hex), it is actually assigned the nearest representable double-precision value, which is 714341252076979072 (0x9e9d9958274c380). When you print out the value, it is rounded to 16 significant decimal digits, which gives 714341252076979100.
It is not caused by the JSON parser. Just try entering 714341252076979033 in Firebug's console; you'll see the same 714341252076979100.
See this blog post for details:
http://www.exploringbinary.com/print-precision-of-floating-point-integers-varies-too
JavaScript uses double-precision floating-point values, i.e. a total precision of 53 bits, but you need
ceil(log2(714341252076979033)) = 60
bits to represent the value exactly.
The nearest exactly representable number is 714341252076979072 (write the original number in binary, replace the last 7 digits with 0 and round up because the highest replaced digit was 1).
You'll get 714341252076979100 instead of this number because ToString() as described by ECMA-262, §9.8.1 works with powers of ten and in 53 bit precision all these numbers are equal.
The problem is that your number requires a greater precision than JavaScript has.
Can you send the number as a string? Separated in two parts?
JavaScript can only handle exact whole numbers up to about 9000 million million (that's 9 with 15 zeros). Higher than that and you get garbage. Work around this by using strings to hold the numbers. If you need to do math with these numbers, write your own functions or see if you can find a library for them: I suggest the former as I don't like the libraries I've seen. To get you started, see two of my functions at another answer.
I have a binary number which I want to convert into decimal and octal.
(0100 1111 1011 0010)2
I know how to convert it into decimal, but the question confuses me because in the middle of every 4 digits there is a space, e.g. "0101 1111".
Can you help me understand this question?
Thanks
First of all, make sure that number you are converting into Decimal and Octal is actually 'Binary' and not 'Binary Coded Decimal (BCD)'. Usually when the number is grouped into 4 binary digits, it represents a BCD instead of just binary.
So, once you make sure it's actually binary and not BCD, the conversion to both decimal and octal involves simple steps.
For binary to octal, you group the binary number into sets of 3 digits, starting from the Least Significant Bit (LSB, or right-most) to the Most Significant Bit (MSB, or left-most). Add leading zeros if a group of 3 digits cannot be formed at the MSB.
Now convert each group of digits from binary to octal:
(000) -> 0
(001) -> 1
.
.
(111) -> 7
Finally put the numbers together, and there you have your binary converted to octal.
Eg:-
binary - 00101101
split into groups of 3: -> 000 101 101 -> 0 5 5 -> 55
Difference between 'Binary Coded Decimal' and 'Binary':
For the decimal number 1248
the binary would simply be 10011100000
However, the BCD would be -> 0001 0010 0100 1000
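Here is a small C++ sketch of the binary-to-octal grouping described above (bin_to_octal is just an illustrative name; it pads the leftmost group and keeps any leading zero in the output):

#include <cstddef>
#include <cstdio>
#include <string>

// Group the bits in threes from the right and map each group to one octal digit.
std::string bin_to_octal(std::string bits) {
    while (bits.size() % 3 != 0) bits.insert(bits.begin(), '0');  // pad the leftmost group
    std::string oct;
    for (std::size_t i = 0; i < bits.size(); i += 3) {
        int v = (bits[i] - '0') * 4 + (bits[i + 1] - '0') * 2 + (bits[i + 2] - '0');
        oct += char('0' + v);
    }
    return oct;
}

int main() {
    printf("%s\n", bin_to_octal("00101101").c_str());          // prints 055
    printf("%s\n", bin_to_octal("0100111110110010").c_str());  // prints 047662
}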
The spaces are not part of the number; they're just there to make it easier for humans to read. Conversion from binary to octal is simple. Group the binary digits into sets of 3 (from right to left, adding extra 0s to the leftmost group), then convert each group individually. Your example:
0100 1111 1011 0010 -> 100 111 110 110 010 -> 47662
The space is just for readability. Especially nice if you try to convert this to hex, because 4 binary digits make up one hex-digit.
Firstly, those spaces are for human readability; just delete them. Secondly, if this is not for a computer program, simply open up the Windows Calculator, go to View, and select Programmer. Then choose the Bin radio button and type in your number; the Qword radio button should be selected. If it's for a program, I will need to know what language in order to help you.
To convert octal to decimal quickly there are two methods. You can do the actual calculation with bit shifts, and in a program you should use bit shifts.
Example octal number = 147
Method one: From left to right.
Step 1: First digit is one. Take that times 8 plus 4. Got 12.
Step 2: Take 12 times 8 + 7. Got 103, and 103 is the answer.
Ultimately you can use method one to convert any base into base 10.
Method one reads the string from left to right. Make a result holder for the calculation. When you read the first, leftmost digit, you add it to the result value. Each time you read a new digit, you take the result value, multiply it by the base of the number (for octal, that would be 8), then add the value of the new digit to the result.
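In C++ the general left-to-right loop looks roughly like this (to_decimal is an illustrative name; no input validation):

// Read digits left to right: multiply the running result by the base,
// then add the new digit's value. Works for any base up to 10 here.
int to_decimal(const char *digits, int base) {
    int result = 0;
    for (const char *p = digits; *p != '\0'; p++) {
        result = result * base + (*p - '0');
    }
    return result;  // to_decimal("147", 8) -> 103
}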
Method 2, bitshift:
Octal Number: 147.
Step 1: 1 = 1(bin) = Shift << 3 = 1000(result value)
Step 2: 4 = 100(bin) + 1000(result value) = 1100(result value)
Step 3: 1100(result value) Shift << 3 = 1100000
Step 4: 7 = 111(bin) + 1100000(result value) = 1100111
Step 5: 1100111 binary is 103 decimal.
In a programming loop, you can do something like the below, and it is lightning fast. The code is so simple that it can be converted into any programming language. Note that the only error handling is returning 0 when a character is not a valid octal digit.
int out = 0;
for (int i = 0; i < str.length(); i++) {
    int c = str.charAt(i) ^ 48;   // '0'..'9' XOR 48 gives the digit value 0..9
    if (c > 7) return 0;          // <- if char ^ 48 > 7 then that is not a valid octal digit
    out = (out << 3) + c;         // shift the result one octal place (x8) and add the digit
}
return out;
As the title of the question says, I want to take a number (declared preferably as int or char or std::uint8_t), convert it into its binary representation, then truncate or pad it by a certain variable number of bits given, and then insert it into a bit container (preferably std::vector<bool> because I need variable bit container size as per the variable number of bits). For example, I have int a= 2, b = 3. And let's say I have to write this as three bits and six bits respectively into the container. So I have to put 010 and 000011 into the bit container. So, how would I go from 2 to 010 or 3 to 000011 using normal STL methods? I tried every possible thing that came to my mind, but I got nothing. Please help. Thank you.
You can use a combination of 'shifting' (>>) and 'bit-wise and' (&).
First let's look at the bitwise &: for instance, if you have an int a=7 and you do the &-operation on it with 13, you will get 5. Why?
Because & gives 1 at position i iff both operands have a 1 at position i. So we get:
00...000111 // binary 7
& 00...001101 // binary 13
-------------
00...000101 // binary 5
Next, by using the shift operation >> you can shift the binary representation of your ints. For instance 5 >> 1 is 2. Why?
Because each position gets displaced by 1 to the right. The rightmost bit "falls out". Hence we have:
00...00101 //binary for 5
shift by 1 to the right gives:
00...00010 // binary for 2
Another example: 13 (01101) shifted by 2 is 3 (00011). I hope you get the idea.
Hence, by repeatedly shifting and doing & with 1 (00..0001), you can read out the binary representation of a number.
Finally, you can use this 1 to set the corresponding position in your vector<bool>. Assuming you want the representation you show in your post, you will have to fill in your vector from the back. So, you could for instance do something along the lines of:
unsigned int f=13; //the number we want to convert
std::vector<bool> binRepr(size, false); //size is the container-size you want to use.
for(int currBit=0; currBit<size; currBit++){
binRepr[size-1-currBit] = (f >> currBit) & 1;
}
If the container is smaller than the binary representation of your int, the container will contain the truncated number. If it is larger, it will fill in with 0s.
I'm using an unsigned int since for an int you would still have to take care of negative numbers (for positive numbers it should work the same) and we would have to dive into the two's complement representation, which is not difficult, but requires a bit more bit-fiddling.
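To get exactly the two containers from the question (010 for a in three bits, 000011 for b in six bits), you could wrap that loop in a small helper, for example (to_bits is just an illustrative name):

#include <cstddef>
#include <cstdio>
#include <vector>

// Write 'value' into 'size' bits, most significant bit first, truncating
// high bits or padding with leading zeros as needed.
std::vector<bool> to_bits(unsigned int value, std::size_t size) {
    std::vector<bool> bits(size, false);
    for (std::size_t currBit = 0; currBit < size; currBit++) {
        bits[size - 1 - currBit] = (value >> currBit) & 1;
    }
    return bits;
}

int main() {
    for (bool bit : to_bits(2, 3)) printf("%d", (int)bit);  // prints 010
    printf("\n");
    for (bool bit : to_bits(3, 6)) printf("%d", (int)bit);  // prints 000011
    printf("\n");
}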