Binary combinations count

How can we find the total count of n-bit binary numbers in which every run of continuous 0s has length <= 4?
Example: let's consider n = 16-bit binary numbers. 2^16 = 65536 numbers.
The following combinations have all runs of continuous 0s <= 4, so they are allowed -
1000011111111111
1000010000100001
1000100001011001
0101010001000011
0101001000100001
The following combinations have at least one run of continuous 0s > 4, so they must not be allowed -
1000010000011111
1000010000010001
1000000011011001
0101010000000011
0101000001000011
One way is to iterate through all n-bit combinations and filter out the required ones.
However, that is feasible only for small n. Is there a faster way to determine the count for large n-bit numbers?
Thanks.

Let T(n) be the number of binary strings of length n that you are interested in. Then there is a linear recurrence T(n) = 2*T(n-1) - T(n-6) that can be solved using standard techniques once the base cases are identified; what you are looking for is T(16).
T(i) = 2^i for i < 5
T(5) = 2^5 - 1 = 31
T(n) = 2*T(n-1) - T(n-6) for n > 5
Solving the recurrence yields a closed-form formula, and even without it the recurrence itself can be evaluated quickly for many values of n.
The reasoning behind the recursive formula: appending a 1 to a good string of length n-1 is always safe, which accounts for one T(n-1) term. Appending a 0 is safe unless the length-(n-1) string already ends in four 0s; a good string of length n-1 ending in four 0s must have a 1 just before them, so it is exactly a good string of length n-6 followed by "10000", and there are T(n-6) of those. Appending 0 therefore contributes the remaining T(n-1) - T(n-6).
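As a sanity check, here is a short Python sketch (mine, not part of the original answer) that evaluates the recurrence and compares it against brute-force enumeration for small n:

    def t(n):
        # T(i) = 2^i for i < 5, T(5) = 31, then T(n) = 2*T(n-1) - T(n-6)
        vals = [1, 2, 4, 8, 16, 31]          # T(0) .. T(5)
        while len(vals) <= n:
            vals.append(2 * vals[-1] - vals[-6])
        return vals[n]

    def brute(n):
        # count n-bit strings whose longest run of 0s is at most 4
        return sum('00000' not in format(i, f'0{n}b') for i in range(2 ** n))

    assert all(t(n) == brute(n) for n in range(1, 15))
    print(t(16))   # 52656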

LC-3 algorithm for converting ASCII strings to Binary Values

Figure 10.4 provides an algorithm for converting ASCII strings to binary values. Suppose the decimal number is arbitrarily long. Rather than store a table of 10 values for the thousands-place digit, another table of 10 values for the ten-thousands-place digit, and so on, design an algorithm to do the conversion without resorting to any tables whatsoever.
I have attached pictures of figure 10.4. I am not looking for an answer to the problem, but rather can someone please explain this problem and perhaps give some direction on how to go about creating the algorithm?
Figure 10.4 (two images attached)
I am unsure as to what it means by tables and do not know where to start really.
The tables are those global, initialized arrays: one called Lookup10 holding 10, 20, 30, 40, ..., and another called Lookup100 holding 100, 200, 300, 400...
You can ignore the tables: as per the assignment instructions you're supposed to find a different way to accomplish this anyway.  Or, you can run that code in the simulator, or trace it mentally, to understand how it works.
The bottom line is that LC-3, while it can do anything (it is Turing complete), can't do much in any one instruction.  For arithmetic & logic, it can do add, not, and.  That's pretty much it!  But that's enough; note, for comparison, that hardware can be built from just one kind of logic gate, namely NAND, which is a binary operator (so NAND is directly available; NOT by giving NAND the same operand for both inputs; AND by doing NOT after NAND; OR by applying NOT to both inputs first and then NAND; etc.).
For example, LC-3 cannot multiply, divide, take a modulus, or right-shift directly; each of those operations takes many instructions and, in the general case, some looping construct.  Multiplication can be done by repeated addition, and division/modulus by repeated subtraction.  These are very inefficient for larger operands, and the much more efficient algorithms are also substantially more complex, so they increase program complexity well beyond the repetitive-operation approach.
That subroutine goes backwards through the user's input string.  It takes a string length count in R1 as a parameter supplied by the caller (not shown).  It looks at the last character in the input and converts it from an ASCII character to a binary number.
(We would commonly do that conversion from ASCII character to numeric value using subtraction: moving the character values from the ASCII character range 0x30..0x39 to numeric values in the range 0..9, but they do it with masking, which also works.  The subtraction approach integrates better with error detection (checking whether the character is a valid digit, which is not done here), whereas the masking approach is simpler for LC-3.)
The subroutine then obtains the 2nd-to-last digit (moving backwards through the user's input string), converting it to binary using the mask approach.  That yields a number between 0 and 9, which is used as an index into the first table, Lookup10.  The value obtained from the table at that index position is basically the index × 10, so this table is a ×10 table.  The same approach is used for the third digit (the first one in the string, i.e. the last going backwards), except it uses the 2nd table, which is a ×100 table.
The standard approach for string-to-binary conversion is called atoi (search for it), standing for ASCII to integer.  It moves forwards through the string, and for every new digit it multiplies the value computed so far by 10 before adding in the new digit's numeric value.
So, if the string is 456, first it obtains 4; then, because there is another digit, it computes 4 × 10 = 40, then + 5 for 45, then × 10 for 450, then + 6 for 456, and so on for longer strings.
The advantage of this approach is that it can handle any number of digits (up to overflow).  The disadvantage, of course, is that it requires multiplication, which is a complication for LC-3.
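Expressed in a higher-level language, the forward approach looks like this (a Python sketch of the idea only, not a translation of the book's LC-3 code):

    def atoi(s):
        value = 0
        for ch in s:
            digit = ord(ch) - ord('0')   # ASCII '0'..'9' -> 0..9 (the subtraction approach)
            value = value * 10 + digit   # shift the digits accumulated so far up one decimal place
        return value

    print(atoi("456"))   # 456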
Multiplication where one operand is the constant 10 is fairly easy even in LC-3's limited capabilities, and can be done with simple addition without looping.  Basically:
n × 10 = n + n + n + n + n + n + n + n + n + n
and LC-3 can do those 9 additions in just 9 instructions.  Still, we can also observe that:
n × 10 = n × 8 + n × 2
and also that:
n × 10 = (n × 4 + n) × 2     (which is n × 5 × 2)
which can be done in just 4 instructions on LC-3 (and none of these needs looping)!
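In Python the same trick is just four additions; on LC-3 each line below would map to a single ADD instruction (my sketch, not code from the assignment):

    def times10(n):
        t = n + n      # 2n  : one LC-3 ADD
        t = t + t      # 4n  : one LC-3 ADD
        t = t + n      # 5n  : one LC-3 ADD
        return t + t   # 10n : one LC-3 ADD

    assert times10(7) == 70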
So, if you want to do this approach, you'll have to figure out how to go forwards through the string instead of backwards as the given table version does, and, how to multiply by 10 (use any one of the above suggestions).
There are other approaches as well if you study atoi.  You could keep the backwards approach, but then you will have to multiply by 10, by 100, by 1000, a different factor for each successive digit.  That might be done by repeated addition, or by keeping a count of how many times to multiply by 10, e.g. n × 1000 = n × 10 × 10 × 10.
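A Python sketch of that backwards variant (my illustration of the bookkeeping, using a running place value that is itself multiplied by 10 at each step):

    def atoi_backwards(s):
        value = 0
        place = 1                    # 1, 10, 100, 1000, ...
        for ch in reversed(s):
            value += (ord(ch) - ord('0')) * place
            place *= 10              # on LC-3 this would use the multiply-by-10 trick above
        return value

    assert atoi_backwards("456") == 456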

Binary representation of a natural number

To get the binary representation of a natural number like 20, we divide this number by 2, and so on, until we cannot divide by 2 anymore. To get the binary representation of a decimal fraction like 0.4512, we multiply this number by 2 repeatedly.
What is the logic explanation why with these two systems we get a binary representation?
Thanks
It is based on the fact that numbers are coded in binary.
If the number A is an integer, A is rewritten as
A = Σ_{i=0}^{n-1} a_i × 2^i = a_(n-1)×2^(n-1) + a_(n-2)×2^(n-2) + ... + a_1×2 + a_0
where each a_i is 0 or 1.
It is easy to see that if A is even, a_0 = 0, and if it is odd, a_0 = 1. So we already have the least significant bit a_0.
Now, if we divide A by two, a_0 disappears and we have
A/2 = a_(n-1)×2^(n-2) + a_(n-2)×2^(n-3) + ... + a_2×2 + a_1
In this way we can determine a_1 from the parity of A/2, and continuing, we get all the bits of A.
Fractional numbers are expressed with negative powers of 2. If A = 0.a_(-1)a_(-2)...a_(-n) in binary, then A = a_(-1)/2 + a_(-2)/4 + ... + a_(-n)/2^n.
If we multiply it by two, 2×A = a_(-1) + a_(-2)/2 + ... + a_(-n)/2^(n-1). If 2×A ≥ 1, we must have a_(-1) = 1, otherwise a_(-1) = 0. And we can determine the other bits in a similar way by successive multiplications by two.
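A small Python sketch of both procedures (variable names are mine, just to illustrate the two methods described above):

    def int_to_binary(a):
        # repeated division by 2: each remainder is the next bit, least significant first
        bits = []
        while a > 0:
            bits.append(str(a % 2))   # parity gives a_0, then a_1, ...
            a //= 2
        return ''.join(reversed(bits)) or '0'

    def frac_to_binary(x, n_bits=10):
        # repeated multiplication by 2: the integer part that appears is the next bit
        bits = []
        for _ in range(n_bits):
            x *= 2
            if x >= 1:
                bits.append('1')
                x -= 1
            else:
                bits.append('0')
        return '0.' + ''.join(bits)

    print(int_to_binary(20))       # 10100
    print(frac_to_binary(0.4512))  # 0.0111001110 (first 10 bits)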

Binary numbers addition

I have just started doing some binary number exercises to prepare for a class that I will start next month, and I got the hang of all the conversions from decimal to binary and vice versa. But now, with the two letters 'a' and 'b' in this exercise, I am not sure how I can apply that knowledge to add the bits in the following exercise:
Given two binary numbers a = (a7 a6 ... a0) and b = (b7 b6 ... b0). There is a calculator that can add 4-bit binary numbers. How many bits will be used to represent the result of a 4-bit addition? Why?
We would like to use our calculator to calculate a + b. For this we can put up to eight bits (4 bits of the first and 4 bits of the second number) of our choice into the calculator and continue to use the result bit by bit.
How many additions does our calculator have to carry out for the addition of a and b at most? How many bits long is the result at most?
How many additions does the calculator have to perform at least for the result to be correct for all possible inputs a and b?
The number of bits needed to represent a 4-bit binary addition is 5. This is because there could be a carry-over bit that pushes the result to 5 bits.
For example 1111 + 0010 = 10001.
This can be done the same way as adding decimal numbers. From right to left, just add the bits of the same significance. If the two bits are 1 + 1, the result is 10, so that place becomes a zero and the 1 carries over to the next pair of bits, just like in decimal addition.
With regard to the min/max number of additions, those seem more like algorithm-specific questions. Look up some different binary addition algorithms, ripple-carry for instance, and it should give you a better idea of what is meant by the question.
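To make the calculator part concrete, here is one possible way (my illustration, not a prescribed solution) to add two 8-bit numbers using only a 4-bit adder primitive, sketched in Python: add the low nibbles, add the high nibbles, and feed the carry from the low half into the high half with one more addition when needed.

    def add4(x, y):
        # 4-bit adder primitive: returns (4-bit sum, carry-out bit), i.e. a 5-bit result
        s = x + y
        return s & 0b1111, s >> 4

    def add8_with_4bit_adder(a, b):
        a_lo, a_hi = a & 0b1111, a >> 4      # split into low and high nibbles
        b_lo, b_hi = b & 0b1111, b >> 4
        lo, carry_lo = add4(a_lo, b_lo)      # 1st addition
        hi, carry_hi = add4(a_hi, b_hi)      # 2nd addition
        if carry_lo:
            hi, c = add4(hi, carry_lo)       # 3rd addition, only needed when the low half carries
            carry_hi |= c
        return (carry_hi << 8) | (hi << 4) | lo   # at most 9 bits

    assert add8_with_4bit_adder(0b11111111, 0b00000001) == 0b100000000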

MySQL: Can't save 1000000 in a float field

I have a float column and I'm trying to save the value 1000000. It automatically turns it to 1e+06. How can I fix it?
To have the value returned formatted as 1000000, you can simply add integer zero to the column in the SELECT list.
SELECT mycol+0 AS mycol FROM mytable
MySQL is storing the value in IEEE floating-point format. (One bit for the sign, a certain number of bits for the exponent, and a certain number of bits for the mantissa. This isn't really a MySQL thing; it's the standard representation for floating-point values.)
As far as what's being returned, that's an issue with converting that value into a string representation.
A floating-point number has a large range of values. To represent the maximum value of a float (3.402823e+38) as a decimal value would require 39 decimal digits. Only the seven leftmost digits of the value are significant; we'd need another 32 digits/zeros just to indicate the position of the decimal point.
So, returning a string representation of scientific notation is a reasonable approach to returning a representation of the value.
Those two things are equivalent:
1e+06
= 1 * 10^6
= 1 * 1,000,000
= 1,000,000
It's called scientific notation (see here). MySQL uses it to display huge/tiny values, especially approximate values (see here).
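Just to illustrate the equivalence outside of MySQL, a quick check in Python (my example, unrelated to the MySQL client itself):

    value = 1e+06
    print(value == 1000000)        # True: 1e+06 is exactly one million
    print('{:f}'.format(value))    # 1000000.000000, fixed-point instead of scientific notation
    print('{:.0f}'.format(value))  # 1000000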
You can use DOUBLE(10, 3), where 10 is the total number of digits (excluding the decimal point) and 3 is the number of digits to follow the decimal. (Note that DOUBLE(8, 3) would top out at 99999.999, which is too small to hold 1000000.)

Given the following, how do I find the number of normalized floating-point numbers and why?

I'm trying to understand how floating-point arithmetic plays a role in computer science when using the binary system. I came across an excerpt from What Every Computer Scientist Should Know About Floating-Point Arithmetic which defines normalized numbers as floating-point numbers whose leading significand digit is non-zero (which makes the representation unique). It goes on to say...
When β = 2, p = 3, e_min = -1 and e_max = 2, there are 16 normalized floating-point numbers, as shown in Figure D-1.
Where β is the base, p is the precision, e_min is the minimum exponent, and e_max is the maximum exponent.
My attempt at understanding how he came to the conclusion of there being 16 normalized floating-point numbers was to multiply together the possible number of significands, β^p, and the possible number of exponents, e_max - e_min + 1. My result was 32 possible normalized floating-point values. I am unsure how to get the correct result of 16 normalized floating-point values declared in the paper above. I wondered whether negative floating-point values were excluded; however, I did not include them in my calculations anyway.
This question is more geared toward mathematical formulae. But it will help me to better understand how floating-point arithmetic works in computer science.
I would like to know how to get the correct result of 16 normalized floating-point numbers and why.
Since the first bit is always 1, with 3 bits for the mantissa you have only two bits to vary, yielding 4 different mantissa values. Combined with 4 different exponent values, that's 16. I haven't looked at the paper, though.
My attempt at understanding how he came to the conclusion of there being 16 normalized floating-point numbers was to multiply together the possible number of significands, β^p, and the possible number of exponents, e_max - e_min + 1
This is correct, except that the number of possible significands is not β^p in binary with an implicit leading 1. In these conditions, the number of possible significands is β^(p-1), encoded over p-1 bits.
In other words, the missing values for the possible significands have already been taken advantage of when the encoding reserved, say, 52 bits to encode a precision of 53 binary digits.
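A quick enumeration in Python (my own check, not from the paper) reproduces the count of 16 for β = 2, p = 3, e_min = -1, e_max = 2, counting positive values only:

    from itertools import product

    beta, p, e_min, e_max = 2, 3, -1, 2

    values = set()
    for exp in range(e_min, e_max + 1):
        # normalized significand d0.d1d2 with d0 != 0, so d0 = 1 in base 2
        for d1, d2 in product(range(beta), repeat=p - 1):
            significand = 1 + d1 / beta + d2 / beta ** 2
            values.add(significand * beta ** exp)

    print(len(values))   # 16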