I have a program that finds the prime factors of a given number. The algorithm works in the way described below.
1) While n is divisible by 2, print 2 and divide n by 2.
2) After step 1, n must be odd. Now start a loop from i = 3 up to the square root of n. While i divides n, print i and divide n by i; then increment i by 2 and continue.
3) If the remaining n is a prime number greater than 2, it will not have been reduced to 1 by the above two steps. So print n if it is greater than 2.
Is there a way to make it faster?
You can make it faster by only dividing by primes rather than composites. If it's divisible by a composite, you will have already discovered that when dividing by the prime factors of that composite. For example, there's no point testing to see if a number is divisible by 9 if you've already established it's divisible by 3 (and reduced it correctly).
In addition, I wouldn't advance the test value (by two in your case, to the next prime in mine) until you're sure the number you're testing is no longer divisible by it.
Case in point: 27. When your test value is 3, your algorithm divides 27 by 3 to get 9, then immediately increases the test value to 5. It should stay at 3 until 3 is no longer a factor of the number being divided.
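The corrected algorithm can be sketched in Python; this is a minimal version of the three steps from the question, with the inner loop that stays on the same test value until it no longer divides:

```python
def prime_factors(n):
    """Trial-division factorization: divide out each factor completely
    before advancing, so composites like 9 never divide the remainder."""
    factors = []
    # Step 1: divide out all factors of 2.
    while n % 2 == 0:
        factors.append(2)
        n //= 2
    # Step 2: odd candidates from 3 up to sqrt(n).
    i = 3
    while i * i <= n:
        while n % i == 0:  # stay at i until it no longer divides n
            factors.append(i)
            n //= i
        i += 2
    # Step 3: whatever remains (> 2) is a prime factor itself.
    if n > 2:
        factors.append(n)
    return factors
```

With this structure, 27 correctly yields [3, 3, 3]: the test value 3 is applied repeatedly before moving on.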
Figure 10.4 provides an algorithm for converting ASCII strings to binary values. Suppose the decimal number is arbitrarily long. Rather than store a table of 10 values for the thousands-place digit, another table of 10 values for the ten-thousands-place digit, and so on, design an algorithm to do the conversion without resorting to any tables whatsoever.
I have attached pictures of figure 10.4. I am not looking for an answer to the problem, but rather can someone please explain this problem and perhaps give some direction on how to go about creating the algorithm?
Figure 10.4
Figure 10.4 second image
I am unsure what it means by tables and do not really know where to start.
The tables are those global, initialized arrays: one called Lookup10 holding 10, 20, 30, 40, ..., and another called Lookup100 holding 100, 200, 300, 400...
You can ignore the tables: as per the assignment instructions, you're supposed to find a different way to accomplish this anyway. Or, you can run that code in a simulator (or trace it mentally) to understand how it works.
The bottom line is that while LC-3 can do anything (it is Turing complete), it can't do much in any one instruction. For arithmetic and logic, it has ADD, NOT, and AND. That's pretty much it! But that's enough. Note that modern hardware does everything with only one logic gate, NAND, which is a binary operator (NAND is directly available; NOT by feeding the same operand to both NAND inputs; AND by applying NOT after NAND; OR by applying NOT to both inputs first and then NAND; etc.).
For example, LC-3 cannot multiply, divide, take a modulus, or right-shift directly; each of those operations takes many instructions and, in the general case, some looping construct. Multiplication can be done by repeated addition, and division/modulus by repeated subtraction. These are very inefficient for larger operands; the much more efficient algorithms are also substantially more complex, so they increase program complexity well beyond the repetitive approach.
That subroutine goes backwards through the user's input string. It takes a string length count in R1 as a parameter supplied by the caller (not shown). It looks at the last character in the input and converts it from an ASCII character to a binary number.
(We would commonly do that conversion from ASCII character to numeric value using subtraction: moving the character values from the ASCII digit range 0x30..0x39 to numeric values in the range 0..9, but they do it with masking, which also works. The subtraction approach integrates better with error detection (checking for an invalid digit character, which is not done here), whereas the masking approach is simpler for LC-3.)
The subroutine then obtains the 2nd-last digit (moving backwards through the user's input string), converting that to binary using the mask approach. That yields a number between 0 and 9, which is used as an index into the first table, Lookup10. The value obtained from the table at that index position is basically the index × 10, so this table is a × 10 table. The same approach is used for the third digit (the first in the string, i.e. the last going backwards), except it uses the 2nd table, which is a × 100 table.
The standard approach for string to binary is called atoi (search it), standing for ASCII to integer. It moves forwards through the string, and for every new digit, it multiplies the existing value, computed so far, by 10 before adding in the next digit's numeric value.
So, if the string is 456, first it obtains 4; then, because there is another digit, it computes 4 × 10 = 40, then + 5 for 45, then × 10 for 450, then + 6 for 456.
The advantage of this approach is that it can handle any number of digits (up to overflow). The disadvantage, of course, is that it requires multiplication, which is a complication for LC-3.
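The forward atoi approach described above can be sketched in a few lines of Python (the function name ascii_to_int is just illustrative); it uses the subtraction approach for the character conversion, which also allows the error detection mentioned earlier:

```python
def ascii_to_int(s):
    """Forward atoi: multiply the running value by 10, then add each
    digit's numeric value, moving left to right through the string."""
    value = 0
    for ch in s:
        digit = ord(ch) - ord('0')  # subtraction approach: '0'..'9' -> 0..9
        if not 0 <= digit <= 9:     # error detection the masking approach lacks
            raise ValueError("not a digit: " + repr(ch))
        value = value * 10 + digit
    return value
```

This handles any number of digits (up to overflow in a fixed-width register; Python integers don't overflow, but LC-3's 16-bit words do).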
Multiplication where one operand is the constant 10 is fairly easy even in LC-3's limited capabilities, and can be done with simple addition without looping. Basically:
n × 10 = n + n + n + n + n + n + n + n + n + n
and LC-3 can do those 9 additions in just 9 instructions. Still, we can also observe that:
n × 10 = n × 8 + n × 2
and also that:
n × 10 = (n × 4 + n) × 2 (which is n × 5 × 2)
which can be done in just 4 instructions on LC-3 (and none of these needs looping)!
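To make the 4-instruction decomposition concrete, here is the (n × 4 + n) × 2 sequence sketched in Python using only additions, one line per LC-3 ADD (the mapping to LC-3 registers in the comments is illustrative):

```python
def times10(n):
    """n * 10 using only additions, mirroring the 4-instruction
    LC-3 sequence for (n * 4 + n) * 2."""
    n2 = n + n       # ADD R1, R0, R0  -> n * 2
    n4 = n2 + n2     # ADD R1, R1, R1  -> n * 4
    n5 = n4 + n      # ADD R1, R1, R0  -> n * 5
    return n5 + n5   # ADD R1, R1, R1  -> n * 10
```

No looping is needed, which is exactly why this form is attractive on LC-3.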
So, if you want to do this approach, you'll have to figure out how to go forwards through the string instead of backwards as the given table version does, and, how to multiply by 10 (use any one of the above suggestions).
There are other approaches as well if you study atoi. You could keep the backwards approach, but then you will have to multiply by 10, by 100, by 1000: a different factor for each successive digit. That might be done by repeated addition. Or keep a count of how many times to multiply by 10; e.g. n × 1000 = n × 10 × 10 × 10.
I'm building a mini binary calculator, and using this logic I am able to combine 2 digit binary numbers (0-3, in decimal) to return at most a 3 digit binary number (0-6, 7 is unreachable with what I have):
However, there's trouble when I combine 3 and 1: it returns 2 and not 4, whereas combining 2 and 2 returns 4. I'm new to binary/logic gates, and I'm having trouble understanding why this is happening, and if there is another way to arrange the gates to allow an output of 4 when combining 3 and 1?
You seem to have misunderstood the purpose of Cin (carry in) and Cout (carry out). At its simplest, the carry represents an overflow, which is carried into the next digit of the addition.
1-bit Half Adder
You already got the half adder right. It's just an XOR for the sum and an AND for the Cout (carry out).
1-bit Full Adder
A full adder is just 2 half adders, plus an OR gate combining their carry outputs. It receives the A and B bits and the Cin from the previous full adder in the chain of full adders for the addition.
4-bit Full Adder
However many bits you need to add together, you can always use the last and "unused" Cout to detect whether the result overflows or not.
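The gate arrangement above can be simulated in a few lines of Python (function names are illustrative); the ripple of the carry is exactly what makes 3 + 1 produce 4 instead of 2:

```python
def half_adder(a, b):
    """Sum is XOR, carry-out is AND."""
    return a ^ b, a & b

def full_adder(a, b, cin):
    """Two half adders; an OR gate merges the two carry-outs."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, cin)
    return s2, c1 | c2

def ripple_add(a_bits, b_bits):
    """Ripple-carry adder over equal-length bit lists, LSB first.
    The final carry-out becomes the extra result bit."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)  # the last "unused" Cout is the overflow/extra digit
    return out
```

With LSB-first bit lists, 3 + 1 is ripple_add([1, 1], [1, 0]), which yields [0, 0, 1], i.e. binary 100 = 4, the same result as 2 + 2.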
An infinitely long stream of 0s and 1s is coming in; you have to determine whether the number formed so far is divisible by 3.
I tried finding the decimal equivalent of the binary number, summing its digits, and checking whether that sum is divisible by 3. But I know this is the wrong method, because the number is infinitely long, so after some time it will go out of range. What would be the right approach?
Another approach would be to count the set bits in even positions and the set bits in odd positions; if the difference between the two counts is divisible by 3, then the number is divisible by 3.
But here too, after some time the number will be out of range.
Is there a better approach?
Just keep track of x = current number mod 3.
If b is the next bit, update:
x = (2*x + b) % 3
When x = 0, the current number is divisible by 3.
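The update rule above needs only constant memory, whatever the stream length; a minimal Python sketch (the generator name is illustrative):

```python
def divisible_by_3_stream(bits):
    """For each incoming bit, yield whether the number formed so far
    is divisible by 3. Only x = (number mod 3) is ever stored; the
    number itself is never materialized, so nothing goes out of range."""
    x = 0
    for b in bits:
        x = (2 * x + b) % 3  # appending bit b doubles the number and adds b
        yield x == 0
```

For the stream 1, 1, 0 (binary 1 = 1, 11 = 3, 110 = 6) this yields False, True, True.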
We know that all prime numbers are of the form 6k ± 1. To check if n is a prime number, can't we just divide n by 6, take the floor, multiply back by 6, and check whether adding or subtracting 1 gives n? That would check if a number is prime in constant time, right? If this is true, why do other methods bother using sieves for primality tests? Also, using this method, wouldn't finding a range of primes from 2 to n be O(n)? So this method is faster than the sieve of Eratosthenes?
Yes, all primes are of the form 6k +/- 1, but that doesn't mean that each number that is of the form 6k +/- 1 is prime. Consider 25, which is 6 * 4 + 1. Clearly, 25 is not prime.
We know that all prime numbers are of the form 6k+-1.
But not all numbers in the form 6k+-1 are prime. (E.g. 6 * 4 + 1 = 25)
This means that your isPrime based on this test will give false positives (e.g. for 25). It will never give false negatives though, so it can be used to weed out some possibilities before you apply a real primality test.
You may find https://en.wikipedia.org/wiki/Primality_test#Simple_methods educational. In particular, the 6k+1 pattern is just one of many patterns that can be used in creating a naive primality test, the more general/optimized case of which ends up reducing to ... the Sieve of Eratosthenes.
This works, but the utility is minor. Essentially, this statement is equivalent to "If a number is not divisible by 2 or 3, it might be prime."
6k is divisible by 6 (redundant check - same as divisible by 2 or 3).
6k + 1 may be prime
6k + 2 = 2(3k + 1) is divisible by 2
6k + 3 = 3(2k + 1) is divisible by 3
6k + 4 = 2(3k + 2) is divisible by 2 (redundant check)
6k + 5 may be prime. (and is equivalent to 6m - 1 for m = k+1)
So what we've actually accomplished is to replace trial division by 2 and 3 (eliminating their multiples, sieve-wise) with two slightly more complex operations. In other words, this method is just the first two iterations of the sieve of Eratosthenes.
So any algorithm using this property is equivalent to the following code:
boolean isPrime(n)
{
    if (n < 4) return n > 1;   // handle 0, 1, 2, 3 explicitly
    if (n % 2 == 0) return false;
    if (n % 3 == 0) return false;
    return isPrimeRigourousCheck(n);
}
Which is pretty basic, and much simpler than solving the other equation 2 times.
Of interest, we can construct other similar rules. For example, the obvious next choice is
30k +- (1,7,11,13) but that gives us 8 cases to solve, and is equivalent to adding the single trial division:
if (n % 5 == 0) return false;
to the code above. (which is a 3 iteration sieve)
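As an illustration, the rigorous check itself can use the 6k ± 1 pattern to choose its trial divisors; here is a minimal Python sketch of that standard technique (not code from the answer above):

```python
def is_prime(n):
    """Trial division testing only divisors of the form 6k +/- 1,
    after handling 2 and 3: the two-iteration 'sieve' from the text."""
    if n < 4:
        return n > 1           # 2 and 3 are prime; 0, 1 are not
    if n % 2 == 0 or n % 3 == 0:
        return False
    i = 5                      # candidates: 5, 7, 11, 13, 17, 19, ...
    while i * i <= n:
        if n % i == 0 or n % (i + 2) == 0:
            return False
        i += 6
    return True
```

This correctly rejects 25 (the false positive of the naive 6k ± 1 test) because 25 is divisible by the trial divisor 5, itself of the form 6k - 1.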
A specific example
I need to generate a random number between 0 and 2, inclusive. (or choose randomly between -1, 0, and 1).
The naive approach would be to do something like rand() mod 3, where rand() returns an integer. This approach will not generate uniformly distributed numbers, because the number of values rand() can return (a power of two, assuming the lower bound is 0) is not a multiple of 3.
For instance, assuming rand() returned 2 bits (from 0 to 3, inclusive), the modulus would map:
0 -> 0
1 -> 1
2 -> 2
3 -> 0
This skew toward 0 would obviously be much less if more bits would be returned, but regardless, the skew would remain.
The generic question
Is there a way of generating an evenly distributed random number between 0 and n-1, inclusive, where n is relatively prime to 2?
A common approach is to discard random values above the last full cycle, and just ask for a new random number.
It might help to choose your rand() upper bound to be k*n, where k is an integer. This way the outcome will be evenly distributed, provided that rand() is a good random generator.
If it's not possible to change the upper bound, you can pick k so that k*n is as close to rand()'s upper bound as possible, discard results above this number, and try again.
See my answer to a similar question.
Basically, use your RNG and discard everything outside the largest whole number of full cycles, then try again. Concretely, if rand() returns values in 0..MAX, discard everything >= n * floor((MAX + 1) / n); what remains covers complete cycles, so taking it mod n is uniform.
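Here is a minimal Python sketch of that discard-and-retry (rejection sampling) idea, using a 2-bit generator like the one in the question; the name uniform_below is illustrative:

```python
import random

def uniform_below(n, rand_max=3):
    """Uniform draw from 0..n-1 via rejection sampling.
    rand_max=3 mimics the 2-bit rand() (0..3) from the question."""
    limit = (rand_max + 1) // n * n  # size of the largest whole number of cycles
    while True:
        r = random.randint(0, rand_max)  # stand-in for the underlying rand()
        if r < limit:                    # keep only draws inside full cycles
            return r % n
```

For n = 3 with a 2-bit generator, limit is 3, so the draw 3 (the start of the incomplete cycle that caused the skew toward 0) is rejected and redrawn.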
Generic Answer: You need to use more than just 2 bits of the number.
My rule of thumb is to generate floating-point values, x, 0.0 <= x < 1.0, multiply by 3 and truncate. That should get you values in the range 0, 1 and 2 that depend on a larger number of bits.
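That rule of thumb, sketched in Python (the name rand_mod3 is illustrative); note the result is only as unbiased as the float generator, but any residual bias is far smaller than taking two low bits:

```python
import random

def rand_mod3():
    """Scale a float in [0.0, 1.0) by 3 and truncate, so the result
    depends on many bits of the generator rather than the low two."""
    return int(random.random() * 3)
```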