Why do prime factors exist only up to the square root of a number? [closed] - prime-factoring

To find all the prime factors of a number, we traverse from 2 to sqrt(number). Why are all the prime factors accounted for by the time we reach sqrt(number)?

If you express a number as z = x*y, either x or y has to be <= sqrt(z); otherwise the product would be greater than z.
For all (x, y) pairs such that z = x*y, if you traverse x over [2, sqrt(z)], you cover every y by computing z/x.
"Why are all the prime factors accounted for within sqrt(number)?" That premise is wrong; a simple counterexample is the factor 7 of 28, which is greater than sqrt(28). By using the first two points, however, when you test the divisibility of 28 by 4 (which is <= sqrt(28)), you get 7 by computing 28/4.

Suppose c = a * b. If a | c, then b | c i.e. (c / a) | c.
We leverage this fact to reduce the time complexity to find all the factors of a number.
For example, take the number 12.
The square root of 12 is approximately 3.46.
1 and 12 (the number itself) are already factors.
So we can iterate from 2 until 3.
Since 12 % 2 is 0, 2 is a factor and 12 / 2 = 6 is also a factor.
12 % 3 is 0, so 3 is a factor and 12 / 3 = 4 is also a factor.
This way, we have found all the factors.
In a sense, we iterate up to the multiplicative middle point, which for a single number lies at its square root, and leverage the above property.
To find all prime factors, we iterate over primes from 2 up to the square root of the number, dividing each out as we find it; if anything greater than 1 remains at the end, that remainder is the single prime factor larger than the square root.
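Here is a rough Python sketch of that idea, using trial division by every integer up to the square root (equivalent to testing only primes, since a composite divisor can never divide once its own prime factors have been divided out):

def prime_factors(n):
    factors = []
    d = 2
    while d * d <= n:              # only test divisors up to sqrt(n)
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                      # at most one prime factor above sqrt(n) can remain
        factors.append(n)
    return factors

print(prime_factors(28))           # [2, 2, 7]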

Related

Reduction of odd number of elements CUDA

It seems that reduction is only possible for an even number of elements. For example, suppose I need to sum up numbers. When I have an even number of elements, it will be like this:
1 2 3 4
1+2
3+3
6+4
But what to do when I have, for instance, 1 2 3 4 5? Is the last iteration the sum of three elements, 6+4+5, or what? I saw the same question here, but couldn't find the answer.
A parallel reduction will add pairs of elements first:
1    1+3    4+6
2    2+4
3
4
Your example with an odd number of elements would typically be realized as:
1    1+5    6+3    9+6
2    2+0    2+4
3    3+0
4    4+0
5
0
0
0
That is to say, typically a parallel reduction will work with a power-of-2 set of threads, and at most one threadblock (the last one) will have fewer than a full complement of data to work with. The usual method to handle this is to zero-pad the data out to the threadblock size. If you study the CUDA parallel reduction sample code, you'll find examples of this.
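Not CUDA itself, but a short Python simulation of the same stride-halving scheme with zero-padding (the data layout follows the diagrams above; in a real kernel, each inner loop index would be one thread):

def reduce_sum(values):
    n = 1
    while n < len(values):                    # round the size up to a power of 2
        n *= 2
    data = values + [0] * (n - len(values))   # zero-pad so every element has a partner
    stride = n // 2
    while stride > 0:
        for i in range(stride):               # in CUDA, one thread per i
            data[i] += data[i + stride]
        stride //= 2
    return data[0]

print(reduce_sum([1, 2, 3, 4, 5]))            # 15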

One's complement and two's complement logic behind [closed]

For my computer science class I need to finish a project where I desperately need to understand the logic of one's complement and two's complement. I already know how to construct these and also how a hardware adder works when dealing with two's complement. The thing that is bothering me, and that I need help with, is the logic behind one's complement addition. Why do we have to add the bit we would carry over (and discard when using two's complement coding) to the sum to get the correct result? I don't understand why binary addition behaves like this when adding one's complement numbers, and why that last addition of the carry-over bit is so crucial. I need to understand the logic behind it. Thanks
A 1's complement number (say, 16 bits) can be described as below, in terms of the relationship between the binary code c (which is 0..0xFFFF, or 0..65535) and the 'value' x which the code represents; x is in the range -32767..32767.
If the value x is 0..32767, the code c is numerically equal to x, and thus has a zero in the MSB.
If the value x is negative, -1..-32767, then c is 65535 + x (equivalently, 65535 - |x|), and the MSB of c is 1.
The code c=65535 = 0xFFFF is to be interpreted as zero; since it has a 1 in the MSB I will call it '-0'.
Now consider what happens to the x values when you add two codes using a binary adder:
If both x1, x2 are positive, the adder adds the codes: c = c1 + c2. Both MSBs are zero at the input, so there will be no carry. So the value of c will be the sum x1+x2, and this is how it will be interpreted (assuming the sum doesn't overflow into the MSB).
If both x1, x2 are negative, then by adding c1+c2 you are adding (65535+x1)+(65535+x2). There will always be a carry-out; by discarding this you end up with a binary value equal to 65534+(x1+x2). To get the code we want to represent this negative sum, i.e. 65535+(x1+x2), we need to add 1 more.
If the signs are different, the sum of codes is 65535+(x1+x2). There may or may not be a carry-out. Since the MSBs are different, the carry-out occurs if, and only if, the MSB of the sum is zero. In the case where the true sum (x1+x2) < 0, you won't have a carry-out, and the value 65535+(x1+x2) is the proper code for (x1+x2). If the sum (x1+x2)>0, there will be a carry-out; the sum of the codes (after discarding the carry out) will be (x1+x2)-1, and thus you have to add 1 to get the proper code (x1+x2).
So, looking at all the cases, it works out that whenever a carry-out occurs, (and you effectively subtract 65536 by discarding it) you need to add 1 to get the proper code to represent the sum.
When x1+x2 = 0 -- with one <0 and the other >0 - the sum of codes will always be 65535, which is left as is and is to be interpreted as zero ("-0").
When both are zero, you have three cases:
0 + 0 = 0. Fairly simple...
-0 + -0: codes are 0xFFFF + 0xFFFF = (carry + 0xFFFE) = 0xFFFF after adding the carry back in, interpreted as -0.
-0 + 0: codes are 0xFFFF + 0 = 0xFFFF (-0).
So the only case where the 1's complement sum is a 'proper' 0 is when both inputs are proper 0. All other cases adding up to zero give a '-0'.
Mathematically, you could say that 1's complement uses c ≡ x (mod 65535), with c constrained to 0..0xFFFF, and so the addition needs to be done modulo 65535. Each time you have a carry-out, you subtract 65536 (by discarding it at the top) and add 1 (by adding it at the bottom); since the net effect is subtracting 65535, the value modulo 65535 is preserved.
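A small Python sketch of the end-around carry for 16-bit codes, following the cases above (the helper names are mine):

def ones_complement_add(c1, c2):
    s = c1 + c2
    if s > 0xFFFF:                 # carry-out: effectively subtract 65536...
        s = (s & 0xFFFF) + 1       # ...and add the carry back in at the bottom
    return s

def encode(x):
    # value in -32767..32767 -> 16-bit one's complement code
    return x if x >= 0 else 65535 + x

print(hex(ones_complement_add(encode(-1), encode(-2))))   # 0xfffc, the code for -3
print(hex(ones_complement_add(0xFFFF, 0xFFFF)))           # 0xffff: -0 + -0 = -0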

Bouncing off inclined surface [closed]

I can calculate the distance between an inclined line and my ball (with the normal vector), but how can I calculate the new velocity?
Anders' answer is a good one, but I realise that you may not have a strong mathematical background, so I will elaborate. The problem as you currently state it is poorly posed. However, see the following figure.
This will allow us to derive the equation you require. Now, the scalar product of two vectors a and b, written a.b, gives the magnitude of a multiplied by the projection of b onto a. Basically, if we take n as a unit vector (magnitude 1), then a.n gives the magnitude of the component of a which acts in the direction of n.
So, splitting the velocity into components parallel and perpendicular to the plane: to get the velocity V, we first split U into components.
Perpendicular to the plane, in direction n, we have the vector velocity w = (U.n) n. This means that we can write U = (U.n) n + [U - (U.n) n]. This is saying that U is made up of the perpendicular component of itself plus the parallel component of itself. Now, -V is very similar to U, but the parallel component acts in the reverse direction, so we can write -V = (U.n) n - [U - (U.n) n].
Combining the above gives the result Anders stated, i.e. V = U - 2[(U.n) n]. The dot/scalar product is defined as a.b = |a||b|cos(A), where A is the angle between the vectors laid together tail-to-tail; this should enable you to solve your problem.
I hope this helps
If the vector v = (vx, vy) is the initial velocity and the plane has normal n = (nx, ny), then the new reflected velocity vector r will be
r = v - 2(v·n)n
The product (v·n) is the dot product of v and n, defined as vx*nx + vy*ny. Note that the plane normal must be normalized (length 1.0). A related question with the same answer: https://math.stackexchange.com/questions/13261/how-to-get-a-reflection-vector
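A minimal Python sketch of that formula in 2D (normalization of n is included, since it is easy to forget):

import math

def reflect(v, n):
    # r = v - 2 (v . n) n, with n normalized to unit length first
    length = math.hypot(n[0], n[1])
    nx, ny = n[0] / length, n[1] / length
    dot = v[0] * nx + v[1] * ny
    return (v[0] - 2 * dot * nx, v[1] - 2 * dot * ny)

print(reflect((1.0, -1.0), (0.0, 1.0)))   # (1.0, 1.0): a bounce off a flat floor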

calculating the density of a set

(I wish my mathematical vocabulary was more developed)
I have a website. On that website is a video. As a user watches the video, a bit of JavaScript periodically stores how far into the video they have gotten. When they stop watching, that number of seconds is stored. There's no pattern to when the JS will do this, unfortunately.
So if one person is watching the video, we might see this set:
3
6
8
10
12
16
And another person might get bored immediately:
1
3
This data is all stored in the same place, anonymously. So the sorted table with all this info would look like this:
1
3
3
6
8
10
12
16
Finally, the number of times the video was started at all is stored. In this case it would be 2.
So. How do I get the average 'high-time' (the farthest reached point in the video) for all of the times the video was played?
I know that if we had a value for every second:
1
2
3
4
5
6
7
...
14
15
16
1
2
3
Then we could count up the values and divide by the number of plays:
(19) / 2 = 9.5
Or if the data were otherwise uniform, say in increments of 5, then we could count the values up and multiply by 5 (in the example, we would have some loss of precision, but that's OK):
5
10
15
5
(4) * 5 / 2 = 10
So it seems like I have a general function which would work:
count * (1/d) / plays = avg
where d is the density of the samples (in the example above, with 5-second increments, d = 1/5).
Is there a way to derive the density, d, from a set of effectively random numbers?
Why not just keep the last time each viewer provided, and average across those? If you only pay attention to the last number from each viewer, it seems like you could simply average over those.
You might also want to check out the term standard deviation as the raw average of this might not be the most useful measurement. If you have the standard deviation as well, it could help you realize that you have an average of 7, but it is composed of mostly 1's and 15's.
If you HAVE to use all the data, like you suggested, I will try to think about this a little bit more. I'm not totally certain how you can associate a value with all the previous values that came from the same viewing. Do you ALWAYS know the sequence in which the numbers are output? If so, I think I know a way you could derive the 'last' one, though it might be somewhat computationally expensive.
If you only have a sequence of integers, I think you may be able to increase each value (exponentially?) to 'compensate' for the fact that a later value 'contains' earlier values. I'm still working through this idea, but maybe it will give someone else a seed. What if you average over the sum of these, and then take the base-2 logarithm of that average? Does that provide any kind of useful metric? That should 'weight' the later values to the point where they compensate for the sum of earlier values. I think.
In Python-esque code:
from math import log

total = 0
count = 0
for node in nodes:
    total += node.value ** 2   # ** is exponentiation; the original ^ is XOR in Python
    count += 1
weighted_average = log(total / count, 2)
print(weighted_average)
print("Thanks Brian")
I think that #brian-stiner is on the right track in one of his comments.
Start with something like:
1
3
3
6
8
10
12
16
Turn that into numbers and counts.
1, 1
3, 2
6, 1
8, 1
10, 1
12, 1
16, 1
And then, reading from the end down, find all of the points that happened more often than every point after them.
3, 2
16, 1
Take differences in counts.
3, 1
16, 1
And you have an estimate of stopping places.
This will not be an unbiased estimate. But if the JavaScript posts at independently inconsistent moments and the number of people is large, the biases should be fairly small. It won't be exactly right, but it will be close enough for government work.
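A rough Python version of that procedure (the names are mine; it takes the raw merged samples as input):

from collections import Counter

def estimate_stops(samples):
    counts = sorted(Counter(samples).items())     # (value, count) pairs, ascending
    stops = []
    best = 0
    for value, count in reversed(counts):         # read from the end down
        if count > best:                          # happened more often than any later value
            stops.append((value, count - best))   # difference in counts = estimated stops
            best = count
    return list(reversed(stops))

print(estimate_stops([1, 3, 3, 6, 8, 10, 12, 16]))   # [(3, 1), (16, 1)]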
Assuming increments are always around 5 (some missing, some a bit longer or shorter), it won't be easy (possible?) to do this exactly. My suggestion: compute something like a 'moving count', similar to a moving average.
So, for second 7: count how many samples are 5, 6, 7, 8 or 9, and divide by 5. That will give you a pretty good guess of how many people watched the 7th second. Do the same for second 10. The difference will be close to the number of people who left between seconds 7 and 10.
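Something like this, as a quick Python sketch (a window width of 5, as in the example; with only a handful of samples the estimate is noisy, so the idea really needs many viewers):

def moving_count(samples, second, window=5):
    half = window // 2
    hits = sum(1 for s in samples if second - half <= s <= second + half)
    return hits / window          # rough number of viewers still watching at 'second'

samples = [1, 3, 3, 6, 8, 10, 12, 16]
print(moving_count(samples, 7), moving_count(samples, 10))
# the difference between two such counts approximates how many people left in between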
To get the total time watched for each view, you'll have to parse the list from smallest to largest. If you have 4 views, you'll go through your list until you find that you no longer have 4 identical numbers; the last number where you had 4 identical numbers is the maximum of the first view. Then you'll look for where the 3 identical numbers stop, and so on. For example:
4 views data:
1111222233334445566778
4 views side by side:
1 1 1 1
2 2 2 2
3 3 3 3 <- first view max is 3 seconds
4 4 4 <- second view max is 4 seconds
5 5
6 6
7 7 <- third view max is 7 seconds
8 <- fourth view max is 8 seconds
EDIT: Oh, I just noticed that the increments are not uniform. In that case, the moving average would probably be your best bet.
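A Python sketch of that peeling procedure, under the stated assumption of uniform ticks (grouping via Counter is my own shortcut; the loop tracks how many views are still 'active' at each second):

from collections import Counter

def view_maxima(samples, views):
    counts = sorted(Counter(samples).items())    # (second, how many views reached it)
    maxima = []
    active = views
    prev = None
    for second, count in counts:
        if count < active:                       # some views never reached this second
            maxima.extend([prev] * (active - count))
            active = count
        prev = second
    maxima.extend([prev] * active)               # the rest lasted to the final second
    return maxima

data = [1,1,1,1, 2,2,2,2, 3,3,3,3, 4,4,4, 5,5, 6,6, 7,7, 8]
print(view_maxima(data, views=4))                # [3, 4, 7, 8]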
The number of values roughly corresponds to the number of time periods in which your JavaScript sends the values (minus 1/2 if the video stop is accompanied by an obligatory time posting, since its moment is random within the interval).
If all clients have similar intervals and you know them, you may just use:
SELECT (COUNT(*) - 0.5) * 5.0 / (SELECT counter FROM countertable)
FROM ticktable
5.0 is the interval between the posts here.
Note that it does not even look at the values: you could as well just store "ticks".
For the max time, you could use MAX() on your field. Perhaps something like...
SELECT MAX(play_time) AS maxTime FROM video
Which would give you the longest time someone has played the video for.
If you want other things, like AVG() then you'll need more complex queries, for collecting on a per-user basis etc etc.
MySQL also contains standard deviation functions, STDDEV() and STD(), which could help you too.

Simulating a roll with a biased die

I did a search but didn't really get any proper hits. Maybe I used incorrect terms?
What I want to ask about is an algorithm for simulating a biased roll rather than a standard, supposedly-random roll.
It wouldn't be a problem if you can't give me exact answers (maybe the explanation is lengthy?), but I would appreciate pointers to material I can read about it.
What I have in mind is, for example, to shift the bias towards the 5-6 area so that rolls would have a higher chance of coming up 5 or 6; that's the sort of problem I'm trying to solve.
[Update]
Upon further thought and by inspecting some of the answers, I've realized that what I want to achieve is really the Roulette Wheel Selection operator that's used in genetic algorithms since having a larger sector means increasing the odds the ball will land there. Am I correct with this line of thought?
In general, if your probabilities are {p1,p2, ...,p6}, construct the following helper list:
{a1, a2, ... a5} = { p1, p1+p2, p1+p2+p3, p1+p2+p3+p4, p1+p2+p3+p4+p5}
Now get a random number X in [0,1]
If
X <= a1 choose 1 as outcome
a1 < X <= a2 choose 2 as outcome
a2 < X <= a3 choose 3 as outcome
a3 < X <= a4 choose 4 as outcome
a4 < X <= a5 choose 5 as outcome
a5 < X choose 6 as outcome
Or, more efficient pseudocode
if X > a5 then N=6
elseif X > a4 then N=5
elseif X > a3 then N=4
elseif X > a2 then N=3
elseif X > a1 then N=2
else N=1
Edit
This is equivalent to the roulette wheel selection you mention in your question update.
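A Python sketch of the helper-list method above (bisect does the threshold lookup; the clamp is my own addition to guard against floating-point round-off in the last cumulative sum):

import random
from bisect import bisect_left

def biased_die(probabilities):
    cumulative = []
    total = 0.0
    for p in probabilities:                      # build {a1, a2, ..., a6}
        total += p
        cumulative.append(total)
    x = random.random()                          # X in [0, 1)
    i = bisect_left(cumulative, x)               # smallest i with X <= ai
    return min(i, len(probabilities) - 1) + 1

# biased towards 5 and 6:
print([biased_die([0.1, 0.1, 0.1, 0.1, 0.3, 0.3]) for _ in range(10)])

In modern Python, the standard-library shortcut for the same thing is random.choices([1, 2, 3, 4, 5, 6], weights=...).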
Let's say the die is biased towards a 3.
Instead of picking a random entry from the array 1..6 (6 entries), pick a random entry from the array 1, 2, 3, 4, 5, 6, 3, 3 (8 entries). Now a 3 comes up with probability 3/8 instead of 1/6.
Make a two-dimensional array of possible values and their weights. Sum up all the weights. Randomly choose a value in the range from 0 to the sum of the weights.
Now iterate through the array, keeping an accumulator of the weights seen so far. Once this accumulator exceeds your random number, pick the die value represented there.
Hope this helps
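As a Python sketch of that accumulator walk (I've used a list of (value, weight) pairs rather than a literal 2-D array, which keeps the same idea):

import random

def weighted_roll(weighted_faces):
    total = sum(w for _, w in weighted_faces)
    target = random.uniform(0, total)            # random point in [0, total]
    acc = 0.0
    for value, weight in weighted_faces:
        acc += weight                            # accumulator of the weights seen so far
        if target <= acc:
            return value
    return weighted_faces[-1][0]                 # safety net for float edge cases

print(weighted_roll([(1, 1), (2, 1), (3, 3), (4, 1), (5, 1), (6, 1)]))   # favours 3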
Hmm. Say you want a 1/2 chance of getting a six, and a 1/10 chance of getting each other face. To simulate this, you could generate a random integer n in [1, 2, ..., 10]; the outcome maps to six if n is in [6, 7, 8, 9, 10], and maps to n otherwise.
One way that's usually fairly easy is to start with a random number in an expanded range, and break that range up into unequal pieces.
For example, with a perfectly fair (six-sided) die, each number should come up 1/6th of the time. Let's assume you decide on round percentages: every other number should come up 16 percent of the time, but 2 should come up 20 percent of the time.
You could do that by generating numbers from 1 to 100. If the number is from 1 to 16, it comes out as a 1. If it's from 17 to 36, it comes out as a 2. If it's from 37 to 52, it comes out as a 3 (and the rest are blocks of 16 apiece: 53-68 maps to 4, 69-84 to 5, and 85-100 to 6).
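In Python, that range-splitting might look like this (the thresholds match the percentages above):

import random

def percent_die():
    n = random.randint(1, 100)
    if n <= 16: return 1
    if n <= 36: return 2          # the oversized 20-number block
    if n <= 52: return 3
    if n <= 68: return 4
    if n <= 84: return 5
    return 6

print([percent_die() for _ in range(10)])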