Pathfinding algorithm: how to handle changing weights efficiently - language-agnostic

So, I have a simple pathfinding algorithm which precalculates the shortest route to several target endpoints, each of which has a different weight. This is somewhat equivalent to having a single endpoint with a node between it and each target, where those connecting edges have different weights. The algorithm it uses is a simple spreading algorithm, which in 1d looks like this (| means wall, - means space):
5 - - - 3 | - - - 2 - - - - 2
5 4 - - 3 | - - - 2 - - - - 2 : Handled distance 5 nodes
5 4 3 - 3 | - - - 2 - - - - 2 : Handled distance 4 nodes
5 4 3 2 3 | - - - 2 - - - - 2 : Handled distance 3 nodes
5 4 3 2 3 | - - 1 2 1 - - 1 2 : Handled distance 2 nodes
Done. Any remaining rooms are unreachable.
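For reference, here is a minimal sketch of that spreading pass in Python (my illustration, not the asker's actual code; the grid representation and the names walls and targets are assumptions): each cell ends up with the best value over all targets, i.e. target weight minus grid distance, and cells that would end up at 0 or below stay unreached.

import heapq

def spread(width, height, walls, targets):
    # walls: set of (x, y); targets: dict mapping (x, y) -> weight
    value = {}                              # (x, y) -> best value found so far
    heap = [(-w, cell) for cell, w in targets.items() if cell not in walls]
    heapq.heapify(heap)                     # max-heap via negated values
    while heap:
        neg, (x, y) = heapq.heappop(heap)
        v = -neg
        if value.get((x, y), 0) >= v:       # already reached at least this well
            continue
        value[(x, y)] = v
        if v == 1:
            continue                        # neighbours would get 0: unreachable
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= n[0] < width and 0 <= n[1] < height and n not in walls:
                if value.get(n, 0) < v - 1:
                    heapq.heappush(heap, (-(v - 1), n))
    return value

# The 1-D example above, as a 14 x 1 grid with a wall at x = 5:
walls = {(5, 0)}
targets = {(0, 0): 5, (4, 0): 3, (8, 0): 2, (13, 0): 2}
print(spread(14, 1, walls, targets))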
So, let's suppose I have a precalculated pathfinding solution like this one, where only the 5 is a target:
- - - - | 5 4 3 2 1 -
If I change the wall to a room, recomputing is simple: just re-handle all the distance nodes (but ignore the ones which already exist). However, I can't figure out an efficient way to handle the case where the 4 becomes a wall. Clearly the result is this:
- - - - | 5 | - - - -
However, in a 2d solution I'm not sure how to efficiently deal with the 4. It is easily possible to store that 4 depends on 5 and thus needs recalculation, but how do I determine its new dependency and value safely? I'd rather avoid recalculating the entire array.
One solution, which is better than nothing, is (roughly) to only recalculate array elements within a Manhattan distance of 5 from the 5, and to maintain source information.
This would basically mean reapplying the algorithm to a selected area. But can I do better?

Hmmm. One solution I've come up with is this:
Keep a list of the nodes that are reachable most quickly from each node. If a node becomes a wall, check which node it was reachable from and grab the corresponding list. Then recheck all those nodes using the standard algorithm. When reaching a node where the new distance is smaller, mark it as being in need of retesting.
Take all the neighbors of the marked nodes which are unmarked and reapply the algorithm on them, ignoring any marked nodes that this technique hits. If the reapplied algorithm increases the value of a marked node, use the new value.
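For concreteness, a rough sketch of that repair idea, building on the spread() sketch above (again my own illustration under the same assumptions, not a tested implementation): a cell's value counts as supported if the cell is its own target or has a neighbour whose value is exactly one higher, so the marking pass below finds everything whose value could only have come through the new wall, and the second pass re-spreads into that hole from its intact border.

import heapq

def repair(value, walls, targets, new_wall):
    def neighbours(c):
        x, y = c
        return ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
    walls.add(new_wall)
    if value.pop(new_wall, None) is None:
        return value                        # the wall had no value, nothing depends on it
    # 1. Mark every cell whose value can only have come through new_wall.
    marked, frontier = set(), [new_wall]
    while frontier:
        for n in neighbours(frontier.pop()):
            if n in marked or n in walls or n not in value:
                continue
            v = value[n]
            if targets.get(n, 0) >= v:
                continue                    # a target supports its own value
            if any(value.get(m, 0) == v + 1 and m not in marked
                   for m in neighbours(n)):
                continue                    # still supported by an intact neighbour
            marked.add(n)
            frontier.append(n)
    # 2. Clear the hole and re-spread into it only, seeded by its intact
    #    border and by any targets that sit inside it.
    for c in marked:
        del value[c]
    heap = []
    for c in marked:
        if c in targets:
            heapq.heappush(heap, (-targets[c], c))
        for m in neighbours(c):
            if m in value and value[m] > 1:
                heapq.heappush(heap, (-(value[m] - 1), c))
    while heap:
        neg, c = heapq.heappop(heap)
        v = -neg
        if c not in marked or value.get(c, 0) >= v:
            continue
        value[c] = v
        if v > 1:
            for m in neighbours(c):
                if m in marked and value.get(m, 0) < v - 1:
                    heapq.heappush(heap, (-(v - 1), m))
    return value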


Why do prime factors exist only till the square root of a number?

To find all the prime factors of a number we traverse from 2 to sqrt(number). What makes all the prime factors accommodated within sqrt(number)?
If you express a number as z = x*y, either x or y has to be <= sqrt(z); otherwise the product would be greater than z.
For all (x, y) pairs such that z = x*y, if you traverse x over [2, sqrt(z)], you can cover every y by taking z/x.
"What makes all the prime factors accommodated within sqrt(number)" - that premise is wrong; a simple counterexample is 7 for 28. Using the first two points, however, when you test divisibility by 4 (which is <= sqrt(28)), you get 7 by doing 28/4.
Suppose c = a * b. If a | c, then b = c / a also divides c.
We leverage this fact to reduce the time complexity to find all the factors of a number.
For example, take the number 12.
The square root of 12 is approximately 3.46.
1 and 12 (the number itself) are already factors.
So we can iterate from 2 up to 3.
Since 12 % 2 is 0, 2 is a factor and 12 / 2 = 6 is also a factor.
12 % 3 is 0, so 3 is a factor and 12 / 3 = 4 is also a factor.
This way, we have found all the factors.
In a sense, we iterate up to the multiplicative middle point, which for a single number lies at its square root, and leverage the above property.
To find all prime factors, we iterate over the primes from 2 up to the square root of the number (any leftover greater than 1 after dividing them out is itself a prime factor).
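A minimal trial-division sketch of that last point (my illustration; prime_factors is a made-up name): divide out each candidate from 2 up to sqrt(n), and whatever is left at the end, if greater than 1, is the one prime factor larger than sqrt(n), like the 7 in 28.

def prime_factors(n):
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:      # divide d out as many times as it fits
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)      # the leftover prime above sqrt(original n)
    return factors

print(prime_factors(28))   # [2, 2, 7]
print(prime_factors(12))   # [2, 2, 3]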

Terminology - What is the complement of memoization?

If I am correct, memoization is associated with READ operations. What is the term which refers to the same technique used for WRITE operations?
Example:
Let's say an app receives the following inputs,
0 1 2 2 2 2 2 3 4 3 3 3 3 3 4 4 4 2 1 2 5 5 5 5 3
Instead of saving everything, we tune the app to save only the transitions (i.e., ignore consecutive duplicates):
0 1 2 3 4 3 4 2 1 2 5 3
What is the (standard) term that can be used to describe the above technique?
I feel bad about using the same term since the final outcome is quite different. In READ operations, if memoization is used, the final outcome will remain the same. But in the above example for WRITE operations, the final output is different from the original input.
"Deduplication of adjacent/most-recent entries". Your example looks like what the uniq tool does.
If you preserved the count of duplicates, it would be a form of RLE (run-length encoding).
As an aside, I guess you mean memoization as a way to speed up reads, and this as a method to speed up writes, but I wouldn't say that's the opposite of memoization, since it's the opposite of the general goal but not related to the particular method.
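For illustration, both ideas on the question's sample input (my sketch; itertools.groupby does the adjacent grouping):

from itertools import groupby
samples = [0, 1, 2, 2, 2, 2, 2, 3, 4, 3, 3, 3, 3, 3, 4, 4, 4, 2, 1, 2, 5, 5, 5, 5, 3]
# uniq-style: keep only the transitions (consecutive duplicates dropped)
transitions = [key for key, _ in groupby(samples)]
print(transitions)   # [0, 1, 2, 3, 4, 3, 4, 2, 1, 2, 5, 3]
# run-length encoding: also remember how long each run was
rle = [(key, len(list(run))) for key, run in groupby(samples)]
print(rle)           # [(0, 1), (1, 1), (2, 5), (3, 1), (4, 1), (3, 5), (4, 3), ...]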
As far as I know, there is no applicable terminology for what you are asking.
And it is NOT memoization ... or (to my mind) the reverse of memoization.
(Likewise, there is no word in English for a cat with three legs.)

Difference between quantile results and iqr

I'm trying to understand a little more about how Octave calculates quartiles and interquartile range. Consider the following:
A=[1 4 7 10 14];
quantile(A, [0.25 0.75])
ans = 3.2500 11.0000
This result seems consistent with Method 3 on the Wikipedia page about quartiles. Given that the interquartile range is Q3-Q1, I'd expect the result to be 7.75.
However, running iqr(A) gives a result of 6. Clearly this is calculated as 10 minus 4 from the original data, which is consistent with Method 2 from the same Wikipedia page.
What is the reason for using two different methods for calculating Q1 and Q3?
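For what it's worth, the same split can be reproduced in NumPy, which exposes the Hyndman-Fan quantile types by name (the method= keyword requires NumPy >= 1.22); this is only an illustration of the two conventions, not a claim about what Octave does internally:

import numpy as np
A = np.array([1, 4, 7, 10, 14])
# Hyndman-Fan type 5 ("hazen") gives the same 3.25 and 11 as Octave's quantile():
q1, q3 = np.quantile(A, [0.25, 0.75], method="hazen")
print(q1, q3, q3 - q1)   # 3.25 11.0 7.75
# The "linear" method (type 7) happens to land on the data points 4 and 10 here,
# matching the 6 that iqr() returns:
q1, q3 = np.quantile(A, [0.25, 0.75], method="linear")
print(q1, q3, q3 - q1)   # 4.0 10.0 6.0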

Reduction of odd number of elements CUDA

It seems that it is only possible to do a reduction on an even number of elements. For example, say I need to sum up numbers. When I have an even number of elements, it goes like this:
1 2 3 4
1+2
3+3
6+4
But what do I do when I have, for instance, 1 2 3 4 5? Is the last iteration the sum of three elements, 6+4+5, or what? I saw the same question here, but couldn't find the answer.
A parallel reduction will add pairs of elements first:
1 1+3 4+6
2 2+4
3
4
Your example with an odd number of elements would typically be realized as:
1 1+4 5+3 8+7
2 2+5 7+0
3 3+0
4 0+0
5
0
0
0
That is to say, a parallel reduction will typically work with a power-of-2 set of threads, and at most one threadblock (the last one) will have fewer than a full complement of data to work with. The usual method to handle this is to zero-pad the data out to the threadblock size. If you study the CUDA parallel reduction sample code, you'll find examples of this.
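As a plain-Python illustration of that pad-then-halve scheme (not an actual CUDA kernel; the inner loop is what the threads would do in parallel, one i per thread):

def pairwise_sum(values):
    data = list(values)
    n = 1
    while n < len(data):
        n *= 2
    data += [0] * (n - len(data))     # zero-pad up to the next power of two
    while n > 1:
        n //= 2
        for i in range(n):            # each of these additions is an independent thread
            data[i] += data[i + n]
    return data[0]

print(pairwise_sum([1, 2, 3, 4]))      # 10
print(pairwise_sum([1, 2, 3, 4, 5]))   # 15 - the odd count is absorbed by the padding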

Calculating the density of a set

(I wish my mathematical vocabulary was more developed)
I have a website. On that website is a video. As a user watches the video, a bit of javascript stores how far they have gotten so far in the video. When they stop watching the video, that number of seconds is stored. There's no pattern to when the js will do this, unfortunately.
So if one person is watching the video, we might see this set:
3
6
8
10
12
16
And another person might get bored immediately:
1
3
This data is all stored in the same place, anonymously. So the sorted table with all this info would look like this:
1
3
3
6
8
10
12
16
Finally, the number of times the video was started at all is stored. In this case it would be 2.
So. How do I get the average 'high-time' (the farthest reached point in the video) for all of the times the video was played?
I know that if we had a value for every second:
1
2
3
4
5
6
7
...
14
15
16
1
2
3
Then we could count up the values and divide by the number of plays:
(19) / 2 = 9.5
Or if the data was otherwise uniform, say in increments of 5, then we could count that up and multiply it by 5 (in the example, we would have some loss of precision, but that's ok):
5
10
15
5
(4) * 5 / 2 = 10
So it seems like I have a general function which would work:
count * (1/d) / plays = avg
where d is the density of the numbers (in the example above with 5-second increments, d = 1/5) and plays is the number of plays.
Is there a way to derive the density, d, from a set of effectively random numbers?
Why not just keep the last time that has been provided, and average across those? If you either throw away, or only pay attention to, the last number, it seems like you could just average over these.
You might also want to check out the term standard deviation as the raw average of this might not be the most useful measurement. If you have the standard deviation as well, it could help you realize that you have an average of 7, but it is composed of mostly 1's and 15's.
If you HAVE to have all the data, like you suggested, I will try and think about this a little bit more. I'm not totally certain how you can associate a value with all the previous values that came with it. Do you ALWAYS know the sequence by which numbers are output? If so, I think I know of a way you could derive the 'last' one, which might be slightly computationally expensive.
If you only have a sequence of integers, I think you may be able to increase each value (exponentially?) to 'compensate' for the fact that a later value 'contains' earlier values. I'm still working through this idea, but maybe it will give someone else a seed. What if you average over the sum of these, and then take the base2 logarithm of this average? Does that provide any kind of useful metric? That should 'weight' the later values to the point where they compensate for the sum of earlier values. I think.
In Python-esque pseudocode:
from math import log
total = 0      # "sum" would shadow a builtin, so renamed
count = 0
for node in nodes:             # nodes = the recorded values
    total = total + node.value ** 2    # ** is exponentiation; ^ would be XOR
    count = count + 1
weighted_average = log(total / count, 2)
print(weighted_average)
print("Thanks Brian")
I think that #brian-stiner is on the right track in one of his comments.
Start with something like:
1
3
3
6
8
10
12
16
Turn that into numbers and counts.
1, 1
3, 2
6, 1
8, 1
10, 1
12, 1
16, 1
And then reading from the end down, find all of the points that happened more often than any remaining ones.
3, 2
16, 1
Take differences in counts.
3, 1
16, 1
And you have an estimate of stopping places.
This will not be an unbiased estimate. But if the JavaScript is independently inconsistent and the number of people is large, the biases should be fairly small.
It won't be right, but it will be close enough for government work.
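A small sketch of that procedure on the question's data (my code, not the answerer's):

from collections import Counter

def estimate_stops(samples):
    # count each reported time, then, scanning from the largest time down,
    # keep the times whose count exceeds every later count; the drops in
    # count estimate how many viewers stopped at each kept time
    counts = sorted(Counter(samples).items())
    kept, best_later = [], 0
    for time, count in reversed(counts):
        if count > best_later:
            kept.append((time, count))
            best_later = count
    kept.reverse()
    stops = []
    for i, (time, count) in enumerate(kept):
        later = kept[i + 1][1] if i + 1 < len(kept) else 0
        stops.append((time, count - later))
    return stops

print(estimate_stops([1, 3, 3, 6, 8, 10, 12, 16]))   # [(3, 1), (16, 1)]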
Assuming increments are always around 5, with some missing and some a bit longer or shorter, it won't be easy (possible?) to do this exactly. My suggestion: compute something like a 'moving count', similar to a moving average.
So, for second 7: count how many numbers are 5, 6, 7, 8 or 9 and divide by 5. That will give you a pretty good guess of how many people watched the 7th second. Do the same for second 10. The difference would be close to the number of people who left between seconds 7 and 10.
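A sketch of that moving count (my code; the window width and the example seconds are arbitrary, and the question's tiny sample is really too sparse for the differences to mean much):

def moving_count(samples, second, window=5):
    # how many reported times fall in the window centred on `second`,
    # divided by the window width
    half = window // 2
    hits = sum(1 for s in samples if second - half <= s <= second + half)
    return hits / window

samples = [1, 3, 3, 6, 8, 10, 12, 16]
print(moving_count(samples, 7))    # counts 5..9  -> 2 hits -> 0.4
print(moving_count(samples, 10))   # counts 8..12 -> 3 hits -> 0.6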
To get the total time watched for each user, you'll have to parse the list from smallest to largest. If you have 4 views, you'll go through your list until you find that you no longer have 4 identical numbers; the last number where you had 4 identical numbers is the maximum of the first view. Then you'll look for when the 3 identical numbers stop, and so on. For example:
4 views data:
1111222233334445566778
4 views side by side:
1 1 1 1
2 2 2 2
3 3 3 3 <- first view max is 3 seconds
4 4 4 <- second view max is 4 seconds
5 5
6 6
7 7 <- third view max is 7 seconds
8 <- fourth view max is 8 seconds
EDIT- Oh, I just noticed that they are not uniform. In that case, the moving average would probably be your best bet.
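For what it's worth, a sketch of that walk-through (my code; as the edit above says, it only works if every view reports every whole second up to where it stopped, which the asker's data does not):

from collections import Counter

def view_maxima(samples, views):
    # walk the reported seconds from smallest to largest; whenever the number
    # of views still reporting a second drops, the views that disappeared
    # must have stopped at the previous second
    counts = Counter(samples)
    maxima, active, previous = [], views, None
    for second in sorted(counts):
        if counts[second] < active:
            maxima.extend([previous] * (active - counts[second]))
            active = counts[second]
        previous = second
    maxima.extend([previous] * active)   # the remaining views reached the end
    return maxima

data = [int(c) for c in "1111222233334445566778"]
print(view_maxima(data, 4))   # [3, 4, 7, 8]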
The number of values roughly corresponds to the number of time periods in which your javascript sends the values (minus 1/2 if the video stop is accompanied by an obligatory time posting, since its moment is random within the interval).
If all clients have similar intervals and you know them, you may just use:
SELECT (COUNT(*) - 0.5) * 5.0 / (SELECT counter FROM countertable)
FROM ticktable
5.0 is the interval between the posts here.
Note that it does not even look at the values: you could just as well store "ticks".
For the max time, you could use MAX() on your field. Perhaps something like...
SELECT MAX(play_time) AS maxTime FROM video
Which would give you the longest time someone has played the video for.
If you want other things, like AVG(), then you'll need more complex queries, e.g. collecting on a per-user basis, etc.
MySQL also contains standard deviation functions, STDDEV() and STD(), which could help you too.