I'm trying to develop a cellular automaton simulation. The problem is that I want to get the close neighbours and far neighbours of each cell (illustrated as blue and beige), determine which of them are dead, and bring some of them to life using a set of rules. At each iteration I'll be running through all the cells in the array, so I want an efficient way to get all the close and far neighbours of each cell.
However, depending on the position of a cell on the grid, only some of its neighbours exist. The only way I've thought of doing this so far is a getNeighbours(cell) method which returns a list of all the available neighbours of that cell, which I then have to iterate over to find the non-living ones:
getNeighbours(cell):
    if cell.row > 0:
        neighbours.add((coordinate, value), CLOSE_TOP_MIDDLE)
    if cell.row > 1:
        neighbours.add((coordinate, value), FAR_TOP_MIDDLE)
    [...]
However, that is a lot of overhead and a lot of comparisons for every cell in the grid! Is there a generic approach that is commonly used with cellular automata? Perhaps a data structure I could use? With what I have so far, each iteration will take a long time if the grid is large.
Depending on the programming language you use, there may be packages that provide the desired functionality. In Java, for example, there is a package called JCASim: a cellular automata simulation system.
Finding neighbours in a CA can be a non-trivial task (e.g., if you use hexagonal cells, etc.). Even the term 'neighbour' has to be defined: Moore neighborhood or von Neumann neighborhood (the Wikipedia articles on these also provide some pseudo-code).
In your case, you can implement the neighbour search yourself:
Let's assume your CA consists of n rows and n columns (labelled 0, ..., n-1), as shown in your picture.
Your getNeighbours function has to check all next-neighbour cells (grey background color in your image).
If you use periodic boundary conditions, you can use the modulus operator (%) to get the 8 surrounding cells. With periodic boundary conditions, the neighbour cells of cell (x, y) are ((x+1) % n, y), (x, (y+1) % n), ((x+1) % n, (y+1) % n), ((x+n-1) % n, y), (x, (y+n-1) % n), ... (see the sketch after this list).
With open boundaries, you instead have to discard every neighbour that falls outside the grid, i.e. where x+1 > n-1, y+1 > n-1, x-1 < 0 or y-1 < 0.
This way, you can check all cells with a grey background color in your picture.
Call the same function on each of the grey cells. This way you also check the cells with a blue background color.
Now you have checked all cells in the neighbourhood that you defined.
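As a concrete illustration of this offset-based lookup, here is a minimal Python sketch; the offset lists, the function name and the periodic flag are my own naming, not from the question:

    # Offsets for the "close" ring (distance 1) and the "far" ring (distance 2).
    CLOSE_OFFSETS = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0)]
    FAR_OFFSETS = [(dr, dc) for dr in range(-2, 3) for dc in range(-2, 3)
                   if max(abs(dr), abs(dc)) == 2]

    def get_neighbours(row, col, n, periodic=True):
        """Return lists of (row, col) coordinates for close and far neighbours."""
        def collect(offsets):
            cells = []
            for dr, dc in offsets:
                r, c = row + dr, col + dc
                if periodic:
                    cells.append((r % n, c % n))      # wrap around the edges
                elif 0 <= r < n and 0 <= c < n:
                    cells.append((r, c))              # open boundary: drop cells outside the grid
            return cells
        return collect(CLOSE_OFFSETS), collect(FAR_OFFSETS)

    # Example: neighbours of the top-left cell of a 5x5 grid with open boundaries
    close, far = get_neighbours(0, 0, 5, periodic=False)

Precomputing the offset lists once avoids the long chain of per-cell comparisons from the question.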
I am attempting to complete an assignment for an AI course; however, I cannot understand one of the questions. Unfortunately, I cannot find any information on the internet that clearly explains how to predict the next generation in a CA. I have posted a link to a screenshot of my question below.
[screenshot of the assignment question]
Edit: this is my revised answer.
In Margolus neighborhoods, the grid is divided into 2x2 blocks. Depending on which step you are in, the division of blocks either starts from the top-left corner or is offset one cell down and one cell to the right. (See Wikipedia on Block cellular automata.) Your instructions say to start from the top-left corner.
So you need to divide the grid into 2x2 blocks. Then you check how the pattern in each block matches one of the 16 possible Margolus neighborhood configurations (numbered 0 through 15):
For the given grid, you end up with the following. The "neighborhoods" are labeled in yellow highlighted text:
Now you look at the rules you were given: MS, D 0; 14; 11; 5; etc. These numbers after the D tell you, in order, how each configuration should change.
0th number in rule (D 0): Counting from 0, the first number tells you how the 0 (empty) configuration should change. The given number is 0, which means empty 2x2 blocks will not change in the next generation.
1st number in rule (D 0; 14;): The next number tells you how the 1 configuration (one X in upper left corner) should change. That number is 14, which means if we have any 2x2 blocks with the 1 configuration, it should morph into the 14 block. We don't have any 1 configurations, so we go to the next number in the rule.
2nd number in rule (D 0; 14; 11;): The next number tells you how the 2 configuration should change, and that number is 11. We have 2 blocks with the 2 configuration (one X in upper right corner), and the rule tells us we need to convert them to configuration 11 (2x2 block filled with X's except lower left corner).
After evaluating these first 3 rules, you end up with:
Continue for the rest of the numbers in the rule and you will have your answer. As for whether the rule is reversible, see here.
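To make the bookkeeping concrete, here is a small Python sketch of one Margolus step. It assumes a square grid with an even number of cells per side and wrap-around at the edges; the cell-to-bit ordering inside a block is my own choice for illustration, so adjust the ordering and the rule table to match your assignment's numbering.

    def margolus_step(grid, rule, offset=0):
        """Apply one Margolus step to a square 2D list of 0/1 cells.

        rule is a 16-entry list: rule[i] is the configuration that configuration i
        turns into. Assumed bit order within a block: bit 0 = top-left,
        bit 1 = top-right, bit 2 = bottom-left, bit 3 = bottom-right.
        offset is 0 on even steps (blocks start at the top-left corner) and 1 on
        odd steps (blocks shifted one cell down and one cell to the right).
        """
        n = len(grid)
        new = [row[:] for row in grid]
        for r in range(offset, n + offset, 2):
            for c in range(offset, n + offset, 2):
                # the four cells of this block, wrapping at the edges
                cells = [(r % n, c % n), (r % n, (c + 1) % n),
                         ((r + 1) % n, c % n), ((r + 1) % n, (c + 1) % n)]
                config = sum(grid[i][j] << bit for bit, (i, j) in enumerate(cells))
                new_config = rule[config]
                for bit, (i, j) in enumerate(cells):
                    new[i][j] = (new_config >> bit) & 1
        return new

    # First entries of the rule from the question: 0 -> 0, 1 -> 14, 2 -> 11, 3 -> 5, ...
    # rule = [0, 14, 11, 5, ...]  (fill in the remaining entries from your assignment)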
I am trying to understand Loss functions for Bounding Box Regression in CNNs. Currently I use Lasagne and Theano, which makes writing loss expressions very easy. Many sources propose different methods, and I am wondering which one is usually used in practice.
The bounding box coordinates are represented as normalized coordinates in the order [left, top, right, bottom] (using T.matrix('targets', dtype=theano.config.floatX)).
I have tried the following functions so far; however all of them have their drawbacks.
Intersection over Union
I was advised to use the Intersection over Union measure to quantify how well two bounding boxes align and overlap. However, a problem occurs when the boxes don't overlap: the intersection is 0, so the whole quotient becomes 0 regardless of how far apart the bounding boxes are. I implemented it as:
import theano.tensor as T

def get_area(A):
    # boxes are rows of [left, top, right, bottom] in normalized coordinates
    return (A[:, 2] - A[:, 0]) * (A[:, 1] - A[:, 3])

def get_intersection(A, B):
    return (T.minimum(A[:, 2], B[:, 2]) - T.maximum(A[:, 0], B[:, 0])) \
         * (T.minimum(A[:, 1], B[:, 1]) - T.maximum(A[:, 3], B[:, 3]))

def bbox_overlap_loss(A, B):
    """Computes the bounding box overlap using the
    Intersection over Union."""
    intersection = get_intersection(A, B)
    union = get_area(A) + get_area(B) - intersection
    # Turn into a loss
    l = 1.0 - intersection / union
    return l.mean()
Squared Diameter Difference
To create an error measure for non-overlapping bounding boxes, I tried to compute the squared difference of the bounding box diameters. It seems to work, but I am almost sure there is a much better way to do this. I implemented it as:
def squared_diameter_loss(A, B):
    # Represents the squared distance from the real diameter
    # in normalized pixel coordinates
    l = (abs(A[:, 0:2] - B[:, 0:2]) + abs(A[:, 2:4] - B[:, 2:4]))**2
    return l.mean()
Euclidean Loss
The simplest function would be the Euclidean loss, which computes the squared difference of the bounding box parameters. However, this doesn't take into account the area of overlap between the bounding boxes, only the differences of the parameters left, top, right, bottom. I implemented it as:
import lasagne

def euclidean_loss(A, B):
    l = lasagne.objectives.squared_error(A, B)
    return l.mean()
Could someone guide me on which would be the best loss function for bounding box regression in this use case, or point out if I am doing something wrong here? Which loss function is usually used in practice?
Speaking from personal implementation experience, I had much better results training a CNN using IoU as the loss function than with the Euclidean (MSE or L2) loss. I have not used the squared diameter difference loss. In general, a loss function that explicitly represents the goodness of your outputs for the task you hope to accomplish is probably best.
With regard to the IoU having a value of zero, you can introduce an additional term in the formulation so that the loss degrades gracefully, perhaps based on the normalized distance between the bbox centers. This might have the additional effect of helping to center bounding boxes relative to the ground truth.
This response is mostly conceptual but I'd be happy to supply code examples if desired.
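For instance, here is a minimal sketch of that idea on top of the question's Theano helpers (get_area and get_intersection as defined above); the clamping of the intersection and the alpha weighting of the extra term are my own choices for illustration, not a standard formulation:

    def iou_plus_center_loss(A, B, alpha=0.5):
        # IoU part; clip the intersection at 0 so disjoint boxes
        # don't produce a negative "area"
        intersection = T.maximum(0.0, get_intersection(A, B))
        union = get_area(A) + get_area(B) - intersection
        iou_loss = 1.0 - intersection / union

        # Extra term: squared distance between the box centers, which still
        # provides a gradient when the boxes don't overlap at all
        centers_A = (A[:, 0:2] + A[:, 2:4]) / 2.0
        centers_B = (B[:, 0:2] + B[:, 2:4]) / 2.0
        center_dist = T.sum((centers_A - centers_B) ** 2, axis=1)

        return (iou_loss + alpha * center_dist).mean()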
I have plots of points which look like this.
The tracks which these points form can be a circle or an ellipse, and the centers of the circular tracks in the two images above are clearly different.
How can I find the center point of these (circular/elliptical) tracks? I want the (x, y) coordinates of the center; it does not have to be a point in the plotted data set, i.e., I don't want a medoid.
EDIT: Also, is there any way I can find an equation for a circle/ellipse that envelops the majority of these points? In the elliptical track, I've added an ellipse that envelops the points on the track. Its parameters were found by trial and error, and the center was estimated by eyeballing the plot. How can I do this programmatically?
See the smallest circle problem, and here is a paper (PDF download available) on the smallest ellipse problem. Both have O(N) algorithms and should give you the equation of the circle/ellipse, from which you can get the center. However, they focus on enclosing all of the points. To deal with outliers you'll need to remove a number of the bounding points, which you should also get from the algorithms. Unfortunately, it's pretty much up to you to decide what qualifies as a good enough solution.
A fast and simple randomized solution is:
Randomly divide the set of points into k sets of N/k points each.
Run the smallest circle/ellipse algorithm for each set
For each of the k sets, pick at least 1 but no more than m bounding points to remove from the main point set.
Return to step 1, t times.
Return the result of the circle/ellipse algorithm on remaining points.
The algorithm removes between k and mk bounding points on every pass, at a cost of O(N) per pass. For your purposes you'll probably want to remove some percentage of the bounding points; 1-25% seems like a good starting point. This solution assumes that k is very small compared to N, otherwise you'll be removing too many points.
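A minimal Python sketch of that trimming loop; min_enclosing_shape is a placeholder for whatever smallest-enclosing-circle/ellipse implementation you use (assumed here to return the fitted shape together with the list of bounding points), and the points are assumed to be hashable tuples:

    import random

    def trim_and_fit(points, min_enclosing_shape, k=8, m=2, t=5):
        pts = list(points)
        for _ in range(t):
            random.shuffle(pts)                      # random partition into ~k subsets
            chunk = max(1, len(pts) // k)
            to_remove = set()
            for i in range(0, len(pts), chunk):
                _, boundary = min_enclosing_shape(pts[i:i + chunk])
                to_remove.update(boundary[:m])       # drop up to m bounding points per subset
            pts = [p for p in pts if p not in to_remove]
        return min_enclosing_shape(pts)              # final fit on the surviving points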
A slower but likely better algorithm is useful if you want to repeatedly remove one or all of the bounding points from the smallest ellipse, recalculate the smallest ellipse, and then remove bounding points again.
You can do this by having each parent node store the bounding points (kept as a set for faster removal) of the smallest enclosing ellipse of its children. The maximum number of bounding points should be no more than k (which I believe is 9 for an ellipse, compared to 3 for a circle). Removing a point from the data structure is then O(k log N): it requires recalculating the smallest ellipse, which is O(k), for each affected parent, of which there are O(log N). So removing m points from the data structure should be O(mk log N). You might also want to calculate the area of the ellipse after every removed point, removing points one at a time at a total cost of O(Nk log N) until you only have three points left. You could then analyze the area data to determine which ellipse should be used. A simple choice would be the ellipse whose area is closest to the average area of all of the ellipses created, but that may not be exactly what you seek. It also might be too slow, in which case I recommend a single pass of the faster algorithm.
This looks like an instance of Robust Ellipse Fitting. Check this paper: Outlier Elimination for Robust Ellipse and Ellipsoid Fitting, http://arxiv.org/pdf/0910.4610.pdf.
A first rough and easy solution is provided by the ellipse of inertia (the 2D version of the ellipsoid of inertia, http://en.wikipedia.org/wiki/Moment_of_inertia#Inertia_ellipsoid). Its center is just the centroid of the points, and its axes are given by the eigenvectors/eigenvalues of the 2x2 inertia matrix.
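A minimal NumPy sketch of that idea, treating the inertia matrix as the covariance matrix of the point cloud (an equivalent formulation up to a scale factor); the function name is my own:

    import numpy as np

    def inertia_ellipse(points):
        """points: (N, 2) array. Returns centroid, axis directions and relative axis lengths."""
        pts = np.asarray(points, dtype=float)
        center = pts.mean(axis=0)                  # centroid = ellipse center
        cov = np.cov((pts - center).T)             # 2x2 second-moment matrix
        eigvals, eigvecs = np.linalg.eigh(cov)     # principal axes of the point cloud
        radii = np.sqrt(eigvals)                   # axis lengths up to a common scale factor
        return center, eigvecs, radii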
Let me first explain the idea. The actual math question is below the screenshots.
For musical purposes I am building a groove algorithm where event positions are translated by a mathematical function F(x). The positions are normalized to the groove range, so I am basically dealing with values between zero and one (which makes shaping groove curves much easier; the only limitation is x' >= 0, i.e. the curve must be non-decreasing).
This groove algorithm accepts any event position and also works by filtering static notes from a data structure like a timeline note track. For filtering events in a certain range (the audio block size) I need the inverse groove function to locate the notes in the track and transform them into groove space. So far so good; it works!
In short: I use the inverse function because it is the mirror image of F about the line y = x. So I can plug in a value x and get a y, and that y can obviously be plugged back into the inverse function to get the original x again.
Problem: I now want to be able to blend one groove into another, but the usual linear (hint hint) blending code does not behave as I expected. To make things easier, I first tried to blend towards y = x:
B(x)=alpha*F(x) + (1-alpha)*x;
iB(x)=alpha*iF(x) + (1-alpha)*x;
For alpha = 1 we get the full curve, and for alpha = 0 we get the straight line. But for alpha between 0 and 1, B(x) and iB(x) are no longer mirror images of each other (close, but not close enough), while F(x) and iF(x) still are.
Is there a solution for that (besides quantizing the curve into line segments)? Any topic I should look into?
You are combining two functions, f(x) and g(x), so that y = a f(x) + (1-a) g(x), and given some y, a, f and g, you want to find x. At least, that is what I understand.
I don't see how to do this in general (although I haven't tried very hard, and it would be worth asking someone else), but I suspect that for "nicely" shaped functions like the ones you seem to be using, Newton's method would converge fairly quickly.
You want to find x such that y = a f(x) + (1-a) g(x); in other words, such that 0 = a f(x) + (1-a) g(x) - y.
So define r(x) = a f(x) + (1-a) g(x) - y and find the zero of that. Start with a guess in the middle, x_0 = 0.5, calculate x_1 = x_0 - r(x_0) / r'(x_0), and repeat. If you are lucky this will converge rapidly (if not, you might consider defining the functions relative to y = x, which you already seem to be doing, and trying again).
See the Wikipedia article on Newton's method.
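A minimal Python sketch of that iteration, using a central-difference derivative so you don't need a closed-form r'(x); the tolerance, step size and iteration cap are arbitrary choices:

    def invert_blend(y, f, g, a, x0=0.5, tol=1e-9, max_iter=50, h=1e-6):
        """Solve a*f(x) + (1-a)*g(x) = y for x with Newton's method."""
        def r(x):
            return a * f(x) + (1 - a) * g(x) - y
        x = x0
        for _ in range(max_iter):
            rx = r(x)
            if abs(rx) < tol:
                break
            drx = (r(x + h) - r(x - h)) / (2 * h)   # numerical derivative
            x = x - rx / drx
        return x

    # Example: blend some groove curve F with the identity g(x) = x at alpha = 0.3
    # x = invert_blend(0.7, F, lambda x: x, 0.3)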
This problem can't be solved algebraically, in general.
Consider, for instance,
y = 2e^x (inverse: x = log(0.5y))
and
y = 2x (inverse: x = 0.5y).
Blending these together with weight 0.5 gives y = e^x + x, and it is well known that it is not possible to solve this for x using only elementary functions, even though the inverse of each piece was easy to find.
You will want to use a numerical method to approximate the inverse, as discussed by andrew above.
I'm drawing rectangles at random positions on the stage, and I don't want them to overlap.
So for each rectangle, I need to find a blank area to place it.
I've thought about trying a random position and verifying whether it is free with
private function containsRect(r:Rectangle):Boolean {
    var free:Boolean = true;
    for (var i:int = 0; i < numChildren; i++)
        // the position is free only if no existing child overlaps the candidate rectangle
        free &&= !getChildAt(i).getBounds(this).intersects(r);
    return free;
}
and, in case it returns false, trying another random position.
The problem is that if there is no free space left, I'll be stuck trying random positions forever.
Is there an elegant solution to this?
Let S be the area of the stage. Let A be the area of the smallest rectangle we want to draw. Let N = S/A
One possible deterministic approach:
When you draw a rectangle on an empty stage, it divides the stage into at most 4 regions that can fit your next rectangle. When you draw your next rectangle, one or two regions are split into at most 4 sub-regions (each) that can fit a rectangle, and so on. You will never create more than N regions, where S is the area of your stage and A is the area of your smallest rectangle. Keep a list of regions (unsorted is fine), each represented by its four corner points and labeled with its area, and use weighted-by-area reservoir sampling with a reservoir size of 1 to select a region with probability proportional to its area in a single pass through the list (see the sketch after the runtime notes below). Then place a rectangle at a random location in that region. (Select a random point from the bottom-left portion of the region that allows you to draw a rectangle with that point as its bottom-left corner without hitting the top or right wall.)
If you are not starting from a blank stage then just build your list of available regions in O(N) (by re-drawing all the existing rectangles on a blank stage in any order, for example) before searching for your first point to draw a new rectangle.
Note: You can change your reservoir size to k to select the next k rectangles all in one step.
Note 2: You could alternatively store available regions in a tree with each edge weight equaling the sum of areas of the regions in the sub-tree over the area of the stage. Then to select a region in O(logN) we recursively select the root with probability area of root region / S, or each subtree with probability edge weight / S. Updating weights when re-balancing the tree will be annoying, though.
Runtime: O(N)
Space: O(N)
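A minimal sketch of the weighted, size-1 reservoir selection over the region list (the region objects and their area field are placeholders for whatever representation you keep):

    import random

    def pick_region_weighted(regions):
        """Pick one region with probability proportional to its area, in a single pass."""
        chosen = None
        total = 0.0
        for region in regions:
            total += region.area
            # replace the current choice with probability area/total (size-1 reservoir)
            if random.random() < region.area / total:
                chosen = region
        return chosen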
One possible randomized approach:
Select a point at random on the stage. If you can draw one or more rectangles that contain the point (not just one that has the point as its bottom left corner), then return a randomly positioned rectangle that contains the point. It is possible to position the rectangle without bias with some subtleties, but I will leave this to you.
At worst there is one space exactly big enough for our rectangle and the rest of the stage is filled. So this approach succeeds with probability > 1/N, or fails with probability < 1-1/N. Repeat N times. We now fail with probability < (1-1/N)^N < 1/e. By fail we mean that there is a space for our rectangle, but we did not find it. By succeed we mean we found a space if one existed. To achieve a reasonable probability of success we repeat either Nlog(N) times for 1/N probability of failure, or N² times for 1/e^N probability of failure.
Summary: Try random points until we find a space, stopping after NlogN (or N²) tries, in which case we can be confident that no space exists.
Runtime: O(NlogN) for high probability of success, O(N²) for very high probability of success
Space: O(1)
You can simplify things with a transformation. If you're looking for a valid place to put your LxH rectangle, you can instead grow all of the previous rectangles L units to the right, and H units down, and then search for a single point that doesn't intersect any of those. This point will be the lower-right corner of a valid place to put your new rectangle.
Next apply a scan-line sweep algorithm to find areas not covered by any rectangle. If you want a uniform distribution, you should choose a random y-coordinate (assuming you sweep down) weighted by free area distribution. Then choose a random x-coordinate uniformly from the open segments in the scan line you've selected.
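A minimal sketch of the grow-and-test idea; representing rectangles as (x, y, w, h) tuples with y increasing downward is my own convention, not from the answer:

    def is_valid_lower_right_corner(point, obstacles, L, H, stage_w, stage_h):
        """Check whether `point` can be the lower-right corner of a new LxH rectangle,
        given existing rectangles `obstacles` as (x, y, w, h) tuples."""
        px, py = point
        # the new rectangle must fit on the stage
        if px < L or py < H or px > stage_w or py > stage_h:
            return False
        # growing each obstacle L units to the right and H units down turns the
        # overlap test into a simple point-in-rectangle test
        for (x, y, w, h) in obstacles:
            if x < px < x + w + L and y < py < y + h + H:
                return False
        return True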
I'm not sure how elegant this would be, but you could set up a maximum number of attempts. Maybe 100?
Sure you might still have some space available, but you could trigger the "finish" event anyway. It would be like when tween libraries snap an object to the destination point just because it's "close enough".
HTH
One possible check you could make to determine whether there is enough space would be to compute how much area the current set of rectangles takes up. If the amount of area left over is less than the area of the new rectangle, you can immediately give up and bail out. I don't know what information you have available, or whether the rectangles are laid down in a regular pattern, but if so you may be able to adapt the check to spot when there is obviously not enough space available.
This may not be the most appropriate method for you, but it was the first thing that popped into my head!
Assuming you define the dimensions of the rectangle before trying to draw it, I think something like this might work:
Establish a grid of possible centre points across the stage for the candidate rectangle. So for a 6x4 rectangle your first point would be at (3, 2), then (3 + 6 * x, 2 + 4 * y). If you can draw a rectangle between the four adjacent points then a possible space exists.
for (x = 0; x < stage.width / rect.width - 1; x++)
    for (y = 0; y < stage.height / rect.height - 1; y++)
        if (can_draw_rectangle_at([x * rect.width, y * rect.height],
                                  [(x + 1) * rect.width, (y + 1) * rect.height]))
            return true;
This doesn't tell you where you can draw it (although it should be possible to build a list of the possible drawing areas), just that you can.
I think that the only efficient way to do this with what you have is to maintain a 2D boolean array of open locations. Make the array large enough that the drawing positions still appear random.
When you draw a new rectangle, zero out the corresponding rectangular piece of the array. Then checking for a free area is constant^H^H^H^H^H^H^H time. Oops, that means a lookup is O(nm) time, where n is the length, m is the width. There must be a range based solution, argh.
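A minimal sketch of that occupancy-array bookkeeping (NumPy is used here just for convenient slicing; the naive region check is O(w*h) per lookup, as noted above):

    import numpy as np

    def make_occupancy(width, height):
        return np.ones((height, width), dtype=bool)    # True = free

    def mark_rect(occ, x, y, w, h):
        occ[y:y + h, x:x + w] = False                  # zero out the drawn rectangle

    def area_is_free(occ, x, y, w, h):
        return occ[y:y + h, x:x + w].all()             # naive O(w*h) check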
Edit 2: Apparently the answer is here, but in my opinion this might be a bit much to implement in ActionScript, especially if you are not keen on the geometry.
Here's the algorithm I'd use:
Put down N random points, where N is the number of rectangles you want.
Iteratively increase the dimensions of the rectangle created at each point until it touches another rectangle (a sketch of this growth loop follows below).
You can constrain the way that the initial points are put down if you want to have a minimum allowable rectangle size.
If you want all the space covered with rectangles, you can then incrementally add random points to the remaining "free" space until there is no area left uncovered.
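A minimal sketch of that growth loop; the uniform growth step, the symmetric growth around each seed point, and the (x, y, w, h) representation are my own assumptions for illustration:

    import random

    def grow_rectangles(stage_w, stage_h, n, step=1.0):
        """Drop n random seed points and grow a rectangle around each one until it
        would touch another rectangle or the stage edge. Rectangles are [x, y, w, h]."""
        rects = [[random.uniform(0, stage_w), random.uniform(0, stage_h), 0.0, 0.0]
                 for _ in range(n)]

        def collides(a, b):
            return (a[0] < b[0] + b[2] and b[0] < a[0] + a[2] and
                    a[1] < b[1] + b[3] and b[1] < a[1] + a[3])

        growing = True
        while growing:
            growing = False
            for i, r in enumerate(rects):
                candidate = [r[0] - step / 2, r[1] - step / 2, r[2] + step, r[3] + step]
                inside = (candidate[0] >= 0 and candidate[1] >= 0 and
                          candidate[0] + candidate[2] <= stage_w and
                          candidate[1] + candidate[3] <= stage_h)
                clear = all(not collides(candidate, other)
                            for j, other in enumerate(rects) if j != i)
                if inside and clear:
                    rects[i] = candidate
                    growing = True
        return rects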