What is a soft constraint in Box2D? - libgdx

I am creating a mouse joint and I came across this term. What does it actually mean?
The documentation for the mouse joint says: "A mouse joint is used to make a point on a body track a specified world point. This is a soft constraint with a maximum force. This allows the constraint to stretch without applying huge forces."

Let's say we have a distance joint:
b2DistanceJointDef DistJointDef;
You can achieve a spring-like effect by tuning its frequency and damping ratio:
DistJointDef.frequencyHz = 0.5f;
DistJointDef.dampingRatio = 0.5f;
frequencyHz determines how fast the joint oscillates (stretches/shrinks) over time, whereas dampingRatio determines how quickly that spring-like oscillation dies out.
The same principles apply to mouse joints: you can modify their frequency and damping ratio to achieve a similar effect.
If I recall correctly, you can apply the soft constraints on wheel joints as well.
Here is a bit more info on the subject from the Box2D manual:
Softness is achieved by tuning two constants in the definition: frequency and damping ratio. Think of the frequency as the frequency of a harmonic oscillator (like a guitar string). The frequency is specified in Hertz. Typically the frequency should be less than half the frequency of the time step. So if you are using a 60Hz time step, the frequency of the distance joint should be less than 30Hz. The reason is related to the Nyquist frequency.
The damping ratio is non-dimensional and is typically between 0 and 1, but can be larger. At 1, the damping is critical (all oscillations should vanish).
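For concreteness, here is what tuning those two knobs on a mouse joint can look like. This is a minimal sketch against the Box2D 2.3-style C++ API (where b2MouseJointDef still exposes frequencyHz and dampingRatio; Box2D 2.4+ replaced these with stiffness/damping), and the values are just illustrative:

#include <Box2D/Box2D.h>

// Minimal sketch (Box2D 2.3-style API): the same softness knobs used on the
// distance joint above also apply to the mouse joint.
b2MouseJoint* createSoftMouseJoint(b2World& world, b2Body* ground,
                                   b2Body* body, const b2Vec2& target)
{
    b2MouseJointDef def;
    def.bodyA = ground;                        // a static body to anchor against
    def.bodyB = body;                          // the body that tracks the cursor
    def.target = target;                       // world point the body should track
    def.maxForce = 1000.0f * body->GetMass();  // cap the force the joint may apply
    def.frequencyHz = 5.0f;                    // spring stiffness, in Hz (see Nyquist note above)
    def.dampingRatio = 0.7f;                   // 1.0 = critically damped, no oscillation
    return static_cast<b2MouseJoint*>(world.CreateJoint(&def));
}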

Related

Why do negative frequencies of a discrete Fourier transform appear where high frequencies are supposed to be?

I am very new to this topic, please help me understand why this happens.
If I take the DFT of a cosine wave,
cos(w*x) = 0.5*(exp(i*w*x) + exp(-i*w*x))
I expect to get one peak at the frequency w, and the negative one at -w should not be visible, but it appears at the far end of the spectrum where higher frequencies are supposed to be.
Why? Do high frequencies produce the same effect as negative ones?
If you imagine a wave whose phase advances by 3*pi/2 per sample, i.e. (0, 3pi/2, 6pi/2, 9pi/2), it looks just like a wave with negative frequency -pi/2, i.e. (0, -pi/2, -pi, -3pi/2).
Is that the reason for what is happening? Please help!
High frequencies between the Nyquist frequency (Fs/2) and the sample rate (Fs) alias to negative frequencies, because they lie above the Nyquist limit for that sample rate. And vice versa: negative frequencies alias to the top half of an FFT. "Alias" means you can't tell those frequencies apart at that sample rate, because sampling sine waves at either of those frequencies at that sample rate can produce absolutely identical sets of samples.
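You can check this numerically. The following small C++ program (illustrative, with made-up numbers) samples a 3 Hz cosine and a (16 - 3) = 13 Hz cosine at Fs = 16 Hz and prints identical values:

#include <cmath>
#include <cstdio>

// cos(2*pi*(Fs-f)*n/Fs) = cos(2*pi*n - 2*pi*f*n/Fs) = cos(2*pi*f*n/Fs),
// so the two frequencies are indistinguishable once sampled.
int main()
{
    const double PI = 3.14159265358979323846;
    const double Fs = 16.0, f = 3.0;
    for (int n = 0; n < 8; ++n) {
        double low  = std::cos(2.0 * PI * f        * n / Fs);
        double high = std::cos(2.0 * PI * (Fs - f) * n / Fs);
        std::printf("n=%d: % .6f  % .6f\n", n, low, high);
    }
    return 0;
}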

Center of a cluster of points and track shape

I have plots of points which look like this.
The tracks which these points form can be a circle or an ellipse. Clearly the centers of the circular tracks in the two images above are different.
How can I find the center point of these tracks (circular/elliptical)? I want to find the (x, y) coordinates of the center; it does not have to be a point in the plotted data set, i.e., I don't want a medoid.
EDIT: Also, is there any way to find an equation for a circle/ellipse that envelopes a majority of these points? In the elliptical track I've added an ellipse that envelopes the points on the track. Its values were calculated by trial and error, and the center by eyeballing the plot. How can I do this programmatically?
The smallest circle problem, and here is a paper (PDF download available) on the smallest ellipse problem. Both have O(N) algorithms and should be able to provide the formula for the circle/ellipse, from which you can get the center. However, they focus on enclosing all of the points. To work around that, you'll need to remove a number of the bounding points, which you should get from the algorithms as well. Unfortunately, it's pretty much up to you to decide what qualifies as a good enough solution.
A fast and simple randomized solution is:
1. Randomly divide the set of points into k sets of N/k points each.
2. Run the smallest circle/ellipse algorithm for each set.
3. For each of the k sets, pick at least 1 but no more than m bounding points to remove from the main point set.
4. Return to step 1, t times.
5. Return the result of the circle/ellipse algorithm on the remaining points.
The algorithm removes between k and mk bounding points every pass at a cost of O(N). For your purposes you'll probably want to remove some percentage of the bounding points; 1-25% seems like a good starting point. This solution assumes that k is very small compared to N, otherwise you'll be removing too many points. A sketch of this loop is given below.
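Here is a rough C++ sketch of that loop. The smallest-circle/ellipse solver is passed in as a routine (solve, a hypothetical stand-in for e.g. Welzl's algorithm) that also reports which indices of its input ended up on the boundary:

#include <algorithm>
#include <functional>
#include <random>
#include <vector>

struct Pt { double x, y; };
struct Fit {
    double cx, cy, r;              // fitted circle (or ellipse) parameters
    std::vector<size_t> boundary;  // indices of input points on the boundary
};

// `solve` is a hypothetical smallest-circle/ellipse routine (e.g. Welzl's
// algorithm) that reports the boundary points of its result.
Fit trimAndFit(std::vector<Pt> pts, size_t k, size_t m, int t,
               const std::function<Fit(const std::vector<Pt>&)>& solve)
{
    std::mt19937 rng(12345);
    for (int pass = 0; pass < t; ++pass) {
        std::shuffle(pts.begin(), pts.end(), rng);  // random k-way split
        std::vector<Pt> survivors;
        const size_t chunk = std::max<size_t>(1, pts.size() / k);
        for (size_t lo = 0; lo < pts.size(); lo += chunk) {
            const size_t hi = std::min(pts.size(), lo + chunk);
            std::vector<Pt> part(pts.begin() + lo, pts.begin() + hi);
            const Fit f = solve(part);
            std::vector<bool> drop(part.size(), false);
            // Remove up to m bounding points (the likely outliers).
            for (size_t j = 0; j < f.boundary.size() && j < m; ++j)
                drop[f.boundary[j]] = true;
            for (size_t j = 0; j < part.size(); ++j)
                if (!drop[j]) survivors.push_back(part[j]);
        }
        pts = std::move(survivors);
    }
    return solve(pts);  // final fit on the trimmed point set
}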
A slower but likely better algorithm is useful if you want to repeatedly remove one or all of the bounding points from the smallest ellipse, recalculate the smallest ellipse, and then remove the bounding points again.
You can do this with a tree in which each parent node stores the bounding points (kept as a set for faster removal) of the smallest enclosing ellipse of its children. The number of bounding points is at most a small constant k (at most 5 for an ellipse, compared to 3 for a circle). Removing a point from the data structure is then O(k log N): it requires recalculating the smallest ellipse, which is O(k) for each affected parent, and O(log N) parents are affected. Removing m points is therefore O(mk log N). You might also want to record the area of the ellipse after every removed point, removing points until only three are left, at a cost of O(Nk log N), and then analyze the area data to decide which ellipse to use. A simple choice is the ellipse whose area is closest to the average area of all the ellipses created, but that may not be exactly what you seek. It might also be too slow, in which case I recommend a single pass of the faster algorithm.
This looks like an instance of robust ellipse fitting. Check this paper: "Outlier Elimination for Robust Ellipse and Ellipsoid Fitting", http://arxiv.org/pdf/0910.4610.pdf.
A first rough and easy solution is provided by the ellipse of inertia (the 2D version of the ellipsoid of inertia, http://en.wikipedia.org/wiki/Moment_of_inertia#Inertia_ellipsoid). Its center is just the centroid, and its axes are given by the eigenvectors/eigenvalues of the 2x2 inertia matrix.
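In C++ that rough fit is only a few lines (names are my own; the "inertia matrix" here is the matrix of second central moments):

#include <cmath>
#include <cstdio>
#include <vector>

struct Point { double x, y; };

// Ellipse-of-inertia fit: the center is the centroid; axis directions and
// relative lengths come from the eigen-decomposition of the symmetric 2x2
// matrix of second central moments.
void ellipseOfInertia(const std::vector<Point>& pts)
{
    const double n = static_cast<double>(pts.size());
    double cx = 0, cy = 0;
    for (const Point& p : pts) { cx += p.x; cy += p.y; }
    cx /= n; cy /= n;                      // centroid = ellipse center

    double sxx = 0, sxy = 0, syy = 0;      // second central moments
    for (const Point& p : pts) {
        const double dx = p.x - cx, dy = p.y - cy;
        sxx += dx * dx; sxy += dx * dy; syy += dy * dy;
    }
    sxx /= n; sxy /= n; syy /= n;

    // Closed-form eigenvalues of [sxx sxy; sxy syy].
    const double mean = (sxx + syy) / 2.0;
    const double disc = std::sqrt(std::max(0.0, mean * mean - (sxx * syy - sxy * sxy)));
    const double l1 = mean + disc, l2 = mean - disc;
    const double angle = 0.5 * std::atan2(2.0 * sxy, sxx - syy);  // major-axis angle

    std::printf("center=(%g, %g), axes ~ (%g, %g), angle=%g rad\n",
                cx, cy, std::sqrt(l1), std::sqrt(l2), angle);
}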

How to divide tiny double precision numbers correctly without precision errors?

I'm trying to diagnose and fix a bug which boils down to X/Y yielding an unstable result when X and Y are small:
In this case, both cx and patharea increase smoothly. Their ratio is a smooth asymptote at high numbers, but erratic for "small" numbers. The obvious first thought is that we're reaching the limit of floating-point accuracy, but the actual numbers themselves are nowhere near it. ActionScript "Number" types are IEEE 754 double-precision floats, so they should have about 15 significant decimal digits of precision (if I read it right).
Some typical values of the denominator (patharea):
0.0000000002119123
0.0000000002137313
0.0000000002137313
0.0000000002155502
0.0000000002182787
0.0000000002200977
0.0000000002210072
And the numerator (cx):
0.0000000922932995
0.0000000930474444
0.0000000930582124
0.0000000938123574
0.0000000950458711
0.0000000958000159
0.0000000962901528
0.0000000970442977
0.0000000977984426
Each of these increases monotonically, but the ratio is chaotic as seen above.
At larger numbers it settles down to a smooth hyperbola.
So, my question: what's the correct way to deal with very small numbers when you need to divide one by another?
I thought of multiplying numerator and/or denominator by 1000 in advance, but couldn't quite work it out.
The actual code in question is the recalculate() function here. It computes the centroid of a polygon, but when the polygon is tiny, the centroid jumps erratically around the place, and can end up a long distance from the polygon. The data series above are the result of moving one node of the polygon in a consistent direction (by hand, which is why it's not perfectly smooth).
This is Adobe Flex 4.5.
I believe the problem most likely is caused by the following line in your code:
sc = (lx*latp-lon*ly)*paint.map.scalefactor;
If your polygon is very small, then lx and lon are almost the same, as are ly and latp. They are both very large compared to the result, so you are subtracting two numbers that are almost equal.
To get around this, we can make use of the fact that:
x1*y2 - x2*y1 = (x2 + (x1-x2))*y2 - x2*(y2 + (y1-y2))
             = x2*y2 + (x1-x2)*y2 - x2*y2 - x2*(y1-y2)
             = (x1-x2)*y2 - x2*(y1-y2)
So, try this:
dlon = lx - lon
dlat = ly - latp
sc = (dlon*latp-lon*dlat)*paint.map.scalefactor;
The value is mathematically the same, but the terms are an order of magnitude smaller, so the error should be an order of magnitude smaller as well.
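To see the effect in isolation, here is a small illustrative C++ program. The values are made up, and single precision is used only so the cancellation shows up at small magnitudes; doubles fail the same way, just about nine digits further out:

#include <cstdio>

int main()
{
    const float lx = 100.00010f, lon = 100.0f;  // nearly equal pair
    const float ly = 50.00002f,  latp = 50.0f;  // nearly equal pair

    // Naive: two ~5000-sized products whose true difference is ~0.003.
    const float naive = lx * latp - lon * ly;

    // Rewritten: the differences are formed first, so the products stay tiny.
    const float dlon = lx - lon, dlat = ly - latp;
    const float rewritten = dlon * latp - lon * dlat;

    // Reference computed in double precision from the same stored inputs.
    const double exact = (double)lx * latp - (double)lon * ly;

    std::printf("naive     = %.9f\nrewritten = %.9f\nexact     = %.9f\n",
                naive, rewritten, exact);
    return 0;
}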
Jeffrey Sax has correctly identified the basic issue - loss of precision from combining terms that are (much) larger than the final result.
The suggested rewriting eliminates part of the problem - apparently sufficient for the actual case, given the happy response.
You may find, however, that if the polygon becomes (much) smaller again and/or moves farther from the origin, the inaccuracy will show up again. In the rewritten formula the terms are still quite a bit larger than their difference.
Furthermore, there is another 'combining large, comparable numbers with different signs' issue in the algorithm. The various sc values in subsequent cycles of the iteration over the edges of the polygon effectively combine into a final number that is (much) smaller than the individual sc(i) are. (In a convex polygon you will find one contiguous sequence of positive values and one contiguous sequence of negative values; in non-convex polygons the negatives and positives may be intertwined.)
What the algorithm is doing, effectively, is computing the area of the polygon by adding areas of triangles spanned by the edges and the origin, where some of the terms are negative (whenever an edge is traversed clockwise, viewing it from the origin) and some positive (anti-clockwise walk over the edge).
You get rid of ALL the loss-of-precision issues by putting the origin at one of the polygon's corners, say (lx, ly), and then adding the triangle surfaces spanned by the edges and that corner (so: transforming lon to (lon - lx) and latp to (latp - ly)), with the additional bonus that you need to process two triangles fewer, because the edges that link to the chosen origin corner obviously yield zero surface.
For the area part, that's all. For the centroid part you will of course have to transform the result back to the original frame, i.e. add (lx, ly) at the end.
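A minimal C++ sketch of that corner-anchored computation (my own names, not the original Flex code; assumes at least 3 vertices and non-zero area):

#include <vector>

struct Pt { double x, y; };

// Shoelace centroid with the origin shifted to the first vertex, so every
// term stays on the order of the polygon itself. Edges touching vertex 0
// contribute zero area and are skipped.
Pt centroid(const std::vector<Pt>& poly)
{
    const Pt o = poly[0];           // local origin at a polygon corner
    double a2 = 0, cx = 0, cy = 0;  // a2 accumulates twice the signed area
    for (size_t i = 1; i + 1 < poly.size(); ++i) {
        const double x1 = poly[i].x - o.x,     y1 = poly[i].y - o.y;
        const double x2 = poly[i + 1].x - o.x, y2 = poly[i + 1].y - o.y;
        const double cross = x1 * y2 - x2 * y1;  // signed twice-area of triangle
        a2 += cross;
        cx += (x1 + x2) * cross;
        cy += (y1 + y2) * cross;
    }
    // Standard centroid formula, then transform back to the original frame.
    return { o.x + cx / (3.0 * a2), o.y + cy / (3.0 * a2) };
}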

Duration for Kinetic Scrolling (Momentum) Based On Velocity?

I'm trying to implement kinetic scrolling of a list object, but I'm having a problem determining the amount of friction (duration) to apply based on the velocity.
My applyFriction() method evenly reduces the velocity of the scrolling object based on a duration property. However, using the same duration (e.g. 1 second) for every motion doesn't appear natural.
For motions with a small amount of velocity (e.g. 5-10 pixels) a 1-second duration appears fine, but when friction is applied over a 1-second duration to motions with lots of velocity (e.g. 100+ pixels), the scrolling object appears to slow and stop much faster.
Essentially, I'm trying to determine the appropriate duration for each motion so that both small and large velocities share a matching friction, and the moving object appears to always have a constant "weight".
Is there a general algorithm for determining the duration of kinetic movement based on different velocities?
Note: I'm programming in ActionScript 3.0 and using the Tween class to reduce the velocity of a moving object over a duration.
A better model for friction is that the frictional force is proportional to velocity. You'll need a constant to determine the relationship between force and acceleration (mass, more or less). Writing the relationships as a difference equation,
F[n] = -gamma * v[n-1]
a[n] = F[n]/m
v[n] = v[n-1] + dt * a[n]
= v[n-1] + dt * F[n] / m
= v[n-1] - dt * gamma * v[n-1] / m
= v[n-1] * (1 - dt*gamma/m)
So, if you want your deceleration to look smooth and natural, instead of linearly decreasing your velocity you want to pick some constant slightly less than 1 and repeatedly multiply the velocity by this constant. Of course, this only asymptotically approaches zero, so you probably want to have a threshold below which you just set velocity to zero.
So, for example:
v_epsilon = <some minimum velocity>;
k_frict = 0.9;               // play around with this a bit
while (v > v_epsilon) {
    v = v * k_frict;         // exponential decay: multiply, don't subtract
    usleep(1000);            // one simulation tick
}
v = 0;                       // snap to rest once below the threshold
I think you'll find this looks much more natural.
If you want to approximate this with a linear speed reduction, then you'll want to make the amount of time you spend slowing down proportional to the natural log of the initial velocity. This won't look quite right, but it'll look somewhat better than what you've got now.
(Why natural log? Because frictional force proportional to velocity sets up a first-order differential equation, which gives an exp(-t/tau) kind of response, where tau is a characteristic of the system. The time to decay from an arbitrary velocity to a given limit is proportional to ln(v_init) in a system like this.)
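As a tiny sketch of that relationship in C++ (my own names; tau = m/gamma in the notation above):

#include <cmath>

// Time for an exponentially decaying velocity v(t) = v0 * exp(-t / tau) to
// fall from v0 to the stop threshold vMin: t = tau * ln(v0 / vMin), so the
// duration grows with the logarithm of the initial speed.
double frictionDuration(double v0, double vMin, double tau)
{
    return tau * std::log(v0 / vMin);
}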
I had looked into this problem before: why does Android momentum scrolling not feel as nice as the iPhone?
Fortunately, a guy already got out a video camera, recorded an iPhone scrolling, and figured out what it does: archive
flick list with its momentum scrolling and deceleration
When I started to work [on this], I did not pay attention carefully to the way the scrolling works on iPhone. I was just assuming that the deceleration is based on Newton's law of motion, i.e. a moving body receiving a friction force which stops it after a while. After a while, I realized that this is actually not how the iPhone (and later iOS devices such as the iPad) does it. Using a camera and capturing a few dozen scrolling movements in various iOS applications, it dawned on me that all the scrolling stops after the same amount of time, regardless of the size of the list or the speed of the flick. How fast you flick (which determines the initial velocity of the scrolling) only determines where the list will stop, not when.
This led him to the greatly simplified math:
amplitude = initialVelocity * scaleFactor;
step = 0;
ticker = setInterval(function() {
    var delta = amplitude / timeConstant;  // exponential step toward zero
    position += delta;
    amplitude -= delta;                    // decays by (1 - 1/timeConstant) per tick
    step += 1;
    if (step > 6 * timeConstant) {         // ~6 time constants: effectively stopped
        clearInterval(ticker);
    }
}, updateInterval);
In fact, this is how the deceleration is implemented in Apple's own PastryKit library (and now part of iAd). It reduces the scrolling speed by a factor of 0.95 for each animation tick (16.7 ms, targeting 60 fps). This corresponds to a time constant of 325 ms. If you are a math geek, then obviously you realize that the exponential nature of the scroll velocity will yield an exponential decay in the position. With a little bit of scribbling, you eventually find out that
325 ms = -16.7 ms / ln(0.95)
Your question was about the duration to use. I like how the iPhone feels (as opposed to Android), so I think you should use about 1,950 ms:
-(1000 ms / 60) / ln(0.95) * 6 ≈ 1950 ms
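That arithmetic in runnable form, just to check the numbers above:

#include <cmath>
#include <cstdio>

// A 0.95 decay factor per 60 fps frame gives a ~325 ms time constant;
// six time constants give ~1950 ms of total scroll duration.
int main()
{
    const double frameMs = 1000.0 / 60.0;          // ~16.7 ms per tick
    const double tau = -frameMs / std::log(0.95);  // ~325 ms
    std::printf("tau = %.1f ms, duration = %.0f ms\n", tau, 6.0 * tau);
    return 0;
}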

Finding a free area in the stage

I'm drawing rectangles at random positions on the stage, and I don't want them to overlap.
So for each rectangle, I need to find a blank area to place it.
I've thought about trying a random position and verifying whether it is free with
private function containsRect(r:Rectangle):Boolean {
    var free:Boolean = true;
    for (var i:int = 0; i < numChildren; i++)
        free &&= !getChildAt(i).getBounds(this).intersects(r);
    return free;
}
and in case it returns false, try another random position.
The problem is that if there is no free space, I'll be stuck trying random positions forever.
Is there an elegant solution to this?
Let S be the area of the stage, let A be the area of the smallest rectangle we want to draw, and let N = S/A.
One possible deterministic approach:
When you draw a rectangle on an empty stage, it divides the stage into at most 4 regions that can fit your next rectangle. When you draw your next rectangle, one or two regions are split into at most 4 sub-regions (each) that can fit a rectangle, and so on. You will never create more than N regions.
Keep a list of regions (unsorted is fine), each represented by its four corner points and labeled with its area, and use weighted-by-area reservoir sampling with a reservoir size of 1 to select a region with probability proportional to its area in at most one pass through the list (a C++ sketch of this sampling step follows after the notes below). Then place a rectangle at a random location in that region. (Select a random point from the bottom-left portion of the region that allows you to draw a rectangle with that point as its bottom-left corner without hitting the top or right wall.)
If you are not starting from a blank stage then just build your list of available regions in O(N) (by re-drawing all the existing rectangles on a blank stage in any order, for example) before searching for your first point to draw a new rectangle.
Note: You can change your reservoir size to k to select the next k rectangles all in one step.
Note 2: You could alternatively store available regions in a tree, with each edge weight equaling the sum of the areas of the regions in the subtree divided by the area of the stage. Then to select a region in O(log N) we recursively select the root with probability (area of root region)/S, or each subtree with probability (edge weight)/S. Updating the weights when re-balancing the tree will be annoying, though.
Runtime: O(N)
Space: O(N)
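Here is what the weighted-by-area reservoir step of the deterministic approach can look like in C++ (a sketch with my own region type):

#include <random>
#include <vector>

struct Region { double x, y, w, h; double area() const { return w * h; } };

// Weighted-by-area reservoir sampling with a reservoir of size 1: a single
// pass over the region list selects index i with probability area_i / total.
int pickRegion(const std::vector<Region>& regions, std::mt19937& rng)
{
    double total = 0.0;
    int chosen = -1;
    for (int i = 0; i < static_cast<int>(regions.size()); ++i) {
        total += regions[i].area();
        // Keep index i with probability area_i / (total so far); over the
        // whole pass this gives P(chosen = i) = area_i / total.
        std::bernoulli_distribution keep(regions[i].area() / total);
        if (keep(rng)) chosen = i;
    }
    return chosen;
}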
One possible randomized approach:
Select a point at random on the stage. If you can draw one or more rectangles that contain the point (not just one that has the point as its bottom left corner), then return a randomly positioned rectangle that contains the point. It is possible to position the rectangle without bias with some subtleties, but I will leave this to you.
At worst there is one space exactly big enough for our rectangle and the rest of the stage is filled. So this approach succeeds with probability > 1/N, or fails with probability < 1-1/N. Repeat N times. We now fail with probability < (1-1/N)^N < 1/e. By fail we mean that there is a space for our rectangle, but we did not find it. By succeed we mean we found a space if one existed. To achieve a reasonable probability of success we repeat either Nlog(N) times for 1/N probability of failure, or N² times for 1/e^N probability of failure.
Summary: try random points until we find a space, stopping after N log N (or N²) tries, in which case we can be confident that no space exists.
Runtime: O(N log N) for a high probability of success, O(N²) for a very high probability of success
Space: O(1)
You can simplify things with a transformation. If you're looking for a valid place to put your LxH rectangle, you can instead grow all of the previous rectangles L units to the right, and H units down, and then search for a single point that doesn't intersect any of those. This point will be the lower-right corner of a valid place to put your new rectangle.
Next apply a scan-line sweep algorithm to find areas not covered by any rectangle. If you want a uniform distribution, you should choose a random y-coordinate (assuming you sweep down) weighted by free area distribution. Then choose a random x-coordinate uniformly from the open segments in the scan line you've selected.
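A C++ sketch of that point test (my own types; assumes Flash-style coordinates where y grows downward, so "down" means larger y, and the stage starts at the origin):

#include <vector>

struct Rect { double x, y, w, h; };  // top-left corner, width, height

// Point (px, py) is a valid lower-right corner for a new L x H rectangle
// iff it lies outside every existing rectangle grown L units to the right
// and H units down.
bool validLowerRight(double px, double py, double L, double H,
                     const std::vector<Rect>& placed)
{
    if (px < L || py < H) return false;  // the new rectangle must stay on stage
    for (const Rect& r : placed) {
        const bool insideGrown = px > r.x && px < r.x + r.w + L &&
                                 py > r.y && py < r.y + r.h + H;
        if (insideGrown) return false;
    }
    return true;
}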
I'm not sure how elegant this would be, but you could set up a maximum number of attempts. Maybe 100?
Sure you might still have some space available, but you could trigger the "finish" event anyway. It would be like when tween libraries snap an object to the destination point just because it's "close enough".
HTH
One possible check you could make to determine whether there is enough space would be to check how much area the current set of rectangles is taking up. If the amount of area left over is less than the area of the new rectangle, you can immediately give up and bail out. I don't know what information you have available to you, or whether the rectangles are being laid down in a regular pattern, but if so you may be able to vary the check to see whether there is obviously not enough space available.
This may not be the most appropriate method for you, but it was the first thing that popped into my head!
Assuming you define the dimensions of the rectangle before trying to draw it, I think something like this might work:
Establish a grid of possible centre points across the stage for the candidate rectangle. So for a 6x4 rectangle your first point would be at (3, 2), then (3 + 6 * x, 2 + 4 * y). If you can draw a rectangle between the four adjacent points then a possible space exists.
for (x = 0; x < stage.width / rect.width - 1; x++)
    for (y = 0; y < stage.height / rect.height - 1; y++)
        if (can_draw_rectangle_at([x * rect.width, y * rect.height],
                                  [(x + 1) * rect.width, (y + 1) * rect.height]))
            return true;
This doesn't tell you where you can draw it (although it should be possible to build a list of the possible drawing areas), just that you can.
I think that the only efficient way to do this with what you have is to maintain a 2D boolean array of open locations. Make the array large enough that the drawing positions still appear random.
When you draw a new rectangle, zero out the corresponding rectangular piece of the array. Then checking for a free area is constant^H^H^H^H^H^H^H... oops, that means a lookup is O(nm) time, where n is the length and m is the width. There must be a range-based solution, argh.
Edit2: Apparently the answer is here, but in my opinion it might be a bit much to implement in ActionScript, especially if you are not keen on the geometry.
Here's the algorithm I'd use:
Put down N random points, where N is the number of rectangles you want.
Iteratively increase the dimensions of the rectangle created at each point until it touches another rectangle.
You can constrain the way the initial points are put down if you want to have a minimum allowable rectangle size.
If you want all the space covered with rectangles, you can then incrementally add random points to the remaining "free" space until no area is left uncovered.