Absolute and absolute relative error (numerical errors)

I'd like to understand how absolute and relative errors work in order to write some code.
Suppose we have x1* = 4.54, x2* = 3.00 and x3* = 15.0, each accurate to 3 digits.
How do we define:
a. the absolute error of x1*-x2*+x3* and
b. the absolute relative error of x1*x2*/x3*
c. the accuracy in a and b.
Trying to make sense:
a.
|e1|<=0.5*10^(-3)
|e2|<=0.5*10^(-3)
|e3|<=0.5*10^(-3)
or
|e1|<=0.5*10^(-2)
|e2|<=0.5*10^(-2)
|e3|<=0.5*10^(-1)
and then |e|<=|e1|+|e2|+|e3|=(15+4+3)*0.5*10^(-3)
b. |r|<=|r1|+|r2|+|r3|=|e1/x1*|+|e2/x2*|+|e3/x3*|

You're on the right track. Of course you can't find the actual error, because you don't know the actual values; what you can do is bound the error. First note that we're talking about rounding error (as you've done), so the maximum each variable can be off by is 0.5 of a unit in its last digit, i.e.
|e1| <= 0.005
|e2| <= 0.005
|e3| <= 0.05
For the absolute error of x1-x2+x3, the worst case is that all of the errors add together linearly, i.e.:
|e123| <= 0.005+0.005+0.05 = 0.06.
Because it's absolute error, you don't have to rescale by what the actual values of x1... are.
For the relative error of (x1*x2)/x3, it's a little more complicated: you have to actually propagate (multiply) out the error. But if you assume that the error is much smaller than the value, i.e. |e1| << x1 (which is a good approximation for this case), then you get the equation that you used in 'b':
|r| = |e123 / (x1*x2/x3)| ~< |e1/x1| + |e2/x2| + |e3/x3|
Because this is relative error, you do have to rescale the errors by the actual values.
So, overall, you just about had it right; just a little trouble with the absolute error.
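
For concreteness, here is a minimal Python sketch of both bounds (the variable names are just illustrative, and the rounding-error bounds are the ones from the answer above):

x1, x2, x3 = 4.54, 3.00, 15.0

# Rounding-error bounds: half a unit in the last reported digit of each value.
e1, e2, e3 = 0.005, 0.005, 0.05

# a) Worst-case absolute error of x1 - x2 + x3: the individual errors add linearly.
abs_err_sum = e1 + e2 + e3                  # 0.06

# b) Worst-case relative error of x1*x2/x3, assuming |e_i| << |x_i|.
rel_err_prod = e1/x1 + e2/x2 + e3/x3        # about 0.0061

# The corresponding absolute error bound for x1*x2/x3, if you need it.
abs_err_prod = rel_err_prod * (x1 * x2 / x3)

print(abs_err_sum, rel_err_prod, abs_err_prod)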

Get the numeric part of an amount from CSS

Is there a way, using pure CSS, to fetch the numeric part of a value without pulling back the unit too?
E.g. say I have a CSS variable defined as :root {--maxWidth: 100px;}. If I want to get the ratio of that value to my viewport's width, I can't: calc(100vw / var(--maxWidth)) fails because you can't divide a value with units by another value with units, even when they're the same unit.
I can get around this example case by omitting the units from my variable (e.g. :root {--maxWidth: 100;}), but I'm wondering how to do this in cases where you can't.
More specifically, I want to get the ratio / conversion value for 1vw to 1px so that I can write code which uses px values, then use transform: scale(var(--horizontalRatio), var(--verticalRatio)) to resize everything to fit perfectly in the viewport; but to do that I need a way to convert between pixels and viewport units.
There is a way to work around this: everywhere I set a size in pixels I could instead set the size to calc(100vw * X/var(--maxWidthInPx)), where X is the size in pixels of what I'm setting and --maxWidthInPx is a numeric-only value giving the max width of the static px size. However, that means putting these little equations everywhere, rather than having just one place where things get scaled.
I've found several JavaScript solutions for this, but I need something that's CSS only.
In the near (or far) future this will be possible using only CSS. The specification has changed to allow division and multiplication of different types.
You can read the following:
At a * sub-expression, multiply the types of the left and right arguments. The sub-expression’s type is the returned result.
At a / sub-expression, let left type be the result of finding the types of its left argument, and right type be the result of finding the types of its right argument and then inverting it.
The sub-expression’s type is the result of multiplying the left type and right type.
As you can see, there are new rules that define how types are multiplied and how the result is calculated, so I am pretty sure what you want will be possible, but there is currently no implementation to test against.
The current specification is more restrictive:
At *, check that at least one side is <number>. If both sides are <integer>, resolve to <integer>. Otherwise, resolve to the type of the other side.
At /, check that the right side is <number>. If the left side is <integer>, resolve to <number>. Otherwise, resolve to the type of the left side.
If an operator does not pass the above checks, the expression is invalid

How does contrastive loss work intuitively in a siamese network

I am having trouble getting a clear concept of the contrastive loss used in a siamese network.
Here is the PyTorch formula:
torch.mean((1-label) * torch.pow(euclidean_distance, 2) +
(label) * torch.pow(torch.clamp(margin - euclidean_distance, min=0.0), 2))
where margin=2.
If we convert this to equation format, it can be written as
(1-Y)*D^2 + Y*max(m-D, 0)^2
Y=0 if both images are from the same class
Y=1 if the images are from different classes
What I think: if the images are from the same class, the distance between the embeddings should decrease, and if the images are from different classes, the distance should increase.
I am unable to map this concept to contrastive loss.
Let's say Y is 1 and the distance value is large: the first part becomes zero because of (1-Y), and the second also becomes zero, because max picks whichever of m-d or 0 is bigger.
So the loss is zero, which does not make sense to me.
Can you please help me understand this?
If the distance for a different-class (negative) pair is already greater than the specified margin, that pair is already separable from the positive pairs. Therefore, there is no benefit in pushing it farther away.
For details please check this blog post, where the concept of "Equilibrium" gets explained and why the Contrastive Loss makes reaching this point easier.
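
To see this in code: once the distance of a different-class pair exceeds the margin, the clamp term is exactly zero, so that pair contributes neither loss nor gradient. A minimal PyTorch sketch of the same formula (the function name and the use of F.pairwise_distance are mine, not from your code):

import torch
import torch.nn.functional as F

def contrastive_loss(emb1, emb2, label, margin=2.0):
    # label = 0: same class, label = 1: different class (as in your question)
    d = F.pairwise_distance(emb1, emb2)                      # Euclidean distance between embeddings
    same = (1 - label) * d.pow(2)                            # pulls same-class pairs together
    diff = label * torch.clamp(margin - d, min=0.0).pow(2)   # pushes different-class pairs apart,
                                                             # but only while d < margin
    return torch.mean(same + diff)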

as3 - how to randomize object positions without colliding?

I have two instances called block1 and block2, and they move off the stage: each block scrolls down in y and respawns back at the top. I want the respawn position to be random, but I don't want its x/y position colliding with the other blocks; they shouldn't touch each other.
Here's my code:
if (block1.y > stage.stageHeight)
{
    block1.y = -550;
    block1.x = (Math.floor(Math.random() * (maxNum - minNum + 5)) + minNum);
}
I'm pretty sure I'm calculating the respawn coordinates the wrong way, but I'm not sure how to put it in a random x and y position without colliding with other blocks.
A very simple method is just to spawn your box, do a collision check, and if there is a collision, remove it, respawn, and recheck until you find an empty spot where it fits.
This is obviously quite inefficient, but it is pretty simple to implement quickly if you already have some sort of collision detection working. Keep in mind that if there is no spot it can fit in, it'll loop forever, so you may want to set a max try count or something of that sort.
How fast/well it actually works depends on whether the spawn area is sparse or dense, which raises or lowers the chance that it finds a good empty spot within the first few tries.
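Here is a rough sketch of that retry loop (in Python rather than AS3, just to show the structure; overlaps() stands in for whatever collision check you already have):

import random

MAX_TRIES = 100  # guard so a crowded spawn area can't loop forever

def overlaps(a, b):
    # Simple axis-aligned bounding-box test; swap in your own hitTestObject-style check.
    return abs(a.x - b.x) < (a.width + b.width) / 2 and abs(a.y - b.y) < (a.height + b.height) / 2

def respawn(block, others, min_x, max_x, spawn_y=-550):
    # Try random x positions at the spawn height until the block touches nothing.
    for _ in range(MAX_TRIES):
        block.y = spawn_y
        block.x = random.uniform(min_x, max_x)
        if not any(overlaps(block, other) for other in others):
            return True   # found a free spot
    return False          # gave up; caller can try again next frame or relax the constraint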
There is some room for improvement going down this path, though. For example, if your collision detection system gives a minimum translation vector, you could just move the new shape over by that vector and use the resulting position to spawn.
Other simple methods could involve keeping track of known occupied positions and adjusting your random range to avoid those values.

calculating the point of acceleration

I've been struggling to calculate the acceleration. I've spent a whole day searching and doing trial & error, all in vain. I have one horizontal line on the stage (AS3) of, let's say, 200 width. The center point of that line is at 60 (if it were 100, I could have done it by just calculating the percentage). Now I need to know the width for a given percentage. For example, what is the total width of 60%, or where will 30% (or any other percentage) start from?
What I know is the total width, and the center point (either as a percentage or as a width).
Your help will be highly appreciated. In case there is a formula, please give me details; don't just mention a/b/c, as I've never been a student of physics :(
Edit:
I don't have 10 reputation, so I can't post the image directly here. Please click the following link to see the image.
Link: http://oi62.tinypic.com/11sk183.jpg
Edit:
Here is what I want exactly: I want to travel n% from any point (A/B/C/D) to its relative point (A->B/A->D ...) (Link)
http://i59.tinypic.com/2wp2lbl.jpg
If I understand correctly, you want a non-linear scale, so that pixel 1 on the line is 0%, pixel 100 on the line is 60% and pixel 200 is 100%?
If x=pixelpos/200 is the relative position on the line, one easy variation of the linear scale y=x*100% is y=(x+a*x*(1-x))*100%.
For x=0.5 the value is y=0.5+a*0.25, so for that to be 0.6=60% one needs a=0.4.
To get in the reverse direction the x for y=0.3=30%, one needs to solve a quadratic equation y=x*(1+a*(1-x)) or a*x^2-(1+a)*x+y=0. With the general solution formula, this gives
x = (1+a)/(2*a)-sqrt((1+a)^2-4*a*y)/(2*a)
= (2*y) / ( (1+a) + sqrt((1+a)^2-4*a*y) )
= (2*y) / ( (1+a) + sqrt((1-a)^2+4*a*(1-y)) )
and with a=0.4 and y=0.3
x = 0.6/( 1.4 + sqrt(1.96-0.48) )
approx 0.6/2.6 = 3/13 approx 0.23,
corresponding to pixel 46.
This will only work for a between -1 and 1, since for other values the slope at x=0 or x=1 will not be positive.
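In code (a small Python sketch; a is fixed from the centre point exactly as above):

import math

def pos_to_pct(x, a=0.4):
    # Forward map: relative position x in [0,1] -> fraction y in [0,1].
    return x + a * x * (1 - x)

def pct_to_pos(y, a=0.4):
    # Inverse map, using the form that avoids cancellation.
    return 2 * y / ((1 + a) + math.sqrt((1 + a) ** 2 - 4 * a * y))

line_width = 200
print(pos_to_pct(100 / line_width))     # 0.6  -> the centre pixel maps to 60%
print(pct_to_pos(0.3) * line_width)     # ~45.9 -> 30% starts at about pixel 46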
Another simple formula uses a hyperbola instead of a parabola,
y=a*x/(1+(a-1)*x)
with the inversion by
y+(a-1)*x*y = a*x <=> y = (a-(a-1)*y)*x
x = (y/a)/(1+(1/a-1)*y)
and
a = (y*(1-x))/(x*(1-y))
Here there is no problem with monotonicity as long as there is no pole for x in [0,1], which is guaranteed for a>0.
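
The hyperbolic version, again as a Python sketch, fitted to the same centre point via the formula for a (x = 0.5, y = 0.6 gives a = 1.5):

def hyper_pos_to_pct(x, a=1.5):
    # y = a*x / (1 + (a-1)*x)
    return a * x / (1 + (a - 1) * x)

def hyper_pct_to_pos(y, a=1.5):
    # Inverse: x = (y/a) / (1 + (1/a - 1)*y)
    return (y / a) / (1 + (1 / a - 1) * y)

print(hyper_pos_to_pct(0.5))            # 0.6
print(hyper_pct_to_pos(0.3) * 200)      # ~44.4 -> pixel where 30% starts on a 200px line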

How to divide tiny double precision numbers correctly without precision errors?

I'm trying to diagnose and fix a bug which boils down to X/Y yielding an unstable result when X and Y are small:
In this case, both cx and patharea increase smoothly. Their ratio approaches a smooth asymptote at high values, but is erratic for "small" values. The obvious first thought is that we're reaching the limit of floating-point accuracy, but the actual numbers themselves are nowhere near it. ActionScript "Number" types are IEEE 754 double-precision floats, so they should have about 15 decimal digits of precision (if I read it right).
Some typical values of the denominator (patharea):
0.0000000002119123
0.0000000002137313
0.0000000002137313
0.0000000002155502
0.0000000002182787
0.0000000002200977
0.0000000002210072
And the numerator (cx):
0.0000000922932995
0.0000000930474444
0.0000000930582124
0.0000000938123574
0.0000000950458711
0.0000000958000159
0.0000000962901528
0.0000000970442977
0.0000000977984426
Each of these increases monotonically, but the ratio is chaotic as seen above.
At larger numbers it settles down to a smooth hyperbola.
So, my question: what's the correct way to deal with very small numbers when you need to divide one by another?
I thought of multiplying numerator and/or denominator by 1000 in advance, but couldn't quite work it out.
The actual code in question is the recalculate() function here. It computes the centroid of a polygon, but when the polygon is tiny, the centroid jumps erratically around the place, and can end up a long distance from the polygon. The data series above are the result of moving one node of the polygon in a consistent direction (by hand, which is why it's not perfectly smooth).
This is Adobe Flex 4.5.
I believe the problem most likely is caused by the following line in your code:
sc = (lx*latp-lon*ly)*paint.map.scalefactor;
If your polygon is very small, then lx and lon are almost the same, as are ly and latp. The two products are both very large compared to the result, so you are subtracting two numbers that are almost equal.
To get around this, we can make use of the fact that:
x1*y2 - x2*y1 = (x2 + (x1-x2))*y2 - x2*(y2 + (y1-y2))
             = x2*y2 + (x1-x2)*y2 - x2*y2 - x2*(y1-y2)
             = (x1-x2)*y2 - x2*(y1-y2)
So, try this:
dlon = lx - lon
dlat = ly - latp
sc = (dlon*latp-lon*dlat)*paint.map.scalefactor;
The value is mathematically the same, but the terms are an order of magnitude smaller, so the error should be an order of magnitude smaller as well.
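A quick Python illustration of why the rewrite helps (the coordinates here are made up; the point is only that the rewritten terms are roughly the size of the answer instead of thousands of times larger):

# Two nearby points far from the origin, e.g. longitudes/latitudes in degrees.
lx,  ly   = 100.0000001, 45.0000002
lon, latp = 100.0000003, 45.0000001

# Original form: the difference of two products near 4500, so most leading digits cancel.
sc_naive = lx * latp - lon * ly

# Rewritten form: both terms are around 1e-5, the same order as the result.
dlon = lx - lon
dlat = ly - latp
sc_rewritten = dlon * latp - lon * dlat

print(sc_naive, sc_rewritten)   # mathematically equal; the second form loses far less precision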
Jeffrey Sax has correctly identified the basic issue - loss of precision from combining terms that are (much) larger than the final result.
The suggested rewriting eliminates part of the problem - apparently sufficient for the actual case, given the happy response.
You may find, however, that if the polygon again becomes (much) smaller and/or moves farther away from the origin, the inaccuracy will show up again. In the rewritten formula the terms are still quite a bit larger than their difference.
Furthermore, there's another "combining large, comparable numbers with different signs" issue in the algorithm. The various 'sc' values from subsequent cycles of the iteration over the edges of the polygon effectively combine into a final number that is (much) smaller than the individual sc(i) are. (If you have a convex polygon you will find that there is one contiguous sequence of positive values and one contiguous sequence of negative values; in non-convex polygons the negatives and positives may be intertwined.)
What the algorithm is doing, effectively, is computing the area of the polygon by adding areas of triangles spanned by the edges and the origin, where some of the terms are negative (whenever an edge is traversed clockwise, viewing it from the origin) and some positive (anti-clockwise walk over the edge).
You get rid of ALL the loss-of-precision issues by putting the origin at one of the polygon's corners, say (lx,ly), and then adding the triangle surfaces spanned by the edges and that corner (so: transforming lon to (lon-lx) and latp to (latp-ly)), with the additional bonus that you need to process two fewer triangles, because the edges that link to the chosen origin corner obviously yield zero surface.
For the area part, that's all. For the centroid part, you will of course have to "transform back" the result to the original frame, i.e. add (lx,ly) at the end.
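
A compact Python sketch of this corner-as-origin centroid (the original code is AS3; this just shows the structure of the fix):

def centroid(points):
    # points: list of (x, y) vertices of a simple polygon, in order.
    x0, y0 = points[0]
    # Shift so the chosen corner is the origin; this is where the precision is won.
    pts = [(x - x0, y - y0) for x, y in points]

    area2 = 0.0            # twice the signed area
    cx = cy = 0.0
    n = len(pts)
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        cross = x1 * y2 - x2 * y1          # 2 * signed area of triangle (origin, p_i, p_{i+1});
        area2 += cross                     # zero for the two edges that touch the chosen corner
        cx += (x1 + x2) * cross
        cy += (y1 + y2) * cross

    # Transform back to the original frame at the end.
    return x0 + cx / (3 * area2), y0 + cy / (3 * area2)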