Could someone explain w.r.t. coordinates to me? - actionscript-3

Could someone please explain to me what w.r.t. coordinates are? Or at least direct me to a place that explains what they are? I've been searching for two days or so, and all I have found is tutorials on how they are used, but not what they actually are or even what w.r.t. stands for.
These tutorials assume I already know what they are, which is stressful because I've never heard of them.
I'm working in AS3, trying to do some parametric surfaces using pixel particles, and I understand these are kind of useful while moving the particles around.
This is the relevant function where they are used as u, v and w, where p is a single particle that also contains x, y, z values that are not being modified.
function onEnter(evt:Event):void {
    dphi = 0.015*Math.cos(getTimer()*0.000132);
    dtheta = 0.017*Math.cos(getTimer()*0.000244);
    phi = (phi + dphi) % pi2;
    theta = (theta + dtheta) % pi2;
    cost = Math.cos(theta);
    sint = Math.sin(theta);
    cosp = Math.cos(phi);
    sinp = Math.sin(phi);
    //We calculate some of the rotation matrix entries here for increased efficiency:
    M11 = cost*sinp;
    M12 = sint*sinp;
    M31 = -cost*cosp;
    M32 = -sint*cosp;
    p = firstParticle;
    //////// redrawing ////////
    displayBitmapData.lock();
    //apply filters pre-update
    displayBitmapData.colorTransform(displayBitmapData.rect,darken);
    displayBitmapData.applyFilter(displayBitmapData, displayBitmapData.rect, origin, blur);
    p = firstParticle;
    do {
        //Calculate rotated coordinates
        p.u = M11*p.x + M12*p.y + cosp*p.z;
        p.v = -sint*p.x + cost*p.y;
        p.w = M31*p.x + M32*p.y + sinp*p.z;
        //Calculate viewplane projection coordinates
        m = fLen/(fLen - p.u);
        p.projX = p.v*m + projCenterX;
        p.projY = p.w*m + projCenterY;
        if ((p.projX > displayWidth)||(p.projX<0)||(p.projY<0)||(p.projY>displayHeight)||(p.u>uMax)) {
            p.onScreen = false;
        }
        else {
            p.onScreen = true;
        }
        if (p.onScreen) {
            //we read the color in the position where we will place another particle:
            readColor = displayBitmapData.getPixel(p.projX, p.projY);
            //we take the blue value of this color to represent the current brightness in this position,
            //then we increase this brightness by levelInc.
            level = (readColor & 0xFF)+levelInc;
            //we make sure that 'level' stays smaller than 255:
            level = (level > 255) ? 255 : level;
            /*
            We create light blue pixels quickly with a trick:
            the red component will be zero, the blue component will be 'level', and
            the green component will be 50% of the blue value. We divide 'level' in
            half using a fast technique: a bit-shift operation of shifting down by one bit
            accomplishes the same thing as dividing by two (for an integer output).
            */
            //dColor = ((level>>1) << 8) | level;
            dColor = (level << 16) | (level << 8) | level;
            displayBitmapData.setPixel(p.projX, p.projY, dColor);
        }
        p = p.next;
    } while (p != null)
    displayBitmapData.unlock();
}
This is the example I'm using: http://www.flashandmath.com/flashcs4/light/
I kind of understand how they are used, but I don't get why.
Thanks in advance.
PS: kind of surprised there is not even a tag related to it.

In that Particle3D.as class linked, they have:
//coords WRT viewpoint axes
public var u:Number;
public var v:Number;
public var w:Number;
From the code example you posted in the question, it becomes clear that coords WRT viewpoint axes means coordinates with respect to viewpoint axes, since the code is doing exactly that.
What they are doing is a Camera (or Viewing) Transformation, where the particle's world coordinates (x,y,z) are transformed from the world coordinate system into coordinates in the camera (or view) coordinate system (u,v,w).
(x,y,z) are the coordinates of the particle in the world coordinate system
(u,v,w) are the coordinates of the particle in the camera coordinate system
For example, the world coordinate system might have an origin at (0,0,0), with the camera positioned at something like (5,3,6), a look-at vector of (1,0,0) and an up vector of (0,1,0).
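To make that concrete, here is the same transform written out as a single matrix-vector product. The entries are reconstructed from the onEnter code in the question (M11, M12, M31, M32 plus the remaining sin/cos terms), so the sign conventions are the ones that particular example uses rather than a general formula:

| u |   |  cos(theta)*sin(phi)    sin(theta)*sin(phi)   cos(phi) |   | x |
| v | = | -sin(theta)             cos(theta)            0        | * | y |
| w |   | -cos(theta)*cos(phi)   -sin(theta)*cos(phi)   sin(phi) |   | z |

Because this matrix is an orthonormal rotation, its rows are the camera's axes expressed in world coordinates, which is why multiplying by it converts a world-space point (x,y,z) into camera-space coordinates (u,v,w).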

Related

Calculate large distance between two points using GeoTools

New to GeoTools and GIS, I am trying to calculate the distance between Mumbai and Durban using the GeoTools library. I am getting close to accurate results for small distances, but when I go for bigger ones the calculation is off course by about 2000 km. I don't completely understand the CRS system. Below is my code to calculate the distance between Mumbai and Durban:
Coordinate source = new Coordinate(19.0760, 72.8777); ///Mumbai Lat Long
Coordinate destination1 = new Coordinate(-29.883333, 31.049999); //Durban Lat Long
GeometryFactory geometryFactory = new GeometryFactory();
Geometry point1 = geometryFactory.createPoint(source);
Geometry point2 = geometryFactory.createPoint(destination1);
CoordinateReferenceSystem auto = CRS.decode("AUTO:42001,13.45,52.3");
MathTransform transform = CRS.findMathTransform(DefaultGeographicCRS.WGS84, auto);
Geometry g3 = JTS.transform(point1, transform);
Geometry g4 = JTS.transform(point2, transform);
double distance = g3.distance(g4);
This is what happens when you copy code blindly from Stack Exchange questions without reading the question it was based on, which explains why it was written that way.
Every time I've answered that question (and posted code like that), the questioner was trying to use lat/lon coordinates in degrees to measure a short distance in metres. The trick shown in your question creates an automatic UTM projection centred on the position specified after the "AUTO:42001," bit (in your case 52N 13E); this needs to be the centre of the area you are interested in, so in your case those values are probably wrong anyway.
But you aren't interested in a small region: Mumbai to Durban is a significant way around the Earth, so you need to allow for the curvature of the Earth's surface. You also aren't trying to do something difficult for which JTS is the only option (e.g. buffering). In this case you should use the GeodeticCalculator, which takes the shape of the Earth into account using the library from C. F. F. Karney, Algorithms for geodesics, J. Geodesy 87, 43-55 (2013).
Anyway, enough explanation that no one will read in the future; here's the code:
public static void main(String[] args) {
    DefaultGeographicCRS crs = DefaultGeographicCRS.WGS84;
    if (args.length != 4) {
        System.err.println("Need 4 numbers lat_1 lon_1 lat_2 lon_2");
        return;
    }
    GeometryFactory geomFactory = new GeometryFactory();
    Point[] points = new Point[2];
    for (int i = 0, k = 0; i < 2; i++, k += 2) {
        double x = Double.valueOf(args[k]);
        double y = Double.valueOf(args[k + 1]);
        if (CRS.getAxisOrder(crs).equals(AxisOrder.NORTH_EAST)) {
            System.out.println("working with a lat/lon crs");
            points[i] = geomFactory.createPoint(new Coordinate(x, y));
        } else {
            System.out.println("working with a lon/lat crs");
            points[i] = geomFactory.createPoint(new Coordinate(y, x));
        }
    }
    double distance = 0.0;
    GeodeticCalculator calc = new GeodeticCalculator(crs);
    calc.setStartingGeographicPoint(points[0].getX(), points[0].getY());
    calc.setDestinationGeographicPoint(points[1].getX(), points[1].getY());
    distance = calc.getOrthodromicDistance();
    double bearing = calc.getAzimuth();
    Quantity<Length> dist = Quantities.getQuantity(distance, SI.METRE);
    System.out.println(dist.to(MetricPrefix.KILO(SI.METRE)).getValue() + " Km");
    System.out.println(dist.to(USCustomary.MILE).getValue() + " miles");
    System.out.println("Bearing " + bearing + " degrees");
}
Giving:
working with a lon/lat crs
POINT (72.8777 19.076)
POINT (31.049999 -29.883333)
7032.866960793305 Km
4370.020928274692 miles
Bearing -139.53428618565218 degrees

function that detects if a ray is intersecting an object

I have a function that detects if a ray is intersecting an object, but it works with a radius around the center of the object. I want it to work with a bounding box: I want to give it two Vector3Ds describing the bounding box, one vector for the origin of the ray and one for the direction of the ray, and have it calculate whether there is an intersection. Can anyone help me with that? What is the mathematical formula for this?
intersectRay(origin:Vector3D, dir:Vector3D):
Found the solution.
1. I use a bounding box of 8 points, one for each corner.
2. I used this function to give each point an x and y location on a 2D plane; this way I turned the 3D problem into a 2D problem. The x and y are really the horizontal angle and the vertical angle of the point relative to the camera position:
public function AngleBetween2vectors(v1:Vector3D, v2:Vector3D):Point
{
    var angleX:Number = Math.atan2(v1.x - v2.x, v1.z - v2.z);
    angleX = angleX*180/Math.PI;
    var angleY:Number = Math.atan2(v1.y - v2.y, v1.z - v2.z);
    angleY = angleY*180/Math.PI;
    return new Point(angleX, angleY);
}
Then I use a convex hull algorithm to delete the points that are not part of the external outline polygon, which marks the place of the object on the screen (a sketch of one such routine is below; implementations can also be found on the net). Make sure the bounding box doesn't contain duplicate points, for example if you have a flat plane with no depth; this can cause problems for the algorithm, so clean them out when you create the bounding box.
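For reference, here is a minimal monotone-chain convex hull sketch in AS3. This helper is my own illustration (the original answer just says to grab one off the net), and it assumes the input is an Array of flash.geom.Point objects like the ones AngleBetween2vectors returns:
import flash.geom.Point;

// Illustrative monotone-chain convex hull (not the original code).
// Returns the hull vertices in counter-clockwise order, dropping
// interior and collinear points (which also discards duplicates).
function convexHull(points:Array):Array {
    var pts:Array = points.concat();
    pts.sortOn(["x", "y"], [Array.NUMERIC, Array.NUMERIC]);
    if (pts.length < 3) return pts;
    // cross product of (o->a) x (o->b); positive means a counter-clockwise turn
    var cross:Function = function(o:Point, a:Point, b:Point):Number {
        return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
    };
    var lower:Array = [];
    for each (var p:Point in pts) {
        while (lower.length >= 2 && cross(lower[lower.length - 2], lower[lower.length - 1], p) <= 0) {
            lower.pop();
        }
        lower.push(p);
    }
    var upper:Array = [];
    for (var i:int = pts.length - 1; i >= 0; i--) {
        var q:Point = pts[i];
        while (upper.length >= 2 && cross(upper[upper.length - 2], upper[upper.length - 1], q) <= 0) {
            upper.pop();
        }
        upper.push(q);
    }
    lower.pop();
    upper.pop();
    return lower.concat(upper);
}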
Then I use this algorithm to determine if the point of the mouse click falls within this polygon or outside of it:
private function pnpoly(A:Array, p:Point):Boolean
{
    var i:int;
    var j:int;
    var c:Boolean = false;
    for (i = 0, j = A.length-1; i < A.length; j = i++) {
        if (((A[i].y > p.y) != (A[j].y > p.y)) &&
            (p.x < (A[j].x - A[i].x) * (p.y - A[i].y) / (A[j].y - A[i].y) + A[i].x))
        {
            c = !c;
        }
    }
    return c;
}
Then I measure the distance to the object and pick the closest one to the camera position, using this function:
public function DistanceBetween2Vectors(v1:Vector3D, v2:Vector3D):Number
{
    //straight 3D Euclidean distance: sqrt(dx^2 + dy^2 + dz^2)
    var dx:Number = v1.x - v2.x;
    var dy:Number = v1.y - v2.y;
    var dz:Number = v1.z - v2.z;
    return Math.sqrt(dx*dx + dy*dy + dz*dz);
}
I'm sure there are more efficient ways, but this one is interesting and it's good enough for me. I like it because it is intuitive; I don't like working with abstract mathematics, it's very hard for me, and if there is a mistake it's very hard to find. If anyone has any suggestions on how I can make it more efficient, I'll be happy to hear them.

In Starling, how do you transform Filters to match the target Sprite's rotation & position?

Let's say your Starling display-list is as follows:
Stage
|___MainApp
|______Canvas (filter's target)
Then, you decide your MainApp should be rotated 90 degrees and offset a bit:
mainApp.rotation = Math.PI * 0.5;
mainApp.x = stage.stageWidth;
But all of a sudden, the filter keeps applying itself to the target (canvas) at the angle it was originally (as if the MainApp was still at 0 degrees).
(Notice in the GIF how the blur's strong horizontal value continues to apply only horizontally, although the parent object turned 90 degrees.)
What would need to be changed to apply the filter to the target object before it gets its parent's transform? That way (I'm assuming) the filter's result would get transformed by the parent objects.
Any guess as to how this could be done?
https://github.com/bigp/StarlingShaderIssue
(PS: the filter I'm actually using is custom-made, but this BlurFilter example shows the same issue I'm having with the custom one. If there's any patching-up to do in the shader code, at least it wouldn't necessarily have to be done on the built-in BlurFilter specifically).
I solved this myself with numerous trial and error attempts over the course of several hours.
Since I only needed the shader to run at either 0 or 90 degrees (not actually tweened like the GIF demo shown in the question), I created a shader with two specialized sets of AGAL instructions.
Without going into too much detail, the rotated version basically requires a few extra instructions to flip the x and y fields in the vertex and fragment shaders (either by moving them with mov or by calculating the mul or div result directly into the x or y field).
For example, compare the 0 deg vertex shader...
_vertexShader = [
    "m44 op, va0, vc0", // 4x4 matrix transform to output space
    "mov posOriginal, va1", // pass texture positions to fragment program
    "mul posScaled, va1, viewportScale", // pass displacement positions (scaled)
].join("\n");
... with the 90 deg vertex shader:
_vertexShader = [
    "m44 op, va0, vc0", // 4x4 matrix transform to output space
    "mov posOriginal, va1", // pass texture positions to fragment program
    //Calculate the rotated vertex "displacement" UVs
    "mov temp1, va1",
    "mov temp2, va1",
    "mul temp2.y, temp1.x, viewportScale.y", //Flip X to Y, and scale with viewport Y
    "mul temp2.x, temp1.y, viewportScale.x", //Flip Y to X, and scale with viewport X
    "sub temp2.y, 1.0, temp2.y", //Invert the UV for the Y axis.
    "mov posScaled, temp2",
].join("\n");
(You can ignore the special aliases in the AGAL example; they're essentially posOriginal = v0, posScaled = v1 and viewportScale = vc4 constants, and I do a string-replace to change them back to their respective registers and fields.)
Just a human-readable trick I use to avoid going insane. \☻/
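For what it's worth, here is a minimal sketch of that string-replace step. The helper itself and the register choices for temp1/temp2 are my own assumptions based on the shaders above, not the exact code used:
// Illustrative helper (assumed, not the original): swap the readable aliases
// for real AGAL registers before assembling the program.
function resolveAliases(agal:String):String {
    var aliases:Object = {
        posOriginal: "v0",     // varying 0 (per the description above)
        posScaled: "v1",       // varying 1
        viewportScale: "vc4",  // vertex constant (register index assumed)
        temp1: "vt1",          // vertex temporaries (assumed)
        temp2: "vt2"
    };
    for (var alias:String in aliases) {
        // split/join acts as a global string replace in AS3
        agal = agal.split(alias).join(aliases[alias]);
    }
    return agal;
}
It would be called on each shader string (e.g. _vertexShader = resolveAliases(_vertexShader);) before the AGAL is assembled.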
The part that I struggled with the most was calculating the correct scale to adjust the UVs (with proper detection of Stage / Viewport resizes and render-texture size shifts).
Eventually, this is what I came up with in the AS3 code:
var pt:Texture = _passTexture,
    dt:RenderTexture = _displacement.texture,
    notReady:Boolean = pt == null,
    star:Starling = Starling.current;
var finalScaleX:Number, viewRatioX:Number = star.viewPort.width / star.stage.stageWidth;
var finalScaleY:Number, viewRatioY:Number = star.viewPort.height / star.stage.stageHeight;
if (notReady) {
    finalScaleX = finalScaleY = 1.0;
} else if (isRotated) {
    //NOTE: the native width is divided by the height (and vice versa), instead of the same side. Weird, but it works!
    finalScaleY = pt.nativeWidth / dt.nativeHeight / _imageRatio / paramScaleX / viewRatioX; //Eureka!
    finalScaleX = pt.nativeHeight / dt.nativeWidth / _imageRatio / paramScaleY / viewRatioY; //Eureka x2!
} else {
    finalScaleX = pt.nativeWidth / dt.nativeWidth / _imageRatio / viewRatioX / paramScaleX;
    finalScaleY = pt.nativeHeight / dt.nativeHeight / _imageRatio / viewRatioY / paramScaleY;
}
Hopefully these extracted pieces of code can be helpful to others with similar shader issues.
Good luck!

How to draw paths specified in terms of straight and curved motion

I have information on paths I would like to draw. The information consists of a sequence of straight sections and curves. For straight sections, I have only the length. For curves, I have the radius, direction and angle. Basically, I have a turtle that can move straight or move in a circular arc from the current position (after which moving straight will be in a different direction).
I would like some way to draw these paths with the following conditions:
Minimal (preferably no) trigonometry.
Ability to center on a canvas and scale to fit any arbitrary size.
From what I can tell, GDI+ gives me number 2, Cairo gives me number 1, but neither one makes it particularly easy to get both. I'm open to suggestions of how to make GDI+ or Cairo (preferably pycairo) work, and I'm also open to any other library (preferably C# or Python).
I'm even open to abstract mathematical explanations of how this would be done that I can convert into code.
For 2D motion, the state is [x, y, a], where the angle a is relative to the positive x-axis. Assume an initial state of [0, 0, 0]. Two routines are needed to update the state according to each type of motion. Each segment yields a new state, so the coordinates can be used to configure the canvas accordingly. The routines should be something like:
//by the definition of the state
State followLine(State s0, double d) {
    State s = new State();
    s.x = s0.x + d * cos(s0.a);
    s.y = s0.y + d * sin(s0.a);
    s.a = s0.a;
    return s;
}
State followCircle(State s0, double radius, double arcAngle, boolean clockwise) {
    State s1 = new State(s0);
    //look at the end point on the arc
    if (clockwise) {
        s1.a = s0.a - arcAngle / 2;
    } else {
        s1.a = s0.a + arcAngle / 2;
    }
    //move to the end point of the arc
    State s = followLine(s1, 2 * radius * sin(arcAngle / 2));
    //fix new angle
    if (clockwise) {
        s.a = s0.a - arcAngle;
    } else {
        s.a = s0.a + arcAngle;
    }
    return s;
}
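To also cover the centering and scale-to-fit requirement (my own note, not part of the answer above): run the two routines once over the whole path, record the x and y of every intermediate state (plus a few sampled points along each arc, since an arc can bulge outside the box spanned by its endpoints), and then fit the resulting bounding box to the canvas:

scale   = min(canvasWidth / (maxX - minX), canvasHeight / (maxY - minY))
offsetX = canvasWidth / 2  - scale * (minX + maxX) / 2
offsetY = canvasHeight / 2 - scale * (minY + maxY) / 2

Every drawn coordinate then maps through x' = scale * x + offsetX and y' = scale * y + offsetY, which satisfies the scale-to-fit condition without adding any trigonometry beyond what followLine and followCircle already use.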

How to get unrotated display object width/height of a rotated display object?

If I create a rectangle with 100px width and 100px height and then rotate it, the size of the element's "box" will have increased.
With 45° rotation, the size becomes about 143x143 (from 100x100).
Doing something like cos(angleRad) * currentWidth seems to work for 45° rotation, but for other, bigger angles it doesn't.
At the moment I am doing this:
var currentRotation = object.rotation;
object.rotation = 0;
var normalizedWidth = object.width;
var normalizedHeight = object.height;
object.rotation = currentRotation;
Surely, there must be a better and more efficient way. How should I get the "normalized" width and height of a displayobject, aka the size when it has not been rotated?
The best approach would probably be to use the code posted in the question - i.e. to unrotate the object, check its width, and then re-rotate it. Here's why.
First, simplicity. It's obvious what's being done, and why it works. Anyone coming along later should have no trouble understanding it.
Second, accuracy. Out of curiosity I coded up all three suggestions currently in this thread, and I was not really surprised to find that for an arbitrarily scaled object, they give three slightly different answers. The reason for this, in a nutshell, is that Flash's rendering internals are heavily optimized, and among other things, width and height are not stored internally as floats. They're stored as "twips" (twentieths of a pixel) on the grounds that further accuracy is visually irrelevant.
Anyway, if the three methods give different answers, which is the most accurate? For my money, the most correct answer is what Flash thinks the width of the object is when it's unrotated, which is what the simple method gives us. Also, this method is the only one that always gives answers rounded to the nearest 1/20, which I surmise (though I'm guessing) means it's probably equal to the value being stored internally, as opposed to being a calculated value.
Finally, speed. I assume this will surprise you, but when I coded the three methods up, the simple approach was the fastest by a small margin. (Don't read too much into that - they were all very close, and if you tweak my code, a different method might edge into the lead. The point is they're very comparable.)
You probably expected the simple method to be slower on the grounds that changing an object's rotation would cause lots of other things to be recalculated, incurring overhead. But all that really happens immediately when you change the rotation is that the object's transform matrix gets some new values; Flash doesn't really do much with that matrix until the next time it draws the object on the screen. As for what math occurs when you then read the object's width/height, it's difficult to say. But it's worth noting that whatever math takes place in the simple method is done by the Player's heavily optimized internals, rather than being done in AS3 like the algebraic method.
Anyway I invite you to try out the sample code, and I think you'll find that the simple straightforward method is, at the least, no slower than any other. That plus simplicity makes it the one I'd go with.
Here's the code I used:
// init
var clip:MovieClip = new MovieClip();
clip.graphics.lineStyle( 10 );
clip.graphics.moveTo( 12.345, 37.123 ); // arbitrary
clip.graphics.lineTo( 45.678, 29.456 ); // arbitrary
clip.scaleX = .87; // arbitrary
clip.scaleY = 1.12; // arbitrary
clip.rotation = 47.123; // arbitrary

// run the test
var iterations:int = 1000000;
test( method1, iterations );
test( method2, iterations );
test( method3, iterations );

function test( fcn:Function, iter:int ):void {
    var t0:uint = getTimer();
    for (var i:int=0; i<iter; i++) {
        fcn( clip, i==0 );
    }
    trace(["Elapsed time", getTimer()-t0]);
}

// the "simple" method
function method1( m:MovieClip, traceSize:Boolean ):void {
    var rot:Number = m.rotation;
    m.rotation = 0;
    var w:Number = m.width;
    var h:Number = m.height;
    m.rotation = rot;
    if (traceSize) { trace([ "method 1", w, h ]); }
}

// the "algebraic" method
function method2( m:MovieClip, traceSize:Boolean ):void {
    var r:Number = m.rotation * Math.PI/180;
    var c:Number = Math.abs( Math.cos( r ) );
    var s:Number = Math.abs( Math.sin( r ) );
    var denominator:Number = (c*c - s*s); // an optimization
    var w:Number = (m.width * c - m.height * s) / denominator;
    var h:Number = (m.height * c - m.width * s) / denominator;
    if (traceSize) { trace([ "method 2", w, h ]); }
}

// the "getBounds" method
function method3( m:MovieClip, traceSize:Boolean ):void {
    var r:Rectangle = m.getBounds(m);
    var w:Number = r.width*m.scaleX;
    var h:Number = r.height*m.scaleY;
    if (traceSize) { trace([ "method 3", w, h ]); }
}
And my output:
method 1,37.7,19.75
Elapsed time,1416
method 2,37.74191378925391,19.608455916982187
Elapsed time,1703
method 3,37.7145,19.768000000000004
Elapsed time,1589
Surprising, eh? But there's an important lesson here about Flash development. I hereby christen Fen's Law of Flash Laziness:
Whenever possible, avoid tricky math by getting the renderer to do it for you.
It not only gets you done quicker, in my experience it usually results in a performance win anyway. Happy optimizing!
Here's the algorithmic approach, and its derivation.
First, let's do the opposite problem: Given a rectangle of unrotated width w, unrotated height h, and rotation r, what is the rotated width and height?
wr = abs(sin(r)) * h + abs(cos(r)) * w
hr = abs(sin(r)) * w + abs(cos(r)) * h
Now, try the problem as given: Given a rectangle of rotated width wr, rotated height hr, and rotation r, what is the unrotated width and height?
We need to solve the above equations for h and w. Let c represent abs(cos(r)) and s represent abs(sin(r)). If my rusty algebra skills still work, then the above equations can be solved with:
w = (wr * c - hr * s) / (c^2 - s^2)
h = (hr * c - wr * s) / (c^2 - s^2)
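As a quick sanity check (my own numbers, not from the original answer): take w = 100, h = 50 and r = 30°, so c ≈ 0.866 and s ≈ 0.5. The forward equations give wr ≈ 111.6 and hr ≈ 93.3, and plugging those back into the inverse equations gives (111.6 * 0.866 - 93.3 * 0.5) / (0.75 - 0.25) ≈ 100 and (93.3 * 0.866 - 111.6 * 0.5) / (0.75 - 0.25) ≈ 50, recovering the original size. Note that at exactly 45° the denominator c^2 - s^2 is zero, so this algebraic route breaks down there.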
You should get the bounds of your square in your object's coordinate space (which means no rotations).
e.g.
var b:Sprite = new Sprite();
b.graphics.lineStyle(0.1);
b.graphics.drawRect(0,0,100,100);
b.rotation = 10;
trace('global coordinate bounds: ' + b.getBounds(this));//prints global coordinate bounds: (x=-17.35, y=0, w=115.85, h=115.85);
trace('local coordinate bounds: ' + b.getBounds(b));//prints local coordinate bounds: (x=0, y=0, w=100, h=100)
HTH,
George
Chip's answer in code:
// convert degrees to radians
var r:Number = this.rotation * Math.PI/180;
// cos, c in the equation
var c:Number = Math.abs(Math.cos(r));
// sin, s in the equation
var s:Number = Math.abs(Math.sin(r));
// get the unrotated width
var w:Number = (this.width * c - this.height * s) / (Math.pow(c, 2) - Math.pow(s, 2));
// get the unrotated height
var h:Number = (this.height * c - this.width * s) / (Math.pow(c, 2) - Math.pow(s, 2));