In Starling, how do you transform Filters to match the target Sprite's rotation & position? - actionscript-3

Let's say your Starling display-list is as follows:
Stage
|___MainApp
|______Canvas (filter's target)
Then, you decide your MainApp should be rotated 90 degrees and offset a bit:
mainApp.rotation = Math.PI * 0.5;
mainApp.x = stage.stageWidth;
But all of a sudden, the filter keeps applying itself to the target (canvas) at the angle it was originally (as if MainApp were still at 0 degrees).
(Notice in the GIF how the Blur's strong horizontal value continues to apply only horizontally, even though the parent object is turned 90 degrees.)
What would need to change to apply the filter to the target object before it gets its parent's transform? That way (I'm assuming) the filter's result would get transformed by the parent objects.
Any guess as to how this could be done?
https://github.com/bigp/StarlingShaderIssue
(PS: the filter I'm actually using is custom-made, but this BlurFilter example shows the same issue I'm having with the custom one. If there's any patching-up to do in the shader code, at least it wouldn't necessarily have to be done on the built-in BlurFilter specifically).

I solved this myself through numerous trial-and-error attempts over the course of several hours.
Since I only needed the shader to run at either 0 or 90 degrees (not actually tweened like the GIF demo shown in the question), I created a shader with two specialized sets of AGAL instructions.
Without going into too much detail, the rotated version basically requires a few extra instructions to flip the x and y fields in the vertex and fragment shaders (either by moving them with mov or by directly calculating the mul or div result into the x or y field).
For example, compare the 0 deg vertex shader...
_vertexShader = [
    "m44 op, va0, vc0",     // 4x4 matrix transform to output space
    "mov posOriginal, va1", // pass texture positions to fragment program
    "mul posScaled, va1, viewportScale", // pass displacement positions (scaled)
].join("\n");
... with the 90 deg vertex shader:
_vertexShader = [
    "m44 op, va0, vc0",     // 4x4 matrix transform to output space
    "mov posOriginal, va1", // pass texture positions to fragment program

    //Calculate the rotated vertex "displacement" UVs
    "mov temp1, va1",
    "mov temp2, va1",
    "mul temp2.y, temp1.x, viewportScale.y", //Flip X to Y, and scale with viewport Y
    "mul temp2.x, temp1.y, viewportScale.x", //Flip Y to X, and scale with viewport X
    "sub temp2.y, 1.0, temp2.y",             //Invert the UV for the Y axis.
    "mov posScaled, temp2",
].join("\n");
You can ignore the special aliases in the AGAL example; they're essentially posOriginal = v0, posScaled = v1 variants, and viewportScale = vc4 constants. I then do a string-replace to change them back to their respective registers & fields.
Just a human-readable trick I use to avoid going insane. \☻/
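For illustration only (the real code is ActionScript 3; this is just a TypeScript sketch of the idea, using the alias mapping described above and a made-up helper name), the string-replace step could look roughly like this:
// Swap the human-readable aliases back to their raw AGAL registers before compiling.
const ALIASES: Record<string, string> = {
    posOriginal: "v0",    // varying passed on to the fragment shader
    posScaled: "v1",      // scaled/rotated displacement UVs
    viewportScale: "vc4", // constants uploaded from the AS3 side
};

function resolveAliases(agal: string): string {
    let out = agal;
    for (const [alias, register] of Object.entries(ALIASES)) {
        out = out.split(alias).join(register); // replace every occurrence of the alias
    }
    return out;
}

// Example: "mul posScaled, va1, viewportScale" -> "mul v1, va1, vc4"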
The part that I struggled with the most was calculating the correct scale for the UVs (with proper detection of Stage / Viewport resizes and render-texture size changes).
Eventually, this is what I came up with in the AS3 code:
var pt:Texture = _passTexture,
    dt:RenderTexture = _displacement.texture,
    notReady:Boolean = pt == null,
    star:Starling = Starling.current;

var finalScaleX:Number, viewRatioX:Number = star.viewPort.width / star.stage.stageWidth;
var finalScaleY:Number, viewRatioY:Number = star.viewPort.height / star.stage.stageHeight;

if (notReady) {
    finalScaleX = finalScaleY = 1.0;
} else if (isRotated) {
    //NOTE: Notice how the native width is divided by the height, instead of the same side. Weird, but it works!
    finalScaleY = pt.nativeWidth / dt.nativeHeight / _imageRatio / paramScaleX / viewRatioX; //Eureka!
    finalScaleX = pt.nativeHeight / dt.nativeWidth / _imageRatio / paramScaleY / viewRatioY; //Eureka x2!
} else {
    finalScaleX = pt.nativeWidth / dt.nativeWidth / _imageRatio / viewRatioX / paramScaleX;
    finalScaleY = pt.nativeHeight / dt.nativeHeight / _imageRatio / viewRatioY / paramScaleY;
}
Hopefully these extracted pieces of code can be helpful to others with similar shader issues.
Good luck!

Related

LWJGL Picking - Select Certain Block When Hovering ( gluUnProject() )

This video shows my current situation, and I can't find any answers to it online.
https://www.youtube.com/watch?v=O8Mh-1Emoc8&feature=youtu.be
My Code:
public Vector3D pickBlock() {
    glDisable(GL_TEXTURE);

    IntBuffer viewport = BufferUtils.createIntBuffer(16);
    FloatBuffer modelview = BufferUtils.createFloatBuffer(16);
    FloatBuffer projection = BufferUtils.createFloatBuffer(16);
    FloatBuffer winZ = BufferUtils.createFloatBuffer(1);
    float winX, winY;
    FloatBuffer position = BufferUtils.createFloatBuffer(3);

    glGetFloat(GL_MODELVIEW_MATRIX, modelview);
    glGetFloat(GL_PROJECTION_MATRIX, projection);
    glGetInteger(GL_VIEWPORT, viewport);

    winX = (float) Display.getWidth() / 2;
    winY = (float) viewport.get(3) - (float) Display.getHeight() / 2;

    glReadPixels(Display.getWidth() / 2, (int) winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, winZ);
    gluUnProject(winX, winY, winZ.get(), modelview, projection, viewport, position);

    glEnable(GL_TEXTURE);

    return new Vector3D(position.get(0) / 2 + 0.5f, position.get(1) / 2 + 0.5f, position.get(2) / 2 + 0.5f);
}
It returns "/ 2 + 0.5f" because of the offsets I have for the blocks (if I removed the 0.5f, the offset would be at the center instead of the corner).
It seems to me that the error, based on the video, comes from when you are facing in the positive Z direction (or whatever your back direction is). My guess is that you aren't taking the facing direction into account, as I see in your code that you are just adding a constant 0.5F to the position of your cursor.
Therefore, when you are facing backwards, it adds 0.5, which makes it end up behind the wall (since back is negative Z). One simple check would be whether the Z component of your forward vector is positive or negative, and deciding the factor added to the cursor based on that, then doing the same for the X.
Depending on how you implemented your camera (i.e. whether you used Euler angles (rx, ry, rz) or quaternions / forward vectors), the way you would do that check varies; feel free to ask me for examples based on your system if you need.
Hope this helped!
PS: if you're using angles, you can either check the range of the y-axis rotation value to determine which direction you are facing (and thus whether to add or subtract), OR you can calculate the forward vector from your angles and then check the sign of its components.
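Purely as an illustration of that check (not the asker's code; a TypeScript-style sketch with made-up names), assuming you already have the camera's normalized forward vector:
// Sketch: pick the half-block offset from the sign of the forward vector, so the
// selected block lands on the correct side of the face you are looking at.
// "forward" is assumed to be the camera's normalized view direction.
function blockOffset(forward: { x: number; z: number }): { x: number; z: number } {
    return {
        x: forward.x >= 0 ? 0.5 : -0.5, // facing +X: push toward +X, otherwise toward -X
        z: forward.z >= 0 ? 0.5 : -0.5, // same test for the Z axis
    };
}

// Hypothetical usage with the pickBlock() result:
// const off = blockOffset(cameraForward);
// pickedX = position.get(0) / 2 + off.x;
// pickedZ = position.get(2) / 2 + off.z;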

Mimick photoshop/painter smooth draw on HTML5 canvas?

As many people know, HTML5 Canvas lineTo() is going to give you a very jaggy line at each corner. At this point, a preferable solution would be to implement quadraticCurveTo(), which is a great way to generate smooth drawing. However, I want to create a smooth, yet accurate, drawing experience on an HTML5 canvas. The quadratic-curve approach works well in smoothing out the drawing, but it does not go through all the sample points. In other words, when I try to draw a quick curve using quadratic curves, sometimes the curve appears to be "corrected" by the application. Instead of following my drawing path, some of the segments are curved away from their original path to follow a quadratic curve.
My application is intended for professional drawing on an HTML5 canvas, so it is crucial for the drawing to be both smooth and precise. I am not sure if I am asking for the impossible by trying to put the HTML5 canvas on the same level as Photoshop or other painting applications (SAI, Painter X, etc.).
Thanks
What you want is a cardinal spline, as cardinal splines go through the actual points you draw.
Note: to get a professional result you will also need to implement a moving average for short thresholds, while using cardinal splines for larger thresholds, and use knee values to break the lines at sharp corners so you don't smooth the entire line. I won't be addressing moving average or knees here (nor taper), as these are outside the scope, but I'll show a way to use a cardinal spline.
A side note as well: the effect that the app seems to modify the line is unavoidable, as the smoothing happens after the fact. There exist algorithms that smooth while you draw, but they do not preserve knee values and the line seems to "wobble" while you draw. It's a matter of preference, I guess.
Here is a fiddle to demonstrate the following:
ONLINE DEMO
First some prerequisites (I am using my easyCanvas library to set up the environment in the demo as it saves me a lot of work, but this is not a requirement for this solution to work):
I recommend you draw the new stroke to a separate canvas that sits on top of the main one.
When the stroke is finished (mouse up), pass it through the smoother and store it in the stroke stack.
Then draw the smoothed line to the main canvas (a rough sketch of this flow follows below).
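As a rough, hypothetical sketch of that flow (written in TypeScript here; the element ids, the drawPolyline helper and the strokes array are placeholders for your own setup, and smoothCurve is the function shown further down):
// Two stacked canvases: "draft" on top for the stroke being drawn, "main" below for committed strokes.
const mainCtx = (document.getElementById("main") as HTMLCanvasElement).getContext("2d")!;
const draftCanvas = document.getElementById("draft") as HTMLCanvasElement;
const draftCtx = draftCanvas.getContext("2d")!;

declare function smoothCurve(pts: number[], ts?: number, nos?: number): number[]; // defined below

let stroke: number[] = [];
const strokes: number[][] = [];

function drawPolyline(ctx: CanvasRenderingContext2D, pts: number[]): void {
    ctx.beginPath();
    ctx.moveTo(pts[0], pts[1]);
    for (let i = 2; i < pts.length; i += 2) ctx.lineTo(pts[i], pts[i + 1]);
    ctx.stroke();
}

draftCanvas.onmousedown = (e) => { stroke = [e.offsetX, e.offsetY]; };

draftCanvas.onmousemove = (e) => {
    if (stroke.length === 0) return;
    stroke.push(e.offsetX, e.offsetY);
    draftCtx.clearRect(0, 0, draftCanvas.width, draftCanvas.height);
    drawPolyline(draftCtx, stroke); // raw preview while drawing
};

draftCanvas.onmouseup = () => {
    const smoothed = smoothCurve(stroke, 0.5, 16); // smooth once, on mouse-up
    strokes.push(smoothed);
    draftCtx.clearRect(0, 0, draftCanvas.width, draftCanvas.height);
    drawPolyline(mainCtx, smoothed); // commit the smoothed stroke to the main canvas
    stroke = [];
};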
When you have the points in an array ordered by X / Y (i.e. [x1, y1, x2, y2, ... xn, yn]), you can use this function to smooth it:
The tension value (ts, default 0.5) is what smooths the curve. The higher the number, the rounder the curve becomes. You can go outside the normal interval [0, 1] to make curls.
The segments value (nos, or number-of-segments) is the resolution between each point. In most cases you will probably not need it higher than 9-10, but on slower computers, or where you draw fast, higher values are needed.
The function (optimized):
/// cardinal spline by Ken Fyrstenberg, CC-attribute
function smoothCurve(pts, ts, nos) {

    // use input value if provided, or use a default value
    ts = (typeof ts === 'undefined') ? 0.5 : ts;
    nos = (typeof nos === 'undefined') ? 16 : nos;

    var _pts = [], res = [],      // clone array
        x, y,                     // our x,y coords
        t1x, t2x, t1y, t2y,       // tension vectors
        c1, c2, c3, c4,           // cardinal points
        st, st2, st3, st23, st32, // steps
        t, i, r = 0,
        len = pts.length,
        pt1, pt2, pt3, pt4;

    _pts.push(pts[0]);            // copy 1. point and insert at beginning
    _pts.push(pts[1]);
    _pts = _pts.concat(pts);
    _pts.push(pts[len - 2]);      // copy last point and append
    _pts.push(pts[len - 1]);

    for (i = 2; i < len; i += 2) {

        pt1 = _pts[i];
        pt2 = _pts[i + 1];
        pt3 = _pts[i + 2];
        pt4 = _pts[i + 3];

        t1x = (pt3 - _pts[i - 2]) * ts;
        t2x = (_pts[i + 4] - pt1) * ts;
        t1y = (pt4 - _pts[i - 1]) * ts;
        t2y = (_pts[i + 5] - pt2) * ts;

        for (t = 0; t <= nos; t++) {

            // pre-calc steps
            st = t / nos;
            st2 = st * st;
            st3 = st2 * st;
            st23 = st3 * 2;
            st32 = st2 * 3;

            // calc cardinals
            c1 = st23 - st32 + 1;
            c2 = st32 - st23;
            c3 = st3 - 2 * st2 + st;
            c4 = st3 - st2;

            res.push(c1 * pt1 + c2 * pt3 + c3 * t1x + c4 * t2x);
            res.push(c1 * pt2 + c2 * pt4 + c3 * t1y + c4 * t2y);

        } //for t
    } //for i

    return res;
}
Then simply call it from the mouse-up event after the points have been stored:
stroke = smoothCurve(stroke, 0.5, 16);
strokes.push(stroke);
Short comment on knee values:
A knee value in this context is where the angle between points (as part of a line segment) in the line is greater than a certain threshold (typically between 45 and 60 degrees). When a knee occurs, the line is broken into a new line, so that each line used consists only of points whose angle between them stays below the threshold (the small curls you see in the demo are the result of not using knees).
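Purely as an illustration (this is not part of the original answer; TypeScript, with a made-up threshold), a knee split could look roughly like this, breaking the flat point array into separate runs wherever the turn between consecutive segments exceeds the threshold:
// Split a flat [x0, y0, x1, y1, ...] point array into runs at the knees, so each
// run can be smoothed independently and sharp corners are preserved.
function splitAtKnees(pts: number[], kneeDeg = 50): number[][] {
    const kneeRad = (kneeDeg * Math.PI) / 180;
    const runs: number[][] = [];
    let run: number[] = pts.slice(0, 4); // the first two points start the first run
    for (let i = 4; i < pts.length; i += 2) {
        const a1 = Math.atan2(pts[i - 1] - pts[i - 3], pts[i - 2] - pts[i - 4]); // previous segment angle
        const a2 = Math.atan2(pts[i + 1] - pts[i - 1], pts[i] - pts[i - 2]);     // current segment angle
        let turn = Math.abs(a2 - a1);
        if (turn > Math.PI) turn = 2 * Math.PI - turn; // shortest angular difference
        if (turn > kneeRad) {
            runs.push(run);                 // close the run at the knee...
            run = [pts[i - 2], pts[i - 1]]; // ...and start a new one from the knee point
        }
        run.push(pts[i], pts[i + 1]);
    }
    runs.push(run);
    return runs; // smooth each run separately, then draw them back to back
}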
Short comment on moving average:
Moving average is typically used for statistical purposes, but it is very useful for drawing applications as well. When you have a cluster of many points with a short distance between them, splines don't work very well, so here you can use a moving average to smooth the points.
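Again just an illustrative TypeScript sketch (not from the original answer): a simple moving average over the flat [x0, y0, x1, y1, ...] point array, with the window size as a tunable parameter:
// Average each point with its neighbours inside a centered window.
function movingAverage(pts: number[], windowSize = 5): number[] {
    const half = Math.floor(windowSize / 2);
    const out: number[] = [];
    for (let i = 0; i < pts.length; i += 2) {
        let sx = 0, sy = 0, n = 0;
        // sum the points inside the window centered on the current point
        for (let j = i - half * 2; j <= i + half * 2; j += 2) {
            if (j < 0 || j >= pts.length) continue;
            sx += pts[j];
            sy += pts[j + 1];
            n++;
        }
        out.push(sx / n, sy / n);
    }
    return out;
}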
There are also point-reduction algorithms that can be used, such as Ramer–Douglas–Peucker, but those are more useful for storage purposes, to reduce the amount of data.

GLSL Vertex Shader causes either flashing colors or all red

I'm writing my first vertex shader for a (here it comes) homework assignment and can't get it to function properly.
I need to implement a vertex shader (and only a vertex shader) that completely mimics the fixed-function pipeline vertex processing in OpenGL, and use the FFP fragment shader (so write nothing for a FS). I am aware of the built-in uniform variables, and I'm using them to calculate a vertex's final color based on the OpenGL lighting equation. I've renamed some of the values for readability's sake (normal and lightVec are normalized):
//given as part of the assignment, not modifable
vec4 position = gl_ModelViewMatrix * gl_Vertex;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
//my (partial) code below here
lightVec = (lightPos - position).xyz;
distance = length(lightVec);
vec3 normal = gl_NormalMatrix * gl_Normal;
//I've stored a lot of the values in variables I defined locally
//to make the code easier to read.
//eg - constAtten = gl_LightSource[].constantAttenuation
atten = 1/max((constAtten + distance*linearAtten + distance*distance*quadAtten),1);
ambient = ambInt * matAmbInt * atten;
diffuse = difCol * matDifInt * max(dot(normal, lightVec),0.0) * atten;
specular = specInt * matSpecInt * atten * pow(max(dot(normal, (gl_LightSource[i].halfVector).xyz),0.0), gl_FrontMaterial.shininess);
I'm summing ambient, diffuse, and specular for each light in the scene and storing it as "sum", followed by:
gl_FrontColor = gl_FrontMaterial.emission + sum;
gl_FrontColor.a = 1.0;
The result is crazy flashing colors every time I move the scene's camera. This is OpenGL v1.2.
Edit: Link to picture
http://i305.photobucket.com/albums/nn208/wolverine1190/shadercomp_zpsc546ac31.png
Whenever I move the camera, the colors on the left change. Could that possibly indicate an incorrectly calculated normal, or possibly the use of camera coordinates somewhere I shouldn't have?
=================================================================
Resolved
I've fixed the issue. I'm not exactly sure why it works now, but it does. I scrapped everything and rewrote the code from scratch with the same approach and calculations. What I did differently is:
1) I set values with constructors instead of plain assignment, e.g. instead of vec3 test = someOtherVec3, I did vec3 test = vec3(someOtherVec3).
2) When normalizing, I assigned the normalized result back to the variable instead of just calling normalize(), e.g. instead of normalize(normal), I did normal = normalize(normal);
3) I added to gl_FrontColor directly at each step instead of storing intermediate values in sum and adding sum to gl_FrontColor at the end.
Other than that, everything stayed the same. I can't say for sure why that fixed things, so if someone does know please comment with an explanation.
From your description I wonder if your colors are overflowing. What happens if you try:
gl_FrontColor = clamp(gl_FrontMaterial.emission + sum, 0.0, 1.0);

How can I determine whether it's faster to face an object rotating clockwise or counter clockwise?

I've been trying this to no avail for some days now, but basically I have some creatures and the player on the screen. What I want to happen is for the enemies to turn to face the player at a variable speed, rather than 'lock' into position and face the player immediately.
What I am trying to do is work out whether it is faster for a given enemy to rotate clockwise or counter clockwise to face the player, but it's proving to be beyond my capabilities with trigonometry.
Example:
x in these figures represents the 'shorter' path and the direction I want to rotate in each situation.
What is the simplest way to work out either 'clockwise' or 'counter-clockwise' in this situation, using any of the following:
The direction the enemy is facing.
The angle between the enemy to the player, and player to the enemy.
There is no need to calculate angles or use trigonometric functions here, assuming you have a direction vector.
var pos_x, pos_y, dir_x, dir_y, target_x, target_y;

if ((pos_x - target_x) * dir_y > (pos_y - target_y) * dir_x) {
    // Target lies clockwise
} else {
    // Target lies anticlockwise
}
This simply draws an imaginary line through the object in the direction it's facing, and figures out which side of that line the target is on. This is basic linear algebra, so you should not need to use sin() or cos() etc. anywhere in this function, unless you need to calculate the direction vector from the angle.
This also uses a right-handed coordinate system, it will be backwards if you are using a left-handed coordinate system -- the formulas will be the same, but "clockwise" and "anticlockwise" will be swapped.
Deeper explanation: The function computes the outer product of the forward vector (dir_x, dir_y) and the vector to the target, (target_x - pos_x, target_y - pos_y). The resulting outer product is a pseudoscalar which is positive or negative, depending on whether the target is clockwise or anticlockwise.
Crash course on vectors
A vector is a magnitude and direction, e.g., 3 km north, or 6 centimeters down. You can represent a vector using cartesian coordinates (x, y), or you can represent it using polar coordinates (r,θ). Both representations give you the same vectors, but they use different numbers and different formulas. In general, you should stick with cartesian coordinates instead of polar coordinates. If you're writing a game, polar coordinates suck royally — they litter your code with sin() and cos() everywhere.
The code has three vectors in it:
The vector (pos_x, pos_y) is the position of the object, relative to the origin.
The vector (target_x, target_y) is the position of the target, relative to the origin.
The vector (dir_x, dir_y) is the direction that the object is facing.
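If you only have a facing angle rather than a direction vector, you can derive the vector from the angle once and then apply the same sign test. A small illustrative sketch (TypeScript, not from the original answer; the function name is made up):
// Derive the direction vector from a facing angle (in radians), then apply the same
// outer-product sign test as the snippet above.
function shouldTurnClockwise(posX: number, posY: number, angle: number,
                             targetX: number, targetY: number): boolean {
    const dirX = Math.cos(angle);
    const dirY = Math.sin(angle);
    // The sign of the pseudoscalar tells which side of the facing line the target is on.
    // Note: as mentioned above, "clockwise" and "anticlockwise" swap in a left-handed
    // (screen-style) coordinate system.
    return (posX - targetX) * dirY > (posY - targetY) * dirX;
}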
const CLOCKWISE:int = 0;
const COUNTER_CLOCKWISE:int = 1;
const PI2:Number = Math.PI * 2;

function determineSmallestAngle(from:Sprite, to:Sprite):int
{
    var a1:Number = Math.atan2(to.y - from.y, to.x - from.x);
    var a2:Number = from.rotation * Math.PI / 180;

    a2 -= Math.floor(a2 / PI2) * PI2;
    if (a2 > Math.PI) a2 -= PI2;

    a2 -= a1;
    if (a2 > Math.PI) a2 -= PI2;
    if (a2 < -1 * Math.PI) a2 += PI2;

    if (a2 > 0) return CLOCKWISE;
    return COUNTER_CLOCKWISE;
}

How to draw paths specified in terms of straight and curved motion

I have information on paths I would like to draw. The information consists of a sequence of straight sections and curves. For straight sections, I have only the length. For curves, I have the radius, direction and angle. Basically, I have a turtle that can move straight or move in a circular arc from the current position (after which moving straight will be in a different direction).
I would like some way to draw these paths with the following conditions:
Minimal (preferably no) trigonometry.
Ability to center on a canvas and scale to fit any arbitrary size.
From what I can tell, GDI+ gives me number 2, Cairo gives me number 1, but neither one makes it particularly easy to get both. I'm open to suggestions of how to make GDI+ or Cairo (preferably pycairo) work, and I'm also open to any other library (preferably C# or Python).
I'm even open to abstract mathematical explanations of how this would be done that I can convert into code.
For 2D motion, the state is [x, y, a], where the angle a is measured relative to the positive x-axis. Assume an initial state of [0, 0, 0]. Two routines are needed to update the state, one for each type of motion. Each path segment yields a new state, so the resulting coordinates can be used to configure the canvas accordingly. The routines should look something like this:
//by the definition of the state
State followLine(State s0, double d) {
    State s = new State();
    s.x = s0.x + d * cos(s0.a);
    s.y = s0.y + d * sin(s0.a);
    s.a = s0.a;
    return s;
}
State followCircle(State s0, double radius, double arcAngle, boolean clockwise) {
    State s1 = new State(s0);

    //look at the end point on the arc
    if (clockwise) {
        s1.a = s0.a - arcAngle / 2;
    } else {
        s1.a = s0.a + arcAngle / 2;
    }

    //move to the end point of the arc
    State s = followLine(s1, 2 * radius * sin(arcAngle / 2));

    //fix new angle
    if (clockwise) {
        s.a = s0.a - arcAngle;
    } else {
        s.a = s0.a + arcAngle;
    }

    return s;
}
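As a rough illustration of how these routines chain together, and how the resulting coordinates could be scaled and centered on a canvas (a TypeScript-flavoured sketch of the pseudocode above; the example path, canvas size and helper names are made up, and arc bulges are ignored when computing the bounding box):
interface State { x: number; y: number; a: number; }

// Ported directly from the pseudocode above.
function followLine(s0: State, d: number): State {
    return { x: s0.x + d * Math.cos(s0.a), y: s0.y + d * Math.sin(s0.a), a: s0.a };
}

function followCircle(s0: State, radius: number, arcAngle: number, clockwise: boolean): State {
    const sign = clockwise ? -1 : 1;
    // Aim at the end point of the arc, move along the chord, then fix the final heading.
    const mid: State = { x: s0.x, y: s0.y, a: s0.a + sign * arcAngle / 2 };
    const end = followLine(mid, 2 * radius * Math.sin(arcAngle / 2));
    end.a = s0.a + sign * arcAngle;
    return end;
}

// Hypothetical path: straight 10, quarter circle of radius 5 to the left, straight 4.
let state: State = { x: 0, y: 0, a: 0 };
const points: State[] = [state];
state = followLine(state, 10);                      points.push(state);
state = followCircle(state, 5, Math.PI / 2, false); points.push(state);
state = followLine(state, 4);                       points.push(state);

// Uniform scale + offset that centers the visited points in a canvasWidth x canvasHeight area.
function fitTransform(pts: State[], canvasWidth: number, canvasHeight: number) {
    const xs = pts.map(p => p.x), ys = pts.map(p => p.y);
    const minX = Math.min(...xs), maxX = Math.max(...xs);
    const minY = Math.min(...ys), maxY = Math.max(...ys);
    const scale = Math.min(canvasWidth / (maxX - minX || 1), canvasHeight / (maxY - minY || 1));
    return {
        scale,
        offsetX: (canvasWidth - (maxX - minX) * scale) / 2 - minX * scale,
        offsetY: (canvasHeight - (maxY - minY) * scale) / 2 - minY * scale,
    };
}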