Logic of reflecting light for 3D specular reflections - actionscript-3

I'm trying to add specular reflections to a Stage3D game and it is ALMOST working, but I don't think I have a good enough grasp of the maths happening at the actual reflection stage.
I have an incoming light vector and I have the face normal; I know what to do with the resulting reflected vector (normalize and then dot product with the normalized vector pointing to the player 'camera') but what are the AGAL opcodes to reflect the original light vector off the face? I can't get my head around that. Any help appreciated...

Unlike GLSL with its reflect(), AGAL does not have any kind of helper for doing this. But you can still calculate the reflection vector yourself - even the GLSL reference page for reflect() gives the formula it uses:
For a given incident vector I and surface normal N reflect returns the reflection direction calculated as I - 2.0 * dot(N, I) * N.
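In AGAL that formula is only a few opcodes. A minimal sketch (the register choices are mine for illustration: ft0 holds the incident vector I, ft1 the surface normal N, both assumed already normalized; ft2 and ft3 are scratch):

    dp3 ft2.x, ft1.xyz, ft0.xyz     // ft2.x = dot(N, I)
    add ft2.x, ft2.x, ft2.x         // ft2.x = 2 * dot(N, I)
    mul ft3.xyz, ft1.xyz, ft2.x     // ft3 = 2 * dot(N, I) * N
    sub ft3.xyz, ft0.xyz, ft3.xyz   // ft3 = I - 2 * dot(N, I) * N (reflected)

From there you can nrm the result and dp3 it against the normalized view vector, exactly as you planned.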

Related

AS3: How to intersect vectors at runtime?

Let's say I use the Graphics class at runtime to draw some vector shapes dynamically. For example a square and a circle.
Is there a way to create a new shape at runtime where those 2 vectors shapes overlap?
Those kinds of operations are very common in all vector design programs such as Illustrator, Corel, etc., but I haven't found anything in Adobe's documentation, nor anywhere else, to do it by code.
Although drawing operations on the Graphics class are described in terms of lines, points etc., this is - as far as you're concerned - just telling it what to draw onto a bitmap. There's no way to remove a shape once drawn, short of clear(), which just wipes the whole thing clean.
I don't fully understand why, as the vector data must be retained - there's no loss of quality on scaling after drawing, for example.
If you don't want to get into some hardcore maths (for anything beyond straight lines, you'll need to), there's an answer here which might help if you've ever used PixelBender:
How to calculate intersection between shapes in flash / action script ? (access to shape's segments and nodes?)
Failing that, if it's just cosmetic you could play around with masking shapes (will probably end up quite hacky though) - however, if you actually want to use the intersection to draw or describe a shape you will need to dig out your maths book or look for a good graphics library.
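For the cosmetic/masking route, a minimal sketch (the shapes and sizes are made up for illustration): draw the two shapes, then assign one as the other's mask so only their overlap renders:

    import flash.display.Shape;

    var square:Shape = new Shape();
    square.graphics.beginFill(0xFF0000);
    square.graphics.drawRect(0, 0, 100, 100);
    square.graphics.endFill();

    var circle:Shape = new Shape();
    circle.graphics.beginFill(0x0000FF);
    circle.graphics.drawCircle(80, 80, 60);
    circle.graphics.endFill();

    addChild(circle);
    addChild(square);      // a display object used as a mask is not rendered itself
    circle.mask = square;  // only the part of the circle inside the square shows

Note this only gives you the picture of the intersection, not a new Shape describing its outline.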
Hope this helps

Orthographic projection - What is the process of converting a 3D point to 2D

I'm implementing a simple penalty shootout game using ActionScript 3.0. The view of the game is similar to the view of the old "Sensible World of Soccer". I want to use 3D game logic, with a z dimension, as I think it could give me better collision detection and response. However, I would like to keep the graphics style and view equivalent to the old 2D soccer games'. Hence, I assume that orthographic projection is suitable for this implementation. Although there is plenty of information on the internet regarding orthographic projection, I'm a little bit confused about how to actually apply it in code.
So my questions are:
What is the step-by-step procedure for converting a 3D point (x, y, z) to a 2D point (x', y') in orthographic projection?
Can we avoid using matrices? If yes, what are the equations that associate coordinates x', y' with x, y, z?
Do we have to define a camera position and angle before applying the conversion? In my case, camera will be in a fixed position and angle.
DisplayObjects and their descendants (i.e. MovieClip and Sprite) have a z property you can use to do this without the headaches - they also have rotationX/Y/Z and scaleX/Y/Z properties too!
Using 'z' will adjust the position and scale of an object accordingly (though it will convert vectors to bitmaps). There's no depth sorting, so an object will stay on top of others even if its z co-ord suggests it should be behind them, but for the project you have in mind I can't see this being a problem - and it's pretty easy to fix anyway: keep an array of the objects in the scene, sort it by z-position, and reset the depth index of each (or re-add them to the stage) in sorted order.
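A minimal sketch of that fix, assuming sceneObjects is an Array of DisplayObjects already added as children of container (both names made up):

    // Sort so the most distant object (largest z) gets the lowest depth index.
    sceneObjects.sortOn("z", Array.NUMERIC | Array.DESCENDING);
    for (var i:int = 0; i < sceneObjects.length; i++) {
        container.setChildIndex(sceneObjects[i], i); // index 0 = furthest back
    }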
You can use the perspectiveProjection member of a clip to adjust the FOV, origin etc -
Perspective Tutorial
...but you don't need to get any more sophisticated than that. Certainly you don't need to dabble with matrices for a fixed camera view, even if you wanted to calculate the projection manually as an experiment.
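For completeness, if you did want to do it by hand: with a fixed, axis-aligned camera an orthographic projection is matrix-free - you simply drop (or fold in) the depth axis. A sketch, where tilt is a made-up factor for how much the z coordinate shifts the screen y:

    import flash.geom.Point;

    // Orthographic projection: unlike perspective, there is no divide by depth.
    function project(x:Number, y:Number, z:Number, tilt:Number = 0.5):Point {
        return new Point(x, y - z * tilt);
    }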
Hope this helps

How do I pass barycentric coordinates to an AGAL shader? (AGAL wireframe shader)

I would like to create a wire frame effect using a shader program written in AGAL for Stage3D.
I have been Googling and I understand that I can determine how close a pixel is to the edge of a triangle using barycentric coordinates (BC) passed into the fragment program via the vertex program, then colour it accordingly if it is close enough.
My confusion is in what method I would use to pass this information into the shader program. I have a simple example set up with a cube, 8 vertices and an index buffer to draw triangles between using them.
If I were to place the BCs into the vertex buffer, that wouldn't make sense, as they would need to be different depending on which triangle was being rendered; e.g. Vertex1 might need (1,0,0) when rendered with Vertex2 and Vertex3, but another value when rendered with Vertex5 and Vertex6. Perhaps I am not understanding the method completely.
Do I need to duplicate vertex positions and add the additional data into the vertex buffer, essentially making 3 vertices per triangle and tripling my vertex count?
Do I always give the vertex a (1,0,0), (0,1,0) or (0,0,1) value or is this just an example?
Am I over complicating this and is there an easier way to do wire-frame with shaders and Stage3d?
Hope that fully explains my problems. Answers are much appreciated, thanks!
It all depends on your geometry; this is in fact a graph vertex coloring problem: you need your geometry graph to be 3-colorable. A good starting point is the Wikipedia article.
Just for example, let's assume that the (1, 0, 0) basis vector is red, (0, 1, 0) is green and (0, 0, 1) is blue. It's obvious that if you build your geometry using the following basic element
then you can avoid duplicating vertices, because such a graph is 3-colorable (i.e. each edge, and thus each triangle, has differently colored vertices). You can tile this basic element in any direction, and the graph will remain 3-colorable:
You've stumbled upon the thing that drives me nuts about AGAL/Stage3D. Limitations in the API prevent you from using shared vertices in many circumstances. Wireframe rendering is one example where things break down...but simple flat shading is another example as well.
What you need to do is create three unique vertices for each triangle in your mesh. For each vertex, add an extra param (or design your engine to accept vertex normals and reuse those, since you won't likely be shading your wireframe).
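A sketch of that duplication step, assuming positions is a flat Vector.<Number> of xyz triples and indices is your existing index buffer (both names assumed); each corner of every triangle gets one of the three unit vectors described next:

    var bary:Array = [1,0,0, 0,1,0, 0,0,1];
    var data:Vector.<Number> = new Vector.<Number>();
    for (var i:int = 0; i < indices.length; i++) {
        var p:int = indices[i] * 3;       // 3 floats (x, y, z) per source vertex
        data.push(positions[p], positions[p + 1], positions[p + 2]);
        var b:int = (i % 3) * 3;          // cycle (1,0,0), (0,1,0), (0,0,1)
        data.push(bary[b], bary[b + 1], bary[b + 2]);
    }
    // data is now 6 floats per vertex: va0 = position, va1 = barycentric
    // (the new index buffer is simply 0, 1, 2, ... data.length / 6 - 1)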
Assign the three vertices of each triangle the unit vectors A[1,0,0], B[0,1,0] and C[0,0,1] respectively. This will get you started. Note: the obvious solution (thresholding in the fragment shader and conditionally drawing pixels) produces pretty ugly aliased results. Check out this page for some insight into techniques to anti-alias your fragment-program-rendered wireframes:
http://cgg-journal.com/2008-2/06/index.html
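For reference, the naive thresholding version is only a few fragment-program instructions. The register choices here are assumptions: v0 carries the interpolated barycentrics, fc0.x an edge-width threshold, fc1 the wire colour:

    min ft0.x, v0.x, v0.y           // smallest barycentric component is a
    min ft0.x, ft0.x, v0.z          //   rough distance to the nearest edge
    sub ft0.x, fc0.x, ft0.x         // positive only inside the edge band
    kil ft0.x                       // discard interior pixels (kills if < 0)
    mov oc, fc1                     // paint the surviving edge pixels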
As I mentioned, you need to employ a similar technique (unique vertices for each triangle) if you wish to implement flat shading. Since there is no equivalent to GL_FLAT and no way to make the varying registers return an average, the only way to implement flat shading is for every vertex of a given triangle to calculate the same lighting - which implies that each vertex needs the same vertex normal.

What does a pixel shader actually do?

I'm relatively new to graphics programming, and I've just been reading some books and have been scanning through tutorials, so please pardon me if this seems a silly question.
I've got the basics of DirectX 11 up and running, and now I'm looking to have some fun. So naturally I've been reading heavily into the shader pipeline, and I'm already fascinated. The idea of writing a simple, minuscule piece of code that has to be efficient enough to run maybe tens of thousands of times every 60th of a second without wasting resources has me in a hurry to grasp the concept before continuing on and possibly making a mess of things. What I'm having trouble with is grasping what the pixel shader is actually doing.
Vertex shaders are simple to understand: you organize the vertices of an object in uniform data structures that relate information about it, like position and texture coordinates, and then pass each vertex into the shader to be converted from 3D to 2D by way of transformation matrices. As long as I understand it, I can work out how to code it.
But I don't get pixel shaders. What I do get is that the output of the vertex shader is the input of the pixel shader. So wouldn't that just be handing the pixel shader the 2D coordinates of the polygon's vertices? What I've come to understand is that the pixel shader receives individual pixels and performs calculations on them to determine things like color and lighting. But if that's true, then which pixels? The whole screen, or just the pixels that lie within the transformed 2D polygon?
Or have I misunderstood something entirely?
Vertex shaders are simple to understand: you organize the vertices of an object in uniform data structures that relate information about it, like position and texture coordinates, and then pass each vertex into the shader to be converted from 3D to 2D by way of transformation matrices.
After this, primitives (triangles or multiples of triangles) are generated and clipped (in Direct3D 11, it is actually a little more complicated thanks to transform feedback, geometry shaders, tessellation, you name it... but whatever it is, in the end you have triangles).
Now, fragments are "generated", i.e. a single triangle is divided into little cells with a regular grid, the output attributes of the vertex shader are interpolated according to each grid cell's relative position to the three vertices, and a "task" is set up for each little grid cell. Each of these cells is a "fragment" (if multisampling is used, several fragments may be present for one pixel¹).
Finally, a little program is executed for each of these "tasks"; this is the pixel shader (or fragment shader).
It takes the interpolated vertex attributes, and optionally reads uniform values or textures, and produces one output (it can optionally produce several outputs, too). This output of the pixel shader refers to one fragment, and is then either discarded (for example due to depth test) or blended with the frame buffer.
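In AGAL terms (the shader dialect used elsewhere on this page), the smallest possible fragment program makes that flow concrete - one interpolated input, one output:

    mov oc, v0    // write the interpolated per-vertex colour to the output

Everything else a pixel shader does is elaboration on that: read varyings, constants and textures, compute, write the output.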
Usually, many instances of the same pixel shader run in parallel at the same time. This is because it is more silicon efficient and power efficient to have a GPU run like this. One pixel shader does not know about any of the others running at the same time.
Pixel shaders commonly run in a group (also called "warp" or "wavefront"), and all pixel shaders within one group execute the exact same instruction at the same time (on different data). Again, this allows building more powerful chips that use less energy, at a lower cost.
¹ Note that in this case, the fragment shader still only runs once for every "cell". Multisampling only decides whether or not it stores the calculated value in one of the higher resolution extra "slots" (subsamples) according to the (higher resolution) depth test. For most pixels on the screen, all subsamples are the same. However, on edges, only some subsamples will be filled by close-up geometry whereas some will keep their value from further away "background" geometry. When the multisampled image is resolved (that is, converted to a "normal" image), the graphics card generates a "mix" (in the easiest case, simply the arithmetic mean) of these subsamples, which results in everything except edges coming out the same as usual, and edges being "smoothed".
Your understanding of pixel shaders is correct in that it "receives individual pixels and performs calculations on them to determine things like color and lighting."
The pixels the shader receives are the individual ones calculated during the rasterization of the transformed 2d polygon (the triangle to be specific). So whereas the vertex shader processes the 3 points of the triangle, the pixel shader processes the pixels, one at a time, that "fill in" the triangle.

Vectors for Game Programming

I'm not sure how to use vectors correctly in game programming. I have been reading Advanced Game Design with Flash, which shows how to create a vector with a start point and an end point and how to use that in games; for example, the start point would be used for a character's position and the x and y lengths for velocity. But since I started looking online I have found that vectors are usually just x and y, with no start point or end point, and that a character would be moved by having a position vector, a velocity vector and an acceleration vector. I have started creating my own vector class, and wondered what the reasons for and against each method are. Or is it completely unimportant?
Fundamentally, a vector represents a direction. The classical vector is used in physics to represent a velocity: the vector's direction stands for the heading, and its length for the speed. But in graphics, vectors are also used to represent positions. A point in 2D space noted by x, y is just a point until you want to know in which direction it lies relative to the origin, which is usually the centre of the coordinate system. In 2D graphics we deal with a Cartesian coordinate system whose origin is the top-left corner of the screen, but you can also take the direction of a vector relative to any other point in the space.
That is why you also have vector operations like addition, subtraction, dot product and cross product; all of these help you measure distances and angles between vectors. I would suggest you buy a book on graphics programming - most of them open with an easy-to-grasp primer on vector math. And you don't need to write a vector class in AS 3.0: there is a generic one, flash.geom.Vector3D.
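As a concrete example of the position/velocity/acceleration style (the character instance and the numbers are made up):

    import flash.events.Event;
    import flash.geom.Vector3D;

    var position:Vector3D = new Vector3D(100, 200, 0);
    var velocity:Vector3D = new Vector3D(2, -1, 0);
    var acceleration:Vector3D = new Vector3D(0, 0.5, 0); // e.g. gravity

    function onEnterFrame(e:Event):void {
        velocity.incrementBy(acceleration); // v += a
        position.incrementBy(velocity);     // p += v
        character.x = position.x;           // character: some DisplayObject
        character.y = position.y;
    }

Note that no start point is stored anywhere: velocity and acceleration are pure offsets, and only position is interpreted relative to the origin.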