Stage3D AGAL Diffuse Light Shader, Dynamic Object Questions - actionscript-3

There are quite a few ActionScript Stage3D tutorial examples 'out there'. I am thankful to all those who attempt to provide us newbies with working code. That said, I don't believe I have yet found an example that correctly handles Lambertian reflectance as outlined by the Wikipedia entry, though I hesitate to say so because, as a newbie, perhaps I have simply failed to understand the implementations.
Here's what I think is the basic requirement of any implementation -- that it be able to compare the orientation of the light source [I am purposely limiting this discussion to the simpler case of a 'directional light' mimicking the Sun, rather than a 'spot light'.] to the orientation of the normal to the face to be illuminated.
And here is what I think is the heart of the problem I am seeing -- that the computation of the normal of the face, in almost every case, is being performed when the geometry of the object is being created. So, the value of the normal being passed to the shader is expressed in terms of the local object space.
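For reference, the quantity I am trying to produce is just the basic Lambertian term from the Wikipedia entry, and as I read it, the unit light direction and the unit surface normal have to be expressed in the same coordinate space (e.g. world space) before the dot product means anything:
$$I_D = \max(0,\ \hat{L} \cdot \hat{N})\ C\ I_L$$
where $\hat{L}$ is the unit vector toward the light, $\hat{N}$ is the unit surface normal, $C$ is the surface color and $I_L$ is the intensity of the incoming light.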
Now I know that the way this wonderful forum works, you would prefer that I just give some sample code, so someone could identify a specific mistake in either the setup code used on the CPU or the shader code used on the GPU. Because I can find examples of this problem using many different frameworks, I think what I [and I imagine many others] need is not a specific coded solution, but some specific clarification of what is actually required to get a photo-realistic rendering of an object in the simple, base case of a fixed camera viewpoint, a non-moving light source, and ignoring considerations of specularity, etc.
So, when performing the dot product of the two vectors:
A. Should the Vector3D value representing the normal to the triangular face to be illuminated be calculated using the object space values of the three vertices or the world space values after transformation?
B. If world space values are required, as I believe, should the dynamically-calculated normal workload be performed each render cycle on the CPU or the GPU?
Thank you.

Terry, I ran into the same problem recently; I suspect you have solved it or moved on by now, but I thought I would answer the question anyway.
I chose to transform the normal in the vertex shader.
var vertexShader:Array = [
    "m44 vt0, va0, vc0",    // transform vertex positions (va0) by the combined world and camera matrix (vc0);
                            // this result has the camera baked into it, which is not good when calculating light
    "mov op, vt0",          // move the transformed vertex position (vt0) into the output position (op)
    "add v0, va1, vc12.xy", // add the UV coordinates (va1) and the animated offset (vc12, may be 0 for non-animated), and put the result in v0, which holds the UV offset
    "mov v1, va3",          // pass texture color and brightness (va3) to the fragment shader via v1
    "m44 v2, va2, vc4",     // transform the vertex normal (va2) by vc4 (set up below) and send it to the fragment shader
    "m44 v3, va0, vc8"      // the vertex position transformed without the camera data (vc8); works for the default and the translated cube, the rotated cube is still broken
];
In the fragment shader:
// compute and normalize the light direction
"sub ft1, v3, fc0",  // subtract the light position (fc0) from the transformed vertex position (v3)
"nrm ft1.xyz, ft1",  // normalize the light direction (ft1)
// non-flat shading
"dp3 ft2, ft1, v2",  // dot the transformed normal with the light direction
"sat ft2, ft2",      // clamp the dot product between 0 and 1, put the result in ft2
Last but not least, the data you pass in! It took quite a bit of experimenting for me to find the magic combination.
// mvp is the camera data mixed with each object world matrix
_context.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 0, mvp, true); // aka vc0
// $model is the model to be drawn
var invmat:Matrix3D = $model.modelMatrix.clone();
invmat.transpose();
_context.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 4, invmat, true); // aka vc4
var wsmat:Matrix3D = $model.worldSpaceMatrix.clone();
_context.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 8, wsmat, true); // aka vc8
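One thing not shown above: the fragment shader reads the light position from fc0, so that constant has to be uploaded too. A minimal sketch, assuming the light position lives in hypothetical lightX/lightY/lightZ variables:
// the light position used by the fragment shader (lightX/lightY/lightZ are placeholders)
_context.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT, 0,
    Vector.<Number>([lightX, lightY, lightZ, 1])); // aka fc0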
I hope this helps someone in the future.

Related

How to animate textures in a 3d model?

I wish to have an animated 3d texture in my LibGDX code but I am struggling to find out how to do it.
I assume how this "should" be done is either:
a) Directly accessing and modifying the texture on the model. (via a pixmap? ByteBuffer?)
or
b) Prerendering a big image containing all the frames (say, 20) and then moving the UV co-ordinates to create the illusion of the animation. (akin to ImageStrips in 2d/webdesign).
I did work out how I could completely replace the material each time, but that seems a much worse way of doing it. So if anyone could show the commands I need to do either a) or b) (or a similar optimal method) I would be grateful.
Maths I am fine with. The intricacies of OpenGLES or GDX I am not :)
(The solution should at least work for HTML/Android compiles, ideally everything.)
Since the latest release it is very easy to play a 2d animation on a 3d surface. First make sure to get familiar with the 2d animation concept, as explained here: https://github.com/libgdx/libgdx/wiki/2D-Animation. Then, instead of using a SpriteBatch, you can use the TextureRegion (which Animation#getKeyFrame returns) to set the material of the surface, as shown here: https://github.com/libgdx/libgdx/blob/master/tests/gdx-tests/src/com/badlogic/gdx/tests/g3d/TextureRegion3DTest.java. So basically, in your render method you would have:
attribute.set(animation.getKeyFrame(stateTime, true));
Or if you want a more generic approach:
instance.getMaterial("<name of material>").get(TextureAttribute.class, TextureAttribute.Diffuse).set(animation.getKeyFrame(stateTime, true));
Or, if there's only one material in the ModelInstance:
instance.materials.get(0).get(TextureAttribute.class, TextureAttribute.Diffuse).set(animation.getKeyFrame(stateTime, true));
If you have the memory for it I would definitely choose b); it is easier on the processor, since you would only be changing a uniform's value. However, due to the preprocessing it might take some time to open the application.
Get your uniform variable where you compile your shaders; animationPos should be global.
GLint animationPos = glGetUniformLocation(shaderProgram, "nameoftheuniform");
Your main loop should pass the animationPos value to the shader:
glUniform1i(animationPos, currentAnimationIndex);
Add this to your fragment shader variables:
uniform int animationPos;
Fragment shader main:
// assumes: uniform sampler2D yourTexture; varying vec2 texCoord; (passed from the vertex shader)
float texCoordY = texCoord.y;
float texCoordX = texCoord.x / 20.0;             // divide by 20.0 because the strip holds 20 frames; using texCoord.x directly would sample across the whole strip
float textureIndex = float(animationPos) / 20.0; // offset to the start of the current frame
gl_FragColor = texture2D(yourTexture, vec2(textureIndex + texCoordX, texCoordY));
The code above assumes you laid your frames out along the x direction; you could also lay them out as a grid, in which case you need to change the texCoord calculation. It also assumes 20 frames.
Option a) is heavier on the processor, and since you would be changing the texture every time it also uses the bus a bit more, but it is easier on memory. The question is more of a design decision, but I guess 20 images can be handled, so go with option b).
Edit: Added code.

Primitives and sprites Z index in Cocos2D-x 3.0 is not consistent?

I have two layers. Each layer has a primitive drawing in it with OpenGL like this:
void Layer1::drawPolygon()
{
    glLineWidth(1);
    DrawPrimitives::setDrawColor4B(255, 255, 255, 255);
    DrawPrimitives::setPointSize(1);

    // anti-aliased
    glEnable(GL_LINE_SMOOTH);

    // filled poly
    glLineWidth(1);
    Point filledVertices[] = { Point(10,120), Point(50,120), Point(50,170), Point(25,200), Point(10,170) };
    DrawPrimitives::drawSolidPoly(filledVertices, 5, Color4F(0.5f, 0.5f, 1, 1));
}
When I addChild these layers to a scene and set their Z orders to 1 and 2, I see that I can bring one primitive on top of the other and vice versa when I exchange the Z order values. The strange things start when I addChild a sprite into one of these layers: the sprite ends up on top of the primitives, and not only those of its own layer. Even if its layer has the smaller Z order, its sprite is still drawn on top of the other layer's primitive, while its primitive stays below the other primitive shape, as expected. Is this OK? How should I understand this? What if I want to draw primitives on top of all sprites?
EDIT:
I could manipulate their depth order, but not their drawing order, with the following:
CCDirector::getInstance()->setDepthTest(true);
myLayer->setVertexZ(-1);
But I don't understand why sprites in a layer with a smaller Z order are drawn later than the primitives of the layer with a bigger Z order. In other words, it seems that all the primitives from all the layers are drawn first, according to their order, and then the same is done for the sprites.
Due to the new multithreaded renderer in cocos2d-x 3.0, drawing with primitives requires a different approach. Take a look at my reply in this thread:
https://stackoverflow.com/a/22724319/1468700
I believe there is a bug in cocos2d-x V3 beta 2 that makes primitive drawing always appear below all layers.
It is fixed (I understand) in V3.0 RC
This is incorrect - there is no bug (I was misled by other posts - my apologies).
See the post below for a link explaining what needs to happen to get primitives to draw in the 'right' z-order.
The summary is that all drawing operations are added to a queue in the game loop and the queue is then processed, so you need to add your primitive drawing to that queue rather than drawing immediately, as in the sketch below.
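A minimal sketch of what that looks like, assuming a Node subclass like the Layer1 from the question with a cocos2d::CustomCommand member named _customCommand (the exact draw() override signature differs slightly between the 3.0 pre-releases and later 3.x versions):
void Layer1::draw(cocos2d::Renderer *renderer, const cocos2d::Mat4 &transform, uint32_t flags)
{
    _customCommand.init(_globalZOrder);                        // respect this node's global Z order
    _customCommand.func = CC_CALLBACK_0(Layer1::onDraw, this); // defer the actual GL calls
    renderer->addCommand(&_customCommand);                     // queue the command; the renderer executes it later, in order
}

void Layer1::onDraw()
{
    // the immediate-mode primitive calls move here unchanged
    cocos2d::DrawPrimitives::setDrawColor4B(255, 255, 255, 255);
    cocos2d::Point verts[] = { cocos2d::Point(10, 120), cocos2d::Point(50, 120), cocos2d::Point(50, 170),
                               cocos2d::Point(25, 200), cocos2d::Point(10, 170) };
    cocos2d::DrawPrimitives::drawSolidPoly(verts, 5, cocos2d::Color4F(0.5f, 0.5f, 1, 1));
}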

Windows Store App - SwapChainPanel DrawLine Performance

I am developing a Windows Store App using XAML / C#. The app also has a Windows Runtime Component, which is used for showing a chart output using DirectX.
I am using SwapChainPanel approach for drawing the lines (x-axis, y-axis and waveform).
I chose this approach from the below MSDN sample (refer scenario 3 - D2DPanel)
http://code.msdn.microsoft.com/windowsapps/XAML-SwapChainPanel-00cb688b
Here is my question,
My waveform contains a huge amount of data (ranging from 1,000 to 20,000 points). I am calling DrawLine for all of these points during each Render call.
The control also provides panning and zooming but keeps the StrokeWidth constant irrespective of zoom level, hence the visible area (render target) might be much less than the lines I am drawing.
Does calling DrawLine for the area which are going to be off-screen cause performance issues?
I tried PathGeometry & GeometryRealization, but I was not able to control the StrokeWidth at the various zoom levels.
My Render method typically resembles the snippet below. The lineThickness is controlled to be the same irrespective of zoom level.
m_d2dContext->SetTransform(m_worldMatrix);
float lineThickness = 2.0f / m_zoom;
for (unsigned int i = 0; i < points->Size; i += 2)
{
    double wavex1 = points->GetAt(i);
    double wavey1 = points->GetAt(i + 1);
    if (i != 0)
    {
        m_d2dContext->DrawLine(Point2F(prevX, prevY), Point2F(wavex1, wavey1), brush, lineThickness);
    }
    prevX = wavex1;
    prevY = wavey1;
}
I'm kind of new to DirectX, but not to C++. Any thoughts?
Short answer: It probably will. It's good practice to push a clip before drawing. For instance, in your case, you'd do a call to ID2D1DeviceContext::PushAxisAlignedClip with the bounds of your drawing surface. That'll ensure no drawing calls attempt to draw outside the surface's bounds.
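Roughly, in the Render method from the question (panelWidth/panelHeight here are placeholders for whatever holds your panel's size):
// clip to the visible area so Direct2D can reject off-screen segments early
m_d2dContext->PushAxisAlignedClip(
    D2D1::RectF(0.0f, 0.0f, panelWidth, panelHeight), // assumed panel-size variables
    D2D1_ANTIALIAS_MODE_ALIASED);                     // an aliased clip is cheapest when it matches the surface bounds

// ... the existing DrawLine loop runs here ...

m_d2dContext->PopAxisAlignedClip();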
Long answer: Really, it depends on a handful of factors, including but not limited to what target the device context is drawing to, the display hardware, and the display driver. For instance, if you're drawing to a CPU-backed ID2D1Bitmap, it's probably fair to assume that there won't be much of a difference.
However, if you're directly drawing to some hardware-backed surface (a GPU bitmap, or a bitmap created from an IDXGISurface), it can get a little hairy. For example, consider this comment from an excellently documented MSDN sample, where the code is about to Clear an ID2D1Bitmap created from an IDXGISurface:
// The Clear call must follow and not precede the PushAxisAlignedClip call.
// Placing the Clear call before the clip is set violates the contract of the
// virtual surface image source in that the application draws outside the
// designated portion of the surface the image source hands over to it. This
// violation won't actually cause the content to spill outside the designated
// area because the image source will safeguard it. But this extra protection
// has a runtime cost associated with it, and in some drivers this cost can be
// very expensive. So the best performance strategy here is to never create a
// situation where this protection is required. Not drawing outside the appropriate
// clip does that the right way.

Trying to convert openGL to MFC coordinates and having Problems with "gluProject"

To clarify things: what I am trying to do is get the OpenGL coordinates and manipulate them in my MFC code, not to pick an OpenGL object. I'm using MFC to control the position of the objects in OpenGL.
Hi, I'm trying to find the answer on the web and can't find a full solution that I can use and that will work...
I'm developing an MFC project with a static picture control as the canvas for an OpenGL class that draws the graphics for my game.
On mouse down, I need to retrieve a shape's coordinates from the OpenGL class.
I'm looking for a way to convert the OpenGL coordinates to MFC coordinates, but no matter what I try I get junk after using gluProject or gluUnProject (I've tried both ways but neither is working).
GLdouble modelMatrix[16];
glGetDoublev(GL_MODELVIEW_MATRIX, modelMatrix);
GLdouble projMatrix[16];
glGetDoublev(GL_PROJECTION_MATRIX, projMatrix);
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
POINT mouse;                             // stores the x and y coords for the current mouse position
GetCursorPos(&mouse);                    // gets the current cursor coordinates (screen coordinates)
ScreenToClient(hWnd, &mouse);            // converts them to client (window) coordinates
GLdouble winX, winY, winZ;               // holds our x, y and z window coordinates
winX = (GLdouble)mouse.x;                // the mouse x coordinate
winY = (GLdouble)mouse.y;                // the mouse y coordinate
winY = (GLdouble)viewport[3] - winY;     // flip y: OpenGL's window origin is bottom-left, Windows' is top-left
GLfloat depth;
glReadPixels((GLint)winX, (GLint)winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
winZ = depth;                            // depth at the clicked pixel
GLdouble posX = s1->getPosX(), posY = s1->getPosY(), posZ = s1->getPosZ(); // hold the final values
gluUnProject(winX, winY, winZ, modelMatrix, projMatrix, viewport, &posX, &posY, &posZ);
gluProject(posX, posY, posZ, modelMatrix, projMatrix, viewport, &winX, &winY, &winZ);
This is just part of the code I've tried. Of course I don't use gluProject and gluUnProject together; I just have them both here to show what I've tried... and I know there is a lot of junk in there, it's from some of my attempts...
P.S. I've tried many, many more combinations and examples from the web and nothing seems to work in my case...
Can anyone show me the right way to do the transformation?
Thanks
It looks like you're trying to retrieve the object (or objects) that is/are at a particular point. If this is the case, gluProject and/or gluUnProject isn't really a very suitable tool for the task. OpenGL has a selection mode intended specifically for this kind of task.
In typical use, you specify a small square (e.g., 5x5 pixels) around the mouse click spot with gluPickMatrix, set selection mode with glRenderMode, set a buffer with glSelectBuffer, and then draw your scene. The drawing doesn't go to the screen, but fills the buffer you specified with records of what was drawn within the specified area.
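A rough sketch of the usual sequence (names like drawScene and the projection parameters are placeholders for your own code):
GLuint selectBuf[64];
glSelectBuffer(64, selectBuf);                 // buffer that receives the hit records
glRenderMode(GL_SELECT);                       // switch to selection mode; nothing is rasterized

glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
gluPickMatrix((GLdouble)mouse.x, (GLdouble)(viewport[3] - mouse.y),
              5.0, 5.0, viewport);             // restrict drawing to a 5x5 pixel region around the click
gluPerspective(45.0, aspect, 0.1, 100.0);      // re-apply your normal projection on top of the pick matrix

glInitNames();                                 // the name stack identifies what lands in the pick region
glPushName(0);
drawScene();                                   // call glLoadName(objectId) before drawing each shape

glMatrixMode(GL_PROJECTION);
glPopMatrix();
GLint hits = glRenderMode(GL_RENDER);          // back to normal rendering; returns the number of hit records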

Actionscript 3 pixel perfect collision. How to? (learning purposes)

I know that there are people out there creating classes for this (ie http://coreyoneil.com/portfolio/index.php?project=5). But I want to learn how to do it myself so I can create everything I need the way I need.
I've read about Bitmap and BitmapData. I should be able to draw() the MovieClips onto a BitmapData so I could then cycle through the pixels looking for collisions. However, it's weird and confusing dealing with the offsets... and it seems like MyBitMap.rect always has x = 0 and y = 0, and I can't seem to find the original position of things...
I'm thinking of doing a hitTestObject first, and then, if it is positive, investigating the intersection between the MovieClips' rectangles for the pixel collisions.
But then there is also another problem (the rotation of movieclips)...
...I need some enlightment here on how to do it.
Please, any help would be appreciated..
If you're using BitmapData objects with transparency you can use BitmapData.hitTest(firstPoint:Point, firstAlphaThreshold:uint, secondObject:Object, secondBitmapDataPoint:Point = null, secondAlphaThreshold:uint = 1):Boolean.
You'll have to change from global coords to the local BitmapData coords which will require a bit of math if it is rotated. That's easily achieved (look up affine transform for more info on wiki):
var coordTransform:Matrix = new Matrix();
coordTransform.rotate(rotationRadians);
coordTransform.translate(x, y);
coordTransform.transformPoint(/* your point */);
A classic reference for pixel perfect collision detection in Flash is Grant Skinner's article. It's AS2, but the logic is the same for AS3 (there are ports available if you google a bit).
If I recall correctly, this particular implementation worked as long as both tested objects had the same parent, but that can be fixed.
About the BitmapData x and y values, I understand it can be confusing; however, the way it works makes sense to me. A BitmapData is just what the name implies: pixel data. It's not a display object and cannot be in the display list, so having x or y different than 0 doesn't really make sense, if you think about it. The easiest way to deal with this is probably storing the (x,y) offset of the source object (the display object you have drawn from) and translating it to the global coordinate space, so you can compare any objects no matter what their position in the display list is (using something like var globalPoint:Point = source.parent.localToGlobal(new Point(source.x, source.y))).
I've previously used Troy Gilbert's pixel perfect collision detection class (adapted from Andre Michelle, Grant Skinner and Boulevart) which works really well (handles rotation, different parents, etc.):
http://troygilbert.com/2007/06/pixel-perfect-collision-detection-in-actionscript3/
http://troygilbert.com/2009/08/pixel-perfect-collision-detection-revisited/
and from there he has also linked to this project (which I've not used, but looks really impressive):
http://www.coreyoneil.com/portfolio/index.php?project=5
I managed to do it after all, and I have already written my class for collision detection, collision angle and other extras.
The most confusing part is perhaps aligning the bitmaps correctly for comparison. When we draw() a MovieClip into a BitmapData and addChild() the corresponding Bitmap, we can see that part of it is not visible: it appears to be drawn from the registration point to the right and down only, leaving the top and left parts undrawn. The solution is to pass a transform matrix as the second argument of the draw method, so that the bitmap is aligned and the whole clip gets drawn.
This is an example of a function in my class that creates a bitmap for comparison:
static public function createAlignedBitmap(mc:MovieClip, mc_rect:Rectangle):BitmapData {
    var mc_offset:Matrix;
    var mc_bmd:BitmapData;

    mc_offset = mc.transform.matrix;
    mc_offset.tx = mc.x - mc_rect.x;
    mc_offset.ty = mc.y - mc_rect.y;

    mc_bmd = new BitmapData(mc_rect.width, mc_rect.height, true, 0);
    mc_bmd.draw(mc, mc_offset);

    return mc_bmd;
}
In order to use it, if you are on the timeline, you do:
className.createAlignedBitmap(myMovieClip, myMovieClip.getBounds(this))
Notice the use of getBounds, which returns the rectangle in which the movie clip is embedded. This allows the calculation of the offset matrix.
This method is quite similar to the one shown here: http://www.mikechambers.com/blog/2009/06/24/using-bitmapdata-hittest-for-collision-detection/
By the way, if this matter interests you, check my other question, which I'll post in a few moments.