How to animate textures in a 3d model? - libgdx

I wish to have an animated 3D texture in my libGDX code but I am struggling to find out how to do it.
I assume the way this "should" be done is one of:
a) Directly accessing and modifying the texture on the model. (via a pixmap? ByteBuffer?)
or
b) Pre-rendering a big image containing all the frames (say, 20) and then moving the UV coordinates to create the illusion of the animation (akin to image strips in 2D/web design).
I did work out how I could completely replace the material each time, but that seems a much worse way of doing it. So if anyone could show the commands I need for either a) or b) (or a similar optimal method) I would be grateful.
Maths I am fine with. The intricacies of OpenGL ES or GDX I am not :)
(The solution should at least work for the HTML and Android builds, ideally everything.)

Since the latest release it is very easy to play a 2D animation on a 3D surface. First make sure to get familiar with the 2D animation concept, as explained here: https://github.com/libgdx/libgdx/wiki/2D-Animation. Then, instead of using a SpriteBatch, you can use the TextureRegion (which Animation#getKeyFrame returns) to set the material of the surface, as shown here: https://github.com/libgdx/libgdx/blob/master/tests/gdx-tests/src/com/badlogic/gdx/tests/g3d/TextureRegion3DTest.java. So basically, in your render method, you would have:
attribute.set(animation.getKeyFrame(stateTime, true));
Or if you want a more generic approach:
instance.getMaterial("<name of material>").get(TextureAttribute.class, TextureAttribute.Diffuse).set(animation.getKeyFrame(stateTime, true));
Or, if there's only one material in the ModelInstance:
instance.materials.get(0).get(TextureAttribute.class, TextureAttribute.Diffuse).set(animation.getKeyFrame(stateTime, true));
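Putting that together, a minimal render-method sketch; the field names below are illustrative rather than taken from the linked test, and on older libGDX versions Animation is not generic:
Animation<TextureRegion> animation;  // built from your sprite-sheet regions
TextureAttribute attribute;          // the diffuse TextureAttribute fetched from the instance's material
float stateTime;

public void render () {
    stateTime += Gdx.graphics.getDeltaTime();
    // point the material at the current frame's region (updates texture and UV offsets)
    attribute.set(animation.getKeyFrame(stateTime, true));
    // ... then render the ModelInstance with your ModelBatch as usual
}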

If you have the memory for it I would definitely choose b); it is easier on the processor. Also, you would only be changing a uniform's value. However, due to the preprocessing it might take a bit longer to open the application.
Get your uniform's location where you compile your shaders; animationPos should be global:
GLuint animationPos = glGetUniformLocation(shaderProgram, "nameoftheuniform");
Your main loop should pass the current frame index to the shader through that location:
glUniform1i(animationPos, currentAnimationIndex);
Add this to your fragment shader variables:
uniform int animationPos;
Fragment shader main:
float texCoordY = texCoord.y; // texture coordinates should be passed from the vertex shader
float texCoordX = texCoord.x / 20.0; // divide by 20.0 because the strip holds 20 frames; using texCoord.x directly would sample across the whole strip
float textureIndex = float(animationPos) / 20.0; // offset to the start of the current frame within the strip
gl_FragColor = texture2D(yourTexture, vec2(textureIndex + texCoordX, texCoordY));
The code above assumes you laid your frames out along the x direction; you can also lay them out as a grid, in which case you need to change the texCoord calculation. It also assumes 20 frames.
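If you are going through libGDX rather than raw GL calls, the Java side of feeding that uniform would look roughly like this (a sketch; the uniform name and index variable mirror the snippet above):
shaderProgram.bind(); // begin()/end() on older libGDX versions
shaderProgram.setUniformi("animationPos", currentAnimationIndex);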
Option a) is heavier on the processor and you will be uploading the texture every time, so it will use the bus (PCIe) a bit more, but it is easier on memory. The question is really a design decision, but I guess 20 images can be handled, so go with option b).
Edit: Added code.

Related

Is there a way to have smooth/subpixel motion without turning on smoothing on graphics?

I'm creating this 2D, pixel art game. When the camera follows the player (it uses easing), on the final approach, the position gets several subpixel adjustments.
If I have smoothing ON (on my graphic assets), the graphics look good (sharp; it's pixel art) but the subpixel motion is jerky/jumpy.
If I have smoothing OFF, the subpixel motion is smooth, but the pixel art graphics look blurry.
I'm using Flash player v21. I've tried this with Starling and with Flash's display list.
You have a pixelated object that is moving in increments of less than the pixel size, but you don't want to restrict your mathematical easing to integers, or even worse, multiples of 8 or what have you. The solution I am using in my project for this exact issue is posted below (I just got it working last week!).
Concept
Create a driver that is controlled by the easing, using floating point numbers.
Allow this driver to then control where the actual display object is rendered. We can use a constraint so the display object only ever renders at your chosen resolution.
Code Example
// you'll put these lines or equivalent in the correct spots for your particular needs.
// SCALE_UP will be your resolution control. If your pixels are 4 pixels wide, use 4.
const SCALE_UP: int = 4;
var d:CharacterDriver = new CharacterDriver();
var c:Character = new Character();
c._driver = d; // I've found it useful to be able to reference the driver
d._drives = c; // or the thing the driver drives via the linked object.
// you don't have to do this.
Then, when you are ready to do your easing of the driver:
function yourEase(c:Character, d:CharacterDriver):void{
    c.x = Math.ceil(d.x - Math.ceil(d.x) % SCALE_UP); // snaps the floating point position to a multiple of SCALE_UP
    c.y = Math.ceil(d.y - Math.ceil(d.y) % SCALE_UP);
}
Now this will make your character move in steps of 4 pixels at a time, but still be able to experience easing!
The bit with the modulo (%) operator is the key. For instance, 102-102%4 = 100. 103-103%4 = 100. 104-104%4 = 104.
In case anyone is confused by that, look at what 102%4 does: 4 goes into 102 25 times with a remainder of 2. so 102%4 = 2. Then 102 - 2 = 100.
In your case, since the "camera" is following the player (i.e. the background is moving, right?) then you really need to apply drivers to everything in the background instead, but it is basically the same idea.
Hope this helps.
Since you specifically mentioned the "final approach", I think your problem comes from the fact that the easing equations put your graphics at fractional coordinates, especially while getting closer to the target, but you should also notice it during the rest of the animation.
Depending on the easing "engine" that you're using, you should be able to set a "round values" flag, so all the coordinates set will be integer values and not fractional.
If that's not possible, find a way in your display objects to round the x and y values every time they change.

Transparency issues with 3d particles and 3d models, libgdx

I've got some strange issues with transparency and 3D particles. A short video to illustrate:
https://youtu.be/ZHKI1X3MjhY
As you can see I have a 3D particle effect, a fire burning. Inside it is a 3D model with no alpha blending and it shows just fine. Then in the far distance there is a small skeleton (with blending and alpha test turned on) and it also shows just fine through the fire. Then I turn the camera and look at the warrior skeleton and it just disappears; instead you see what is behind him. I turn the camera again and the mage skeleton also vanishes, but you can see the trees a bit further away just fine, and they have the exact same settings for blending and alpha test. If I move the character, say, 20 yards away it also starts showing through the fire effect.
So it seems to have something to do with distance from the 3d particle effect...
The 3D particle batch is an extended BillboardParticleBatch, like this:
protected Renderable allocRenderable(){
    BlendingAttribute ba = new BlendingAttribute(GL20.GL_SRC_ALPHA, GL20.GL_ONE, 1f);
    Renderable r = super.allocRenderable();
    r.material = new Material( ba,
            // new DepthTestAttribute(GL20.GL_LEQUAL, 0.0f, 0.5f, true),
            // r.material.set(new FloatAttribute(FloatAttribute.AlphaTest, 0.0f),
            TextureAttribute.createDiffuse(texture));
    return r;
}
All the characters and the trees are created with the following attributes:
if (alpha) {
    FloatAttribute floatAttribute = new FloatAttribute(FloatAttribute.AlphaTest, 0.5f);
    BlendingAttribute blendingAttribute = new BlendingAttribute(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA, 1f);
    for (int i = 0; i < bulletEntity.modelInstance.materials.size; i++) {
        bulletEntity.modelInstance.materials.get(i).set(blendingAttribute);
        bulletEntity.modelInstance.materials.get(i).set(floatAttribute);
    }
}
The models are drawn first, then the particles; I tried changing the order but it made no difference. I have tried a lot of different setups for alpha test, depth test and the blending attribute but cannot find anything that works.
EDIT
I removed the BlendingAttribute from the 3D models and now it looks as it should regarding the particle effect. However, I need most materials on my character models to have blending set...
Anyone got any clue why this is happening when I enable blending?
I also tried to use the BillboardParticleBatch without extending it, in case I had done something wrong there, but the effect is then even worse: all models with blending enabled appear in front of the particle effect even though they stand behind it.
ModelBatch sorts your render calls (check this link, really, it is a must read) to avoid incorrect behavior like you're experiencing. The actual sorting/rendering happens at the call to ModelBatch#end. By default it uses the DefaultRenderableSorter. Of course, because that implementation isn't aware of your scene, it might not fit your needs exactly.
The DefaultRenderableSorter tries to guess the location of each model based on its transformation matrix. Based on that location and the camera's location it sorts them so that:
First, all opaque objects are rendered front to back (because whatever is behind an opaque object isn't visible anyway, so this reduces unneeded calls to the fragment shader).
Second, all transparent objects are rendered back to front (because as soon as a transparent object is rendered, anything rendered after it that lies behind it will not be visible).
To decide whether an object is transparent, the BlendingAttribute#blended member is used. (So you could, if you really wanted to, set that member to false to force it to be treated (sorted) as if it were opaque.)
So the order in which you call ModelBatch#render is not necessarily the order in which the render calls are actually executed. If you want to force rendering of whatever you've added to the batch so far, call ModelBatch#flush(). Of course, doing this a lot defeats some of the purpose of ModelBatch in the first place.
Instead you could implement your own RenderableSorter, which has more knowledge about your scene and can therefore do a better job sorting than the default implementation. (However, if flush() works for you and there's no other issue, then just flushing might be the easiest solution.)
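For illustration, a rough sketch of where such a flush() call would go (the instance and batch names are placeholders):
modelBatch.begin(camera);
modelBatch.render(instances, environment);      // e.g. characters, trees, other models
modelBatch.flush();                             // sort and render everything queued so far
modelBatch.render(particleSystem, environment); // these calls are rendered after the flushed ones
modelBatch.end();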
That said, there are various other solutions you could try as well. E.g. the regions of the particles are fully transparent, so the fragment shader might as well discard those altogether. Try adding a FloatAttribute.AlphaTest with a value of 0.5f to the particles. If that messes with your blending then gradually reduce the value, e.g. towards 0.05f.
Also, you could add a DepthTestAttribute with depthMask set to false (new DepthTestAttribute(false)). This will prevent the particles from writing to the depth buffer (but it might also cause other things to show in front of the particles).
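Applied to the extended batch from the question, those two suggestions would look something like this (the exact values are starting points to tune, not definitive):
protected Renderable allocRenderable () {
    Renderable r = super.allocRenderable();
    r.material = new Material(
        new BlendingAttribute(GL20.GL_SRC_ALPHA, GL20.GL_ONE, 1f),
        // discard (nearly) fully transparent fragments so they don't hide what's behind them
        new FloatAttribute(FloatAttribute.AlphaTest, 0.5f),
        // keep depth testing, but don't write the particles to the depth buffer
        new DepthTestAttribute(GL20.GL_LEQUAL, false),
        TextureAttribute.createDiffuse(texture));
    return r;
}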

libGDX: same texture with shaders, different textures without

My name is Tom (from Germany) and I am developing a small 3D game with libGDX.
When I use a Model and ModelInstance with a ModelBatch and the Environment, I can render different ModelInstances (with different Models) with their correct textures.
But I need to use a shader for some wobble effects.
When I use my own shader, everything works fine except for the textures: they are the same for every ModelInstance I want to render.
I guess there is a texture binding problem. I load my Models this way:
assets = new AssetManager();
assets.load("blob.g3db", Model.class);
and fetch them with a simple:
public static Model getModel(String name) {
return assets.get(name + ".g3db", Model.class);
}
So I guess the AssetManager is loading the textures as well (since it works without my own shader).
My question is:
How can I render different 3D objects with a shader, each with its correct texture?
Thanks in advance...
Tom
Models and ModelInstances have Materials, on which you can set a Texture, a Color and other attributes.
So if two ModelInstances share the same Model, you can set different Materials on the ModelInstances. By doing this you get different Textures. The DefaultShader implementation takes care of this; if you create your own Shader you need to take care of it yourself.
Important: it never works "without a shader", because you always render with a shader. You just don't set one manually; libGDX uses DefaultShader by default.
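As a minimal sketch of the first part (textureA and textureB are placeholders for your own textures):
ModelInstance a = new ModelInstance(model);
ModelInstance b = new ModelInstance(model);
// each ModelInstance gets its own copy of the Model's materials, so changing one doesn't affect the other
a.materials.get(0).set(TextureAttribute.createDiffuse(textureA));
b.materials.get(0).set(TextureAttribute.createDiffuse(textureB));
// a custom Shader must then bind the texture found in each renderable's material
// (e.g. via the RenderContext's textureBinder) instead of one texture bound up front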
I suggest you read some of Xoppa's tutorials.

Stage3D AGAL Diffuse Light Shader, Dynamic Object Questions

There are quite a few ActionScript Stage3D tutorial examples 'out there', and I am thankful to all those who attempt to provide us newbies with working code. That said, I don't believe I have yet found an example which correctly handles the Lambertian reflectance as outlined by the Wikipedia entry, though I hesitate to say so since, as a newbie, perhaps I have simply failed to understand the implementations.
Here's what I think is the basic requirement of any implementation -- that it be able to compare the orientation of the light source [I am purposely limiting this discussion to the simpler case of a 'directional light' mimicking the Sun, rather than a 'spot light'.] to the orientation of the normal to the face to be illuminated.
And here is what I think is the heart of the problem I am seeing -- that the computation of the normal of the face, in almost every case, is being performed when the geometry of the object is being created. So, the value of the normal being passed to the shader is expressed in terms of the local object space.
Now I know that the way this wonderful forum works, you would prefer that I just give some sample code, so someone could identify a specific mistake in either the setup code used on the CPU or the shader code used on the GPU. Because I can find examples of this problem using many different frameworks, I think what I [and I imagine many others] need, is not a specific coded solution, but some specific clarification of what is actually required to get a photo-realistic rendering of an object in the simple, base case of fixed camera view point, a non-moving light source, and ignoring considerations of specularity, etc.
So, when performing the dot product of the two vectors:
A. Should the Vector3D value representing the normal to the triangular face to be illuminated be calculated using the object space values of the three vertices or the world space values after transformation?
B. If world space values are required, as I believe, should the dynamically-calculated normal workload be performed each render cycle on the CPU or the GPU?
Thank you.
Terry, I ran into the same problem recently; I suspect you have solved it or moved on, but I thought I would answer the question anyway.
I chose to transform the normal in the vertex shader.
var vertexShader:Array = [
"m44 vt0, va0, vc0", // transform vertex positions (va0) by the world camera data (vc0)
// this result has the camera angle as part of the matrix, which is not good when calculating light
"mov op, vt0", // move the transformed vertex data (vt0) in output position (op)
"add v0, va1, vc12.xy", // add in the UV offset (va1) and the animated offset (vc12) (may be 0 for non animated), and put in v0 which holds the UV offset
"mov v1, va3", // pass texture color and brightness (va3) to the fragment shader via v1
"m44 v2, va2, vc4", // transform vertex normal, send to fragment shader
// the transformed vertices without the camera data
"m44 v3, va0, vc8", // the transformed vertices with out the camera data, works great for default AND for translated cube, rotated cube broken still
In the fragment shader:
// normalize the light position
"sub ft1, v3, fc0", // subtract the light position from the transformed vertex postion
"nrm ft1.xyz, ft1", // normalize the light position (ft1)
// non flat shading
"dp3 ft2, ft1, v2", // dot the transformed normal with light direction
"sat ft2, ft2", // Clamp dot between 1 and 0, put result in ft2.
Last but not least, the data you pass in! It took quite a bit of experimenting for me to find the magic.
// mvp is the camera data mixed with each object world matrix
_context.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 0, mvp, true); // aka vc0
// $model is the model to be drawn
var invmat:Matrix3D = $model.modelMatrix.clone();
invmat.transpose();
_context.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 4, invmat, true); // aka vc4
var wsmat:Matrix3D = $model.worldSpaceMatrix.clone();
_context.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 8, wsmat, true); // aka vc8
I hope this helps someone in the future.

Actionscript 3 pixel perfect collision. How to? (learning purposes)

I know that there are people out there creating classes for this (e.g. http://coreyoneil.com/portfolio/index.php?project=5). But I want to learn how to do it myself so I can create everything I need the way I need it.
I've read about Bitmap and BitmapData. I should be able to draw() the MovieClips onto a BitmapData so I could then cycle through the pixels looking for the collisions. However, it's weird and confusing dealing with the offsets... and it seems like MyBitmap.rect always has x = 0 and y = 0, and I can't seem to find the original position of the things...
I'm thinking of doing a hitTestObject first, and if that is positive, investigating the intersection between the MovieClips' rectangles for the pixel collisions.
But then there is also another problem (the rotation of MovieClips)...
...I need some enlightenment here on how to do it.
Please, any help would be appreciated.
If you're using BitmapData objects with transparency you can use BitmapData.hitTest(firstPoint:Point, firstAlphaThreshold:uint, secondObject:Object, secondBitmapDataPoint:Point = null, secondAlphaThreshold:uint = 1):Boolean.
You'll have to change from global coords to the local BitmapData coords which will require a bit of math if it is rotated. That's easily achieved (look up affine transform for more info on wiki):
var coordTransform:Matrix = new Matrix();
coordTransform.rotate(rotationRadians);
coordTransform.translate(x, y);
var localPoint:Point = coordTransform.transformPoint(/* your point */);
A classic reference for pixel-perfect collision detection in Flash is Grant Skinner's article. It's AS2, but the logic is the same for AS3 (there are ports available if you google a bit).
If I recall correctly, this particular implementation worked as long as both tested objects had the same parent, but that can be fixed.
About the BitmapData x and y values, I understand it could be confusing; however, the way it works makes sense to me. A BitmapData is just what the name implies: pixel data. It's not a display object and cannot be in the display list, so having x or y different from 0 doesn't really make sense, if you think about it. The easiest way to deal with this is probably storing the (x, y) offset of the source object (the display object you have drawn from) and translating it to the global coordinate space, so you can compare any objects no matter what their position in the display list is (using something like var globalPoint:Point = source.parent.localToGlobal(new Point(source.x, source.y))).
I've previously used Troy Gilbert's pixel perfect collision detection class (adapted from Andre Michelle, Grant Skinner and Boulevart) which works really well (handles rotation, different parents, etc.):
http://troygilbert.com/2007/06/pixel-perfect-collision-detection-in-actionscript3/
http://troygilbert.com/2009/08/pixel-perfect-collision-detection-revisited/
and from there he has also linked to this project (which I've not used, but looks really impressive):
http://www.coreyoneil.com/portfolio/index.php?project=5
I managed to do it after all, and I have already written my class for collision detection, collision angle and other extras.
The most confusing part is maybe aligning the bitmaps correctly for comparing. When we draw() a MovieClip into a BitmapData, if we addChild() the corresponding Bitmap we can see that part of it is not visible: it appears to be drawn from the registration point to the right and down only, leaving the top and left parts undrawn. The solution is passing a transform matrix as the second argument of the draw() method, which aligns the bitmap and makes all of it get drawn.
This is an example of a function in my class that creates a bitmap for comparing:
static public function createAlignedBitmap(mc:MovieClip, mc_rect:Rectangle):BitmapData {
    var mc_offset:Matrix;
    var mc_bmd:BitmapData;
    mc_offset = mc.transform.matrix;
    mc_offset.tx = mc.x - mc_rect.x;
    mc_offset.ty = mc.y - mc_rect.y;
    mc_bmd = new BitmapData(mc_rect.width, mc_rect.height, true, 0);
    mc_bmd.draw(mc, mc_offset);
    return mc_bmd;
}
In order to use it, if you are on the timeline, you do:
className.createAlignedBitmap(myMovieClip, myMovieClip.getBounds(this))
Notice the use of getBounds, which returns the rectangle in which the MovieClip is embedded. This allows the calculation of the offset matrix.
This method is quite similar to the one shown here: http://www.mikechambers.com/blog/2009/06/24/using-bitmapdata-hittest-for-collision-detection/
By the way, if this is an interesting matter for you, check my other question, which I'll post in a few moments.