Is there a way to have smooth/subpixel motion without turning on smoothing on graphics? - actionscript-3

I'm creating this 2D, pixel art game. When the camera follows the player (it uses easing), on the final approach, the position gets several subpixel adjustments.
If I have smoothing ON (on my graphic assets), the graphics look good (sharp, as pixel art should be) but the subpixel motion is jerky/jumpy.
If I have smoothing OFF, the subpixel motion is smooth, but the pixel art graphics look blurry.
I'm using Flash Player v21. I've tried this with Starling and with Flash's display list.

You have a pixelated object that moves in increments smaller than the pixel size, but you don't want to restrict your mathematical easing to integers, or even worse, multiples of 8 or what have you. The solution I am using in my project for this exact issue is posted below (I just got it working last week!).
Concept
Create a driver object whose position is controlled by the easing, using floating-point numbers.
Then let this driver control where the actual display object is rendered. We can use a constraint so the display object only renders at your chosen resolution.
Code Example
// you'll put these lines or equivalent in the correct spots for your particular needs.
// SCALE_UP will be your resolution control. If your pixels are 4 pixels wide, use 4.
const SCALE_UP: int = 4;
var d:CharacterDriver = new CharacterDriver();
var c:Character = new Character();
c._driver = d; // I've found it useful to be able to reference the driver
d._drives = c; // or the thing the driver drives via the linked object.
// you don't have to do this.
Then, when you are ready to do your easing of the driver:
function yourEase(c:Character, d:CharacterDriver):void {
    // Convert the driver's floating-point position into a multiple of SCALE_UP.
    c.x = Math.ceil(d.x - Math.ceil(d.x) % SCALE_UP);
    c.y = Math.ceil(d.y - Math.ceil(d.y) % SCALE_UP);
}
Now this will make your character move 4 pixels at a time, but still be able to experience easing!
The bit with the modulo (%) operator is the key. For instance: 102 - 102%4 = 100; 103 - 103%4 = 100; 104 - 104%4 = 104.
In case anyone is confused by that, look at what 102%4 does: 4 goes into 102 twenty-five times with a remainder of 2, so 102%4 = 2. Then 102 - 2 = 100.
In your case, since the "camera" is following the player (i.e. the background is moving, right?) then you really need to apply drivers to everything in the background instead, but it is basically the same idea.
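For context, a minimal per-frame sketch of the whole idea might look like this (targetX/targetY and the event wiring are my assumptions, not part of the original code):
// Hypothetical per-frame update: ease the driver with floats,
// then snap the rendered character to the SCALE_UP grid.
function onEnterFrame(e:Event):void {
    d.x += (targetX - d.x) * 0.1; // simple exponential easing on the driver
    d.y += (targetY - d.y) * 0.1;
    yourEase(c, d); // the snapping function from above
}
addEventListener(Event.ENTER_FRAME, onEnterFrame);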
Hope this helps.

Since you specifically mentioned the "final approach", I think your problem comes from the fact that the easing equations put your graphics at fractional coordinates, especially while getting closer to the target; but you should also notice it during the rest of the animation.
Depending on the easing "engine" you're using, you may be able to set a "round values" flag, so all the coordinates set will be integer values and not fractional.
If that's not possible, find a way to round the x and y values of your display objects every time they change, for example with the sketch below.
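If your display objects are your own classes, one way to do that is overriding the x and y setters. A minimal sketch, assuming a Sprite subclass:
package {
    import flash.display.Sprite;

    // A sprite that snaps its own coordinates to whole pixels.
    public class PixelSnappedSprite extends Sprite {
        override public function set x(value:Number):void {
            super.x = Math.round(value);
        }
        override public function set y(value:Number):void {
            super.y = Math.round(value);
        }
    }
}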

Related

Transparency issues with 3d particles and 3d models, libgdx

I got some strange issues with transparency and 3d particles. A short vid to illustrate:
https://youtu.be/ZHKI1X3MjhY
As you can see I have a 3d particle effect, fire burning. Inside it is a 3d model with no alpha blending and it shows just fine. Then in the far distance there is a small skeleton (with blending and alpha test turned on) and it also shows just fine through the fire. Then I turn the camera and look at the warrior skeleton and it just disappears; instead you see what is behind him. I turn the camera again and the mage skeleton also vanishes, but you can see the trees a bit further away just fine, and they have the exact same settings for blending and alpha test. If I move the character, say, 20 yards away it also starts showing through the fire effect.
So it seems to have something to do with distance from the 3d particle effect...
The 3d particle batch is an extended BillboardParticleBatch like this:
@Override
protected Renderable allocRenderable() {
    BlendingAttribute ba = new BlendingAttribute(GL20.GL_SRC_ALPHA, GL20.GL_ONE, 1f);
    Renderable r = super.allocRenderable();
    r.material = new Material(ba,
        // new DepthTestAttribute(GL20.GL_LEQUAL, 0.0f, 0.5f, true),
        // new FloatAttribute(FloatAttribute.AlphaTest, 0.0f),
        TextureAttribute.createDiffuse(texture));
    return r;
}
All the characters and the trees are created with following attributes:
if (alpha) {
    FloatAttribute floatAttribute = new FloatAttribute(FloatAttribute.AlphaTest, 0.5f);
    BlendingAttribute blendingAttribute = new BlendingAttribute(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA, 1f);
    for (int i = 0; i < bulletEntity.modelInstance.materials.size; i++) {
        bulletEntity.modelInstance.materials.get(i).set(blendingAttribute);
        bulletEntity.modelInstance.materials.get(i).set(floatAttribute);
    }
}
The models are drawn first, then the particles. I tried changing the order but saw no difference. I have tried a lot of different setups for alpha test, depth test and the blending attribute but cannot find anything that works.
EDIT
I removed the BlendingAttribute from the 3d models and now it looks as it should regarding the particle effect. However, I need most materials on my character models to have blending set...
Anyone got any clue why this is happening when I enable blending?
I also tried to use the BillboardParticleBatch without extending it, in case I had done something wrong there, but the effect then is even worse: all models with blending enabled appear in front of the particle effect even though they stand behind it.
ModelBatch sorts your render calls (check this link, really, it is a must read) to avoid incorrect behavior like you're experiencing. The actual sorting/rendering happens at the call to ModelBatch#end. By default it uses the DefaultRenderableSorter. Of course, because that implementation isn't aware of your scene, it might not fit your needs exactly.
The DefaultRenderableSorter tries to guess the location of each model based on their transformation matrix. Based on that location and the camera's location it will sort them so that:
First all opaque objects are rendered from front to back (because whatever is behind an opaque object isn't visible anyway, so that reduces unneeded calls to the fragment shader).
Secondly all transparent objects are rendered from back to front (because as soon as a transparent object is rendered then everything that is rendered after that and is behind it, will not be visible).
To decide whether an object is transparent, the BlendingAttribute#blended member is used. (So you could, if you really wanted to, set that member to false to force it to be treated (sorted) as if it was opaque)
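In code, that would be something like this (assuming you keep a reference to the particle material):
// Force the particles to be sorted as if they were opaque:
BlendingAttribute ba = particleMaterial.get(BlendingAttribute.class, BlendingAttribute.Type);
ba.blended = false;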
So, the order in which you call ModelBatch#render is not necessarily the order in which the calls are actually executed. If you want to force rendering of whatever you've added to the batch so far, call ModelBatch#flush(). Of course, doing this a lot defeats some of the purpose of ModelBatch in the first place.
Instead you could implement your own RenderableSorter, which has more knowledge about your scene and can therefore do a better job sorting than the default implementation. (However, if flush() works for you and there's no other issue, then just flushing might be the easiest solution.)
That said, there are various other solutions you could try as well. E.g. the regions of the particles are fully transparent, so the fragment shader might as well discard those altogether. Try adding a FloatAttribute.AlphaTest with a value of 0.5f to the particles. If that messes with your blending, then gradually reduce the value, e.g. to 0.05f.
Also, you could add a DepthTestAttribute with depthMask set to false (new DepthTestAttribute(false)). This will prevent the particles from writing to the depth buffer (but it also might cause other things to show in front of the particles). Both tweaks are sketched below.
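To illustrate both suggestions, here is a sketch of the question's extended batch with those attributes applied (the class name and constructor are placeholders; the values are the ones suggested above):
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g3d.Material;
import com.badlogic.gdx.graphics.g3d.Renderable;
import com.badlogic.gdx.graphics.g3d.attributes.BlendingAttribute;
import com.badlogic.gdx.graphics.g3d.attributes.DepthTestAttribute;
import com.badlogic.gdx.graphics.g3d.attributes.FloatAttribute;
import com.badlogic.gdx.graphics.g3d.attributes.TextureAttribute;
import com.badlogic.gdx.graphics.g3d.particles.batches.BillboardParticleBatch;

public class NonDepthWritingParticleBatch extends BillboardParticleBatch {
    private final Texture texture;

    public NonDepthWritingParticleBatch(Texture texture) {
        this.texture = texture;
    }

    @Override
    protected Renderable allocRenderable() {
        Renderable r = super.allocRenderable();
        r.material = new Material(
            new BlendingAttribute(GL20.GL_SRC_ALPHA, GL20.GL_ONE, 1f),
            FloatAttribute.createAlphaTest(0.5f),          // discard (almost) fully transparent fragments
            new DepthTestAttribute(GL20.GL_LEQUAL, false), // depth-test, but don't write to the depth buffer
            TextureAttribute.createDiffuse(texture));
        return r;
    }
}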

How to animate textures in a 3d model?

I wish to have an animated texture on a 3d model in my LibGDX code, but I am struggling to find out how to do it.
I assume how this "should" be done is either;
a) Directly accessing and modifying the texture on the model. (via a pixmap? ByteBuffer?)
or
b) Prerendering a big image containing all the frames (say, 20) and then moving the UV coordinates to create the illusion of the animation (akin to image strips in 2d/web design).
I did work out how I could completely replace the material each time, but that seems a much worse way of doing it. So if anyone could show the commands I need to do either a) or b) (or a similar optimal method) I would be grateful.
Maths I am fine with. The intricacies of OpenGLES or GDX I am not :)
(The solution should at least work HTML/Android compiles, ideally everything)
Since the latest release it is very easy to play a 2d animation on a 3d surface. First make sure to get familiar with the 2d animation concept, as explained over here: https://github.com/libgdx/libgdx/wiki/2D-Animation. Then, instead of using a SpriteBatch, you can use the TextureRegion (which Animation#getKeyFrame returns) to set the material of the surface, as shown here: https://github.com/libgdx/libgdx/blob/master/tests/gdx-tests/src/com/badlogic/gdx/tests/g3d/TextureRegion3DTest.java. So basically you would have in your render method:
attribute.set(animation.getKeyFrame(stateTime, true));
Or if you want a more generic approach:
instance.getMaterial("<name of material>").get(TextureAttribute.class, TextureAttribute.Diffuse).set(animation.getKeyFrame(stateTime, true));
Or, if there's only one material in the ModelInstance:
instance.materials.get(0).get(TextureAttribute.class, TextureAttribute.Diffuse).set(animation.getKeyFrame(stateTime, true));
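Putting it together, a render() sketch might look like this (stateTime, animation, instance and the batch are assumed fields, following the linked test):
// Advance the animation and point the material's diffuse texture at the current frame.
stateTime += Gdx.graphics.getDeltaTime();
TextureAttribute attr = (TextureAttribute)instance.materials.get(0).get(TextureAttribute.Diffuse);
attr.set(animation.getKeyFrame(stateTime, true));

modelBatch.begin(camera);
modelBatch.render(instance, environment);
modelBatch.end();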
If you have the memory for it I would definitely choose b); it is easier on the processor. Also, you would only be changing a uniform's value. However, due to the preprocessing it might take some more time to load the application.
Get your uniform variable's location where you compile your shaders; animationPos should be global.
GLint animationPos = glGetUniformLocation(shaderProgram, "nameoftheuniform");
Your main loop should pass the animationPos value to the shader:
glUniform1i(animationPos, currentAnimationIndex);
Add this to your fragment shader's variables:
uniform int animationPos;
Fragment shader main:
float texCoordY = texCoord.y; // texture coordinates should be passed from the vertex shader
float texCoordX = texCoord.x / 20.0; // divide by 20.0 because the strip holds 20 frames side by side
float textureIndex = float(animationPos) / 20.0; // offset to the start of the current frame
gl_FragColor = texture2D(yourTexture, vec2(textureIndex + texCoordX, texCoordY));
The above code assumes that you expanded your textures in the x direction and that there are 20 of them; you can also pack them like a matrix, but then you need to change the texCoord calculation part.
Option a) is heavier on the processor and you will be re-uploading the texture every time, so it will use the bus a bit more, but it is easier on memory. The question is more of a design decision, but I guess 20 images can be handled, so go with option b).
Edit: Added code.

Actionscript collisions: solving exceptions and strange cases

I have created a collision class to detect collisions using pixels. In my class I've also developed some functions for determining the collision angle. Based on this I created some examples:
http://megaswf.com/serve/25437/
http://megaswf.com/serve/25436/
(Space to change gravity, right/left to give some speed to the ball.)
As you will probably notice, strange things happen in two cases: when the ball speed is very low, and when the direction of the ball is almost tangent to the obstacle.
(Image: http://img514.imageshack.us/img514/4059/colisao.png)
The above image shows how I calculate the collision angle.
I call the red dots the keypoints. I search for a maximum number of keypoints (normally 4). If I find more than 2 keypoints, I choose the 2 farthest ones (as shown in one of the blue objects). That's how I then find the normal angle to the surface where the object collided. I don't know if this is an obsolete way of doing things.
Based on that angle, I rotate the speed vector to do the bouncing.
The piece of code to do the maths is here:
static public function newSpeedVector(speedX:Number, speedY:Number, normalAngle:Number):Object {
    // DEGREES_OF_1RAD is assumed to be 180 / Math.PI (degrees per radian).
    var vector_angle:Number = Math.atan2(speedY, speedX) * (180 / Math.PI);
    var rotating_angle:Number = (2 * (normalAngle - vector_angle) + 180) % 360;
    var cos_ang:Number = Math.cos(rotating_angle / DEGREES_OF_1RAD);
    var sin_ang:Number = Math.sin(rotating_angle / DEGREES_OF_1RAD);
    var final_speedX:Number = speedX * cos_ang - speedY * sin_ang;
    var final_speedY:Number = speedX * sin_ang + speedY * cos_ang;
    return {x_speed: final_speedX, y_speed: final_speedY};
}
This is how the new speed vector is calculated...
My question is: has anyone faced this kind of problem, or does anyone have some idea on how to prevent this from happening?
Without seeing your code, this is the best I can provide.
Collision physics should have some velocity threshold below which an object is considered "stopped". That is, once the velocity gets small enough you should explicitly mark the object as stopped and exempt it from your collision code. This will help stabilize slow-moving objects. Trial and error is required for a good threshold; a sketch follows.
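A sketch of that check (the property names and the threshold value are illustrative):
const STOP_THRESHOLD:Number = 0.05; // tune by trial and error
if (Math.abs(ball.speedX) < STOP_THRESHOLD && Math.abs(ball.speedY) < STOP_THRESHOLD) {
    ball.speedX = 0;
    ball.speedY = 0;
    ball.stopped = true; // exempt from collision checks until something moves it again
}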
When a collision happens, it is important to at least attempt to correct any inter-penetration. This is most likely the reason why tangent collisions are behaving strangely. Attempt to move the colliding object away from what it hit in a reasonable manner.
Your method of determining the normal should work fine. Game physics is all about cheating (cheating in a way that still looks good). I take it rotation and friction isn't a part of what you're going for?
Try this. You have the contact normal. When you detect a collision, calculate the penetration depth and move your object along the normal so that it isn't penetrating anymore. You can use the two points you have already calculated to get one point of penetration; you'll need to calculate the other, though. For circles it's easy (center point + radius in the direction of the normal). There are more complex ways of going about this, but see if the simple method works well for you; a sketch for the circle case follows.
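For the circle case, the correction might look like this (distanceToSurface is a value you would have to compute from your keypoints; normalAngle is in degrees, as in the question's code):
var normalRad:Number = normalAngle * Math.PI / 180;
var penetration:Number = ball.radius - distanceToSurface; // overlap along the contact normal
if (penetration > 0) {
    ball.x += Math.cos(normalRad) * penetration; // push the ball back out along the normal
    ball.y += Math.sin(normalRad) * penetration;
}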
I have read and recommend this book: Game Physics Engine Development
I did something a little like that and got a similar problem.
What I did is that when the pixels from the bouncing object (a ball in your case) overlap with the pixels from the obstacle, you need to move the ball out of the obstacle so that they are not overlapping.
Say the ball is rolling on the obstacle: before you render the ball again you need to move it back by the amount of overlap. If you know at what angle the ball hit the obstacle, just move it out along that angle.
You also need to add some damping, otherwise the ball will never stop. Take a (very) small value; if the ball's velocity is below that value, set the velocity to 0.
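In code form (the damping factor and the cutoff are illustrative):
// Damp the velocity on each bounce, and snap tiny velocities to zero.
ball.speedX *= 0.98;
ball.speedY *= 0.98;
if (Math.abs(ball.speedX) < 0.01) ball.speedX = 0;
if (Math.abs(ball.speedY) < 0.01) ball.speedY = 0;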
You can try one more thing, which I think is very effective.
You can add functions such as getNextX() and getNextY() (which of course return the position coordinates after the next update) to your game objects, and check collisions based on objects' next positions instead of their current positions, as sketched below.
This way the objects will never overlap: you'll know when they are about to collide and can apply your post-collision physics gracefully!
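A sketch of the idea (speedX/speedY are assumed per-frame velocities, and checkCollision stands in for your pixel collision test):
// Inside your game object class:
public function getNextX():Number {
    return x + speedX; // where the object will be after the next update
}

public function getNextY():Number {
    return y + speedY;
}

// In the game loop, test the future positions and resolve before moving:
if (checkCollision(ball.getNextX(), ball.getNextY(), obstacle)) {
    // apply the bounce/stop before the overlap ever happens
}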

Trying to convert openGL to MFC coordinates and having Problems with "gluProject"

To clarify things: what I am trying to do is to get the openGL coordinates and manipulate them in my MFC code, not to get an openGL object. I'm using the MFC side to control the position of the objects in the openGL scene.
Hi, I'm trying to find the answer on the web and can't find a full solution that I can use and that will work...
I'm developing an MFC project with a static picture control as the canvas for an openGL class that draws the graphics for my game.
On mouse down, I need to retrieve a shape's coordinates from the openGL class.
I'm looking for a way to convert the openGL coordinates to MFC coordinates, but no matter what I try I get junk after using gluProject or gluUnProject (I've tried both directions but neither is working):
GLdouble modelMatrix[16];
glGetDoublev(GL_MODELVIEW_MATRIX, modelMatrix);
GLdouble projMatrix[16];
glGetDoublev(GL_PROJECTION_MATRIX, projMatrix);
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
POINT mouse;                          // stores the x and y coords of the current mouse position
GetCursorPos(&mouse);                 // gets the current cursor coordinates (screen coordinates)
ScreenToClient(hWnd, &mouse);         // converts them to client (window) coordinates
GLdouble winX, winY, winZ;            // holds our x, y and z window coordinates
winX = (GLdouble)mouse.x;             // the mouse x coordinate
winY = (GLdouble)viewport[3] - (GLdouble)mouse.y; // OpenGL's y axis is flipped relative to the window's
GLfloat depth;
glReadPixels((GLint)winX, (GLint)winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
winZ = depth;
GLdouble posX = s1->getPosX(), posY = s1->getPosY(), posZ = s1->getPosZ(); // hold the final values
gluUnProject(winX, winY, winZ, modelMatrix, projMatrix, viewport, &posX, &posY, &posZ); // window -> object coords
gluProject(posX, posY, posZ, modelMatrix, projMatrix, viewport, &winX, &winY, &winZ);   // object -> window coords
This is just part of the code I've tried. Of course I didn't use gluProject and gluUnProject together; I just have them both here to show them... and I know there is lots of junk in there, it's from some of my tries...
P.S. I've tried many, many more combinations and examples from the web and nothing seems to work in my case....
Can anyone show me the right way to do the transformation?
Thanks
It looks like you're trying to retrieve the object (or objects) that is/are at a particular point. If this is the case, gluProject and/or gluUnProject isn't really a very suitable tool for the task. OpenGL has a selection mode intended specifically for this kind of task.
In typical use, you specify a small square (e.g., 5x5 pixels) around the mouse click spot with gluPickMatrix, set selection mode with glRenderMode, set a buffer with glSelectBuffer, and then draw your scene. The drawing doesn't go to the screen, but fills the buffer you specified with records of what was drawn within the specified area; a sketch follows.
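A minimal picking sketch along those lines (drawScene, the mouse variable and the buffer size are placeholders; the pick region is 5x5 pixels):
GLuint selectBuf[64];
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);

glSelectBuffer(64, selectBuf);   // where hit records will be written
glRenderMode(GL_SELECT);         // switch to selection mode

glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
gluPickMatrix((GLdouble)mouse.x, (GLdouble)(viewport[3] - mouse.y), 5.0, 5.0, viewport);
// ...apply your normal projection here (gluPerspective / glOrtho)...

glInitNames();
glPushName(0);
drawScene();                     // tag each object with glLoadName(id) while drawing

glMatrixMode(GL_PROJECTION);
glPopMatrix();
GLint hits = glRenderMode(GL_RENDER); // back to normal rendering; returns the number of hits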

Actionscript 3 pixel perfect collision. How to? (learning purposes)

I know that there are people out there creating classes for this (e.g. http://coreyoneil.com/portfolio/index.php?project=5). But I want to learn how to do it myself, so I can create everything I need the way I need it.
I've read about Bitmap and BitmapData. I should be able to draw() the MovieClips onto a BitmapData so I could then cycle through the pixels looking for collisions. However, it's weird and confusing dealing with the offsets... and it seems like MyBitMap.rect always has x = 0 and y = 0, and I can't seem to find the original position of things...
I'm thinking of doing a hitTestObject first; if that is positive, I would investigate the intersection between the movieclips' rectangles for the pixel collisions.
But then there is also another problem (the rotation of movieclips)...
...I need some enlightenment here on how to do it.
Please, any help would be appreciated.
If you're using BitmapData objects with transparency you can use BitmapData.hitTest(firstPoint:Point, firstAlphaThreshold:uint, secondObject:Object, secondBitmapDataPoint:Point = null, secondAlphaThreshold:uint = 1):Boolean.
You'll have to change from global coords to the local BitmapData coords which will require a bit of math if it is rotated. That's easily achieved (look up affine transform for more info on wiki):
var coordTransform:Matrix = new Matrix();
coordTransform.rotate(rotationRadians);
coordTransform.translate(x, y);
var localPoint:Point = coordTransform.transformPoint(/* your point */);
A classic reference for pixel perfect collision detection in Flash is this article by Grant Skinner. It's AS2, but the logic is the same for AS3 (there are ports available if you google a bit).
If I recall correctly, this particular implementation worked as long as both tested objects had the same parent, but that can be fixed.
About BitmapData x and y values, I understand it could be confusing; however, the way it works makes sense to me. A BitmapData is just what the name implies: pixel data. It's not a display object and cannot be in the display list, so having x or y different than 0 doesn't really make sense, if you think about it. The easiest way to deal with this is probably storing the (x, y) offset of the source object (the display object you have drawn from) and translating it to the global coordinate space, so you can compare any objects no matter what their position in the display list is (using something like var globalPoint:Point = source.parent.localToGlobal(new Point(source.x, source.y))), as in the sketch below.
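For example, comparing two objects that may live anywhere in the display list (bmdA/bmdB are assumed BitmapData snapshots of the two objects; 255 is an illustrative alpha threshold):
var p1:Point = objA.parent.localToGlobal(new Point(objA.x, objA.y));
var p2:Point = objB.parent.localToGlobal(new Point(objB.x, objB.y));
var colliding:Boolean = bmdA.hitTest(p1, 255, bmdB, p2, 255);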
I've previously used Troy Gilbert's pixel perfect collision detection class (adapted from Andre Michelle, Grant Skinner and Boulevart) which works really well (handles rotation, different parents, etc.):
http://troygilbert.com/2007/06/pixel-perfect-collision-detection-in-actionscript3/
http://troygilbert.com/2009/08/pixel-perfect-collision-detection-revisited/
and from there he has also linked to this project (which I've not used, but looks really impressive):
http://www.coreyoneil.com/portfolio/index.php?project=5
I managed to do it after all, and I have already written my class for collision detection, collision angle and other extras.
The most confusing part is maybe aligning the bitmaps correctly for comparison. When we draw() a movieclip into a BitmapData, if we addChild() the corresponding Bitmap we can see that part of it is not visible: it appears to be drawn from its registration point to the right and down only, leaving out the top and left parts. The solution is passing a transform matrix as the second argument of draw() that aligns the bitmap and makes all of it be drawn.
This is an example of a function in my class to create a bitmap for comparing:
static public function createAlignedBitmap(mc:MovieClip, mc_rect:Rectangle):BitmapData {
    var mc_offset:Matrix;
    var mc_bmd:BitmapData;
    mc_offset = mc.transform.matrix;
    mc_offset.tx = mc.x - mc_rect.x;
    mc_offset.ty = mc.y - mc_rect.y;
    mc_bmd = new BitmapData(mc_rect.width, mc_rect.height, true, 0);
    mc_bmd.draw(mc, mc_offset);
    return mc_bmd;
}
In order to use it, if you are on the timeline, you do:
className.createAlignedBitmap(myMovieClip, myMovieClip.getBounds(this))
Notice the use of getBounds, which returns the rectangle in which the movie clip is embedded. This allows the calculation of the offset matrix.
This method is quite similar to the one shown here: http://www.mikechambers.com/blog/2009/06/24/using-bitmapdata-hittest-for-collision-detection/
By the way, if this is an interesting matter for you, check my other question, which I'll post in a few moments.