I am trying to create an animation from a TextureRegion. The regions are stored in an atlas file created by the LibGDX TexturePacker. The region I am interested in at the moment is this:
mygame/myregion
rotate: true
xy: 2, 544
size: 2574, 279
orig: 2574, 279
offset: 0, 0
index: -1
Note the rotate true property. I am following the example from the documentation here: https://github.com/libgdx/libgdx/wiki/2D-Animation
The region is composed of 4 frames (it is a single PNG image containing 4 frames, not individual images) which are split later using the split method:
int framesCount = 4;
myRegion = skin.getRegion("mygame/myregion");
int frameWidth = (int) Math.floor(myRegion.getRegionWidth() / (float) framesCount);
int frameHeight = myRegion.getRegionHeight();
TextureRegion[][] tmp = myRegion.split(frameWidth, frameHeight);
System.arraycopy(tmp[0], 0, animationFrames, 0, framesCount);
The split works well when the texture region is NOT rotated, but in this case it never worked properly. When the region is rotated, regionHeight > regionWidth.
I can tell whether the texture is rotated by comparing its height and width, or by checking the AtlasRegion rotate property.
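For example, a minimal sketch of that check (assuming the skin was created from an atlas and exposes it via getAtlas()):
// Sketch only: look up the AtlasRegion behind the skin region and read its rotate flag.
TextureAtlas.AtlasRegion atlasRegion = skin.getAtlas().findRegion("mygame/myregion");
boolean isRotated = atlasRegion.rotate;
// Equivalent heuristic for this particular strip: the packed height is larger than the packed width.
boolean looksRotated = myRegion.getRegionHeight() > myRegion.getRegionWidth();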
Is there a way I can rotate the region back before it is split, so that the split sees the frames laid out horizontally as in the original image?
Or maybe there is a better way to obtain the regions from the skin, so that when a region is rotated by the TexturePacker everything still works as expected?
I also found this related question: Libgdx Rotate a Texture Region, but it did not help.
As a workaround I disabled rotation when generating the pack, to avoid the problem.
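For reference, a minimal sketch of that workaround, assuming the pack is generated programmatically with the libGDX TexturePacker (the directory names below are placeholders):
// Sketch: disable rotation in the packer settings so regions keep their original orientation.
TexturePacker.Settings settings = new TexturePacker.Settings();
settings.rotation = false;
TexturePacker.process(settings, "images-src", "images-packed", "mygame");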
The past few days I've been trying to figure out a display bug I don't understand. I've been working on a simple 2D platformer with Box2D and orthogonal Tiled maps. So far so good: the physics work, and using the Box2D debug renderer I can verify proper player fixture and camera movement through the level.
As the next step I've tried to load textures to display sprites instead of debug shapes. This is where I stumble. I can load animations for my player body/fixture, but when I use the setCenter() method to center the texture on the fixture, it is always off center.
I've tried approaches via halving texture widths and heights, hoping to center the texture on the player fixture, but I get the exact same off-center rendering. I've played around with world/camera/screen unit coordinates, but the misalignment persists.
I'm creating the player in my Player class with the following code.
First I define the player in box2d:
//define player's physical behaviour
public void definePlayer() {
//definitions to later use in a body
BodyDef bdef = new BodyDef();
bdef.position.set(120 / Constants.PPM, 60 / Constants.PPM);
bdef.type = BodyDef.BodyType.DynamicBody;
b2body = world.createBody(bdef);
//Define needed components of the player's main fixture
FixtureDef fdef = new FixtureDef();
PolygonShape shape = new PolygonShape();
shape.setAsBox(8 / Constants.PPM, 16 / Constants.PPM); //size of the player hitbox
//set the player's category bit
fdef.filter.categoryBits = Constants.PLAYER_BIT;
//set which category bits the player should collide with. If not mentioned here, no collision occurs
fdef.filter.maskBits = Constants.GROUND_BIT |
Constants.GEM_BIT |
Constants.BRICK_BIT |
Constants.OBJECT_BIT |
Constants.ENEMY_BIT |
Constants.TREASURE_CHEST_BIT |
Constants.ENEMY_HEAD_BIT |
Constants.ITEM_BIT;
fdef.shape = shape;
b2body.createFixture(fdef).setUserData(this);
}
Then I set the texture region to be drawn in the Player class constructor:
//define in box2d
definePlayer();
//set initial values for the player's location, width and height, initial animation.
setBounds(0, 0, 64 / Constants.PPM, 64 / Constants.PPM);
setRegion(playerStand.getKeyFrame(stateTimer, true));
And finally, I update() my player:
public void update(float delta) {
//center position of the sprite on its body
// setPosition(b2body.getPosition().x - getWidth() / 2, b2body.getPosition().y - getHeight() / 2);
setCenter(b2body.getPosition().x, b2body.getPosition().y);
setRegion(getFrame(delta));
//set all the boolean flags during update cycles appropriately. DO NOT manipulate b2bodies
//while the simulation happens! therefore, only set flags there, and call the appropriate
//methods outside the simulation step during update
checkForPitfall();
checkIfAttacking();
}
And my result is this (facing right) and this (facing left).
Update:
I've been trying to just run
setCenter(b2body.getPosition().x, b2body.getPosition().y);
as suggested, and I got the following result:
facing right and facing left.
The sprite texture flip code is as follows:
if((b2body.getLinearVelocity().x < 0 || !runningRight) && !region.isFlipX()) {
region.flip(true, false);
runningRight = false;
} else if ((b2body.getLinearVelocity().x > 0 || runningRight) && region.isFlipX()) {
region.flip(true, false);
runningRight = true;
}
I'm testing whether the boolean flag for facing right is set or the x-axis velocity of my player's b2body is positive/negative, and whether my texture region is already flipped or not, and then I use libGDX's flip() accordingly. I should not be messing with fixture coordinates anywhere here, hence my confusion.
The coordinates of Box2D fixtures are offsets from the body position; the position isn't necessarily the center (although it could be, depending on your shape definition offsets). So in your case I think the position is actually the lower-left point of the Box2D polygon shape.
In that case you don't need to adjust for width and height, because sprites are also drawn from the bottom-left position. So all you need is:
setPosition(b2body.getPosition().x, b2body.getPosition().y);
I'm guessing you flip the Box2D body when the player looks left; the position of the shape is then at the bottom right, so the sprite offset of width/2 and height/2 is measured from the bottom right instead. So specifically when you are looking left you need an offset of
setPosition(b2body.getPosition().x - getWidth(), b2body.getPosition().y);
I think looking right will be fixed by this, but I don't know for sure how you handle looking left in terms of what you do to the body; something must be done, because the offset changes entirely as shown in your capture. If you aren't doing any flipping of the body, you could add how you handle the facing direction to the question.
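To make that concrete, here is a small sketch, assuming the runningRight flag from the flip code in the question is available in update():
// Sketch: pick the offset depending on which way the sprite currently faces.
if (runningRight) {
    setPosition(b2body.getPosition().x, b2body.getPosition().y);
} else {
    setPosition(b2body.getPosition().x - getWidth(), b2body.getPosition().y);
}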
EDIT
It seems the answer was that the sprite wasn't centered in the sprite sheet, and this additional space around the sprite caused the visual impression of it being in the wrong place (see comments).
Let's say your Starling display-list is as follows:
Stage
|___MainApp
|______Canvas (filter's target)
Then, you decide your MainApp should be rotated 90 degrees and offset a bit:
mainApp.rotation = Math.PI * 0.5;
mainApp.x = stage.stageWidth;
But all of a sudden, the filter keeps applying itself to the target (canvas) at the angle it was originally (as if the MainApp were still at 0 degrees).
(notice in the GIF how the Blur's strong horizontal value continues to only apply horizontally although the parent object turned 90 degrees).
What would need to be changed to apply the filter to the target object before it gets its parent's transform? That way (I'm assuming) the filter's result would then be transformed by the parent objects.
Any guess as to how this could be done?
https://github.com/bigp/StarlingShaderIssue
(PS: the filter I'm actually using is custom-made, but this BlurFilter example shows the same issue I'm having with the custom one. If there's any patching-up to do in the shader code, at least it wouldn't necessarily have to be done on the built-in BlurFilter specifically).
I solved this myself with numerous trial and error attempts over the course of several hours.
Since I only needed the shader to run in either at 0 or 90 degrees (not actually tweened like the gif demo shown in the question), I created a shader with two specialized sets of AGAL instructions.
Without going into too much detail, the rotated version basically requires a few extra instructions to flip the x and y fields in the vertex and fragment shader (either by moving them with mov, or by calculating the mul or div result directly into the x or y field).
For example, compare the 0 deg vertex shader...
_vertexShader = [
"m44 op, va0, vc0", // 4x4 matrix transform to output space
"mov posOriginal, va1", // pass texture positions to fragment program
"mul posScaled, va1, viewportScale", // pass displacement positions (scaled)
].join("\n");
... with the 90 deg vertex shader:
_vertexShader = [
"m44 op, va0, vc0", // 4x4 matrix transform to output space
"mov posOriginal, va1", // pass texture positions to fragment program
//Calculate the rotated vertex "displacement" UVs
"mov temp1, va1",
"mov temp2, va1",
"mul temp2.y, temp1.x, viewportScale.y", //Flip X to Y, and scale with viewport Y
"mul temp2.x, temp1.y, viewportScale.x", //Flip Y to X, and scale with viewport X
"sub temp2.y, 1.0, temp2.y", //Invert the UV for the Y axis.
"mov posScaled, temp2",
].join("\n");
You can ignore the special aliases in the AGAL example; they're essentially posOriginal = v0, posScaled = v1 variants and viewportScale = vc4 constants. I then do a string-replace to change them back to their respective registers & fields.
Just a human-readable trick I use to avoid going insane. \☻/
The part that I struggled with the most was calculating the correct scale to adjust the UVs (with proper detection of Stage / Viewport resizes and render-texture size shifts).
Eventually, this is what I came up with in the AS3 code:
var pt:Texture = _passTexture,
dt:RenderTexture = _displacement.texture,
notReady:Boolean = pt == null,
star:Starling = Starling.current;
var finalScaleX:Number, viewRatioX:Number = star.viewPort.width / star.stage.stageWidth;
var finalScaleY:Number, viewRatioY:Number = star.viewPort.height / star.stage.stageHeight;
if (notReady) {
finalScaleX = finalScaleY = 1.0;
} else if (isRotated) {
//NOTE: Notice how the native width is divided with height, instead of same side. Weird, but it works!
finalScaleY = pt.nativeWidth / dt.nativeHeight / _imageRatio / paramScaleX / viewRatioX; //Eureka!
finalScaleX = pt.nativeHeight / dt.nativeWidth / _imageRatio / paramScaleY / viewRatioY; //Eureka x2!
} else {
finalScaleX = pt.nativeWidth / dt.nativeWidth / _imageRatio / viewRatioX / paramScaleX;
finalScaleY = pt.nativeHeight / dt.nativeHeight / _imageRatio / viewRatioY / paramScaleY;
}
Hopefully these extracted pieces of code can be helpful to others with similar shader issues.
Good luck!
If both use hardware acceleration (the GPU) to execute code, why is WebGL so much faster than Canvas?
I mean, I want to know why at a low level: what the chain is from the code to the processor.
What happens? Do Canvas/WebGL communicate directly with the drivers and then with the video card?
Canvas is slower because it's generic and therefore hard to optimize to the same level that you can optimize WebGL. Let's take a simple example: drawing a solid circle with arc.
Canvas actually runs on top of the GPU as well, using the same APIs as WebGL. So, what does canvas have to do when you draw a circle? The minimum code to draw a circle in JavaScript using canvas 2d is
ctx.beginPath();
ctx.arc(x, y, radius, startAngle, endAngle);
ctx.fill();
You can imagine internally the simplest implementation is
beginPath creates a buffer (gl.bufferData)
arc generates the points for triangles that make a circle and uploads with gl.bufferData.
fill calls gl.drawArrays or gl.drawElements
But wait a minute ... knowing what we know about how GL works, canvas can't generate the points at step 2, because if we call stroke instead of fill we need a different set of points: a solid circle (fill) needs different triangles from an outline of a circle (stroke). So what really happens is something more like
beginPath creates or resets some internal buffer
arc generates the points that make a circle into the internal buffer
fill takes the points in that internal buffer, generates the correct set of triangles for the points in that internal buffer into a GL buffer, uploads them with gl.bufferData, calls gl.drawArrays or gl.drawElements
What happens if we want to draw 2 circles? The same steps are likely repeated.
Let's compare that to what we would do in WebGL. Of course in WebGL we'd have to write our own shaders (Canvas has its shaders as well). We'd also have to create a buffer and fill it with the triangles for a circle (note we already saved time, as we skipped the intermediate buffer of points). We can then call gl.drawArrays or gl.drawElements to draw our circle. And if we want to draw a second circle? We just update a uniform and call gl.drawArrays again, skipping all the other steps.
const m4 = twgl.m4;
const gl = document.querySelector('canvas').getContext('webgl');
const vs = `
attribute vec4 position;
uniform mat4 u_matrix;
void main() {
gl_Position = u_matrix * position;
}
`;
const fs = `
precision mediump float;
uniform vec4 u_color;
void main() {
gl_FragColor = u_color;
}
`;
const program = twgl.createProgram(gl, [vs, fs]);
const positionLoc = gl.getAttribLocation(program, 'position');
const colorLoc = gl.getUniformLocation(program, 'u_color');
const matrixLoc = gl.getUniformLocation(program, 'u_matrix');
const positions = [];
const radius = 50;
const numEdgePoints = 64;
for (let i = 0; i < numEdgePoints; ++i) {
const angle0 = (i ) * Math.PI * 2 / numEdgePoints;
const angle1 = (i + 1) * Math.PI * 2 / numEdgePoints;
// make a triangle
positions.push(
0, 0,
Math.cos(angle0) * radius,
Math.sin(angle0) * radius,
Math.cos(angle1) * radius,
Math.sin(angle1) * radius,
);
}
const buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(positions), gl.STATIC_DRAW);
gl.enableVertexAttribArray(positionLoc);
gl.vertexAttribPointer(positionLoc, 2, gl.FLOAT, false, 0, 0);
gl.useProgram(program);
const projection = m4.ortho(0, gl.canvas.width, 0, gl.canvas.height, -1, 1);
function drawCircle(x, y, color) {
const mat = m4.translate(projection, [x, y, 0]);
gl.uniform4fv(colorLoc, color);
gl.uniformMatrix4fv(matrixLoc, false, mat);
gl.drawArrays(gl.TRIANGLES, 0, numEdgePoints * 3);
}
drawCircle( 50, 75, [1, 0, 0, 1]);
drawCircle(150, 75, [0, 1, 0, 1]);
drawCircle(250, 75, [0, 0, 1, 1]);
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas></canvas>
Some devs might look at that and think Canvas caches the buffer so it can just reuse the points on the 2nd draw call. It's possible that's true but I kind of doubt it. Why? Because of the genericness of the canvas API. fill, the function that does all the real work, doesn't know what's in the internal buffer of points. You can call arc, then moveTo, lineTo, then arc again, then call fill. All of those points will be in the internal buffer of points when we get to fill.
const ctx = document.querySelector('canvas').getContext('2d');
ctx.beginPath();
ctx.moveTo(50, 30);
ctx.lineTo(100, 150);
ctx.arc(150, 75, 30, 0, Math.PI * 2);
ctx.fill();
<canvas></canvas>
In other words, fill always needs to look at all the points. Another thing: I suspect arc tries to optimize for size. If you call arc with a radius of 2 it probably generates fewer points than if you call it with a radius of 2000. It's possible canvas caches the points, but given that the hit rate would likely be small, it seems unlikely.
In any case, the point is WebGL lets you take control at a lower level, allowing you to skip steps that canvas can't skip. It also lets you reuse data that canvas can't reuse.
In fact, if we know we want to draw 10000 animated circles we have even more options in WebGL. We could generate the points for 10000 circles, which is a valid option. We could also use instancing. Both of those techniques would be vastly faster than canvas, since in canvas we'd have to call arc 10000 times, and one way or another it would have to generate points for 10000 circles every single frame instead of just once at the beginning, and it would have to call gl.drawXXX 10000 times instead of just once.
Of course the converse is that canvas is easy. Drawing the circle took 3 lines of code. In WebGL, because you need to set up and write shaders, it probably takes at least 60 lines of code. In fact the example above is about 60 lines, not including the code to compile and link shaders (~10 lines). On top of that, canvas supports transforms, patterns, gradients, masks, etc., all options we'd have to add with lots more lines of code in WebGL. So canvas basically trades speed for ease of use compared to WebGL.
Canvas does not execute a pipeline of processing layers that turn sets of vertices and indices into triangles which are then given textures and lighting, all in hardware, as OpenGL/WebGL does. This is the root cause of the speed difference. The Canvas counterparts to such operations are all done on the CPU, with only the final rendering sent to the graphics hardware. The speed difference is particularly evident when a massive number of vertices are synthesized/animated on Canvas versus WebGL.
Alas, we are on the cusp of hearing the public announcement of the modern replacement for OpenGL: Vulkan, whose remit includes exposing general-purpose compute in a more pedestrian way than OpenCL/CUDA, as well as baking in use of multi-core processors, which might just shift Canvas-like processing onto hardware.
I'm looking for a solution to implement alpha masking with the stencil buffer in libGDX with OpenGL ES 2.0.
I have managed to implement simple alpha masking with the stencil buffer and shaders, where a fragment is discarded if its alpha channel is greater than some specified value. That works fine.
The problem is when I want to use a gradient image mask, or a feathered PNG mask: I don't get what I wanted (I get a "filled" rectangle mask with no alpha channel), when instead I want a smooth fade-out mask.
I know the problem is that the stencil buffer holds only 0s and 1s, but I want to write other values to the stencil, representing the actual alpha value of the fragment that passed in the fragment shader, and to use that value from the stencil to somehow do some blending.
I hope I've explained what I want to get, if it's possible at all.
I've recently started playing with OpenGL ES, so I still have some misunderstandings.
My question is: how do I set up the stencil buffer to store values other than 0s and 1s, and how do I use those values later for alpha masking?
Thanks in advance.
This is currently my stencil setup:
Gdx.gl.glClearColor(1, 1, 1, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_STENCIL_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
// setup drawing to stencil buffer
Gdx.gl20.glEnable(GL20.GL_STENCIL_TEST);
Gdx.gl20.glStencilFunc(GL20.GL_ALWAYS, 0x1, 0xffffffff);
Gdx.gl20.glStencilOp(GL20.GL_REPLACE, GL20.GL_REPLACE, GL20.GL_REPLACE);
Gdx.gl20.glColorMask(false, false, false, false);
Gdx.gl20.glDepthMask(false);
spriteBatch.setShader(shaderStencilMask);
spriteBatch.begin();
// push to the batch
spriteBatch.draw(Assets.instance.actor1, Gdx.graphics.getWidth() / 2, Gdx.graphics.getHeight() / 2, Assets.instance.actor1.getRegionWidth(), Assets.instance.actor1.getRegionHeight());
spriteBatch.end();
// fix stencil buffer, enable color buffer
Gdx.gl20.glColorMask(true, true, true, true);
Gdx.gl20.glDepthMask(true);
Gdx.gl20.glStencilOp(GL20.GL_KEEP, GL20.GL_KEEP, GL20.GL_KEEP);
// draw where pattern has NOT been drawn
Gdx.gl20.glStencilFunc(GL20.GL_EQUAL, 0x1, 0xff);
decalBatch.add(decal);
decalBatch.flush();
Gdx.gl20.glDisable(GL20.GL_STENCIL_TEST);
decalBatch.add(decal2);
decalBatch.flush();
The only ways I can think of doing this are with a FrameBuffer.
Option 1
Draw your scene's background (the stuff that will not be masked) to a FrameBuffer. Then draw your entire scene without masks to the screen. Then draw your mask decals to the screen using the FrameBuffer's color attachment. Downside to this method is that in OpenGL ES 2.0 on Android, a FrameBuffer can have RGBA4444, not RGBA8888, so there will be visible seams along the edges of the masks where the color bit depth changes.
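For illustration, a minimal sketch of that first pass in libGDX (spriteBatch and the background drawing are placeholders for your own rendering):
// Render the background once into an off-screen FrameBuffer.
FrameBuffer fbo = new FrameBuffer(Pixmap.Format.RGBA4444, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), false);
fbo.begin();
Gdx.gl.glClearColor(0, 0, 0, 0);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
spriteBatch.begin();
// ... draw the background (the content that should show through the masks) ...
spriteBatch.end();
fbo.end();
Texture fboTexture = fbo.getColorBufferTexture(); // sampled later when drawing the mask decals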
Option 2
Draw your mask decals as black & white, opaque, to your FrameBuffer. Then draw your background to the screen. When you draw anything that can be masked, draw it with multi-texturing, multiplying by the FrameBuffer's color texture. The potential downside is that absolutely anything that can be masked must be drawn multi-textured with a custom shader. But if you're just using decals, this isn't really any more complicated than Option 1.
The following is untested...might require a bit of debugging.
In both options, I would subclass CameraGroupStrategy to be used with the DecalBatch when drawing the mask decals, and override beforeGroups to also set the second texture.
public class MaskingGroupStrategy extends CameraGroupStrategy {
    private Texture fboTexture;

    // pass the camera through to CameraGroupStrategy
    public MaskingGroupStrategy(Camera camera) {
        super(camera);
    }

    //call this before using the DecalBatch for drawing mask decals
    public void setFBOTexture(Texture fboTexture){
        this.fboTexture = fboTexture;
    }

    @Override
    public void beforeGroups () {
        super.beforeGroups();
        fboTexture.bind(1);
        shader.setUniformi("u_fboTexture", 1);
        shader.setUniformf("u_screenDimensions", Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
    }
}
And in your shader, you can get the FBO texture color like this:
vec4 fboColor = texture2D(u_fboTexture, gl_FragCoord.xy/u_screenDimensions.xy);
Then for option 1:
gl_FragColor = vec4(fboColor.rgb, 1.0-texture2D(u_texture, v_texCoords).a);
or for option 2:
gl_FragColor = v_color * texture2D(u_texture, v_texCoords);
gl_FragColor.a *= fboColor.r;
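As a usage sketch (camera, fbo and maskDecal are placeholders from your own setup), wiring the strategy into a DecalBatch could look like:
// Hypothetical wiring; assumes the FrameBuffer pass has already been rendered.
MaskingGroupStrategy strategy = new MaskingGroupStrategy(camera);
DecalBatch maskBatch = new DecalBatch(strategy);
strategy.setFBOTexture(fbo.getColorBufferTexture()); // second texture sampled in the mask shader
maskBatch.add(maskDecal);
maskBatch.flush();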
I am trying to resize a bitmap for a project we are working on at work in AS3. Basically we have a bunch of sprites that get drawn on a BitmapData and then stored in a vector. The data in the vector eventually gets stored in a Bitmap object. Now I want to make the BitmapData sprites smaller, but don't want to have to update 100 matrices to do it. Is there another way?
I had some success by scaling the bitmap that gets displayed, but the image is a bit jagged looking and the models don't turn around, they just moonwalk.
I have also tried matrix.a = 0.4 and matrix.d = 0.4, but that did nothing.
When I did bitmap.scaleX = 0.7 (and the same for scaleY) it made them smaller, but now they are in the air because the x and y aren't right, and the code for them to go in reverse was just doing scaleX *= -1, which now doesn't seem to work either. I did figure out how to get them out of the air, but they are, as said before, jagged and moonwalking. Please help, as I am attempting to fix code that was written before I got here.
Basically, here is some code (I got approval from the CEO):
we have this:
var b:BitmapData = new BitmapData(CustomerRenderer.BLIT_WIDTH,
CustomerRenderer.BLIT_HEIGHT, true, 0x00000000);
for(var i:int=0; i<WRAPPER.numChildren; i++)
{
b.draw(Sprite(WRAPPER.getChildAt(i)),
WRAPPER.getChildAt(i).transform.matrix, null, null, b.rect, true);
}
_spriteSheet[_currentFrame] = b;
Then we use that data in
BAKED_BITMAP.bitmapData = _spriteSheet[_currentFrame];
to display it where BAKED_BITMAP is a Bitmap
then to flip, all the person was doing was:
BAKED_BITMAP.scaleX *= -1;
BAKED_BITMAP.x = (BAKED_BITMAP.scaleX >= 0) ? 0 : BLIT_WIDTH;
thanks
You can try setting the smoothing property of the Bitmap object to see if it gives you the desired effect.