LibGDX - 60FPS - drop frame rate and stuttering - libgdx

I have a big problem with frame rate. When I draw only one sprite and try to move it, the movement is not smooth whenever FPS is below ~400 without VSync. With VSync it works almost fine, but once per second the frame rate jumps from 60 to ~2000 for a single frame, and the movement stutters when that happens. Even without those jumps the movement still stutters, just less than without VSync. When FPS > 500 everything is fine. This isn't due to bad hardware. Can I do something about it?
@Override
public void create () {
    batch = new SpriteBatch();
    img = new Texture("badlogic.jpg");
    img.setFilter(TextureFilter.Nearest, TextureFilter.Nearest);
    font = new BitmapFont();
    player = new Sprite(img);
    x = 100;
    y = 100;
    player.setPosition(x, y);
    //camera = new OrthographicCamera(640, 480);
    //multiGame = new MultiGame();
    //multiGame.create();
    //camera.position.set(camera.viewportWidth / 2f, camera.viewportHeight / 2f, 0);
    //camera.update();
}
@Override
public void render () {
    Gdx.gl.glClearColor(1, 0, 0, 1);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
    update();
    batch.begin();
    font.draw(batch, "" + (int)(1 / time), 600, 450);
    player.draw(batch);
    batch.end();
}
private void update() {
    time = Gdx.graphics.getDeltaTime();
    if (Gdx.input.isKeyPressed(Keys.D)) {
        x += 100 * time;
    }
    if (Gdx.input.isKeyPressed(Keys.A)) { // move left (the original checked Keys.D twice)
        x -= 60 * time;
    }
    player.setPosition(x, y);
}
}
This is just the generated template code plus a few lines of key handling and sprite drawing. Every second the FPS jumps up to ~2000 for one frame, and the sprite stutters.
Here is the desktop launcher:
public static void main (String[] arg) {
    LwjglApplicationConfiguration config = new LwjglApplicationConfiguration();
    config.vSyncEnabled = true;
    config.foregroundFPS = 60;
    config.backgroundFPS = 60;
    new LwjglApplication(new IndustrialServer(), config);
}
EDIT:
I discovered that in windowed mode it stutters many times per second, but in fullscreen mode only once per second. (This is connected with the FPS jumping, because in windowed mode it stutters even without an FPS jump.)
I also recorded the FPS while the app ran for 30 seconds, keeping only the outliers (FPS < 40 or > 70):
742, 104, 19, 1749, 39, 132, 76, 76, 77, 26, 878, 84, 39, 118, 89, 91, 112, 105, 39, 133, 37, 149, 37, 159, 33, 331, 37, 148, 35, 195, 36, 185, 2, 3, 2848, 74, 74
I also discovered that every action in the system, such as a process switch or a change in processor frequency (it oscillates between 2000 MHz and 3200 MHz), generates an FPS jump (anywhere between 3 and 12000!).
And when I set the max FPS to 59 or lower, it doesn't stutter, but you can see the tearing artifacts typical of disabled VSync.
No video card settings help, including "adaptive VSync" and the like. The problem remains even when I delete everything from my code and render only a black screen.
I really don't want to use the full power of the CPU and GPU the whole time someone is playing the game, especially if that person is playing on a laptop.
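For reference, the usual remedy for delta-time jitter like this is to advance game logic on a fixed timestep and only let rendering run at the variable frame rate. A minimal plain-Java sketch of that accumulator pattern (independent of libGDX; the numbers and names here are illustrative):

```java
// Fixed-timestep loop: game logic always advances in constant 1/60 s steps,
// so movement speed is immune to frame-time jitter like the FPS spikes above.
public class FixedStepDemo {
    static final double STEP = 1.0 / 60.0; // logic rate: 60 Hz

    // Consume `frameTime` seconds of wall-clock time in whole STEP increments,
    // moving x at `speed` px/s. Returns {new x, leftover accumulator}.
    static double[] advance(double x, double speed, double frameTime, double accumulator) {
        accumulator += frameTime;
        while (accumulator >= STEP) {
            x += speed * STEP; // logic always steps by exactly the same amount
            accumulator -= STEP;
        }
        return new double[]{x, accumulator};
    }

    public static void main(String[] args) {
        double x = 100.0, acc = 0.0, speed = 100.0;
        // Three frames with jittery deltas, like those reported in the question.
        for (double dt : new double[]{0.016, 0.002, 0.030}) {
            double[] s = advance(x, speed, dt, acc);
            x = s[0];
            acc = s[1];
        }
        // 0.048 s elapsed; logic advanced by exactly two whole 1/60 s steps.
        System.out.printf("x=%.4f%n", x);
    }
}
```

In a real libGDX render() you would call something like this with Gdx.graphics.getDeltaTime() and, optionally, interpolate the rendered position by accumulator / STEP to smooth the remaining sub-step offset.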

Related

lwjgl not drawing vertex buffer

I'm trying to get a super simple LWJGL app going that draws a simple triangle, but I can't find the right order of calls to make anything meaningful appear. I'm using a simple vertex buffer and no shaders.
For some reason each frame shows a white triangle on the lower right side of the frame, regardless of what is in the actual vertex buffer.
public void run() {
    GLFWErrorCallback.createThrow().set();
    glfwInit();
    glfwDefaultWindowHints();
    glfwWindowHint(GLFW_RESIZABLE, GLFW_TRUE);
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
    long h_window = glfwCreateWindow(640, 480, "test", 0, 0);
    glfwShowWindow(h_window);
    glfwMakeContextCurrent(h_window);
    GL.createCapabilities();
    glEnableClientState(GL_VERTEX_ARRAY);
    int b_vertex = glGenBuffers();
    float[] vertex = {
        0, 0, 0,
        0, 0, 0,
        0, 0, 0
    };
    glBindBuffer(GL_ARRAY_BUFFER, b_vertex);
    glBufferData(GL_ARRAY_BUFFER, vertex, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    while (!glfwWindowShouldClose(h_window)) {
        glClearColor(0, 0, 0, 0);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glEnableVertexAttribArray(0);
        glBindBuffer(GL_ARRAY_BUFFER, b_vertex);
        glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0L);
        glDrawArrays(GL_TRIANGLES, 0, 3);
        glDisableVertexAttribArray(0);
        glfwSwapBuffers(h_window);
        glfwWaitEventsTimeout(1f / 30f);
    }
}
I'm definitely missing something, but have no idea what it is. Here is what this shows:
Screenshot of Running app
Found the problem:
glVertexPointer(3, GL_FLOAT, 0, 0L);
is for fixed-function (shaderless) buffer descriptions, while
glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0L);
is for buffer descriptions that bind buffers to shader inputs. Since this code uses no shaders, glVertexPointer is the call that matches glEnableClientState(GL_VERTEX_ARRAY).

LibGDX FrameBuffer

I'm trying to make a game where you build a spaceship from parts, and fly it around and such.
I would like to create the ship from a series of components (from a TextureAtlas, for instance). I'd like to draw the ship from its component textures, and then save the drawn ship as one large Texture (so I don't have to draw 50 component textures, just one ship texture). What would be the best way to go about this?
I've been trying to do so using a FrameBuffer. I've been able to draw the components to a Texture, and draw the texture to the screen, but no matter what I try the Texture ends up with a solid background the size of the frame buffer. It's as if the clear command can't clear with transparency. Here's the drawing code I have at the moment. The ship is just a Sprite to which I save the FrameBuffer texture.
public void render() {
    if (ship == null) {
        int screenwidth = Gdx.graphics.getWidth();
        int screenheight = Gdx.graphics.getHeight();
        SpriteBatch fb = new SpriteBatch();
        FrameBuffer fbo = new FrameBuffer(Format.RGB888, screenwidth, screenheight, false);
        fbo.begin();
        fb.enableBlending();
        Gdx.gl.glBlendFuncSeparate(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA, GL20.GL_ONE, GL20.GL_ONE_MINUS_SRC_ALPHA);
        Gdx.gl.glClearColor(1, 0, 1, 0);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        fb.begin();
        atlas.createSprite("0").draw(fb);
        fb.end();
        fbo.end();
        ship = new Sprite(fbo.getColorBufferTexture());
        ship.setPosition(0, -screenheight);
    }
    Gdx.gl.glClearColor(1, 0, 0, 1);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
    batch.begin();
    batch.enableBlending();
    batch.setBlendFunction(GL20.GL_ONE, GL20.GL_ZERO);
    ship.draw(batch);
    batch.end();
}
The problem here lies in this line:
FrameBuffer fbo = new FrameBuffer(Format.RGB888, screenwidth, screenheight, false);
specifically with Format.RGB888. This line says that your FrameBuffer should be Red (8 bits) followed by Green (8 bits) followed by Blue (8 bits). Notice, however, that this format has no bits for Alpha (transparency). To get transparency out of your frame buffer, you probably want to use Format.RGBA8888 instead, which includes an additional 8 bits for Alpha.
Hope this helps.
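To see why the format choice matters, here is a plain-Java illustration (not libGDX API) of the fact that an RGB888 pixel simply has nowhere to store alpha, so any transparency you clear or draw with is lost on readback:

```java
// Bit-packing demo: RGBA8888 keeps 8 bits of alpha; RGB888 has no alpha bits.
public class PixelFormats {
    // RGBA8888: 8 bits per channel, alpha preserved in the low byte.
    static int packRGBA8888(int r, int g, int b, int a) {
        return (r << 24) | (g << 16) | (b << 8) | a;
    }
    static int alphaOfRGBA8888(int pixel) {
        return pixel & 0xFF;
    }
    // RGB888: only three channels fit in the format.
    static int packRGB888(int r, int g, int b) {
        return (r << 16) | (g << 8) | b;
    }
    // With no storage for alpha, readback can only ever report fully opaque.
    static int alphaOfRGB888(int pixel) {
        return 255;
    }

    public static void main(String[] args) {
        int clearColor = packRGBA8888(255, 0, 255, 0);      // transparent magenta
        System.out.println(alphaOfRGBA8888(clearColor));    // 0: transparency kept
        System.out.println(alphaOfRGB888(packRGB888(255, 0, 255))); // 255: lost
    }
}
```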

Texture from Texture Atlas Find Region or Create Sprite

I am trying to get the Texture from a Sprite or AtlasRegion created by a TextureAtlas:
atlas = new TextureAtlas(Gdx.files.internal("Test.pack"));
region = atlas.findRegion("green");
or
Sprite s = atlas.createSprite("red");
texture = s.getTexture();
I have given the name of the image in the findRegion and createSprite methods, but it always picks the other image object. I want to get the texture of the image so that part of that texture can be rendered.
Here is the packer image:
http://i61.tinypic.com/neupo2.png
Even if I try to get the green or red region, it always returns blue. Any help would be appreciated.
When you do textureAtlas.findRegion("objectTexture").getTexture() (from James Skemp's answer), you're getting the whole texture backing the atlas, not just the "objectTexture" region.
Don't bother creating a Texture to create the Sprite; you can use the TextureRegion directly. Something like this:
atlas = new TextureAtlas(Gdx.files.internal("Test.pack"));
regionGreen = atlas.findRegion("green");
Sprite s = new Sprite(regionGreen);
My use case is slightly different, but I was running into the same issue.
I have an object, which extends Actor, that I want to draw using a region in an atlas.
public class MyObject extends Actor {
    private Texture texture;

    public MyObject() {
        TextureAtlas textureAtlas = new TextureAtlas(Gdx.files.internal("xxx.atlas"));
        texture = textureAtlas.findRegion("objectTexture").getTexture();
        Gdx.app.log(TAG, "Texture width: " + texture.getWidth());
        Gdx.app.log(TAG, "Texture height: " + texture.getHeight());
        setBounds(getX(), getY(), Constants.STANDARD_TILE_WIDTH, Constants.STANDARD_TILE_HEIGHT);
        // ...
    }

    @Override
    public void draw(Batch batch, float parentAlpha) {
        batch.draw(texture,
                worldPosition.x * Constants.STANDARD_TILE_WIDTH, worldPosition.y * Constants.STANDARD_TILE_WIDTH,
                texture.getWidth(), texture.getHeight());
    }
}
Instead of just the region I was expecting, I received the entire atlas, so my logged texture width and height were 1024 x 128.
Unfortunate, and still not sure why getTexture() returns so much, but switching over to batch.draw(TextureRegion, ...) at least got me in a better place.
public class MyObject extends Actor {
    private TextureRegion texture;

    public MyObject() {
        TextureAtlas textureAtlas = new TextureAtlas(Gdx.files.internal("xxx.atlas"));
        texture = textureAtlas.findRegion("objectTexture");
        setBounds(getX(), getY(), Constants.STANDARD_TILE_WIDTH, Constants.STANDARD_TILE_HEIGHT);
        // ...
    }

    @Override
    public void draw(Batch batch, float parentAlpha) {
        batch.draw(texture, getX(), getY());
    }
}
Based upon what I saw with my sprites, the questioner was seeing a blue square for the same reason I was: the entire image is loaded by getTexture(), and since drawing starts at the bottom left, you always see the blue square.
Using the Sprite(TextureRegion) constructor may have also resolved their issue.
Seems to me your problem is the Test.pack file, not the texture.
It works with this one I packed with GDX Texture Packer:
Test.png
format: RGBA8888
filter: Nearest,Nearest
repeat: none
blue
  rotate: false
  xy: 1, 1
  size: 51, 51
  orig: 51, 51
  offset: 0, 0
  index: -1
green
  rotate: false
  xy: 54, 1
  size: 51, 51
  orig: 51, 51
  offset: 0, 0
  index: -1
red
  rotate: false
  xy: 107, 1
  size: 51, 51
  orig: 51, 51
  offset: 0, 0
  index: -1
yellow
  rotate: false
  xy: 160, 1
  size: 51, 51
  orig: 51, 51
  offset: 0, 0
  index: -1
You can also use the index to get a region; get the index from the .pack or .atlas file:
atlas = new TextureAtlas(Gdx.files.internal("Test.pack"));
region = atlas.findRegion("green", index);
or
Sprite s = atlas.createSprite("red", index);
texture = s.getTexture();
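For intuition about what findRegion gives you versus getTexture(): a region is essentially the page texture plus a pixel rectangle, from which normalized texture coordinates are derived. A plain-Java sketch of that idea (the 256x64 page size is an assumption for illustration; the region coordinates come from the pack file above):

```java
// Minimal stand-in for a TextureRegion: a pixel rect inside an atlas page
// plus the normalized UV coordinates computed from it.
public class RegionDemo {
    final int x, y, width, height;   // pixel rectangle inside the page
    final float u, v, u2, v2;        // normalized texture coordinates

    RegionDemo(int pageW, int pageH, int x, int y, int w, int h) {
        this.x = x; this.y = y; this.width = w; this.height = h;
        this.u  = x / (float) pageW;
        this.v  = y / (float) pageH;
        this.u2 = (x + w) / (float) pageW;
        this.v2 = (y + h) / (float) pageH;
    }

    public static void main(String[] args) {
        // "green" from the pack file above: xy: 54, 1  size: 51, 51
        RegionDemo green = new RegionDemo(256, 64, 54, 1, 51, 51);
        System.out.printf("u=%.4f v=%.4f u2=%.4f v2=%.4f%n",
                green.u, green.v, green.u2, green.v2);
        // Drawing with these UVs samples only the green cell. Drawing the whole
        // texture from getTexture() starts at (0,0), i.e. the first (blue) cell.
    }
}
```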

rotating sprite on touch libgdx

I am trying to rotate my sprite when I push to go left. Right now my character idles and runs to the right, but I'm having trouble getting it to face left.
Here is my chunk of code. If anyone could help me, that would be awesome.
public void draw(SpriteBatch spriteBatch) {
    stateTime += Gdx.graphics.getDeltaTime();
    //continue to keep looping
    if (Gdx.input.isTouched()) {
        int xTouch = Gdx.input.getX();
        int yTouch = Gdx.input.getY();
        //System.out.println(Gdx.graphics.getWidth()/4);
        //go left
        if (xTouch < (width/4) && yTouch > height - (height/6)) {
            currentRunFrame = runAnimation.getKeyFrame(stateTime, true);
            spriteBatch.draw(currentRunFrame, runSprite.getX() - 32, runSprite.getY() + 150, 128, 128);
            RealGame.leftButton = new Texture(Gdx.files.internal("leftButtonOver.png"));
            moveLeft();
        }
        if (xTouch > (width/4) && xTouch < (width/4)*2 && yTouch > height - (height/6)) {
            currentRunFrame = runAnimation.getKeyFrame(stateTime, true);
            spriteBatch.draw(currentRunFrame, runSprite.getX() - 32, runSprite.getY() + 150, 128, 128);
            RealGame.rightButton = new Texture(Gdx.files.internal("rightButtonOver.png"));
            moveRight();
        }
        if (xTouch > (width/4) * 2 && xTouch < (width/4) * 3 && yTouch > height - (height/6)) {
            RealGame.shootButton = new Texture(Gdx.files.internal("shootButtonOver.png"));
        }
        if (xTouch > (width/4) * 3 && xTouch < (width/4) * 4 && yTouch > height - (height/6)) {
            RealGame.jumpButton = new Texture(Gdx.files.internal("jumpButtonOver.png"));
        }
    }
    if (!Gdx.input.isTouched()) {
        currentIdleFrame = idleAnimation.getKeyFrame(stateTime, true);
        spriteBatch.draw(currentIdleFrame, idleSprite.getX() - 32, idleSprite.getY() + 150, 128, 128);
        RealGame.leftButton = new Texture(Gdx.files.internal("leftButton.png"));
        RealGame.rightButton = new Texture(Gdx.files.internal("rightButton.png"));
        RealGame.shootButton = new Texture(Gdx.files.internal("shootButton.png"));
        RealGame.jumpButton = new Texture(Gdx.files.internal("jumpButton.png"));
        moveStop();
    }
}
thank you in advance, and let me know if you need more info.
I assume your currentIdleFrame is a Texture or TextureRegion, not a Sprite. One of SpriteBatch's draw methods that takes a Texture supports flipX and flipY parameters. Using these you can flip it, for example when the character is walking to the left but the Texture faces right. The same method also supports rotation, given in degrees.
Very important note: you create new Textures every render loop. Don't do this. Instead, load all your Textures once into a Texture[] frames array and draw the right one depending on stateTime. Also take a look at the Animation class, which will help you with this.
Hope I could help.
Use a SpriteBatch draw method that takes a boolean flipX parameter, or call flip on the Sprite.
Oh, and if this is your main loop, stop loading the textures like you are. Load them once at the beginning and just swap references as needed.
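The load-once advice in both answers boils down to a simple cache keyed by asset path. A plain-Java sketch of the pattern (a String stands in for Texture so it runs anywhere; in real libGDX code you would populate this in create() and the values would be Texture objects):

```java
import java.util.HashMap;
import java.util.Map;

// Load each asset exactly once, no matter how many frames request it.
public class AssetCache {
    private final Map<String, String> cache = new HashMap<>();
    int loads = 0; // counts how many times the expensive load actually ran

    String get(String path) {
        return cache.computeIfAbsent(path, p -> {
            loads++;               // a real version would do: new Texture(p)
            return "loaded:" + p;
        });
    }

    public static void main(String[] args) {
        AssetCache assets = new AssetCache();
        // Simulate 100 render frames all requesting the same button texture.
        for (int i = 0; i < 100; i++) {
            assets.get("leftButtonOver.png");
        }
        System.out.println(assets.loads); // the expensive load ran only once
    }
}
```

With this in place, the render loop swaps which cached reference it draws instead of constructing (and leaking) a new Texture every frame.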

Poor results with source-over alpha blending (HTML5 canvas)

Edit: I don't necessarily need a solution to this problem--rather I'd like to understand why it's occurring. I don't see why I should be getting the odd results below...
Although this question is directed towards an issue I'm having with an HTML5 canvas application, I think the problem is less specific.
I have an HTML5 canvas app that allows you to stamp images on the screen. These images are 32-bit PNGs, so I'm working with transparency. If I stamp a highly transparent image in the same location many times (roughly 100), I end up with an absolutely terrible result:
The color of the image that I'm using as a stamp is RGB(167, 22, 22) and the background that I'm stamping onto is RGB(255, 255, 255). Here's the source image, if anyone's interested:
As you can tell, the image has extremely low alpha levels. Likely about 2/255 to 5/255 or so. What I would expect to happen is that if you repeatedly apply the image stamp to the canvas enough times, you'll get pixels of color RGBA(167, 22, 22, 255). Unfortunately, I'm getting a mixed bag of colors including some very odd regions of gray with a value of RGB(155, 155, 155).
I just loaded up Excel and plugged in the equation for source-over alpha blending (Wikipedia reference) and I seem to be converging to RGB(167, 22, 22) after enough iterations. I'm probably missing something fundamental about alpha blending operations and how the HTML5 canvas implements source-over compositing... can anyone help straighten me out?
Thanks!
Note: this question is similar to my issue, but I don't quite understand why I'm getting the results I've posted here.
The precision and rounding rules of canvas math internals are mostly undefined, so it's hard to say exactly what's happening here. All we really know is that the pixels are unsigned bytes, and the alpha is premultiplied.
However, we can get some information by using getImageData to inspect the pixels as the stamp is drawn, like so:
var px = 75;
var py = 100;
var stamp = new Image;
stamp.onload = function() {
    for (var i = 0; i < 100; ++i) {
        imageData = context.getImageData(px, py, 1, 1);
        console.log(Array.prototype.slice.call(imageData.data, 0, 4));
        context.drawImage(stamp, 0, 0);
    }
};
stamp.src = 'stamp.png';
The sample at px = 75, py = 100 is right in the middle of a gray blob. After drawing the stamp once on a white canvas, the log reads:
[254, 254, 254, 255]
At px = 120, py = 150, the sample is in the middle of a red area. After drawing the stamp once, the log reads:
[254, 253, 253, 255]
So, it looks like the canvas was modified by (-1, -1, -1) for the grey pixel, and (-1, -2, -2) for the red pixel.
Sampling these same pixels in the stamp image using RMagick gives:
[167, 22, 22, 1] // x = 75, y = 100
[167, 22, 22, 2] // x = 120, y = 150
Working through the math, using the standard alpha blending equation, you can test each of the color values:
function blend(dst, src) {
    var a = src[3] / 255.0;
    return [
        (1.0 - a) * dst[0] + a * src[0],
        (1.0 - a) * dst[1] + a * src[1],
        (1.0 - a) * dst[2] + a * src[2]
    ];
}
console.log(blend([255, 255, 255], [167, 22, 22, 1]));
// output: [254.6549..., 254.0862..., 254.0862...]
console.log(blend([255, 255, 255], [167, 22, 22, 2]));
// output: [254.3098..., 253.1725..., 253.1725...]
From this, we can guess that the canvas blending code is actually flooring the results, instead of rounding them. This would give you a result of [254, 254, 254] and [254, 253, 253], like we saw from canvas. They're likely not doing any rounding at all, and it's being floored implicitly when cast back to an unsigned byte.
This is why the other post recommends storing the image data as an array of floats, doing the math yourself, and then updating the canvas with the result. You get more precision that way, and can control things like rounding.
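To double-check the flooring hypothesis, here is a plain-Java port of the blend() math above with the result explicitly floored; it reproduces the sampled canvas values exactly:

```java
// Standard source-over blend, but flooring the result the way the canvas
// appears to truncate when casting back to unsigned bytes.
public class FlooredBlend {
    static int[] blendFloor(int[] dst, int[] src) {
        double a = src[3] / 255.0; // src alpha in 0..1
        int[] out = new int[3];
        for (int i = 0; i < 3; i++) {
            out[i] = (int) Math.floor((1.0 - a) * dst[i] + a * src[i]);
        }
        return out;
    }

    public static void main(String[] args) {
        int[] white = {255, 255, 255};
        // Stamp pixels sampled with RMagick in the answer above.
        int[] grey = blendFloor(white, new int[]{167, 22, 22, 1});
        int[] red  = blendFloor(white, new int[]{167, 22, 22, 2});
        System.out.println(java.util.Arrays.toString(grey)); // [254, 254, 254]
        System.out.println(java.util.Arrays.toString(red));  // [254, 253, 253]
    }
}
```

Both results match the getImageData values logged above ([254, 254, 254] and [254, 253, 253]), consistent with the canvas flooring rather than rounding.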
Edit: In fact, this blend() function isn't exactly right, even when the results are floored, as the canvas pixel values for 120, 150 stabilize at [127, 0, 0], and this function stabilizes at [167, 22, 22]. Similarly, when I drew the image just once into a transparent canvas, getImageData on the pixel at 120, 150 was [127, 0, 0, 2]. What?!
It turns out that this is caused by premultiplication, which seems to be applied to loaded Image elements. See this jsFiddle for an example.
Premultiplied pixels are stored as:
// r, g, b are 0 to 255
// a is 0 to 1
// dst is all 0 to 255
dst.r = Math.floor(r * a);
dst.g = Math.floor(g * a);
dst.b = Math.floor(b * a);
dst.a = a * 255;
They are unpacked later as:
inv = 1.0 / (a / 255);
r = Math.floor(dst.r * inv);
g = Math.floor(dst.g * inv);
b = Math.floor(dst.b * inv);
Running this pack/unpack against [167, 22, 22, 2] reveals:
a = 2 / 255; // 0.00784
inv = 1.0 / (2 / 255); // 127.5
r = Math.floor(Math.floor(167 * a) * inv); // 127
g = Math.floor(Math.floor(22 * a) * inv); // 0
b = Math.floor(Math.floor(22 * a) * inv); // 0
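The same pack/unpack round trip as a runnable plain-Java sketch, confirming that [167, 22, 22, 2] comes back out as [127, 0, 0]:

```java
// Premultiply to bytes, then unpack: at very low alpha this destroys most of
// the color precision before any blending even happens.
public class Premultiply {
    static int[] roundTrip(int r, int g, int b, int a255) {
        double a = a255 / 255.0;
        // pack (premultiply), flooring each channel to a byte
        int pr = (int) Math.floor(r * a);
        int pg = (int) Math.floor(g * a);
        int pb = (int) Math.floor(b * a);
        // unpack (divide the alpha back out, flooring again)
        double inv = 1.0 / a;
        return new int[]{
            (int) Math.floor(pr * inv),
            (int) Math.floor(pg * inv),
            (int) Math.floor(pb * inv),
            a255
        };
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(roundTrip(167, 22, 22, 2)));
        // prints [127, 0, 0, 2]
    }
}
```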