Libgdx frame rate (FPS) problems

I am having problems with the frame rate in libgdx; could someone tell me if I'm doing something wrong, because I don't think this is normal. The problem is that I load a simple tilemap, and when I raise the resolution from 320 x 320 to, say, 800 x 600, the frame rate drops to about 12 fps. I hope I'm explaining this well, since English is not my native language. Thanks.
public class MyGdxGame extends ApplicationAdapter {
    SpriteBatch batch;
    Texture img;
    private OrthogonalTiledMapRenderer bachTileMapRender;
    private OrthographicCamera camera;
    private TiledMap map;
    private int[] background = new int[] {0}, foreground = new int[] {1};

    @Override
    public void create () {
        batch = new SpriteBatch();
        img = new Texture("badlogic.jpg");
        map = new TmxMapLoader().load("maps/untitled.tmx");
        bachTileMapRender = new OrthogonalTiledMapRenderer(map, 1/32f); // assign the map
        camera = new OrthographicCamera(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        camera.setToOrtho(false, 32, 32); // virtual dimensions
    }

    @Override
    public void render () {
        Gdx.gl.glClearColor(1, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        //batch.begin();
        //batch.draw(img, 0, 0);
        //batch.end();
        camera.update();
        bachTileMapRender.setView(camera);
        bachTileMapRender.render(background);
        System.out.println("fps:" + Gdx.graphics.getFramesPerSecond());
    }
}
At 320 x 320 the frame rate is about 60, but as soon as I enlarge the window or use 800 x 600 it drops rapidly.
public class DesktopLauncher {
    public static void main (String[] arg) {
        LwjglApplicationConfiguration config = new LwjglApplicationConfiguration();
        config.width = 320;
        config.height = 320;
        new LwjglApplication(new MyGdxGame(), config);
    }
}
P.S.: for what it's worth, my computer is a dual-core 3000 running Debian Linux.
untitled.tmx
<?xml version="1.0" encoding="UTF-8"?>
<map version="1.0" orientation="orthogonal" width="32" height="32" tilewidth="32" tileheight="32">
<tileset firstgid="1" name="prueba" tilewidth="32" tileheight="32">
<image source="title.png" width="384" height="128"/>
</tileset>
<layer name="Capa de Patrones 1" width="32" height="32">
<data encoding="base64" compression="zlib">
eJztwwEJAAAMBKHr8f17rsdQcNVUVVXV1w8ar2wB
</data>
</layer>
</map>

What if you do this?
//Swap these lines
map = new TmxMapLoader().load("maps/untitled.tmx"); //first load map
bachTileMapRender = new OrthogonalTiledMapRenderer(map); //then use map
camera.setToOrtho(false, 800, 600); //viewport width and height, using the aspect ratio you want
//Gfx config
config.width = 800;
config.height = 600;
Also, how many tiles are you rendering?

It is possible that the problem is the TextureFilter.
If the texels (the pixels of a texture) do not match the pixels they cover on screen, they need to be recalculated.
For example: you have a texture of 16*16 pixels, which should then cover 20*20 px on screen.
The TextureFilter then defines HOW the pixels on screen are calculated.
There are 2 important TextureFilters:
- Nearest: fetches the nearest texel
- Linear: fetches the 4 nearest texels
You can see that the linear filter costs much more performance.
So you should try using Nearest, which may not look as good...
You may notice that you can set 2 TextureFilters:
- Magnification: if the texture is smaller than the space it takes on screen, this filter is used
- Minification: if the texture is bigger than the space it takes on screen, this filter is used.
You can also use MipMapping, which gives you the possibility to have different resolutions of the same texture, with the best matching one being used.
I would recommend reading this: Know Your Texture Filters!
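In libgdx the filter is set per texture with setFilter; for a map loaded through TmxMapLoader the loader parameters expose the same choice. A minimal sketch (the file names are taken from the question, and I am assuming the standard loader parameter fields):

// Option 1: set the filter directly on a texture you load yourself
Texture tiles = new Texture("title.png");
tiles.setFilter(Texture.TextureFilter.Nearest, Texture.TextureFilter.Nearest);

// Option 2: ask TmxMapLoader to use Nearest for the tileset textures it loads
TmxMapLoader.Parameters params = new TmxMapLoader.Parameters();
params.textureMinFilter = Texture.TextureFilter.Nearest;
params.textureMagFilter = Texture.TextureFilter.Nearest;
map = new TmxMapLoader().load("maps/untitled.tmx", params);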

Related

The depth buffer is ignored depending on the X/Y position when using an orthographic projection

Using libgdx, I want to discard occluded sprites using a depth buffer. To do so I use the provided Decal and DecalBatch with an OrthographicCamera, and I set the z position manually.
Depending on my sprite's position on the x and y axes, the depth buffer works as expected or not.
red square z = 98
green square z = 10
The squares are 50% transparent so I can see whether the depth test occurred as expected.
Here is the test code:
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.backends.lwjgl.LwjglApplication;
import com.badlogic.gdx.backends.lwjgl.LwjglApplicationConfiguration;
import com.badlogic.gdx.graphics.*;
import com.badlogic.gdx.graphics.g2d.TextureRegion;
import com.badlogic.gdx.graphics.g3d.decals.CameraGroupStrategy;
import com.badlogic.gdx.graphics.g3d.decals.Decal;
import com.badlogic.gdx.graphics.g3d.decals.DecalBatch;
import com.badlogic.gdx.math.Vector3;
import com.badlogic.gdx.utils.Array;
import fr.t4c.ui.GdxTest;

public class DecalTest extends GdxTest {
    DecalBatch batch;
    Array<Decal> decals = new Array<Decal>();
    OrthographicCamera camera;
    OrthoCamController controller;
    FPSLogger logger = new FPSLogger();
    Decal redDecal;
    Decal greenDecal;

    public void create() {
        camera = new OrthographicCamera(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        //camera.near = 1;
        camera.position.set(600, 600, 100);
        camera.near = 1;
        camera.far = 100;
        controller = new OrthoCamController(camera);
        Gdx.input.setInputProcessor(controller);
        batch = new DecalBatch(new CameraGroupStrategy(camera));
        TextureRegion[] textures = {
                new TextureRegion(new Texture(Gdx.files.internal("src/test/resources/redsquare.png"))),
                new TextureRegion(new Texture(Gdx.files.internal("src/test/resources/greensquare.png")))};
        redDecal = Decal.newDecal(textures[0], true);
        redDecal.setPosition(600, 600, 98f);
        decals.add(redDecal);
        greenDecal = Decal.newDecal(textures[1], true);
        greenDecal.setPosition(630, 632f, 10f);
        decals.add(greenDecal);
        Decal decal = Decal.newDecal(textures[0], true);
        decal.setPosition(400, 500, 98f);
        decals.add(decal);
        decal = Decal.newDecal(textures[1], true);
        decal.setPosition(430f, 532f, 10f);
        decals.add(decal);
    }

    public void render() {
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
        Gdx.gl.glEnable(GL20.GL_DEPTH_TEST);
        Gdx.gl.glDepthFunc(GL20.GL_LEQUAL);
        camera.update();
        for (int i = 0; i < decals.size; i++) {
            Decal decal = decals.get(i);
            batch.add(decal);
        }
        batch.flush();
    }

    @Override
    public void dispose() {
        batch.dispose();
    }

    public static void main(String[] args) {
        LwjglApplicationConfiguration cfg = new LwjglApplicationConfiguration();
        cfg.useGL30 = false;
        cfg.width = 640;
        cfg.height = 480;
        cfg.resizable = false;
        cfg.foregroundFPS = 0; // Setting to 0 disables foreground fps throttling
        cfg.backgroundFPS = 0; // Setting to 0 disables background fps throttling
        new LwjglApplication(new DecalTest(), cfg);
    }
}
Is it a depth buffer precision problem, the orientation of the camera messing up the calculation, or something else?
Edit:
I expect the sprites to be occluded if they are behind another one. So in my example the red square should occlude the part of the green square it is in front of.
The bottom-left squares have the right behaviour, but the upper-right squares do not. The thing is, both red squares have the same Z value, and both green squares have the same Z value too (different from the red squares' Z, of course). So the only thing that makes the two square pairs different is their x and y position, which should not affect the depth test.
So what I want is a consistent depth test behaviour that occludes hidden textures, as we see with the bottom-left squares, no matter their x and y position.
According to the comment, I added information about what I expect.
Decal and DecalBatch rely on the GroupStrategy for depth sorting, NOT on the camera. These strategies sort depth EITHER by the distance from the camera OR by the Z axis only; the former is what a perspective camera would require, i.e. a decal that is closer and should occlude as measured by Z might be further away as measured by distance from the camera.
i.e. (x, y, z)
Camera 0, 0, 1.
Decal A 1, 1, 0 (Z distance 1, vector distance 1.73)
Decal B 0, 0, -0.1 (Z distance 1.1, vector distance 1.1)
Depending on which strategy you choose, either A or B could be considered the nearer of the two decals above.
The most commonly recommended GroupStrategy is CameraGroupStrategy, but it does not sort by Z; it uses camera distance. If you initialise the DecalBatch with SimpleOrthoGroupStrategy instead, depth will be sorted purely by Z. Here is its depth sort comparator; you can look at the other group strategies and see that they use the absolute distance:
class Comparator implements java.util.Comparator<Decal> {
    @Override
    public int compare (Decal a, Decal b) {
        if (a.getZ() == b.getZ()) return 0;
        return a.getZ() - b.getZ() < 0 ? -1 : 1;
    }
}
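So switching the strategy should be enough; a minimal sketch against the question's create() method (everything else unchanged):

// sort decals purely by Z instead of by distance from the camera
batch = new DecalBatch(new SimpleOrthoGroupStrategy());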

Box2D unit conversion with weird results

I'm using LibGDX and trying to learn Box2D, but the unit conversions are confusing me. What I think: I have a 256x256 pixel image and want to create a body representing this image, using a 1:32 scale. So every time I want to pass values into the Box2D scale I must divide by 32, or multiply by 1/32 (it's the same equation), and every time I want to get values from that world back into the pixel scale I must multiply by 32. But the problem remains: I can't get good simulations. I'll put my code here so you can see how I'm doing things (I'll include the variables too so you can just ctrl+c and ctrl+v):
private SpriteBatch batch;
private ShapeRenderer srender;
private Texture img;
private World world;
private Body bad, bridge;
private OrthographicCamera cam, cam2;
private Box2DDebugRenderer render;

@Override
public void create () {
    srender = new ShapeRenderer();
    batch = new SpriteBatch();
    img = new Texture("badlogic.jpg");
    cam = new OrthographicCamera(500f, 500f);
    cam.translate(250f, 250f);
    cam.update();
    // Used to draw the Box2D world scaled to the pixel size
    cam2 = new OrthographicCamera(500 * 0.03125f, 500 * 0.03125f);
    cam2.translate(250f * 0.03125f, 250f * 0.03125f);
    cam2.update();
    render = new Box2DDebugRenderer();
    world = new World(new Vector2(0f, -10f), true);
    BodyDef bdef = new BodyDef();
    bdef.type = BodyType.DynamicBody;
    bad = world.createBody(bdef);
    PolygonShape pshape = new PolygonShape();
    pshape.setAsBox(img.getWidth() * 0.03125f, img.getHeight() * 0.03125f);
    FixtureDef fdef = new FixtureDef();
    fdef.shape = pshape;
    bad.createFixture(fdef);
    /* Creating the bridge */
    bdef.type = BodyType.StaticBody;
    bridge = world.createBody(bdef);
    pshape.setAsBox(500f * 0.03125f, 5f * 0.03125f);
    bridge.createFixture(fdef);
    bad.setTransform(new Vector2(250f * 0.03125f, 500f * 0.03125f), 0);
    bridge.setTransform(new Vector2(250f * 0.03125f, 0f * 0.03125f), 0);
    batch.setProjectionMatrix(cam.combined);
    srender.setProjectionMatrix(cam.combined);
}

@Override
public void render () {
    Gdx.gl.glClearColor(1, 0, 0, 1);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
    batch.begin();
    batch.draw(img, (bad.getPosition().x - img.getWidth()/2 * 0.03125f) * 32,
            (bad.getPosition().y - img.getHeight()/2 * 0.03125f) * 32);
    batch.end();
    render.render(world, cam2.combined);
    srender.begin(ShapeRenderer.ShapeType.Filled);
    srender.circle(250f, 500f, 5f);
    srender.circle(250f, 0f, 5f);
    srender.end();
    world.step(1/60f, 6, 2);
}
If you run this code you will see that the box created still does not fit the image size, and the image stops moving in the middle of the screen while it should stop only at the bottom. Can anyone help me here? Thanks for your attention.
P.S.: I have already heard about changing the viewport, but I'm still trying to learn these conversions; after that I'll go look at viewports.
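For reference, the 1:32 conversion the question describes is usually wrapped in a pair of helpers so the factor appears in only one place; a small sketch (PPM, toMeters and toPixels are illustrative names, not part of the question's code):

public static final float PPM = 32f; // pixels per meter

public static float toMeters(float pixels) { return pixels / PPM; }
public static float toPixels(float meters) { return meters * PPM; }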

Libgdx: when to use camera.position.set?

I am really confused by two examples related to the viewport and the orthographic camera. I understand that the viewport is the size of the dimensions we want to view on the screen, and the camera projects that. I am learning libgdx and cannot get through the orthographic camera and viewport examples, which have left me completely confused. The code runs fine for both examples, with the proper result on screen.
Here's one example in which camera.position.set is used to position the camera:
public class AnimatedSpriteSample extends GdxSample {
    private static final float WORLD_TO_SCREEN = 1.0f / 100.0f;
    private static final float SCENE_WIDTH = 12.80f;
    private static final float SCENE_HEIGHT = 7.20f;
    private static final float FRAME_DURATION = 1.0f / 30.0f;

    private OrthographicCamera camera;
    private Viewport viewport;
    private SpriteBatch batch;
    private TextureAtlas cavemanAtlas;
    private TextureAtlas dinosaurAtlas;
    private Texture background;
    private Animation dinosaurWalk;
    private Animation cavemanWalk;
    private float animationTime;

    @Override
    public void create() {
        camera = new OrthographicCamera();
        viewport = new FitViewport(SCENE_WIDTH, SCENE_HEIGHT, camera);
        batch = new SpriteBatch();
        animationTime = 0.0f;
        ...
        camera.position.set(SCENE_WIDTH * 0.5f, SCENE_HEIGHT * 0.5f, 0.0f);
Here's another example which does not use camera.position.set and still the result is the same.
@Override
public void create() {
    camera = new OrthographicCamera();
    viewport = new FitViewport(SCENE_WIDTH, SCENE_HEIGHT, camera);
    batch = new SpriteBatch();
    oldColor = new Color();
    cavemanTexture = new Texture(Gdx.files.internal("data/caveman.png"));
    cavemanTexture.setFilter(TextureFilter.Nearest, TextureFilter.Nearest);
}

@Override
public void dispose() {
    batch.dispose();
    cavemanTexture.dispose();
}

@Override
public void render() {
    Gdx.gl.glClearColor(BACKGROUND_COLOR.r,
            BACKGROUND_COLOR.g,
            BACKGROUND_COLOR.b,
            BACKGROUND_COLOR.a);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
    batch.setProjectionMatrix(camera.combined);
    batch.begin();
    int width = cavemanTexture.getWidth();
    int height = cavemanTexture.getHeight();
    float originX = width * 0.5f;
    float originY = height * 0.5f;
    // Render the caveman centered on the screen
    batch.draw(cavemanTexture,            // the texture itself
            -originX, -originY,           // world space coordinates where we want to draw; since the camera is
                                          // centered at (0,0) by default, we position our caveman at -originX, -originY
            originX, originY,             // origin of the texture in pixels, measured from the bottom-left corner;
                                          // here we want the origin to be the center of the texture
            width, height,                // width, height
            WORLD_TO_SCREEN, WORLD_TO_SCREEN, // scaleX, scaleY
            0.0f,                         // rotation
            0, 0,                         // srcX, srcY
            width, height,                // srcWidth, srcHeight
            false, false);                // flipX, flipY
What is really confusing me is why the second example does not use camera.position.set to adjust the camera's view, and why it is important to use it in the first example.
I really hope this question is legit and makes sense. I have searched the forum here and couldn't find any clues. Hope someone can guide me in the right direction.
Many Thanks.
In the first example, a two-dimensional position has been set for the camera: the x direction and the y direction. This is specifically for the camera.
camera = new OrthographicCamera();
So, this code creates a camera object from the OrthographicCamera class provided by the libgdx creators. Check out the documentation for the class here; from it you can see that, when constructed with arguments, it accepts both the viewport width and viewport height. (In your example you've left them blank, so these are 0 for the time being.)
viewport = new FitViewport(SCENE_WIDTH, SCENE_HEIGHT, camera);
This line of code defines the width, the height and which camera should be used for the viewport. Check out the documentation for the FitViewport class here.
So when camera.position.set is called, it sets the camera's x and y position based on the viewport's width and height. This whole example defines the dimensions for the overall viewport.
The difference from the second example is that there the camera is set up around the texture that has been loaded onto the screen. So the viewport's x and y directions have been positioned, and the width, height, originX and originY of the texture/camera have been defined as well:
int width = cavemanTexture.getWidth();
int height = cavemanTexture.getHeight();
float originX = width * 0.5f;
float originY = height * 0.5f;
Libgdx then allows you to use the SpriteBatch class to draw both the texture and the viewport surrounding that texture.
Summary
Example one allows you to define a viewport on its own, without any textures being drawn. This lets you draw multiple textures with the same viewport set (a normal process in game creation).
But in example two, if you wanted the viewport to, say, follow the main character around on the screen, you can define the viewport around the texture so that it follows that texture.
Personally, I'd always pursue the first example, as you can define a viewport for any game width or height, and then I'd create a second viewport on top to follow any textures I've drawn on the screen. They both work, just for different reasons.
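As a concrete sketch of that difference (illustrative code, not taken from either sample): with a FitViewport of SCENE_WIDTH x SCENE_HEIGHT, centering the camera puts world (0, 0) at the bottom-left corner of the screen, while leaving the camera at its default position keeps (0, 0) in the middle of the screen, which is why the second example draws at -originX, -originY.

camera = new OrthographicCamera();
viewport = new FitViewport(SCENE_WIDTH, SCENE_HEIGHT, camera);
// Default: camera.position is (0, 0, 0), so world (0, 0) sits at the center of the screen.
// Centering the camera on the scene moves world (0, 0) to the bottom-left corner instead:
camera.position.set(SCENE_WIDTH * 0.5f, SCENE_HEIGHT * 0.5f, 0.0f);
camera.update();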
Hope this helps you clear things up.
Happy coding,
Bradley.

How to make the camera follow the player?

I'm making a game in libgdx which includes the player being able to move vertically beyond the set screen size.
As for my question, if I have the screen size set at a certain width and height, what is required to make the actual game world larger for the camera to follow the player?
This is of course my targeted screen size in the Main game class:
public static final int WIDTH = 480, HEIGHT = 800;
Below that I currently have :
public static final int GameHeight = 3200;
GameHeight is the value I test against to check whether the player is going out of bounds.
Here is the problem: with this code, the player is centered on the screen and moves horizontally, rebounding off the screen bounds (as it would without the camera), but the change in y-position is neglected.
public GameScreen(){
    cam = new OrthographicCamera();
    cam.setToOrtho(false, 480, 800);
}

@Override
public void render(float delta) {
    // TODO Auto-generated method stub
    Gdx.gl.glClearColor(0, 0, 0, 1);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
    cam.position.y = player.getPosition().y;
    cam.update();
    batch.setProjectionMatrix(cam.combined);
    player.update();
    player.draw(batch);
}
If I remove:
cam.position.y = player.getPosition().y;
The camera is placed at the bottom of the virtual world and the ball starts at the top (y = 3200) and travels downward. When it reaches y = 800, it shows up as it should.
I've found a lot of examples indicating that setting the camera's position to the player's y position should make the camera follow the player, whether it's moving up or down, but it either freezes the y movement or places the camera at the bottom of the virtual world.
Any help is appreciated, thanks!
I would try doing cam.position.set(player.getPosition().x, player.getPosition().y, 0). This will make the camera follow your player and it should not cause any "freezing."
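If the world is taller than the screen, it also helps to clamp the camera so it never shows anything outside the world; a minimal sketch, assuming the WIDTH, HEIGHT and GameHeight constants from the question:

// follow the player vertically, but keep the 800-unit-high view inside the 3200-unit world
cam.position.x = WIDTH / 2f;
cam.position.y = MathUtils.clamp(player.getPosition().y, HEIGHT / 2f, GameHeight - HEIGHT / 2f);
cam.update();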
private val worldTransform = Matrix4()
private val cameraPosition = Vector3()
private val objPosition = Vector3()
private var rot = Quaternion()
private var carTranslation = Vector3(0f, 0f, 0f)

fun focus(obj: BulletObject) {
    // worldTransform
    obj.entity?.motionState?.getWorldTransform(worldTransform)
    // objPosition
    worldTransform.getTranslation(objPosition)
    obj.entity?.modelInstance?.transform?.getTranslation(carTranslation)
    // get rotation
    worldTransform.getRotation(rot)
    println("rot.angle: ${rot.getAngleAround(Vector3.Y)}")
    val rad = Math.toRadians(rot.getAngleAround(Vector3.Y).toDouble())
    // pointFromCar
    val pointFromCar = Vector2(-3f * sin(rad.toFloat()), -3f * cos(rad.toFloat()))
    cameraPosition.set(Vector3(objPosition.x + pointFromCar.x, objPosition.y + 1f, objPosition.z + pointFromCar.y))
    // camera set position
    camera.position.set(cameraPosition)
    camera.lookAt(objPosition)
    camera.up.set(Vector3.Y)
    camera.update()
}

Libgdx: screen resize and ClickListener

I'm developing a 2D game and use an OrthographicCamera and a Viewport to resize the virtual board to the real display size. I add images to a stage and use a ClickListener to detect clicks. It works fine, but when I change the resolution it works incorrectly (it can't detect the correct actor; I think the problem is with the new and original x and y). Is there any way to fix this?
You will need to translate the screen coordinates to world coordinates.
Your camera can do that. You can go both ways, with cam.project(...) and cam.unproject(...).
Or, if you are already using Actors, don't initialize a camera yourself, but use a Stage. Create a Stage and add the actors to it. The Stage will then do the coordinate translation for you.
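A minimal sketch of the unproject route (assuming a camera field as in the question):

// convert touch/screen coordinates (origin at the top-left) into world coordinates
Vector3 touch = new Vector3(Gdx.input.getX(), Gdx.input.getY(), 0);
camera.unproject(touch);
// touch.x and touch.y are now in world units and can be compared against actor positions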
I too once suffered from this problem, but in the end I found a working solution for drawing anything using SpriteBatch or Stage in libgdx. Using an orthographic camera we can do this.
First choose one constant resolution which is best for the game. Here I have taken 1280*720 (landscape).
class ScreenTest implements Screen {
    final float appWidth = 1280, screenWidth = Gdx.graphics.getWidth();
    final float appHeight = 720, screenHeight = Gdx.graphics.getHeight();
    OrthographicCamera camera;
    SpriteBatch batch;
    Stage stage;
    Texture img1;
    Image img2;

    public ScreenTest(){
        camera = new OrthographicCamera();
        camera.setToOrtho(false, appWidth, appHeight);
        batch = new SpriteBatch();
        batch.setProjectionMatrix(camera.combined);
        img1 = new Texture("your_image1.png");
        img2 = new Image(new Texture("your_image2.png"));
        img2.setPosition(0, 0); // drawing from (0,0)
        stage = new Stage(new StretchViewport(appWidth, appHeight, camera));
        stage.addActor(img2);
    }

    @Override
    public void render(float delta) {
        Gdx.gl.glClearColor(1, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        batch.begin();
        batch.draw(img1, 0, 0);
        batch.end();
        stage.act(delta);
        stage.draw();
        // You can also read touch input scaled to your virtual resolution.
        if (Gdx.input.isTouched()) {
            System.out.println(" X " + Gdx.input.getX() * (appWidth / screenWidth));
            System.out.println(" Y " + Gdx.input.getY() * (appHeight / screenHeight));
        }
    }

    //
    :
    :
    //
}
Run this code at any resolution and it will adjust to that resolution without any problems.
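One thing the snippet above does not show (this is my assumption about the usual Viewport workflow, not something stated in the answer): when the window is resized at runtime, the viewport normally has to be told the new screen size, for example:

@Override
public void resize(int width, int height) {
    stage.getViewport().update(width, height, true); // true re-centers the camera
}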
I just think Stage is easy to use.
If something is wrong, I think you should check your code's:
public Actor hit(float x, float y)