Libgdx: screen resize and ClickListener

I am developing a 2D game and use an OrthographicCamera and Viewport to scale a virtual board to the real display size. I add images to a stage and use a ClickListener to detect clicks. It works fine, but when I change the resolution it works incorrectly (it can't detect the correct actor; I think the problem is the mapping between the new and original x and y). Is there any way to fix this?

You will need to translate the screen coordinates to world coordinates.
Your camera can do that, in both directions: cam.project(...) and cam.unproject(...).
Or if you are already using Actors, don't initialize a camera yourself, but use a Stage. Create a Stage and add the actors to it. The Stage will then do coordinate translation for you.
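For example, a minimal sketch of the manual route (assuming an OrthographicCamera field named camera inside an InputProcessor; the Vector3 is reused to avoid per-touch allocation):

private final Vector3 tmp = new Vector3();

@Override
public boolean touchDown(int screenX, int screenY, int pointer, int button) {
    tmp.set(screenX, screenY, 0);
    camera.unproject(tmp); // screen coordinates -> world coordinates, in place
    float worldX = tmp.x, worldY = tmp.y;
    // ... hit-test your objects against (worldX, worldY)
    return true;
}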

I once suffered from this problem too, but in the end I got a working solution for drawing anything with a SpriteBatch or Stage in libgdx, using an orthographic camera.
First choose one constant resolution which is best for the game. Here I have taken 1280×720 (landscape).
class ScreenTest implements Screen {

    final float appWidth = 1280, screenWidth = Gdx.graphics.getWidth();
    final float appHeight = 720, screenHeight = Gdx.graphics.getHeight();

    OrthographicCamera camera;
    SpriteBatch batch;
    Stage stage;
    Texture img1;
    Image img2;

    public ScreenTest() {
        camera = new OrthographicCamera();
        camera.setToOrtho(false, appWidth, appHeight);
        batch = new SpriteBatch();
        batch.setProjectionMatrix(camera.combined);
        img1 = new Texture("your_image1.png");
        img2 = new Image(new Texture("your_image2.png"));
        img2.setPosition(0, 0); // drawing from (0,0)
        stage = new Stage(new StretchViewport(appWidth, appHeight, camera));
        stage.addActor(img2);
    }

    @Override
    public void render(float delta) {
        Gdx.gl.glClearColor(1, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        batch.begin();
        batch.draw(img1, 0, 0);
        batch.end();
        stage.act(delta); // act once per frame
        stage.draw();
        // You can also read touch input scaled to your virtual resolution:
        if (Gdx.input.isTouched()) {
            System.out.println(" X " + Gdx.input.getX() * (appWidth / screenWidth));
            System.out.println(" Y " + Gdx.input.getY() * (appHeight / screenHeight));
        }
    }

    // ... remaining Screen methods (resize, pause, resume, hide, dispose)
}
Run this code at any resolution and it will adjust to that resolution without any distortion.
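Since the original question uses ClickListener: once the Stage owns the viewport, as above, listeners already receive translated coordinates, so no manual scaling is needed. A minimal sketch, reusing the img2 actor and stage from the code above:

// In the constructor:
img2.addListener(new ClickListener() {
    @Override
    public void clicked(InputEvent event, float x, float y) {
        // x and y arrive in the actor's local coordinates,
        // already converted through the stage's viewport
        System.out.println("clicked img2 at " + x + ", " + y);
    }
});
Gdx.input.setInputProcessor(stage); // route click events to the stage

// And keep the viewport in sync with the window:
@Override
public void resize(int width, int height) {
    stage.getViewport().update(width, height, true);
}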

I just think Stage is easy to use.
If something goes wrong, you should check your implementation of:
public Actor hit(float x, float y)
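For reference, this is roughly what the default Actor implementation does (a sketch; note that current libgdx versions add a touchable flag to the signature):

@Override
public Actor hit(float x, float y, boolean touchable) {
    if (touchable && getTouchable() != Touchable.enabled) return null;
    // x and y are in the actor's local coordinate system
    return x >= 0 && x < getWidth() && y >= 0 && y < getHeight() ? this : null;
}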

Related

Libgdx, when to use camera.position.set?

I am really confused by two examples involving the viewport and an orthographic camera. I understand that the viewport is the size of the dimensions we set to view on the screen, and that the camera projects it. I am learning libgdx and cannot get through the orthographic camera and viewport examples, which have left me completely confused, even though the code runs fine for both and produces the proper result on screen.
Here's one example in which camera.position.set is used to position the camera:
public class AnimatedSpriteSample extends GdxSample {
    private static final float WORLD_TO_SCREEN = 1.0f / 100.0f;
    private static final float SCENE_WIDTH = 12.80f;
    private static final float SCENE_HEIGHT = 7.20f;
    private static final float FRAME_DURATION = 1.0f / 30.0f;

    private OrthographicCamera camera;
    private Viewport viewport;
    private SpriteBatch batch;
    private TextureAtlas cavemanAtlas;
    private TextureAtlas dinosaurAtlas;
    private Texture background;
    private Animation dinosaurWalk;
    private Animation cavemanWalk;
    private float animationTime;

    @Override
    public void create() {
        camera = new OrthographicCamera();
        viewport = new FitViewport(SCENE_WIDTH, SCENE_HEIGHT, camera);
        batch = new SpriteBatch();
        animationTime = 0.0f;
        ...
        camera.position.set(SCENE_WIDTH * 0.5f, SCENE_HEIGHT * 0.5f, 0.0f);
Here's another example which does not use camera.position.set and still the result is the same.
@Override
public void create() {
    camera = new OrthographicCamera();
    viewport = new FitViewport(SCENE_WIDTH, SCENE_HEIGHT, camera);
    batch = new SpriteBatch();
    oldColor = new Color();
    cavemanTexture = new Texture(Gdx.files.internal("data/caveman.png"));
    cavemanTexture.setFilter(TextureFilter.Nearest, TextureFilter.Nearest);
}

@Override
public void dispose() {
    batch.dispose();
    cavemanTexture.dispose();
}

@Override
public void render() {
    Gdx.gl.glClearColor(BACKGROUND_COLOR.r,
            BACKGROUND_COLOR.g,
            BACKGROUND_COLOR.b,
            BACKGROUND_COLOR.a);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
    batch.setProjectionMatrix(camera.combined);
    batch.begin();
    int width = cavemanTexture.getWidth();
    int height = cavemanTexture.getHeight();
    float originX = width * 0.5f;
    float originY = height * 0.5f;
    // Render the caveman centered on the screen. The camera is centered at
    // (0,0) by default, so we draw the texture at (-originX, -originY).
    batch.draw(cavemanTexture,            // the texture itself
            -originX, -originY,           // world-space coordinates to draw at
            originX, originY,             // origin in texture pixels, measured from the
                                          // bottom-left corner (here: the texture's center)
            width, height,                // width, height
            WORLD_TO_SCREEN, WORLD_TO_SCREEN, // scaleX, scaleY
            0.0f,                         // rotation
            0, 0,                         // srcX, srcY
            width, height,                // srcWidth, srcHeight
            false, false);                // flipX, flipY
    batch.end();
}
What is really confusing me is why the second example does not use camera.position.set to adjust the camera's view, and why it is important in the first example.
I really hope this question is legitimate and makes sense. I have searched the forum here and couldn't find any clues. Hope someone can guide me in the right direction.
Many thanks.
In the first example, the camera's position is set explicitly: a vector gives the camera's position in the x and y directions (with z left at 0). This applies specifically to the camera.
camera = new OrthographicCamera();
So, this code creates a camera object from the OrthographicCamera class provided by libgdx. Check out the documentation for the class here; its constructor accepts both the viewport_width and viewport_height. (In your example you've left them blank, so these are 0 for the time being.)
viewport = new FitViewport(SCENE_WIDTH, SCENE_HEIGHT, camera);
This line of code defines the width and height of the viewport, and which camera it should use. Check out the documentation for the FitViewport class here.
So when camera.position.set is called, it sets the camera's x and y position based on the viewport's width and height. The first example, overall, defines the viewport dimensions and then centers the camera within them.
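As an illustration (a sketch, not taken from either example): the same centering is often done when the viewport is updated on resize, which is why many samples never touch camera.position directly:

@Override
public void resize(int width, int height) {
    // Passing true centers the camera at (SCENE_WIDTH / 2, SCENE_HEIGHT / 2),
    // the same thing camera.position.set(...) does by hand in the first example
    viewport.update(width, height, true);
}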
The difference from the second example is that there the camera is set up around the texture that has been loaded onto the screen. The viewport's x and y directions are positioned relative to the texture, whose width, height, originX, and originY have been defined:
int width = cavemanTexture.getWidth();
int height = cavemanTexture.getHeight();
float originX = width * 0.5f;
float originY = height * 0.5f;
Libgdx then lets you draw, via the SpriteBatch class, both the texture and the viewport surrounding it.
Summary
Example one defines a viewport on its own, without any textures being drawn. This lets you draw multiple textures with the same viewport set (a normal part of game creation).
But if, as in example two, you wanted the viewport to, say, follow the main character around the screen, you can define the viewport around the texture and thus follow that texture.
Personally, I'd always pursue the first approach, since you can define a viewport for any game width or height, and then create a second viewport on top to follow any textures drawn on the screen. Both work, just for different reasons.
Hope this helps clear things up.
Happy coding,
Bradley.

libgdx display background using texturepacker

I have a background image sized 480×800. I used TexturePacker to create a .pack file.
I try to display the background on screen, but it displays very small and does not fit the device screen. Please help me display the background correctly. Thanks.
You simply need to adjust the image's size:
private SpriteBatch spriteBatch;
private Texture texture;
private Sprite textureSprite;

public void show() { // it might also be called create()
    spriteBatch = new SpriteBatch();
    texture = new Texture(Gdx.files.internal("location of img in assets folder"));
    textureSprite = new Sprite(texture);
    // Set the image width/height to the width/height of the screen
    textureSprite.setBounds(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
}

public void render(float delta) {
    // Clear the screen every loop
    Gdx.gl.glClearColor(0, 0, 0, 1);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
    spriteBatch.begin();
    textureSprite.draw(spriteBatch);
    spriteBatch.end();
}
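Since the question loads the background through TexturePacker, the same sizing trick works with a TextureAtlas. A minimal sketch, where the pack file name "images.pack" and the region name "background" are placeholders for your own assets:

private TextureAtlas atlas;
private Sprite backgroundSprite;

public void show() {
    spriteBatch = new SpriteBatch();
    atlas = new TextureAtlas(Gdx.files.internal("images.pack"));
    backgroundSprite = new Sprite(atlas.findRegion("background"));
    // Stretch the region to cover the whole screen
    backgroundSprite.setBounds(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
}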

How to make the camera follow the player?

I'm making a game in libgdx which includes the player being able to move vertically beyond the set screen size.
As for my question, if I have the screen size set at a certain width and height, what is required to make the actual game world larger for the camera to follow the player?
This is of course my targeted screen size in the Main game class:
public static final int WIDTH = 480, HEIGHT = 800;
Below that I currently have:
public static final int GameHeight = 3200;
GameHeight is the value I test against to determine whether the player is going out of bounds.
Here is the problem: with this code, the player is centered on the screen and moves horizontally, rebounding off the screen bounds (as it would without the camera), but the change in y-position is ignored.
public GameScreen() {
    cam = new OrthographicCamera();
    cam.setToOrtho(false, 480, 800);
}

@Override
public void render(float delta) {
    Gdx.gl.glClearColor(0, 0, 0, 1);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
    cam.position.y = player.getPosition().y;
    cam.update();
    batch.setProjectionMatrix(cam.combined);
    player.update();
    player.draw(batch);
}
If I remove:
cam.position.y = player.getPosition().y;
The camera is placed at the bottom of the virtual world and the ball starts at the top (y = 3200) and travels downward. When it reaches y = 800, it shows up as it should.
I've found a lot of examples indicating that setting the camera's position to the player's y position should make the camera follow the player, whether it's moving up or down, but it either freezes y movement or leaves the camera at the bottom of the virtual world.
Any help is appreciated, thanks!
I would try cam.position.set(player.getPosition().x, player.getPosition().y, 0) (camera.position is a Vector3, so it needs a z value). This will make the camera follow your player, and it should not cause any "freezing."
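If the player shouldn't drag the camera past the edges of the world, a common refinement is to clamp the follow position; a sketch using the WIDTH, HEIGHT, and GameHeight constants from the question:

// Follow the player vertically, but keep the camera inside the 480x3200 world
cam.position.y = MathUtils.clamp(player.getPosition().y,
        HEIGHT / 2f,               // lowest the camera center may go
        GameHeight - HEIGHT / 2f); // highest the camera center may go
cam.update();

The same follow idea carries over to 3D; another answer shows a Kotlin chase camera for a Bullet physics object: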
private val worldTransform = Matrix4()
private val cameraPosition = Vector3()
private val objPosition = Vector3()
private var rot = Quaternion()
private var carTranslation = Vector3(0f, 0f, 0f)

fun focus(obj: BulletObject) {
    // worldTransform
    obj.entity?.motionState?.getWorldTransform(worldTransform)
    // objPosition
    worldTransform.getTranslation(objPosition)
    obj.entity?.modelInstance?.transform?.getTranslation(carTranslation)
    // get rotation
    worldTransform.getRotation(rot)
    println("rot.angle: ${rot.getAngleAround(Vector3.Y)}")
    val rad = Math.toRadians(rot.getAngleAround(Vector3.Y).toDouble())
    // pointFromCar
    val pointFromCar = Vector2(-3f * sin(rad.toFloat()), -3f * cos(rad.toFloat()))
    cameraPosition.set(Vector3(objPosition.x + pointFromCar.x, objPosition.y + 1f, objPosition.z + pointFromCar.y))
    // camera set position
    camera.position.set(cameraPosition)
    camera.lookAt(objPosition)
    camera.up.set(Vector3.Y)
    camera.update()
}

Libgdx frame rate (FPS) problems

I am having problems with the frame rate in libgdx. Could someone enlighten me as to whether I'm doing something wrong, because I don't think this is normal. The problem: I load a simple tilemap, and if I raise the resolution from 320×320 to, say, 800×600, the frame rate drops to about 12 fps. I hope I've explained it well; English is not my native language. Thanks.
public class MyGdxGame extends ApplicationAdapter {

    SpriteBatch batch;
    Texture img;
    private OrthogonalTiledMapRenderer bachTileMapRender;
    private OrthographicCamera camera;
    private TiledMap map;
    private int[] background = new int[] {0}, foreground = new int[] {1};

    @Override
    public void create () {
        batch = new SpriteBatch();
        img = new Texture("badlogic.jpg");
        map = new TmxMapLoader().load("maps/untitled.tmx");
        bachTileMapRender = new OrthogonalTiledMapRenderer(map, 1/32f); // assign the map
        camera = new OrthographicCamera(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        camera.setToOrtho(false, 32, 32); // virtual dimensions
    }

    @Override
    public void render () {
        Gdx.gl.glClearColor(1, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        //batch.begin();
        //batch.draw(img, 0, 0);
        //batch.end();
        camera.update();
        bachTileMapRender.setView(camera);
        bachTileMapRender.render(background);
        System.out.println("fps:" + Gdx.graphics.getFramesPerSecond());
    }
}
At 320×320 the frame rate is about 60, but when I enlarge the window or use 800×600 it drops rapidly.
public class DesktopLauncher {
    public static void main (String[] arg) {
        LwjglApplicationConfiguration config = new LwjglApplicationConfiguration();
        config.width = 320;
        config.height = 320;
        new LwjglApplication(new MyGdxGame(), config);
    }
}
P.S.: for what it's worth, my computer is a dual-core 3000 running Debian Linux.
untitled.tmx
<?xml version="1.0" encoding="UTF-8"?>
<map version="1.0" orientation="orthogonal" width="32" height="32" tilewidth="32" tileheight="32">
<tileset firstgid="1" name="prueba" tilewidth="32" tileheight="32">
<image source="title.png" width="384" height="128"/>
</tileset>
<layer name="Capa de Patrones 1" width="32" height="32">
<data encoding="base64" compression="zlib">
eJztwwEJAAAMBKHr8f17rsdQcNVUVVXV1w8ar2wB
</data>
</layer>
</map>
What if you do this?
// Swap these lines:
map = new TmxMapLoader().load("maps/untitled.tmx");      // first load the map
bachTileMapRender = new OrthogonalTiledMapRenderer(map); // then use the map

camera.setToOrtho(false, 800, 600); // virtual width and height, with the aspect ratio you want

// Gfx config
config.width = 800;
config.height = 600;
Also, how many tiles are you rendering?
It is possible that the problem is the TextureFilter.
If the texels (pixels of a texture) do not match the pixels they occupy on screen, they need to be recalculated.
For example: you have a 16×16 pixel texture which then occupies 20×20 px on screen.
The TextureFilter defines HOW the on-screen pixels are calculated.
There are 2 important TextureFilters:
- Nearest: fetches the nearest texel
- Linear: fetches the 4 nearest texels
As you can see, the linear filter costs considerably more performance.
So you should try using Nearest, which may not look as good, though.
You may have noticed that you can set 2 TextureFilters:
- Magnification: used if the texture is smaller than the space it takes on screen
- Minification: used if the texture is bigger than the space it takes on screen
You can also use mip-mapping, which gives you different resolutions of the same texture, with the best-matching one used automatically.
I would recommend reading this: Know Your Texture Filters!
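For a tilemap like the one in the question, the filters can be set when the map is loaded; a minimal sketch, assuming the standard TmxMapLoader.Parameters fields:

TmxMapLoader.Parameters params = new TmxMapLoader.Parameters();
params.textureMinFilter = Texture.TextureFilter.Nearest;
params.textureMagFilter = Texture.TextureFilter.Nearest;
TiledMap map = new TmxMapLoader().load("maps/untitled.tmx", params);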

TextureRegion in ArrayList Won't draw to screen

I'm having some issues drawing TextureRegions with a SpriteBatch in LibGDX.
So I have a main class that hosts the game logic.
In the constructor, I have:
atlas = new TextureAtlas(Gdx.files.internal("sheet.txt") );
this.loadTileGFX();
the loadTileGFX() method does this:
roseFrames = new ArrayList<AtlasRegion>();
roseFrames.add(atlas.findRegion("Dirt", 0));
roseFrames.add(atlas.findRegion("Dirt", 1));
roseFrames.add(atlas.findRegion("Dirt", 2));
roseFrames.add(atlas.findRegion("Dirt", 3));
Then I pass the arrayList of AtlasRegions into the object:
// in the main class
rose = new RoseSquare(roseFrames, st, col, row, tileWidth);

// in the constructor of the object to draw
this.textureRegions = roseFrames;
Then every render() loop I call:
batch.begin();
rose.draw(batch);
batch.end();
The rose.draw() method looks like this:
public void draw(SpriteBatch batch){
batch.draw(this.textureRegions.get(1), rect.x, rect.y, rect.width, rect.height);
}
But the thing is, this doesn't draw anything to the screen.
BUT HERE'S THE THING.
If I change the code to be:
public void draw(SpriteBatch batch){
    batch.draw(new TextureAtlas(Gdx.files.internal("sheet.txt")).findRegion("Dirt", 0), rect.x, rect.y, rect.width, rect.height);
}
Then it draws correctly.
Can anybody shed some light on what error I might have?
Keep in mind that I don't get any errors with the "nothing drawn" code.
Also, I can trace the details of this.textureRegions.get(1), and they are all correct.
Thanks.
If you need to draw an array of something that has textures, you can do it like this:
batch.begin();
for (Ground ground : groundArray) {
    batch.draw(ground.getTextureRegion(), ground.x, ground.y);
}
batch.end();
As you can see, I am drawing the TextureRegion here.
You can check related classes and other information in my answers HERE and HERE
Answering drew's comment:
public TextureRegion customGetTextureRegion(int i){
    switch(i){
        case 1: return atlas.findRegion("dirt1");
        case 2: return atlas.findRegion("dirt2");
        case 3: return atlas.findRegion("dirt3");
        default: return null;
    }
}
I have found a solution to my own problem.
I was also drawing some debug ShapeRenderer stuff.
The issue seemed to be that libGDX didn't like having a SpriteBatch and a ShapeRenderer "on" at the same time:
// LibGDX doesn't like this:
spriteBatch.begin();
shapeRenderer.begin(ShapeType.Line);
shapeRenderer.rect(x, y, width, height);
shapeRenderer.end();
spriteBatch.draw(texRegion, x, y, width, height);
spriteBatch.end();
It prefers:
// LibGDX likes this:
shapeRenderer.begin(ShapeType.Line);
shapeRenderer.rect(x, y, width, height);
shapeRenderer.end();

spriteBatch.begin();
spriteBatch.draw(texRegion, x, y, width, height);
spriteBatch.end();
Thanks for your responses, everyone.