unitScale in TileMap Constructor - libgdx

How do these tiled map renderer constructors actually work, especially with unitScale?
OrthogonalTiledMapRenderer(TiledMap map, float unitScale);
OrthogonalTiledMapRenderer(TiledMap map, float unitScale, Batch batch);
OrthogonalTiledMapRenderer(TiledMap map, Batch batch);
I created a tile map and I'm having a hard time managing it. How do the tile map's pixels vary with the camera and the viewport's aspect ratio?

The wiki explains how the tiled map renderer constructors work with unitScale:
https://github.com/libgdx/libgdx/wiki/Tile-maps
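In short, unitScale tells the renderer how many world units one map pixel corresponds to. With unitScale = 1 you work in raw pixels; with unitScale = 1/32f a 32x32-pixel tile becomes exactly one world unit, so your camera and viewport are sized in tiles rather than pixels, independent of the screen resolution. The Batch overloads just let the renderer reuse a Batch you already have instead of creating its own. A minimal sketch (the map path and the 20x15-tile camera size are only assumptions for illustration):
TiledMap map = new TmxMapLoader().load("level.tmx");          // placeholder path
float unitScale = 1f / 32f;                                   // 32 map pixels == 1 world unit (one tile)
OrthogonalTiledMapRenderer renderer = new OrthogonalTiledMapRenderer(map, unitScale);
OrthographicCamera camera = new OrthographicCamera();
camera.setToOrtho(false, 20, 15);                             // the camera sees 20 x 15 world units (tiles)
// in render():
camera.update();
renderer.setView(camera);
renderer.render();
If the screen's aspect ratio differs from the camera's, wrap the camera in a Viewport (FitViewport, ExtendViewport, ...) so the world size stays in tile units and only the margins or the extra visible area change with the window shape.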

Related

Libgdx Rendering Bitmap font to pixmap texture causes extremely slow rendering

I'm using libgdx scene2d to render 2d actors. Some of these actors originally included scene2d Label actors for rendering static text. The Labels work fine but drawing ~20 of them on the screen at once drops the frame rate by 10-15 frames, resulting in noticeably poor rendering while dragging.
I'm attempting to avoid the Labels by pre-drawing the text to textures, and rendering the textures as scene2d Image actors. I'm creating the texture using the code below:
BitmapFont font = manager.get(baseNameFont,BitmapFont.class);
GlyphLayout gl = new GlyphLayout(font,"Test Text");
int textWidth = (int)gl.width;
int textHeight = (int)gl.height;
LOGGER.info("textHeight: {}",textHeight);
//int width = Gdx.graphics.getWidth();
int width = textWidth;
//int height = 500;
int height = textHeight;
SpriteBatch spriteBatch = new SpriteBatch();
FrameBuffer m_fbo = new FrameBuffer(Pixmap.Format.RGB565, width,height, false);
m_fbo.begin();
Gdx.gl.glClearColor(1f,1f,1f,0f);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
Matrix4 normalProjection = new Matrix4()
.setToOrtho2D(0, 0, width, height);
spriteBatch.setProjectionMatrix(normalProjection);
spriteBatch.begin();
font.draw(spriteBatch,gl,0,height);
spriteBatch.end();//finish write to buffer
Pixmap pm = ScreenUtils.getFrameBufferPixmap(0, 0, (int) width, (int) height);//write frame buffer to Pixmap
m_fbo.end();
m_fbo.dispose();
m_fbo = null;
spriteBatch.dispose();
Texture texture = new Texture(pm);
textTexture = new TextureRegion(texture);
textTexture.flip(false,true);
manager.add(texture);
I assumed, and have read, that textures are often faster. However, when I replaced the Labels with the texture, it had the same, if not worse, effect on the frame rate. Oddly, I'm not experiencing this when adding textures from a file, which makes me think I'm doing something wrong in my code. Is there a different way I should be pre-rendering these pieces of text?
I have not tested this, but I think you can enable culling for the whole Stage by setting its root view to use a cullingArea matching the world width and height of the viewport. I would do this in resize after updating the Stage Viewport just in case the update affects the world width and height of the viewport.
@Override
public void resize(int width, int height) {
    //...
    stage.getViewport().update(width, height, true);
    stage.getRoot().setCullingArea(
            new Rectangle(0f, 0f, stage.getViewport().getWorldWidth(), stage.getViewport().getWorldHeight()));
}
It will only be able to cull Actors that have their x, y, width, and height set properly. I think this is true of anything in the scene2d UI package, but for your own custom Actors you will need to do it yourself.
If a child of your root view is a Group that covers more than the screen and contains many actors, you might want to cull its children too, so that even if the group as a whole is not culled by the root view, the group can still cull a portion of its own children. To figure out the culling rectangle for this, I think you would intersect a rectangle of the group's size with the viewport's world-size rectangle offset by -x and -y of the group's position (since the culling rectangle is relative to the position of the group it's set on), as sketched below.
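An untested sketch of that intersection (assuming the group sits directly on the stage root, so its position is already in world coordinates):
Rectangle visibleWorld = new Rectangle(
        -group.getX(), -group.getY(),                        // viewport rectangle shifted into the group's local space
        stage.getViewport().getWorldWidth(),
        stage.getViewport().getWorldHeight());
Rectangle groupBounds = new Rectangle(0f, 0f, group.getWidth(), group.getHeight());
Rectangle cullingArea = new Rectangle();
if (Intersector.intersectRectangles(groupBounds, visibleWorld, cullingArea))
    group.setCullingArea(cullingArea);                       // children outside this rectangle are skipped when drawing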

Libgdx get texture from Texture Atlas with findRegion

I have this code
textureAtlas = TextureAtlas("atlas.atlas")
val box = textureAtlas.findRegion("box")
I want to create a texture from "box". Is it possible? box.texture returns the original texture, not just the region. Oh, and I don't want to use Sprite and SpriteBatch; I need this in 3D, not 2D.
Thanks
A TextureAtlas doesn't actually split the image into separate pieces. When you get a region from the atlas, it just records which area of the texture you are going to use (u, v, u2, v2) along with a reference to the whole original texture.
This is why batch.draw(Texture) and batch.draw(TextureRegion) are not the same in use.
However, extracting part of the picture as its own texture is possible.
You can use a Pixmap to do it.
First generate a Pixmap from the atlas texture. Then create a new empty Pixmap the size of the "box" area you want, copy the pixels across, and generate a texture from your new Pixmap.
It may be quite expensive depending on the size of your TextureAtlas.
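A rough sketch of that Pixmap approach in Java (untested; it assumes the atlas page was loaded from an image file so its TextureData can be read back as a Pixmap):
TextureAtlas.AtlasRegion box = textureAtlas.findRegion("box");
TextureData data = box.getTexture().getTextureData();
if (!data.isPrepared()) data.prepare();
Pixmap atlasPixmap = data.consumePixmap();                   // pixels of the whole atlas page
Pixmap boxPixmap = new Pixmap(box.getRegionWidth(), box.getRegionHeight(), atlasPixmap.getFormat());
boxPixmap.drawPixmap(atlasPixmap,
        0, 0,                                                // destination x, y in the new pixmap
        box.getRegionX(), box.getRegionY(),                  // source x, y inside the atlas page
        box.getRegionWidth(), box.getRegionHeight());        // source width, height
Texture boxTexture = new Texture(boxPixmap);                 // standalone texture, usable as a 3D material
boxPixmap.dispose();
atlasPixmap.dispose();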
Alternatively, you can use a frame buffer.
Create a FrameBufferBuilder, build a new frame buffer, draw the texture region into this buffer, and get the texture from it.
The problem here is that the texture's size will be the same as the viewport/screen size; you can create a new camera to change it to the size you want.
GLFrameBuffer.FrameBufferBuilder frameBufferBuilder = new GLFrameBuffer.FrameBufferBuilder(widthofBox, heightofBox);
frameBufferBuilder.addColorTextureAttachment(GL30.GL_RGBA8, GL30.GL_RGBA, GL30.GL_UNSIGNED_BYTE);
frameBuffer = frameBufferBuilder.build();
OrthographicCamera c = new OrthographicCamera(widthofBox, heightofBox);
c.up.set(0, 1, 0);
c.direction.set(0, 0, -1);
c.position.set(widthofBox / 2, heightofBox / 2, 0f);
c.update();
batch.setProjectionMatrix(c.combined);
frameBuffer.begin();
batch.begin();
batch.draw(boxregion...)
batch.end();
frameBuffer.end();
Texture texturefbo = frameBuffer.getColorBufferTexture();
texturefbo will be flipped on y. You can fix this in the texture draw method by setting scaleY to -1, you can scale y by -1 while drawing to the frame buffer, or you can change the camera like this:
c.up.set(0, -1, 0);
c.direction.set(0, 0, 1);
to flip the camera on the y axis.
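For example, the scaleY = -1 fix could look like this when drawing the result (a sketch; x and y are just wherever you want the box placed):
batch.draw(new TextureRegion(texturefbo),
        x, y,                                  // position
        widthofBox / 2f, heightofBox / 2f,     // origin at the center so the flip happens in place
        widthofBox, heightofBox,               // size
        1f, -1f,                               // scaleX, scaleY: -1 on y undoes the frame buffer flip
        0f);                                   // rotation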
The last thing that comes to mind is mipmapping this texture. It's also not hard:
texturefbo.bind();
Gdx.gl.glGenerateMipmap(GL20.GL_TEXTURE_2D);
texturefbo.setFilter(Texture.TextureFilter.MipMapLinearLinear,
        Texture.TextureFilter.MipMapLinearLinear);
You can do this:
Texture boxTexture = new TextureRegion(textureAtlas.findRegion("box")).getTexture();

Flipping a shaperenderer object in libGDX

I am flipping one of the objects in my libGDX project. At the same time I want to flip its shape-rendered circle as well. How can I do it?
Here is my code for the ShapeRenderer:
shapeRenderer.setProjectionMatrix(camera.projection);
shapeRenderer.setTransformMatrix(camera.view);
shapeRenderer.begin(ShapeRenderer.ShapeType.Line);
if (obsObj.isSpider())
    circle(shapeRenderer, ob.getCollisionCircle());
shapeRenderer.end();
and the circle method is:
private void circle(ShapeRenderer renderer, Circle circle) {
    shapeRenderer.circle(circle.x, circle.y, circle.radius, 100);
}
I am flipping the sprite object like this:
obsSprite.setFlip(true,false);
Instead of using circle/rectangle shape rendering, I tried shape rendering with polygons.
It worked well for rotation and flipping; see the sketch below.
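A minimal sketch of what I mean, using libGDX's Polygon class (the outline vertices and sizes are made up for illustration):
Polygon polygon = new Polygon(new float[]{0, 0, 32, 0, 32, 32, 0, 32}); // local-space outline
polygon.setPosition(obsSprite.getX(), obsSprite.getY());
polygon.setOrigin(16, 16);               // flip around the shape's center
polygon.setScale(-1f, 1f);               // horizontal flip; use (1f, -1f) for vertical
shapeRenderer.begin(ShapeRenderer.ShapeType.Line);
shapeRenderer.polygon(polygon.getTransformedVertices());
shapeRenderer.end();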
You can use a transform matrix like this:
shapeRenderer.setProjectionMatrix(camera.combined);
shapeRenderer.setTransformMatrix(...your transformation matrix...);
camera.combined contains both the camera's projection and view.
Your transformation matrix might be a scaling matrix in your case (scaleX = -1 for horizontal flipping and/or scaleY = -1 for vertical flipping), as in the sketch below.
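A sketch of such a flip around the sprite's horizontal center (centerX is assumed to be the mirror axis you want):
float centerX = obsSprite.getX() + obsSprite.getWidth() / 2f;
Matrix4 flip = new Matrix4()
        .translate(centerX, 0f, 0f)      // move the mirror axis back into place
        .scale(-1f, 1f, 1f)              // scaleX = -1 mirrors horizontally
        .translate(-centerX, 0f, 0f);    // move the mirror axis to the origin first
shapeRenderer.setProjectionMatrix(camera.combined);
shapeRenderer.setTransformMatrix(flip);
shapeRenderer.begin(ShapeRenderer.ShapeType.Line);
circle(shapeRenderer, ob.getCollisionCircle());
shapeRenderer.end();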

Convert OpenGL HQX shader to LibGDX

I was getting into shaders for LibGDX and noticed there are some attributes that are specific to LibGDX.
The standard Vertex and Fragment shaders from https://github.com/libgdx/libgdx/wiki/Shaders work perfect and gets applied to my SpriteBatch.
When I try to use an HQX shader like https://github.com/SupSuper/OpenXcom/blob/master/bin/common/Shaders/HQ2x.OpenGL.shader I get a lot of errors.
Probably because I need to send some LibGDX-dependent variables to the shader, but I can't figure out which ones those should be.
I'd like to use these shaders on desktops with large screens so the game keeps looking great on these screens.
I used this code to load the shader:
try {
    shaderProgram = new ShaderProgram(Gdx.files.internal("vertex.glsl").readString(), Gdx.files.internal("fragment.glsl").readString());
    shaderProgram.pedantic = false;
    System.out.println("Shader Log:");
    System.out.println(shaderProgram.getLog());
} catch(Exception ex) { }
The Shader Log outputs:
No errors.
Thanks in advance.
This is a post processing shader, so your flow should go like this:
Draw your scene to a FBO at pixel perfect resolution using SpriteBatch's default shader.
Draw the FBO's texture to the screen's frame buffer using the upscaling shader. You can do this with SpriteBatch if you modify the shader to match the attributes and uniforms that SpriteBatch uses. (You could alternatively create a simple mesh with the attribute names that the shader expects, but SpriteBatch is probably easiest.)
First of all, we are not using a typical shader with SpriteBatch so you need to call ShaderProgram.pedantic = false; somewhere before loading anything.
Now you need a FrameBuffer at the right size. It should be sized for your sprites to be pixel perfect (one pixel of texture scales to one pixel of world). Something like this:
public void resize (int width, int height){
    float ratio = (float)width / (float)height;
    int gameWidth = (int)(GAME_HEIGHT * ratio); // keep the screen's aspect ratio at the fixed game height
    boolean needNewFrameBuffer = false;
    if (frameBuffer != null && (frameBuffer.getWidth() != gameWidth || frameBuffer.getHeight() != GAME_HEIGHT)){
        frameBuffer.dispose();
        needNewFrameBuffer = true;
    }
    if (frameBuffer == null || needNewFrameBuffer)
        frameBuffer = new FrameBuffer(Format.RGBA8888, gameWidth, GAME_HEIGHT, false);
    camera.viewportWidth = gameWidth;
    camera.viewportHeight = GAME_HEIGHT;
    camera.update();
}
Then you can draw to the frame buffer as if it's your screen. And after that, you draw the frame buffer's texture to the screen.
public void render (){
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

    frameBuffer.begin();
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
    batch.setProjectionMatrix(camera.combined);
    batch.setShader(null); //use default shader
    batch.begin();
    //draw your game
    batch.end();
    frameBuffer.end();

    batch.setShader(upscaleShader);
    batch.setProjectionMatrix(new Matrix4()); //identity, since the quad below is already in clip coordinates
    batch.begin();
    upscaleShader.setUniformf("rubyTextureSize", frameBuffer.getWidth(), frameBuffer.getHeight()); //this is the uniform in your shader. I assume it's wanting the scene size in pixels
    batch.draw(frameBuffer.getColorBufferTexture(), -1, 1, 2, -2); //full screen quad for no projection matrix, with Y flipped as needed for frame buffer textures
    batch.end();
}
There are also some changes you need to make to your shader so it will work with OpenGL ES, and because SpriteBatch is wired for specific attribute and uniform names:
At the top of your vertex shader, add this to define your vertex attributes and varyings (which your linked shader doesn't declare because it relies on built-in variables that aren't available in GL ES):
attribute vec4 a_position;
attribute vec2 a_texCoord;
varying vec2 v_texCoord[5];
Then in the vertex shader, change the gl_Position line to
gl_Position = a_position; //since we can't rely on built-in variables
and replace all occurrences of gl_TexCoord with v_texCoord for the same reason.
In the fragment shader, to be compatible with OpenGL ES, you need to declare precision. You also need to declare the same varying, so add this to the top:
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoord[5];
As with the vertex shader, replace all occurrences of gl_TexCoord with v_texCoord. And also replace all occurrences of rubyTexture with u_texture, which is the texture name that SpriteBatch uses.
I think that's everything. I didn't actually test this and I'm going off of memory, but hopefully it gets you close.

In AS3, how to update a portion of a bitmap with a Pixelbender instead of the whole bitmap?

In pure AS3, I have a Pixel Bender shader and a large bitmap. The shader is configurable with a distance parameter so it affects only a small area of the bitmap. The problem is that the shader is executing over the whole bitmap. What would be the best way to update only the affected region of the bitmap?
Given this config:
shader.data.image.input = referenceBitmap.bitmapData; // 300x200
shader.data.position = [150,100];
shader.data.distance = [20];
The following does not work:
new ShaderJob(shader,
    bitmap.bitmapData.getPixels(
        new Rectangle(particle.x - 10, particle.y - 10, 20, 20))).start();
I could make a temporary array to hold the computed values and then copy them back into the bitmapData array, although I would prefer the shader to update the bitmapData pixels directly and only in the affected area.
The following works, but the shader runs on the whole 300x200 bitmap:
new ShaderJob(shader, bitmap.bitmapData).start();
Any suggestions?
EDIT: a filter will not work as there are 3 input images
BitmapData has a method called applyFilter.
Instead of trying to use the BitmapData with a ShaderJob, you can alternatively wrap the shader in a ShaderFilter and apply that filter to the BitmapData:
shader = new Shader(fr.data);
shaderFilter = new ShaderFilter();
shaderFilter.shader = shader;
Once you have your shaderFilter, you can use applyFilter on your bitmapData. Something like:
referenceBitmap.bitmapData.applyFilter(bitmap.bitmapData,
    new Rectangle(particle.x - 10, particle.y - 10, 20, 20),
    new Point(0, 0),
    shaderFilter);
Hope this helps.