I am creating a single-pixel texture with a specified color using the method createSinglePixelTexture() shown below.
Question:
1. Do I need to dispose of "singlePixelPixmap" and the texture "t"?
2. If so, where should I dispose of them?
singlePixelTexture = createSinglePixelTexture(0.129f, 0.129f, 0.129f, .7f);

private Texture createSinglePixelTexture(float r, float g, float b, float a) {
    Pixmap singlePixelPixmap = new Pixmap(1, 1, Pixmap.Format.RGBA8888);
    singlePixelPixmap.setColor(r, g, b, a);
    singlePixelPixmap.fill();
    PixmapTextureData textureData = new PixmapTextureData(singlePixelPixmap, Pixmap.Format.RGBA8888, false, false, true);
    Texture t = new Texture(textureData);
    t.setFilter(TextureFilter.Nearest, TextureFilter.Nearest);
    return t;
}
You don't need the intermediary PixmapTextureData; it's entirely optional.
As soon as you create a Texture from a Pixmap, the pixel data has been uploaded to the GPU, so you can dispose of the Pixmap. You can therefore insert the disposal of everything but the texture just before the return.
Once you dispose of a Texture, it can no longer be drawn. Do not dispose of t unless you are SURE you will never try to draw it again.
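For example, here is a minimal sketch of the method with the PixmapTextureData removed and the Pixmap disposed just before the return, assuming the standard libGDX Texture(Pixmap) constructor, which uploads the pixel data immediately:

private Texture createSinglePixelTexture(float r, float g, float b, float a) {
    Pixmap pixmap = new Pixmap(1, 1, Pixmap.Format.RGBA8888);
    pixmap.setColor(r, g, b, a);
    pixmap.fill();
    Texture t = new Texture(pixmap); // the pixel data is copied to the GPU here
    t.setFilter(TextureFilter.Nearest, TextureFilter.Nearest);
    pixmap.dispose(); // safe now: the Texture no longer needs the Pixmap
    return t;
}

Dispose of the returned texture itself (for example in your Screen's dispose() method) once you are done drawing with it.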
I have this code
textureAtlas = TextureAtlas("atlas.atlas")
val box = textureAtlas.findRegion("box")
I want to create a texture from "box". Is it possible? box.texture returns the original texture, not just the region. Oh, and I don't want to use Sprite and SpriteBatch. I need this in 3D, not 2D.
Thanks
A TextureAtlas doesn't actually separate the pieces. When you get a region from the atlas, it just records the area of the whole texture you are going to use (u, v, u2, v2) together with a reference to that original texture.
This is why batch.draw(Texture) and batch.draw(TextureRegion) are not interchangeable.
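As a small illustration (assuming a SpriteBatch named batch and the atlas from the question; the positions are arbitrary):

batch.begin();
// Drawing the Texture draws the entire atlas page.
batch.draw(textureAtlas.getTextures().first(), 0, 0);
// Drawing the TextureRegion draws only the (u, v)-(u2, v2) sub-rectangle.
batch.draw(textureAtlas.findRegion("box"), 0, 200);
batch.end();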
However, extracting part of the picture as its own texture is possible. One way is to use a Pixmap: first obtain a Pixmap from the atlas texture, then create a new empty Pixmap the size of the "box" area you want, copy the pixels across, and generate a texture from the new Pixmap, as shown in the sketch below. This may be quite expensive depending on the size of your TextureAtlas.
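A rough sketch of this Pixmap approach, assuming the atlas texture's TextureData is backed by a Pixmap (true for textures loaded from image files):

TextureRegion box = textureAtlas.findRegion("box");
TextureData data = box.getTexture().getTextureData();
if (!data.isPrepared()) data.prepare();          // load the pixel data if necessary
Pixmap atlasPixmap = data.consumePixmap();       // pixels of the whole atlas page

// Copy just the region's rectangle into a new Pixmap.
Pixmap boxPixmap = new Pixmap(box.getRegionWidth(), box.getRegionHeight(), Pixmap.Format.RGBA8888);
boxPixmap.drawPixmap(atlasPixmap,
        0, 0,                                    // destination x, y
        box.getRegionX(), box.getRegionY(),      // source x, y inside the atlas
        box.getRegionWidth(), box.getRegionHeight());

Texture boxTexture = new Texture(boxPixmap);     // standalone texture, usable in a 3D material
boxPixmap.dispose();
if (data.disposePixmap()) atlasPixmap.dispose(); // only dispose if the TextureData asks us to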
Alternatively, you can use a frame buffer: create a FrameBufferBuilder and build a new frame buffer, draw the texture region into this buffer, and take the texture from it. The problem here is that the texture's size will be the viewport/screen size; I guess you can create a new camera to change it to the size you want:
GLFrameBuffer.FrameBufferBuilder frameBufferBuilder = new GLFrameBuffer.FrameBufferBuilder(widthofBox, heightofBox);
frameBufferBuilder.addColorTextureAttachment(GL30.GL_RGBA8, GL30.GL_RGBA, GL30.GL_UNSIGNED_BYTE);
frameBuffer = frameBufferBuilder.build();
OrthographicCamera c = new OrthographicCamera(widthofBox, heightofBox);
c.up.set(0, 1, 0);
c.direction.set(0, 0, -1);
c.position.set(widthofBox / 2, heightofBox / 2, 0f);
c.update();
batch.setProjectionMatrix(c.combined);
frameBuffer.begin();
batch.begin();
batch.draw(boxregion...)
batch.end();
frameBuffer.end();
Texture texturefbo = frameBuffer.getColorBufferTexture();
texturefbo will be y-flipped. You can fix this in the texture draw method by setting scaleY to -1, or you can scale by -1 while drawing to the frame buffer, or you can change the camera like this:
up.set(0, -1, 0);
direction.set(0, 0, 1);
to flip the camera on the y axis.
The last thing that comes to mind is mipmapping this texture. It's also not hard:
texturefbo.bind();
Gdx.gl.glGenerateMipmap(GL20.GL_TEXTURE_2D);
texturefbo.setFilter(Texture.TextureFilter.MipMapLinearLinear,
Texture.TextureFilter.MipMapLinearLinear);
You can do this:
Texture boxTexture = new TextureRegion(textureAtlas.findRegion("box")).getTexture();
I was getting into shaders for LibGDX and noticed there are some attributes that are only used by LibGDX.
The standard vertex and fragment shaders from https://github.com/libgdx/libgdx/wiki/Shaders work perfectly and get applied to my SpriteBatch.
When I try to use an HQX shader like https://github.com/SupSuper/OpenXcom/blob/master/bin/common/Shaders/HQ2x.OpenGL.shader I get a lot of errors.
That's probably because I need to send some LibGDX-dependent variables to the shader, but I can't figure out which ones those should be.
I'd like to use these shaders on desktops with large screens so the game keeps looking good on those screens.
I used this code to load the shader:
try {
    shaderProgram = new ShaderProgram(Gdx.files.internal("vertex.glsl").readString(), Gdx.files.internal("fragment.glsl").readString());
    shaderProgram.pedantic = false;
    System.out.println("Shader Log:");
    System.out.println(shaderProgram.getLog());
} catch (Exception ex) {
}
The Shader Log outputs:
No errors.
Thanks in advance.
This is a post-processing shader, so your flow should go like this:
Draw your scene to an FBO at pixel-perfect resolution using SpriteBatch's default shader.
Draw the FBO's texture to the screen's frame buffer using the upscaling shader. You can do this with SpriteBatch if you modify the shader to match the attributes and uniforms that SpriteBatch uses. (You could alternatively create a simple mesh with the attribute names that the shader expects, but SpriteBatch is probably easiest.)
First of all, this is not a typical shader for SpriteBatch, so you need to set ShaderProgram.pedantic = false; somewhere before loading anything.
Now you need a FrameBuffer at the right size. It should be sized for your sprites to be pixel perfect (one pixel of texture scales to one pixel of world). Something like this:
public void resize (int width, int height){
    float ratio = (float)width / (float)height;
    int gameWidth = (int)(GAME_HEIGHT * ratio); //fixed world height, width follows the screen's aspect ratio

    boolean needNewFrameBuffer = false;
    if (frameBuffer != null && (frameBuffer.getWidth() != gameWidth || frameBuffer.getHeight() != GAME_HEIGHT)){
        frameBuffer.dispose();
        needNewFrameBuffer = true;
    }
    if (frameBuffer == null || needNewFrameBuffer)
        frameBuffer = new FrameBuffer(Format.RGBA8888, gameWidth, GAME_HEIGHT, false); //no depth buffer needed for 2D

    camera.viewportWidth = gameWidth;
    camera.viewportHeight = GAME_HEIGHT;
    camera.update();
}
Then you can draw to the frame buffer as if it's your screen. And after that, you draw the frame buffer's texture to the screen.
public void render (){
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

    frameBuffer.begin();
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
    batch.setProjectionMatrix(camera.combined);
    batch.setShader(null); //use default shader
    batch.begin();
    //draw your game
    batch.end();
    frameBuffer.end();

    batch.setShader(upscaleShader);
    batch.begin();
    upscaleShader.setUniformf("rubyTextureSize", frameBuffer.getWidth(), frameBuffer.getHeight()); //this is the uniform in your shader. I assume it wants the scene size in pixels
    batch.draw(frameBuffer.getColorBufferTexture(), -1, 1, 2, -2); //full-screen quad for no projection matrix, with Y flipped as needed for frame buffer textures
    batch.end();
}
There are also some changes you need to make to your shader so it will work with OpenGL ES, and because SpriteBatch is wired for specific attribute and uniform names:
At the top of your vertex shader, add this to define your vertex attributes and varyings (which your linked shader doesn't need because it's relying on built-in variables that aren't available in GL ES):
attribute vec4 a_position;
attribute vec2 a_texCoord0;
varying vec2 v_texCoord[5];
Then in the vertex shader, change the gl_Position line to
gl_Position = a_position; //since we can't rely on built-in variables
and replace all occurrences of gl_TexCoord with v_texCoord for the same reason.
In the fragment shader, to be compatible with OpenGL ES, you need to declare precision. You also need to declare the same varying, so add this to the top:
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoord[5];
As with the vertex shader, replace all occurrences of gl_TexCoord with v_texCoord. Also replace all occurrences of rubyTexture with u_texture, which is the texture uniform name that SpriteBatch uses.
I think that's everything. I didn't actually test this and I'm going off of memory, but hopefully it gets you close.
I want to create some effects in Cocos2d-x by updating the raw color data of a sprite. Does Cocos2d-x provide any way to do that?
Update: My buffer has 4 bytes (A-R-G-B) per pixel and the viewport dimensions are 640x480, so the buffer is 640 * 480 * 4 = 1228800 bytes long, and I update its contents frequently.
This solution regenerates the texture each time it is changed.
Note: the texture in this code uses the format RGBA - not ARGB.
The data (texel) array m_TextureData and the sprite are allocated only once, but in the 2.x path the Texture2D object has to be released and recreated every time, which might be a performance issue.
Note: the class names are the new ones from Cocos2d-x 3.1.x. In the main loop there is an alternative section for 2.2.x users; to use it you also have to use the old class names (like ccColor4B, CCTexture2D, CCSprite).
in header:
Color4B *m_TextureData;
Texture2D *m_Texture;
Sprite *m_Sprite;
in implementation:
int w = 640; // width of texture
int h = 480; // height of texture
m_TextureData = new Color4B[w * h];
set colors directly - e.g.:
Color4B white;
white.r = 255;
white.g = 255;
white.b = 255;
white.a = 255;
m_TextureData[i] = white; // i is an index running from 0 to w*h-1
use data to initialize texture:
CCSize contentSize;
contentSize.width = w;
contentSize.height = h;
m_Texture = new Texture2D;
m_Texture->initWithData(m_TextureData, kCCTexture2DPixelFormat_RGBA8888, w, h, contentSize);
create a Sprite with this texture:
m_Sprite = Sprite::createWithTexture(m_Texture);
m_Sprite->retain();
add m_Sprite to your scene
in main loop:
to change the colors/texels of the texture dynamically, modify m_TextureData:
m_TextureData[i] = ...;
in Cocos2d-x 2.x:
In 2.2.x you actually have to release the old texture and create a new one:
m_Texture->release(); // make sure that ccGLDeleteTexture() is called internally to prevent memory leakage
m_Texture = new Texture2D;
m_Texture->initWithData(m_TextureData, kCCTexture2DPixelFormat_RGBA8888, w, h, contentSize);
m_Sprite->setTexture(m_Texture); // update sprite with new texture
in Cocos2d-x 3.1.x:
m_Texture->updateWithData(m_TextureData, 0, 0, w, h);
Later, don't forget to clean up.
in destructor:
m_Sprite->release();
m_Texture->release();
delete [] m_TextureData;
I have an array of simple 2D objects, mostly made up of two triangles.
Although the objects are quite simple, each one is drawn using stencil operations,
so every object requires its own drawTriangles() call.
What is the best way to store and handle these objects and their vertex and index buffers?
I can think of a few different ways of doing it:
Option 1: at start-up, create one large vertex buffer and one large index buffer and add each object to them, e.g.:
public function initialize():void{
for each(object in objectList){
vertexData.push(object.vertices);
indexData.push(triangle1, triangle2);
}
indices = context3D.createIndexBuffer( numTriangles * 3 );
vertices = context3D.createVertexBuffer( numVertices, dataPerVertex );
indices.uploadFromVector( indexData, 0, numIndices );
vertices.uploadFromVector( vertexData, 0, numVertices );
}
During rendering, loop through all objects, check which ones are visible on screen, and update their vertices. Then re-upload the entire vertex buffer.
Afterwards, loop through the visible objects and draw each object's triangles with an individual call to drawTriangles():
public function onRender():void{
for each(object in objectList){
if(object.visibleOnScreen)
object.updateVertexData();
}
vertices.uploadFromVector( vertexData, 0, numVertices );
for each(object in objectsOnScreen){
drawTriangles(indices, object.vertexDataOffset, object.numTriangles);
}
}
Option 2: you could also do something similar to option 1, except re-upload each object's vertex data only when needed:
public function onRender():void{
for each(object in objectList){
if(object.visibleOnScreen){
if(object.hasChanged)
vertices.uploadFromVector( vertexData, object.vertexDataOffset, object.numVertices );
drawTriangles(indices, object.vertexDataOffset, object.numTriangles);
}
}
}
Option 3: you could also create a new vertex buffer consisting of only the visible objects on each frame:
public function onRender():void{
for each(object in objectList){
if(object.visibleOnScreen){
vertexData.push(object.vertices);
indexData.push(triangle1, triangle2);
}
}
indices = context3D.createIndexBuffer( numTriangles * 3 );
vertices = context3D.createVertexBuffer( numVertices, dataPerVertex );
indices.uploadFromVector( indexData, 0, numIndices );
vertices.uploadFromVector( vertexData, 0, numVertices );
for each(object in objectsOnScreen){
drawTriangles(indices, object.vertexDataOffset, object.numTriangles);
}
}
Option 4: another possibility is to create individual vertex and index buffers for each object:
public function initialize():void{
for each(object in objectList){
object.indices = context3D.createIndexBuffer( object.numTriangles * 3 );
object.vertices = context3D.createVertexBuffer( object.numVertices, dataPerVertex );
}
}
public function onRender():void{
for each(object in objectList){
if(object.visibleOnScreen){
if(object.hasChanged)
object.vertices.uploadFromVector( object.vertexData, 0, object.numVertices );
drawTriangles(object.indices, 0, object.numTriangles);
}
}
}
I can imagine that option 1 will be slow because the entire vertex buffer needs to be uploaded each frame.
Option 2 has the advantage that it only needs to upload vertex data that has changed, but it might suffer from the multiple calls to uploadFromVector.
UPDATE
I've had a look at the Starling framework source, specifically the QuadBatch class :
http://gamua.com/starling/
https://github.com/PrimaryFeather/Starling-Framework
It seems that all vertices are manually transformed beforehand and the entire vertex buffer is rebuilt and uploaded every frame.
You need to upload your data each frame no matter what, because Stage3D needs to know what to draw where. Ultimately, optimization starts with reducing draw calls. You can speed up the uploads by using a ByteArray for data that has not changed and a Vector for data that has changed: uploading a Vector is slower but setting Vector data is faster, while uploading a ByteArray is faster but setting ByteArray data is slower (so use a ByteArray only for cached data). You also do not need to create the index and vertex buffers each time; create them once with a comfortable length and create new ones only if that length becomes too small. All this should speed things up nicely, but the number of draw calls will still slow everything down (after 25+ draw calls you should start to see the downside).
In pure AS3, I have a Pixel Bender shader and a large bitmap. The shader is configurable with a distance parameter so that it affects only a small area of the bitmap. The problem is that the shader executes over the whole bitmap. What would be the best way to update only the affected region of the bitmap?
Given this config:
shader.data.image.input = referenceBitmap.bitmapData; // 300x200
shader.data.position = [150,100];
shader.data.distance = [20];
The following does not work:
new ShaderJob(shader,
              bitmap.bitmapData.getPixels(
                  new Rectangle(particle.x - 10,
                                particle.y - 10,
                                20,
                                20))).start();
I could make a temporary array to hold the computed values and then copy them back into the bitmapData array, but I would like the shader to update the bitmapData pixels directly, and only in the affected area.
The following works, but the shader runs on the whole 300x200 bitmap:
new ShaderJob(shader, bitmap.bitmapData).start();
Any suggestions?
EDIT: a filter will not work, as there are 3 input images.
BitmapData has a method called applyFilter.
Instead of trying to use the BitmapData with a ShaderJob, you could use the shader as a ShaderFilter and apply that filter to the BitmapData:
shader = new Shader(fr.data);
shaderFilter = new ShaderFilter();
shaderFilter.shader = shader;
Once you have your shaderFilter, you can use applyFilter on your bitmapData. Something like:
referenceBitmap.bitmapData.applyFilter(bitmap.bitmapData,
    new Rectangle(particle.x - 10, particle.y - 10, 20, 20),
    new Point(0, 0), shaderFilter);
Hope this helps.