I'm looking for a way to implement alpha masking with the stencil buffer in libGDX with OpenGL ES 2.0.
I have managed to implement simple alpha masking with the stencil buffer and shaders, where a fragment is discarded if its alpha channel is greater than some specified value. That works fine.
The problem is when I want to use a gradient image mask, or a feathered PNG mask: I don't get what I wanted (I get a "filled" rectangle mask with no alpha channel) instead of a smooth fade-out mask.
I know the problem is that the stencil buffer holds only 0s and 1s, but I want to write other values to the stencil, representing the actual alpha value of the fragment that passed in the fragment shader, and use that value from the stencil to do some blending.
I hope I've explained what I want to get, if it's even possible.
I've recently started playing with OpenGL ES, so I still have some misunderstandings.
My question is: how do I set up the stencil buffer to store values other than 0s and 1s, and how do I use those values later for alpha masking?
Thanks in advance.
This is currently my stencil setup:
Gdx.gl.glClearColor(1, 1, 1, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_STENCIL_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
// setup drawing to stencil buffer
Gdx.gl20.glEnable(GL20.GL_STENCIL_TEST);
Gdx.gl20.glStencilFunc(GL20.GL_ALWAYS, 0x1, 0xffffffff);
Gdx.gl20.glStencilOp(GL20.GL_REPLACE, GL20.GL_REPLACE, GL20.GL_REPLACE);
Gdx.gl20.glColorMask(false, false, false, false);
Gdx.gl20.glDepthMask(false);
spriteBatch.setShader(shaderStencilMask);
spriteBatch.begin();
// push to the batch
spriteBatch.draw(Assets.instance.actor1, Gdx.graphics.getWidth() / 2, Gdx.graphics.getHeight() / 2, Assets.instance.actor1.getRegionWidth(), Assets.instance.actor1.getRegionHeight());
spriteBatch.end();
// fix stencil buffer, enable color buffer
Gdx.gl20.glColorMask(true, true, true, true);
Gdx.gl20.glDepthMask(true);
Gdx.gl20.glStencilOp(GL20.GL_KEEP, GL20.GL_KEEP, GL20.GL_KEEP);
// draw only where the pattern HAS been drawn (stencil value equals 1)
Gdx.gl20.glStencilFunc(GL20.GL_EQUAL, 0x1, 0xff);
decalBatch.add(decal);
decalBatch.flush();
Gdx.gl20.glDisable(GL20.GL_STENCIL_TEST);
decalBatch.add(decal2);
decalBatch.flush();
The only ways I can think of doing this are with a FrameBuffer.
Option 1
Draw your scene's background (the stuff that will not be masked) to a FrameBuffer. Then draw your entire scene without masks to the screen. Then draw your mask decals to the screen using the FrameBuffer's color attachment. Downside to this method is that in OpenGL ES 2.0 on Android, a FrameBuffer can have RGBA4444, not RGBA8888, so there will be visible seams along the edges of the masks where the color bit depth changes.
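A rough sketch of the Option 1 pass order (untested; drawBackground(), drawFullScene() and drawMaskDecals() are hypothetical placeholders for your own rendering code):
fbo.begin();
drawBackground(); // hypothetical: only the content that should show through the masks
fbo.end();
drawFullScene(); // hypothetical: the whole unmasked scene, drawn to the screen
drawMaskDecals(); // hypothetical: mask decals sampling fbo's color texture (shader below)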
Option 2
Draw your mask decals as B&W opaque to your FrameBuffer. Then draw your background to the screen. When you draw anything that can be masked, draw it with multi-texturing, multiplying by the FrameBuffer's color texture. The potential downside is that absolutely anything that can be masked must be drawn multi-textured with a custom shader. But if you're just using decals, then this isn't really any more complicated than Option 1.
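A sketch of the Option 2 mask pass (untested; the format choice and names here are my assumptions):
FrameBuffer maskFbo = new FrameBuffer(Pixmap.Format.RGB888, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), false);
maskFbo.begin();
Gdx.gl.glClearColor(1, 1, 1, 1); // white = unmasked; multiplying a color by 1.0 changes nothing
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
maskDecalBatch.add(maskDecal); // the B&W mask decals (black = fully masked)
maskDecalBatch.flush();
maskFbo.end();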
The following is untested...might require a bit of debugging.
In both options, I would subclass CameraGroupStrategy to be used with the DecalBatch when drawing the mask decals, and override beforeGroups to also set the second texture.
public class MaskingGroupStrategy extends CameraGroupStrategy {
    private Texture fboTexture;

    public MaskingGroupStrategy(Camera camera) {
        super(camera);
    }

    // call this before using the DecalBatch for drawing mask decals
    public void setFBOTexture(Texture fboTexture) {
        this.fboTexture = fboTexture;
    }

    @Override
    public void beforeGroups() {
        super.beforeGroups();
        fboTexture.bind(1); // bind the FBO's color texture to unit 1
        shader.setUniformi("u_fboTexture", 1);
        shader.setUniformf("u_screenDimensions", Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
    }
}
And in your shader, you can get the FBO texture color like this:
vec4 fboColor = texture2D(u_fboTexture, gl_FragCoord.xy/u_screenDimensions.xy);
Then for option 1:
gl_FragColor = vec4(fboColor.rgb, 1.0-texture2D(u_texture, v_texCoords).a);
or for option 2:
gl_FragColor = v_color * texture2D(u_texture, v_texCoords);
gl_FragColor.a *= fboColor.r;
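Wiring it up might look like this (a sketch based on the class above; camera, fbo and maskDecal come from your own setup):
MaskingGroupStrategy strategy = new MaskingGroupStrategy(camera);
DecalBatch maskDecalBatch = new DecalBatch(strategy);
strategy.setFBOTexture(fbo.getColorBufferTexture()); // set before drawing mask decals
maskDecalBatch.add(maskDecal);
maskDecalBatch.flush();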
I have this code
textureAtlas = TextureAtlas("atlas.atlas")
val box = textureAtlas.findRegion("box")
I want to create a texture from "box". Is that possible? box.texture returns the original texture, not the region. Also, I don't want to use Sprite and SpriteBatch; I need this in 3D, not 2D.
Thanks
TextureAtlas doesn't actually separate the image into pieces. When you get a region from the atlas, it just records the area you are going to use (u, v, u2, v2) along with a reference to the whole original texture.
This is why batch.draw(Texture) and batch.draw(TextureRegion) are not the same in use; see the example below.
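For instance (my own illustration, using the "box" region from the question):
batch.draw(box, 10, 10); // draws only the box's (u,v)-(u2,v2) area of the atlas page
batch.draw(box.getTexture(), 10, 10); // draws the ENTIRE atlas page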
However, taking part of a picture as a standalone texture is possible.
You can use a Pixmap to do it.
First generate a pixmap from the atlas texture. Then create a new empty pixmap the size of the "box" area you want, copy the pixels across, and generate a texture from your new pixmap.
It may be quite expensive depending on the size of your TextureAtlas.
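A minimal sketch of that Pixmap approach (my own untested example):
TextureRegion box = textureAtlas.findRegion("box");
TextureData data = box.getTexture().getTextureData();
if (!data.isPrepared()) data.prepare(); // make the pixel data available
Pixmap atlasPixmap = data.consumePixmap(); // the whole atlas page as pixels
Pixmap boxPixmap = new Pixmap(box.getRegionWidth(), box.getRegionHeight(), Pixmap.Format.RGBA8888);
boxPixmap.drawPixmap(atlasPixmap,
        0, 0, // destination x, y
        box.getRegionX(), box.getRegionY(), // source x, y in the atlas
        box.getRegionWidth(), box.getRegionHeight()); // source width, height
Texture boxTexture = new Texture(boxPixmap); // standalone texture, usable in 3D
atlasPixmap.dispose();
boxPixmap.dispose();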
Alternatively, you can use a framebuffer.
Create a FrameBufferBuilder and build a new frame buffer, draw the texture region into this buffer, and get the texture from it.
One problem here is that the texture will come out at viewport/screen size, but you can create a new camera to render at the size you want.
GLFrameBuffer.FrameBufferBuilder frameBufferBuilder = new GLFrameBuffer.FrameBufferBuilder(widthofBox, heightofBox);
frameBufferBuilder.addColorTextureAttachment(GL30.GL_RGBA8, GL30.GL_RGBA, GL30.GL_UNSIGNED_BYTE);
frameBuffer = frameBufferBuilder.build();
OrthographicCamera c = new OrthographicCamera(widthofBox, heightofBox);
c.up.set(0, 1, 0);
c.direction.set(0, 0, -1);
c.position.set(widthofBox / 2, heightofBox / 2, 0f);
c.update();
batch.setProjectionMatrix(c.combined);
frameBuffer.begin();
batch.begin();
batch.draw(boxregion...)
batch.end();
frameBuffer.end();
Texture texturefbo = frameBuffer.getColorBufferTexture();
texturefbo will be y-flipped. You can fix this in the texture draw method by setting scaleY to -1, or by scaling scaleY to -1 while drawing to the framebuffer, or by changing the camera like this:
up.set(0, -1, 0);
direction.set(0, 0, 1);
to flip the camera on the y axis.
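Alternatively (my addition, not part of the original answer), you can leave the camera alone and wrap the result in a flipped TextureRegion:
TextureRegion flipped = new TextureRegion(texturefbo);
flipped.flip(false, true); // flip vertically to undo the FBO's y inversion
Then draw flipped instead of texturefbo.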
The last thing that comes to mind is mipmapping this texture. It's also not hard:
texturefbo.bind();
Gdx.gl.glGenerateMipmap(GL20.GL_TEXTURE_2D);
texturefbo.setFilter(Texture.TextureFilter.MipMapLinearLinear,
Texture.TextureFilter.Linear); // the magnification filter cannot be a mipmap filter
You can do this:
Texture boxTexture = new TextureRegion(textureAtlas.findRegion("box")).getTexture();
I have a line actor that might have other objects intersecting it, and I need to crop out those parts.
Above is the line's image actor.
This rectangle is also an image actor that might appear randomly along the lines.
And this is a sample of the result I want to get. I need advice on how to achieve this with libGDX.
[EDIT]
As suggested, I am trying to use an FBO to draw into a buffer. Below is the code I am currently working on.
@Override
public void draw(Batch batch, float parentAlpha) {
    fbo.begin();
    getStage().getViewport().apply();
    Gdx.gl.glClearColor(0f, 0f, 0f, 0f);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
    batch.draw(trLine, position.x, position.y);
    batch.flush();
    fbo.end();
    getStage().getViewport().apply();
    batch.draw(fbo.getColorBufferTexture(), 0, 0);
}
I am able to draw into the buffer and render it later, but it comes out at a different size. Below is the code for creating and disposing the FBO; it lives outside of the draw loop.
fbo = new FrameBuffer(Pixmap.Format.RGBA8888,getStage().getViewport().getWidth(),getStage().getViewport().getHeight(),false,true);
[SOLVED FBO]
Below is code with a working FBO, but the blending is not working as expected. I will keep trying until it works.
fbo.begin();
Gdx.gl.glClearColor(0f,0f,0f,0f);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
batch.begin();
batch.draw(trLine,position.x,position.y);
batch.end();
int srcFunc = batch.getBlendSrcFunc();
int dstFunc = batch.getBlendDstFunc();
batch.enableBlending();
batch.begin();
batch.setBlendFunction(GL20.GL_ONE, GL20.GL_FUNC_REVERSE_SUBTRACT);
for (int i = 0; i < cropRectangles.size(); i++) {
    batch.draw(cropTexture.get(i), cropRectangles.get(i).x, cropRectangles.get(i).y);
}
batch.end();
fbo.end();
getStage().getViewport().apply();
//reset blending before drawing the desire result
batch.begin();
batch.setBlendFunction(srcFunc, dstFunc);
batch.draw(fbo.getColorBufferTexture(),0,0);
batch.end();
But the output is not getting any blending effect; it is still a rectangle filled with solid white.
[SOLVED FULL CODE]
I finally applied the equation correctly, and I am able to reset it so it doesn't affect other things drawn after this.
fbo.begin();
Gdx.gl.glClearColor(0f,0f,0f,0f);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
batch.begin();
batch.draw(trLine,position.x,position.y);
batch.end();
int srcFunc = batch.getBlendSrcFunc();
int dstFunc = batch.getBlendDstFunc();
batch.enableBlending();
batch.begin();
batch.setBlendFunction(GL20.GL_ONE, GL20.GL_ONE_MINUS_SRC_ALPHA);
Gdx.gl.glBlendEquation(GL20.GL_FUNC_REVERSE_SUBTRACT);
for(int i = 0 ; i < cropRectangles.size() ; i++){
batch.draw(cropTexture.get(i),cropRectangles.get(i).x,cropRectangles.get(i).y);
}
batch.end();
batch.flush();
fbo.end();
Gdx.gl.glBlendEquation(GL20.GL_FUNC_ADD);
getStage().getViewport().apply();
batch.begin();
batch.setBlendFunction(srcFunc, dstFunc);
batch.draw(fbo.getColorBufferTexture(),0,0);
batch.end();
You can use a blend mode to achieve this. Your rectangle should have 2 parts:
an outer part and a transparent part.
The outer part is the actual part, drawn as usual.
The transparent part is another rectangle with full alpha, and you should use blending for this part.
Visual Blending Tool
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glBlendEquation(GL_FUNC_REVERSE_SUBTRACT);
This mode clears the intersection area, so it seems like the correct mode.
You can easily find example usages of blending in libGDX.
SpriteBatch sb = (SpriteBatch)batch;
// draw our destination image
sb.draw(dst, 0, 0);
sb.end();
// remember SpriteBatch's current functions
int srcFunc = sb.getBlendSrcFunc();
int dstFunc = sb.getBlendDstFunc();
// Let's enable blending
sb.enableBlending();
sb.begin();
// blend them
sb.setBlendFunction(GL20.GL_ONE, GL20.GL_ONE_MINUS_SRC_ALPHA);
sb.draw(src, 0, 0);
// Reset
sb.end();
sb.begin();
sb.setBlendFunction(srcFunc, dstFunc);
Additionally you must change the blend equation as well. It is not specific to the SpriteBatch, so the change applies to the whole game until you reset it.
//Equation for effect you want
Gdx.gl.glBlendEquation(GL20.GL_FUNC_REVERSE_SUBTRACT);
//After draw you should also reset this
Gdx.gl.glBlendEquation(GL20.GL_FUNC_ADD);
Now we should draw this into a FrameBufferObject, because otherwise the transparent area will show the background color of your SpriteBatch.
If that's okay for you then you're done, but if you want to see another texture through the transparent area, like a background image, then we have one more step.
You should read this article on the purpose of an FBO (FrameBufferObject):
Frame Buffer from official wiki
You need it to merge your sprites and transparent areas, so you can treat them as one whole image and see through to background images from the transparent area.
Depending on your game, using a second viewport or sprite batch might be easier and more efficient.
One alternative for this situation: fill that rectangle with a solid background color (that is, draw one rectangle inside the rectangle ring). It will appear to crop out that part.
I was getting into shaders for libGDX and noticed there are some attributes that are only used in libGDX.
The standard vertex and fragment shaders from https://github.com/libgdx/libgdx/wiki/Shaders work perfectly and get applied to my SpriteBatch.
When I try to use an HQX shader like https://github.com/SupSuper/OpenXcom/blob/master/bin/common/Shaders/HQ2x.OpenGL.shader I get a lot of errors.
Probably because I need to send some libGDX-dependent variables to the shader, but I can't figure out which ones those should be.
I'd like to use these shaders on desktops with large screens so the game keeps looking great on those screens.
I used this code to load the shader:
try {
shaderProgram = new ShaderProgram(Gdx.files.internal("vertex.glsl").readString(), Gdx.files.internal("fragment.glsl").readString());
shaderProgram.pedantic = false;
System.out.println("Shader Log:");
System.out.println(shaderProgram.getLog());
} catch(Exception ex) { }
The Shader Log outputs:
No errors.
Thanks in advance.
This is a post processing shader, so your flow should go like this:
Draw your scene to a FBO at pixel perfect resolution using SpriteBatch's default shader.
Draw the FBO's texture to the screen's frame buffer using the upscaling shader. You can do this with SpriteBatch if you modify the shader to match the attributes and uniforms that SpriteBatch uses. (You could alternatively create a simple mesh with the attribute names that the shader expects, but SpriteBatch is probably easiest.)
First of all, we are not using a typical shader with SpriteBatch so you need to call ShaderProgram.pedantic = false; somewhere before loading anything.
Now you need a FrameBuffer at the right size. It should be sized for your sprites to be pixel perfect (one pixel of texture scales to one pixel of world). Something like this:
public void resize (int width, int height){
float ratio = (float)width / (float) height;
int gameWidth = (int)(GAME_HEIGHT * ratio); // width that preserves the window's aspect ratio
boolean needNewFrameBuffer = false;
if (frameBuffer != null && (frameBuffer.getWidth() != gameWidth || frameBuffer.getHeight() != GAME_HEIGHT)){
frameBuffer.dispose();
needNewFrameBuffer = true;
}
if (frameBuffer == null || needNewFrameBuffer)
frameBuffer = new FrameBuffer(Format.RGBA8888, gameWidth, GAME_HEIGHT);
camera.viewportWidth = gameWidth;
camera.viewportHeight = GAME_HEIGHT;
camera.update();
}
Then you can draw to the frame buffer as if it's your screen. And after that, you draw the frame buffer's texture to the screen.
public void render (){
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
frameBuffer.begin();
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
batch.setProjectionMatrix(camera.combined);
batch.setShader(null); //use default shader
batch.begin();
//draw your game
batch.end();
frameBuffer.end();
batch.setShader(upscaleShader);
batch.setProjectionMatrix(new Matrix4()); // identity matrix, since the quad below is already in clip space
batch.begin();
upscaleShader.setUniformf("rubyTextureSize", frameBuffer.getWidth(), frameBuffer.getHeight());//this is the uniform in your shader. I assume it's wanting the scene size in pixels
batch.draw(frameBuffer.getColorBufferTexture(), -1, 1, 2, -2); //full screen quad for no projection matrix, with Y flipped as needed for frame buffer textures
batch.end();
}
There are also some changes you need to make to your shader so it will work with OpenGL ES, and because SpriteBatch is wired for specific attribute and uniform names:
At the top of your vertex shader, add this to define your vertex attributes and varyings (which your linked shader doesn't need because it's relying on built-in variables that aren't available in GL ES):
attribute vec4 a_position;
attribute vec2 a_texCoord;
varying vec2 v_texCoord[5];
Then in the vertex shader, change the gl_Position line to
gl_Position = a_position; //since we can't rely on built-in variables
and replace all occurrences of gl_TexCoord with v_texCoord for the same reason.
In the fragment shader, to be compatible with OpenGL ES, you need to declare precision. You also need to declare the same varying, so add this to the top:
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoord[5];
As with the vertex shader, replace all occurrences of gl_TexCoord with v_texCoord. And also replace all occurrences of rubyTexture with u_texture, which is the texture name that SpriteBatch uses.
I think that's everything. I didn't actually test this and I'm going off of memory, but hopefully it gets you close.
If both use hardware acceleration (the GPU) to execute code, why is WebGL so much faster than Canvas?
I mean, I want to know why at a low level: the chain from the code down to the processor.
What happens? Do Canvas and WebGL communicate directly with the drivers and then with the video card?
Canvas is slower because it's generic and therefore hard to optimize to the same level you can optimize WebGL. Let's take a simple example, drawing a solid circle with arc.
Canvas actually runs on top of the GPU as well, using the same APIs as WebGL. So, what does canvas have to do when you draw a circle? The minimum code to draw a circle in JavaScript using canvas 2d is
ctx.beginPath();
ctx.arc(x, y, radius, startAngle, endAngle);
ctx.fill();
You can imagine internally the simplest implementation is
beginPath creates a buffer (gl.bufferData)
arc generates the points for triangles that make a circle and uploads with gl.bufferData.
fill calls gl.drawArrays or gl.drawElements
But wait a minute... knowing what we know about how GL works, canvas can't generate the points at step 2, because if we call stroke instead of fill we need a different set of points for a solid circle (fill) than for an outline of a circle (stroke). So, what really happens is something more like
beginPath creates or resets some internal buffer
arc generates the points that make a circle into the internal buffer
fill takes the points in that internal buffer, generates the correct set of triangles for the points in that internal buffer into a GL buffer, uploads them with gl.bufferData, calls gl.drawArrays or gl.drawElements
What happens if we want to draw 2 circles? The same steps are likely repeated.
Let's compare that to what we would do in WebGL. Of course in WebGL we'd have to write our own shaders (Canvas has its shaders as well). We'd also have to create a buffer and fill it with the triangles for a circle, (note we already saved time as we skipped the intermediate buffer of points). We then can call gl.drawArrays or gl.drawElements to draw our circle. And if we want to draw a second circle? We just update a uniform and call gl.drawArrays again skipping all the other steps.
const m4 = twgl.m4;
const gl = document.querySelector('canvas').getContext('webgl');
const vs = `
attribute vec4 position;
uniform mat4 u_matrix;
void main() {
gl_Position = u_matrix * position;
}
`;
const fs = `
precision mediump float;
uniform vec4 u_color;
void main() {
gl_FragColor = u_color;
}
`;
const program = twgl.createProgram(gl, [vs, fs]);
const positionLoc = gl.getAttribLocation(program, 'position');
const colorLoc = gl.getUniformLocation(program, 'u_color');
const matrixLoc = gl.getUniformLocation(program, 'u_matrix');
const positions = [];
const radius = 50;
const numEdgePoints = 64;
for (let i = 0; i < numEdgePoints; ++i) {
const angle0 = (i ) * Math.PI * 2 / numEdgePoints;
const angle1 = (i + 1) * Math.PI * 2 / numEdgePoints;
// make a triangle
positions.push(
0, 0,
Math.cos(angle0) * radius,
Math.sin(angle0) * radius,
Math.cos(angle1) * radius,
Math.sin(angle1) * radius,
);
}
const buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(positions), gl.STATIC_DRAW);
gl.enableVertexAttribArray(positionLoc);
gl.vertexAttribPointer(positionLoc, 2, gl.FLOAT, false, 0, 0);
gl.useProgram(program);
const projection = m4.ortho(0, gl.canvas.width, 0, gl.canvas.height, -1, 1);
function drawCircle(x, y, color) {
const mat = m4.translate(projection, [x, y, 0]);
gl.uniform4fv(colorLoc, color);
gl.uniformMatrix4fv(matrixLoc, false, mat);
gl.drawArrays(gl.TRIANGLES, 0, numEdgePoints * 3);
}
drawCircle( 50, 75, [1, 0, 0, 1]);
drawCircle(150, 75, [0, 1, 0, 1]);
drawCircle(250, 75, [0, 0, 1, 1]);
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas></canvas>
Some devs might look at that and think Canvas caches the buffer so it can just reuse the points on the 2nd draw call. It's possible that's true but I kind of doubt it. Why? Because of the genericness of the canvas API. fill, the function that does all the real work, doesn't know what's in the internal buffer of points. You can call arc, then moveTo, lineTo, then arc again, then call fill. All of those points will be in the internal buffer of points when we get to fill.
const ctx = document.querySelector('canvas').getContext('2d');
ctx.beginPath();
ctx.moveTo(50, 30);
ctx.lineTo(100, 150);
ctx.arc(150, 75, 30, 0, Math.PI * 2);
ctx.fill();
<canvas></canvas>
In other words, fill needs to always look at all the points. Another thing, I suspect arc tries to optimize for size. If you call arc with a radius of 2 it probably generates less points than if you call it with a radius of 2000. It's possible canvas caches the points but given the hit rate would likely be small it seems unlikely.
In any case, the point is WebGL lets you take control at a lower level, allowing you to skip steps that canvas can't skip. It also lets you reuse data that canvas can't reuse.
In fact if we know we want to draw 10000 animated circles we even have other options in WebGL. We could generate the points for 10000 circles which is a valid option. We could also use instancing. Both of those techniques would be vastly faster than canvas since in canvas we'd have to call arc 10000 times and one way or another it would have to generate points for 10000 circles every single frame instead of just once at the beginning and it would have to call gl.drawXXX 10000 times instead of just once.
Of course the converse is canvas is easy. Drawing the circle took 3 lines of code. In WebGL, because you need to setup and write shaders it probably takes at least 60 lines of code. In fact the example above is about 60 lines not including the code to compile and link shaders (~10 lines). On top of that canvas supports transforms, patterns, gradients, masks, etc. All options we'd have to add with lots more lines of code in WebGL. So canvas is basically trading ease of use for speed over WebGL.
Canvas does not execute a pipeline of processing stages that transition sets of vertices and indices into triangles which are then given textures and lighting, all in hardware, as OpenGL/WebGL does. This is the root cause of the speed difference. The Canvas counterparts to those stages are all done on the CPU, with only the final rendering sent to the graphics hardware. The speed differences are particularly evident when a massive number of vertices are synthesized/animated on Canvas versus WebGL.
We are also on the cusp of the public announcement of the modern replacement for OpenGL: Vulkan, whose remit includes exposing general-purpose compute in a more pedestrian way than OpenCL/CUDA, as well as baking in the use of multi-core processors, which might just shift Canvas-like processing onto hardware.
I'm trying to make a game where you build a spaceship from parts, and fly it around and such.
I would like to create the ship from a series of components (from a TextureAtlas, for instance). I'd like to draw the ship from its component textures and then save the drawn ship as one large Texture (so I don't have to draw 50 component textures, just one ship texture). What would be the best way to go about this?
I've been trying to do so using a FrameBuffer. I've been able to draw the components to a texture and draw the texture to the screen, but no matter what I try the texture ends up with a solid background the size of the frame buffer. It's like the clear command can't clear with transparency. Here's the drawing code I have at the moment. The ship is just a Sprite to which I save the FrameBuffer texture.
public void render(){
if (ship == null){
int screenwidth = Gdx.graphics.getWidth();
int screenheight = Gdx.graphics.getHeight();
SpriteBatch fb = new SpriteBatch();
FrameBuffer fbo = new FrameBuffer(Format.RGB888, screenwidth, screenheight, false);
fbo.begin();
fb.enableBlending();
Gdx.gl.glBlendFuncSeparate(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA, GL20.GL_ONE, GL20.GL_ONE_MINUS_SRC_ALPHA);
Gdx.gl.glClearColor(1, 0, 1, 0);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
fb.begin();
atlas.createSprite("0").draw(fb);
fb.end();
fbo.end();
ship = new Sprite(fbo.getColorBufferTexture());
ship.setPosition(0, -screenheight);
}
Gdx.gl.glClearColor(1, 0, 0, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
batch.begin();
batch.enableBlending();
batch.setBlendFunction(GL20.GL_ONE, GL20.GL_ZERO);
ship.draw(batch);
batch.end();
}
The problem here lies in this line:
FrameBuffer fbo = new FrameBuffer(Format.RGB888, screenwidth, screenheight, false);
specifically with Format.RGB888. This line says that your FrameBuffer should be Red (8 bits) followed by Green (8 bits) followed by Blue (8 bits). Notice, however, that this format doesn't have any bits for Alpha (transparency). To get transparency out of your frame buffer, you instead want Format.RGBA8888, which includes an additional 8 bits for Alpha.
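Applied to the constructor call from the question, that's:
FrameBuffer fbo = new FrameBuffer(Format.RGBA8888, screenwidth, screenheight, false);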
Hope this helps.