I am attempting to tint a texture with a color, but I want the texture to show under the tint. For example, I have a picture of a person, and I want to tint them a light green without changing the transparency of the person itself.
So far I have tried the SpriteBatch method setColor, which takes RGBA values. When I set the alpha value to .5, it renders both the tint and the texture with that alpha value. Is there any way to separate the alpha of the tint from the alpha of the texture?
I know I could draw another texture on top of it, but I don't want two draw passes for one texture because that would be inefficient. If there's any way to do it in raw OpenGL, that'd be great too.
You could draw it without the alpha, right? The lighter the color overlay is, the less it shows (by default it's Color.WHITE). So if you want to tint it slightly green you could use new Color(.9f, 1f, .9f, 1f); halfway would be new Color(.5f, 1f, .5f, 1f), and full green new Color(.0f, 1f, .0f, 1f).
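For illustration, a minimal sketch of that approach (assuming texture is any loaded Texture and batch is your SpriteBatch):
batch.begin();
batch.setColor(0.9f, 1f, 0.9f, 1f); //slight green tint, alpha left at 1
batch.draw(texture, 0, 0);
batch.setColor(Color.WHITE); //restore the default so later draws are untinted
batch.end();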
The behavior you described (alpha affects the whole sprite's transparency) is defined by the shader.
The simple way to deal with this is in @MennoGouw's answer, but that approach always darkens the image. If you want to avoid darkening, you must use a custom shader, for example one that acts somewhat like the Overlay blend mode in Photoshop.
Here's an overlay fragment shader you could combine with the vertex shader from SpriteBatch's default shader (look at its source code). You set the tint with the setColor method; to control the tint strength, you blend the color toward white. This method also preserves alpha, so you can still fade sprites in and out if you need to.
tmpColor.set(tintColor).lerp(Color.WHITE, 1f - tintAmount); //tintAmount in [0,1]; tmpColor is a reusable Color instance
tmpColor.a = transparencyAmount; //alpha still controls overall sprite transparency
batch.setColor(tmpColor);
And here's the fragment shader:
#ifdef GL_ES
#define LOWP lowp
precision mediump float;
#else
#define LOWP
#endif
varying vec2 v_texCoords;
varying LOWP vec4 v_color;
uniform sampler2D u_texture;
const vec3 one = vec3(1.0);
void main()
{
    vec4 baseColor = texture2D(u_texture, v_texCoords);
    vec3 multiplyColor = 2.0 * baseColor.rgb * v_color.rgb;
    vec3 screenColor = one - 2.0 * (one - baseColor.rgb) * (one - v_color.rgb);
    gl_FragColor = vec4(mix(multiplyColor, screenColor, step(0.5, baseColor.rgb)), v_color.a * baseColor.a);
}
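If it helps, here's a rough sketch of wiring this up (the file names are assumptions; you'd save the fragment shader above to a file and pair it with a copy of SpriteBatch's default vertex shader source):
ShaderProgram overlayShader = new ShaderProgram(
        Gdx.files.internal("default.vert").readString(), //copied from SpriteBatch's default shader source
        Gdx.files.internal("overlay.frag").readString()); //the fragment shader above
if (!overlayShader.isCompiled()) throw new GdxRuntimeException(overlayShader.getLog());
batch.setShader(overlayShader);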
I found the simple solution to be
float light = .5f; //between 0 and 1
batch.setColor(light, light, light, 1);
batch.draw(...);
batch.setColor(Color.WHITE); //restore the default
I was getting into shaders for LibGDX and noticed there are some attributes that are specific to LibGDX.
The standard vertex and fragment shaders from https://github.com/libgdx/libgdx/wiki/Shaders work perfectly and get applied to my SpriteBatch.
When I try to use an HQX shader like https://github.com/SupSuper/OpenXcom/blob/master/bin/common/Shaders/HQ2x.OpenGL.shader I get a lot of errors, probably because I need to pass some LibGDX-dependent variables to the shader, but I can't figure out which ones those should be.
I'd like to use these shaders on desktops with large screens so the game keeps looking good there.
I used this code to load the shader:
try {
    shaderProgram = new ShaderProgram(
            Gdx.files.internal("vertex.glsl").readString(),
            Gdx.files.internal("fragment.glsl").readString());
    shaderProgram.pedantic = false;
    System.out.println("Shader Log:");
    System.out.println(shaderProgram.getLog());
} catch (Exception ex) { }
The Shader Log outputs:
No errors.
Thanks in advance.
This is a post processing shader, so your flow should go like this:
1. Draw your scene to an FBO at pixel-perfect resolution using SpriteBatch's default shader.
2. Draw the FBO's texture to the screen's frame buffer using the upscaling shader. You can do this with SpriteBatch if you modify the shader to match the attributes and uniforms that SpriteBatch uses. (You could alternatively create a simple mesh with the attribute names that the shader expects, but SpriteBatch is probably easiest.)
First of all, we are not using a typical SpriteBatch shader, so you need to set ShaderProgram.pedantic = false; somewhere before loading the shader.
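For example (the file names are placeholders for wherever you keep the HQ2x sources):
ShaderProgram.pedantic = false; //SpriteBatch provides attributes/uniforms the HQX shader doesn't declare
upscaleShader = new ShaderProgram(
        Gdx.files.internal("hq2x.vert").readString(),
        Gdx.files.internal("hq2x.frag").readString());
if (!upscaleShader.isCompiled()) Gdx.app.error("Shader", upscaleShader.getLog());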
Now you need a FrameBuffer at the right size. It should be sized for your sprites to be pixel perfect (one pixel of texture scales to one pixel of world). Something like this:
public void resize (int width, int height){
    float ratio = (float)width / (float)height;
    int gameWidth = (int)(GAME_HEIGHT * ratio); //match the screen's aspect ratio

    boolean needNewFrameBuffer = false;
    if (frameBuffer != null && (frameBuffer.getWidth() != gameWidth || frameBuffer.getHeight() != GAME_HEIGHT)){
        frameBuffer.dispose();
        needNewFrameBuffer = true;
    }
    if (frameBuffer == null || needNewFrameBuffer)
        frameBuffer = new FrameBuffer(Format.RGBA8888, gameWidth, GAME_HEIGHT, false); //no depth buffer needed

    camera.viewportWidth = gameWidth;
    camera.viewportHeight = GAME_HEIGHT;
    camera.update();
}
Then you can draw to the frame buffer as if it's your screen. And after that, you draw the frame buffer's texture to the screen.
public void render (){
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

    frameBuffer.begin();
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
    batch.setProjectionMatrix(camera.combined);
    batch.setShader(null); //use default shader
    batch.begin();
    //draw your game
    batch.end();
    frameBuffer.end();

    batch.setShader(upscaleShader);
    batch.begin();
    upscaleShader.setUniformf("rubyTextureSize", frameBuffer.getWidth(), frameBuffer.getHeight()); //the uniform from your linked shader; it appears to want the scene size in pixels
    batch.draw(frameBuffer.getColorBufferTexture(), -1, 1, 2, -2); //full-screen quad for no projection matrix, with Y flipped as needed for frame buffer textures
    batch.end();
}
There are also some changes you need to make to your shader so it will work with OpenGL ES, and because SpriteBatch is wired for specific attribute and uniform names:
At the top of your vertex shader, add this to define your vertex attributes and varyings (which your linked shader doesn't need because it's relying on built-in variables that aren't available in GL ES):
attribute vec4 a_position;
attribute vec2 a_texCoord;
varying vec2 v_texCoord[5];
Then in the vertex shader, change the gl_Position line to
gl_Position = a_position; //since we can't rely on built-in variables
and replace all occurrences of gl_TexCoord with v_texCoord for the same reason.
In the fragment shader, to be compatible with OpenGL ES, you need to declare precision. You also need to declare the same varying, so add this to the top:
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoord[5];
As with the vertex shader, replace all occurrences of gl_TexCoord with v_texCoord. And also replace all occurrences of rubyTexture with u_texture, which is the texture name that SpriteBatch uses.
I think that's everything. I didn't actually test this and I'm going off of memory, but hopefully it gets you close.
I am using a custom shader on cocos2d-x 3.1, trying to accomplish a special effect on a sprite used as a background (it has to cover the whole screen).
However, if I do not use the shader, the sprite is perfectly scaled and positioned; but when I do use it, the sprite shows up way smaller and in the bottom left.
Here is how I load the sprite:
this->background_image = Sprite::create(image_name->GetText());

// Add background shader
if (this->background_image)
{
    const GLchar *shaderSource = (const GLchar*) CCString::createWithContentsOfFile("OverlayShader.fsh")->getCString();
    GLProgram *p = new GLProgram();
    p->initWithByteArrays(ccPositionTextureA8Color_vert, shaderSource);
    p->link();
    p->updateUniforms();
    this->background_image->setGLProgram(p);
}

// Classroom will be stretched to cover all of the game screen
Size bgImgSize = this->background_image->getContentSize();
Size windowSize = Director::getInstance()->getWinSize();
float xScaleFactor = windowSize.width / bgImgSize.width;
float yScaleFactor = (windowSize.height - MARGIN_SPACE + 10) / bgImgSize.height;
this->background_image->setScale(xScaleFactor, yScaleFactor);
this->background_image->setPosition(Vec2(windowSize.width/2.0f, windowSize.height/2.0f + ((MARGIN_SPACE-10)/2.0)));
this->background_image->retain();
And this is the shader I'm trying to use (a simple one; once this works I'll change it to a Photoshop-overlay style one):
varying vec4 v_fragmentColor;
varying vec2 v_texCoord;
void main()
{
    vec4 v_orColor = v_fragmentColor * texture2D(CC_Texture0, v_texCoord);
    float gray = dot(v_orColor.rgb, vec3(0.299, 0.587, 0.114));
    gl_FragColor = vec4(gray, gray, gray, v_orColor.a);
}
My question is: what am I doing wrong? The first thing that comes to mind is that the attribute pointers used in the vertex shader are not correct, but I am using the default vertex shader.
I found the solution on another post, so I'll just quote it and link to that post:
"Found the solution. The vert shader should not use the MVP matrix, so I loaded ccPositionTextureColor_noMVP_vert instead of ccPositionTextureA8Color_vert."
Weird y-position offset using custom frag shader (Cocos2d-x)
I'm using a simple shader to do a vignette transition effect in my game on each screen load (picked up from this book). I use this transition effect in my menu screens. All is well, except that the fadeIn/fadeOut animations that I apply to the buttons of the menu have no effect with this shader added; in fact, no alpha value has any effect on any actor on the screen when I use this shader.
Here's the vertex shader:
...
gl_Position = u_projTrans * a_position;
v_texCoord = a_texCoord0;
v_color = a_color;
And the fragment shader:
...
vec4 texColor = texture2D(u_texture, v_texCoord);
...
gl_FragColor = vec4(texColor.r, texColor.g, texColor.b, texColor.a);
How can I fix the alpha issue so actors render their alpha properly with the shader? Maybe the fragment shader below would work? But how do I set the alpha for every actor on the screen?
uniform float ALPHA;
...
gl_FragColor = vec4(texColor.r, texColor.g, texColor.b, texColor.a * ALPHA);
Fade animations utilize the alpha of the vertex color. This is passed into SpriteBatch which passes it to the vertex shader as a_color. Your vertex shader already passes the color to the fragment shader as v_color, so in your fragment shader, you need to multiply the final alpha by this fade value:
Fragment shader:
varying LOWP vec4 v_color;
//...
gl_FragColor = vec4(texColor.r, texColor.g, texColor.b, texColor.a * v_color.a);
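With that change, the standard scene2d fade actions will drive that alpha again; for instance (a sketch, assuming button is an Actor on your Stage):
button.addAction(Actions.sequence(
        Actions.fadeOut(0.5f), //animates the actor's color alpha, which reaches the shader as v_color.a
        Actions.fadeIn(0.5f)));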
I'm looking for a way to implement alpha masking with the stencil buffer in libGDX with OpenGL ES 2.0.
I have managed to implement simple alpha masking with the stencil buffer and shaders, where a fragment gets discarded if its alpha channel is greater than some specified value. That works fine.
The problem is when I want to use a gradient image mask, or a feathered PNG mask: I don't get what I wanted (I get a "filled" rectangle mask with no alpha channel); instead I want a smooth fade-out mask.
I know the problem is that the stencil buffer holds only 0s and 1s, but I want to write other values to the stencil, representing the actual alpha value of the fragment that passed the fragment shader, and then use that value from the stencil to do some blending.
I hope I've explained what I want to get, if it's possible at all.
I've recently started playing with OpenGL ES, so I still have some misunderstandings.
My question is: how do I set up the stencil buffer to store values other than 0s and 1s, and how do I use those values later for alpha masking?
Thanks in advance.
This is currently my stencil setup:
Gdx.gl.glClearColor(1, 1, 1, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_STENCIL_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
// setup drawing to stencil buffer
Gdx.gl20.glEnable(GL20.GL_STENCIL_TEST);
Gdx.gl20.glStencilFunc(GL20.GL_ALWAYS, 0x1, 0xffffffff);
Gdx.gl20.glStencilOp(GL20.GL_REPLACE, GL20.GL_REPLACE, GL20.GL_REPLACE);
Gdx.gl20.glColorMask(false, false, false, false);
Gdx.gl20.glDepthMask(false);
spriteBatch.setShader(shaderStencilMask);
spriteBatch.begin();
// push to the batch
spriteBatch.draw(Assets.instance.actor1, Gdx.graphics.getWidth() / 2, Gdx.graphics.getHeight() / 2, Assets.instance.actor1.getRegionWidth(), Assets.instance.actor1.getRegionHeight());
spriteBatch.end();
// fix stencil buffer, enable color buffer
Gdx.gl20.glColorMask(true, true, true, true);
Gdx.gl20.glDepthMask(true);
Gdx.gl20.glStencilOp(GL20.GL_KEEP, GL20.GL_KEEP, GL20.GL_KEEP);
// draw only where the pattern HAS been drawn (stencil == 1)
Gdx.gl20.glStencilFunc(GL20.GL_EQUAL, 0x1, 0xff);
decalBatch.add(decal);
decalBatch.flush();
Gdx.gl20.glDisable(GL20.GL_STENCIL_TEST);
decalBatch.add(decal2);
decalBatch.flush();
The only ways I can think of doing this are with a FrameBuffer.
Option 1
Draw your scene's background (the stuff that will not be masked) to a FrameBuffer. Then draw your entire scene without masks to the screen. Then draw your mask decals to the screen using the FrameBuffer's color attachment as their texture. The downside to this method is that in OpenGL ES 2.0 on Android, a FrameBuffer can have RGBA4444 but not RGBA8888, so there will be visible seams along the edges of the masks where the color bit depth changes.
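A rough sketch of Option 1's first step (sizes and names are assumptions):
//RGBA4444 because of the ES 2.0 limitation mentioned above
FrameBuffer backgroundFbo = new FrameBuffer(Format.RGBA4444,
        Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), false);
backgroundFbo.begin();
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
//...draw the unmasked background here...
backgroundFbo.end();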
Option 2
Draw your mask decals as B&W opaque to your FrameBuffer. Then draw your background to the screen. When you draw anything that can be masked, draw it with multi-texturing, multiplying by the FrameBuffer's color texture. The potential downside is that absolutely anything that can be masked must be drawn multi-textured with a custom shader. But if you're just using decals, then this isn't really any more complicated than Option 1.
The following is untested...might require a bit of debugging.
In both options, I would subclass CameraGroupStrategy to be used with the DecalBatch when drawing the mask decals, and override beforeGroups to also set the second texture.
public class MaskingGroupStrategy extends CameraGroupStrategy {
    private Texture fboTexture;

    public MaskingGroupStrategy (Camera camera) {
        super(camera);
    }

    //call this before using the DecalBatch for drawing mask decals
    public void setFBOTexture (Texture fboTexture) {
        this.fboTexture = fboTexture;
    }

    @Override
    public void beforeGroups () {
        super.beforeGroups();
        fboTexture.bind(1);
        shader.setUniformi("u_fboTexture", 1);
        shader.setUniformf("u_screenDimensions", Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        Gdx.gl.glActiveTexture(GL20.GL_TEXTURE0); //switch back to unit 0 so the decals' own textures bind to the right unit
    }
}
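Usage would look something like this (a sketch; the names are assumed):
MaskingGroupStrategy groupStrategy = new MaskingGroupStrategy(camera);
DecalBatch decalBatch = new DecalBatch(groupStrategy);

//each frame, after updating the FBO:
groupStrategy.setFBOTexture(frameBuffer.getColorBufferTexture());
decalBatch.add(maskDecal);
decalBatch.flush();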
And in your shader, you can get the FBO texture color like this:
vec4 fboColor = texture2D(u_fboTexture, gl_FragCoord.xy/u_screenDimensions.xy);
Then for option 1:
gl_FragColor = vec4(fboColor.rgb, 1.0-texture2D(u_texture, v_texCoords).a);
or for option 2:
gl_FragColor = v_color * texture2D(u_texture, v_texCoords);
gl_FragColor.a *= fboColor.r;
I'm trying to mix two different textures (scene and clouds), which are obtained from FBOs, and draw them on a quad.
uniform sampler2D u_texture;
uniform sampler2D u_texture2;
uniform vec2 u_res;
void main(void)
{
    vec2 texCoord = gl_FragCoord.xy / u_res.xy;
    vec4 sceneColor = texture2D(u_texture, texCoord);
    vec4 addColor = texture2D(u_texture2, texCoord);
    gl_FragColor = sceneColor + addColor;
}
glBlendFunc is
Gdx.gl20.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
I tried every combination of glBlendFunc, and the one above was the best.
Creating FBOs:
fbClouds = new FrameBuffer(Format.RGBA8888, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), true);
fbScene = new FrameBuffer(Format.RGB565, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), true);
fbMix = new FrameBuffer(Format.RGB565, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), true);
Creating the clouds:
fbClouds.begin();
Gdx.gl.glClearColor(0, 0, 0, 0); // to make it transparent
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
modelBatch.begin(cam);
for (Integer e : drawOrder) {
    if (isVisible(cam, clouds[e])) {
        drawLightning(e, modelBatch);
        modelBatch.render(clouds[e], cloudShader);
        modelBatch.flush();
    }
}
modelBatch.end();
fbClouds.end();
Render code:
Gdx.gl20.glDisable(GL20.GL_BLEND);
//Gdx.gl20.glEnable(GL20.GL_BLEND);
//Gdx.gl20.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
fbMix.begin();
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
mixShader.begin();
fbScene.getColorBufferTexture().bind(1);
mixShader.setUniformi("u_texture", 1);
fbClouds.getColorBufferTexture().bind(0);
mixShader.setUniformi("u_texture2", 0);
mixShader.setUniformf("u_res", Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
quad.render(mixShader, GL20.GL_TRIANGLES);
mixShader.end();
fbMix.end();
So I get an unexpected result (the clouds come out pure white, though they should be grey).
If I use the ModelBatch to render the clouds directly, the result is as it should be.
What is the right way to mix two textures without losing color?
The blend function you use to draw the two FBO's to the screen should be irrelevant, because nothing shows through them, right? So blending should be turned off before you draw the FBO's, or you're wasting GPU cycles mixing the FBO's with your clear color.
The reason it turns white is that you are adding gray to blue without darkening the blue first. Normally when you draw a transparent object to the screen, you use a blend function like this: GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA. That means you are multiplying the sprite's color by its transparency (effectively darkening transparent colors) and you multiply the background by the inverse of the sprite's alpha (thereby darkening the pixels that will be added to the sprite's opaque pixels so they won't be too bright).
In your case, you want to emulate the same thing inside your fragment shader, since you are trying to blend two textures inside your shader before outputing them to the screen.
So if your cloud FBO had an alpha channel, you could do this in your fragment shader and you'd be good to go:
void main()
{
    vec2 texCoord = gl_FragCoord.xy / u_res.xy;
    vec4 sceneColor = texture2D(u_texture, texCoord);
    vec4 addColor = texture2D(u_texture2, texCoord);
    gl_FragColor = addColor * addColor.a + sceneColor * (1.0 - addColor.a); //1.0, not 1: GLSL ES won't implicitly convert int to float here
}
However, your cloud's FBO does not have an alpha channel so you need to change something.
One thing you could do is make your FBO color texture use RGBA4444 so it has an alpha channel, and then carefully draw your clouds so they also write to the alpha channel. This would be kind of complicated, because you'd have to use a separated blend function (glBlendFuncSeparate), where you select different blend functions for the RGB and the A channels. Although it should be possible, I haven't tried this method, partly because I think the 4-bit colors would look pretty lousy.
Alternatively, if your clouds are all going to be monochrome, you can encode your alpha information into one of the color channels. To do this you will need to customize the fragment shader you use to draw the clouds to the FBO. It would look something like this:
vec4 textureColor = texture2D(u_texture, v_texCoord);
gl_FragColor = vec4(textureColor.r * textureColor.a, textureColor.a, 0, textureColor.a);
What this does is put the cloud's monochrome color in the R channel with alpha pre-multiplied, and put the alpha in the G channel. We pre-multiply the alpha so we can simply add the encoded cloud sprite onto the scene: when you draw a cloud in front of an already-drawn cloud, in an area that was translucent, the addition brightens the G-encoded alpha, which makes the pixel more opaque in the final FBO image. Since we are using pre-multiplied alpha, draw the clouds with the blend function GL_ONE, GL_ONE_MINUS_SRC_ALPHA.
(This is a slight approximation because the G-encoded alpha of the destination is getting darkened a bit by the second part of the blend function, but I looked at the math and it seems acceptable. The approximation results in slightly more transparent clouds.)
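In libGDX terms, that means enabling the pre-multiplied blend function while rendering the clouds into their FBO (a sketch; this replaces GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA for the cloud pass):
Gdx.gl.glEnable(GL20.GL_BLEND);
Gdx.gl.glBlendFunc(GL20.GL_ONE, GL20.GL_ONE_MINUS_SRC_ALPHA);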
So now the cloud FBO would look like a bunch of yellow if you drew it to screen as is. We just need to make a slight adjustment to our fragment shader above to use the encoded data:
void main()
{
    vec2 texCoord = gl_FragCoord.xy / u_res.xy;
    vec4 sceneColor = texture2D(u_texture, texCoord);
    vec4 addColor = texture2D(u_texture2, texCoord);
    gl_FragColor = vec4(addColor.r) + sceneColor * (1.0 - addColor.g); //addColor.r is already alpha pre-multiplied; addColor.g holds the encoded alpha
}
If you want to tint your clouds something other than pure gray, you can add a uniform color tint:
uniform vec4 u_cloudTint; //declare alongside u_texture, u_texture2 and u_res

void main()
{
    vec2 texCoord = gl_FragCoord.xy / u_res.xy;
    vec4 sceneColor = texture2D(u_texture, texCoord);
    vec4 addColor = texture2D(u_texture2, texCoord);
    gl_FragColor = u_cloudTint * vec4(addColor.r) + sceneColor * (1.0 - addColor.g);
}
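And on the Java side you'd feed the tint in alongside the other uniforms, e.g. (a sketch; the tint values are arbitrary):
mixShader.begin();
mixShader.setUniformf("u_cloudTint", 0.9f, 0.95f, 1f, 1f); //slightly blue-grey clouds
//...bind the textures, set u_res and render the quad as before...
mixShader.end();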