LibGDX - overlay texture above another texture using shader

I'm trying to mix two different textures (a scene and clouds), which are obtained from FBOs, and draw them on a quad. The fragment shader:
uniform sampler2D u_texture;
uniform sampler2D u_texture2;
uniform vec2 u_res;
void main(void)
{
vec2 texCoord = gl_FragCoord.xy / u_res.xy;
vec4 sceneColor = texture2D(u_texture, texCoord);
vec4 addColor = texture2D(u_texture2, texCoord);
gl_FragColor = sceneColor+addColor;
}
The glBlendFunc call is:
Gdx.gl20.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
I tried every combination of glBlendFunc, and the one above was the best.
Creating FBOs:
fbClouds = new FrameBuffer(Format.RGBA8888, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), true);
fbScene = new FrameBuffer(Format.RGB565, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), true);
fbMix = new FrameBuffer(Format.RGB565, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), true);
Creating the clouds:
fbClouds.begin();
Gdx.gl.glClearColor(0, 0, 0, 0); // to make it transparent
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
modelBatch.begin(cam);
for (Integer e : drawOrder) {
if(isVisible(cam, clouds[e])){
drawLightning(e, modelBatch);
modelBatch.render(clouds[e], cloudShader);
modelBatch.flush();
}
}
modelBatch.end();
fbClouds.end();
Render code:
Gdx.gl20.glDisable(GL20.GL_BLEND);
//Gdx.gl20.glEnable(GL20.GL_BLEND);
//Gdx.gl20.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
fbMix.begin();
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
mixShader.begin();
fbScene.getColorBufferTexture().bind(1);
mixShader.setUniformi("u_texture", 1);
fbClouds.getColorBufferTexture().bind(0);
mixShader.setUniformi("u_texture2", 0);
mixShader.setUniformf("u_res", Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
quad.render(mixShader, GL20.GL_TRIANGLES);
mixShader.end();
fbMix.end();
So I get an unexpected result (the clouds come out pure white, though they should be grey):
If I use ModelBatch to render the clouds directly, the result is as it should be:
What is the right way to mix two textures without losing color?

The blend function you use to draw the two FBOs to the screen should be irrelevant, because nothing shows through them, right? So blending should be turned off before you draw the FBOs, or you're wasting GPU cycles mixing them with your clear color.
The reason it turns white is that you are adding grey to blue without darkening the blue first. Normally, when you draw a transparent object to the screen, you use a blend function like GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, which computes result = src.rgb * src.a + dst.rgb * (1.0 - src.a). That means you multiply the sprite's color by its alpha (effectively darkening transparent colors), and you multiply the background by the inverse of the sprite's alpha (thereby darkening the pixels that the sprite's opaque pixels will be added to, so they won't be too bright).
In your case, you want to emulate the same thing inside your fragment shader, since you are blending the two textures in the shader before outputting them to the screen.
So if your cloud FBO had an alpha channel, you could do this in your fragment shader and you'd be good to go:
void main()
{
vec2 texCoord = gl_FragCoord.xy / u_res.xy;
vec4 sceneColor = texture2D(u_texture, texCoord);
vec4 addColor = texture2D(u_texture2, texCoord);
gl_FragColor = addColor*addColor.a + sceneColor*(1.0 - addColor.a);
}
However, your cloud's FBO does not have an alpha channel so you need to change something.
One thing you could do is make your FBO color texture use RGBA4444 so it has an alpha channel, and then carefully draw your clouds so they also write sensible alpha. This is somewhat complicated, because you'd have to use a separated blend function, selecting two different blend functions for the RGB channels and the alpha channel. It should be possible, but I haven't tried this method myself, partly because I think the 4-bit colors would look pretty lousy.
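For what it's worth, the setup might look something like this (an untested sketch; glBlendFuncSeparate is part of GL20):
fbClouds = new FrameBuffer(Format.RGBA4444, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), true);
Gdx.gl20.glEnable(GL20.GL_BLEND);
// RGB blends normally; destination alpha accumulates coverage: dstA = srcA + dstA * (1 - srcA)
Gdx.gl20.glBlendFuncSeparate(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA, GL20.GL_ONE, GL20.GL_ONE_MINUS_SRC_ALPHA);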
Alternatively, if your clouds are all going to be monochrome, you can encode your alpha information into one of the color channels. To do this you will need to customize the fragment shader you use to draw the clouds to the FBO. It would look something like this:
vec4 textureColor = texture2D(u_texture, v_texCoord);
gl_FragColor = vec4(textureColor.r * textureColor.a, textureColor.a, 0, textureColor.a);
What this does is put the cloud's monochrome color in the R channel with alpha pre-multiplied, and put the alpha in the G channel. We want to pre-multiply the alpha so we can simply add the encoded cloud sprite onto the scene: when you draw a cloud in front of an already-drawn cloud, in an area that was translucent in the first one, you want to brighten the G-encoded alpha to make the pixel more opaque in the final FBO image. Since we are using pre-multiplied alpha, draw the clouds using the blend function GL_ONE, GL_ONE_MINUS_SRC_ALPHA.
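In libGDX terms, the cloud pass might be set up something like this (a sketch; if your ModelBatch materials manage their own blend state, set the equivalent BlendingAttribute on them instead):
fbClouds.begin();
Gdx.gl.glClearColor(0, 0, 0, 0);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
Gdx.gl.glEnable(GL20.GL_BLEND);
Gdx.gl.glBlendFunc(GL20.GL_ONE, GL20.GL_ONE_MINUS_SRC_ALPHA); // pre-multiplied alpha
// ... render the clouds here with the encoding fragment shader above ...
fbClouds.end();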
(This is a slight approximation because the G-encoded alpha of the destination is getting darkened a bit by the second part of the blend function, but I looked at the math and it seems acceptable. The approximation results in slightly more transparent clouds.)
So now the cloud FBO would look like a bunch of yellow if you drew it to screen as is. We just need to make a slight adjustment to our fragment shader above to use the encoded data:
void main()
{
vec2 texCoord = gl_FragCoord.xy / u_res.xy;
vec4 sceneColor = texture2D(u_texture, texCoord);
vec4 addColor = texture2D(u_texture2, texCoord);
gl_FragColor = vec4(addColor.r*addColor.g) + sceneColor*(1.0 - addColor.g);
}
If you want to tint your clouds something other than pure gray, you can add a uniform color tint:
uniform vec4 u_cloudTint;
void main()
{
vec2 texCoord = gl_FragCoord.xy / u_res.xy;
vec4 sceneColor = texture2D(u_texture, texCoord);
vec4 addColor = texture2D(u_texture2, texCoord);
gl_FragColor = u_cloudTint*vec4(addColor.r*addColor.g) + sceneColor*(1.0 - addColor.g);
}

Related

Convert OpenGL HQX shader to LibGDX

I was getting into shaders for LibGDX and noticed there are some attributes that are specific to LibGDX.
The standard vertex and fragment shaders from https://github.com/libgdx/libgdx/wiki/Shaders work perfectly and get applied to my SpriteBatch.
When I try to use an HQX shader like https://github.com/SupSuper/OpenXcom/blob/master/bin/common/Shaders/HQ2x.OpenGL.shader I get a lot of errors.
Probably because I need to send some LibGDX-dependent variables to the shader, but I can't figure out which ones those should be.
I'd like to use these shaders on desktops so the game keeps looking great on large screens.
I used this code to load the shader:
try {
shaderProgram = new ShaderProgram(Gdx.files.internal("vertex.glsl").readString(), Gdx.files.internal("fragment.glsl").readString());
shaderProgram.pedantic = false;
System.out.println("Shader Log:");
System.out.println(shaderProgram.getLog());
} catch (Exception ex) {
ex.printStackTrace(); // don't swallow shader loading errors silently
}
The Shader Log outputs:
No errors.
Thanks in advance.
This is a post-processing shader, so your flow should go like this:
Draw your scene to an FBO at pixel-perfect resolution using SpriteBatch's default shader.
Draw the FBO's texture to the screen's frame buffer using the upscaling shader. You can do this with SpriteBatch if you modify the shader to match the attributes and uniforms that SpriteBatch uses. (You could alternatively create a simple mesh with the attribute names that the shader expects, but SpriteBatch is probably easiest.)
First of all, we are not using a typical SpriteBatch shader, so you need to set ShaderProgram.pedantic = false; somewhere before loading anything.
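For example (the flag is static, so it applies to every ShaderProgram created afterwards; the file names here are placeholders):
ShaderProgram.pedantic = false; // SpriteBatch won't supply every input the HQ2x shader declares
upscaleShader = new ShaderProgram(Gdx.files.internal("hq2x.vert").readString(), Gdx.files.internal("hq2x.frag").readString());
if (!upscaleShader.isCompiled()) Gdx.app.error("shader", upscaleShader.getLog());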
Now you need a FrameBuffer at the right size. It should be sized for your sprites to be pixel perfect (one pixel of texture scales to one pixel of world). Something like this:
public void resize (int width, int height){
float ratio = (float)width / (float) height;
int gameWidth = (int)(GAME_HEIGHT * ratio); // width that preserves the window's aspect ratio
boolean needNewFrameBuffer = false;
if (frameBuffer != null && (frameBuffer.getWidth() != gameWidth || frameBuffer.getHeight() != GAME_HEIGHT)){
frameBuffer.dispose();
needNewFrameBuffer = true;
}
if (frameBuffer == null || needNewFrameBuffer)
frameBuffer = new FrameBuffer(Format.RGBA8888, gameWidth, GAME_HEIGHT, false);
camera.viewportWidth = gameWidth;
camera.viewportHeight = GAME_HEIGHT;
camera.update();
}
Then you can draw to the frame buffer as if it's your screen. And after that, you draw the frame buffer's texture to the screen.
public void render (){
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
frameBuffer.begin();
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
batch.setProjectionMatrix(camera.combined);
batch.setShader(null); //use default shader
batch.begin();
//draw your game
batch.end();
frameBuffer.end();
batch.setShader(upscaleShader);
batch.begin();
upscaleShader.setUniformf("rubyTextureSize", frameBuffer.getWidth(), frameBuffer.getHeight());//this is the uniform in your shader. I assume it's wanting the scene size in pixels
batch.draw(frameBuffer.getColorBufferTexture(), -1, 1, 2, -2); //full screen quad for no projection matrix, with Y flipped as needed for frame buffer textures
batch.end();
}
There are also some changes you need to make to your shader so it will work with OpenGL ES, and because SpriteBatch is wired for specific attribute and uniform names:
At the top of your vertex shader, add this to define your vertex attributes and varyings (which your linked shader doesn't need because it's relying on built-in variables that aren't available in GL ES):
attribute vec4 a_position;
attribute vec2 a_texCoord0; // SpriteBatch's texture-coordinate attribute name
varying vec2 v_texCoord[5];
Then in the vertex shader, change the gl_Position line to
gl_Position = a_position; //since we can't rely on built-in variables
and replace all occurrences of gl_TexCoord with v_texCoord for the same reason.
In the fragment shader, to be compatible with OpenGL ES, you need to declare precision. You also need to declare the same varying, so add this to the top:
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoord[5];
As with the vertex shader, replace all occurrences of gl_TexCoord with v_texCoord. And also replace all occurrences of rubyTexture with u_texture, which is the texture name that SpriteBatch uses.
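Put together, the top of the adapted fragment shader would look something like this (only the changes described above; the HQ2x body itself stays as it was, apart from the renames):
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoord[5];
uniform sampler2D u_texture; // renamed from rubyTexture
// ... the original HQ2x code, with gl_TexCoord replaced by v_texCoord ...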
I think that's everything. I didn't actually test this and I'm going off of memory, but hopefully it gets you close.

Libgdx - Transparent color over texture

I am attempting to tint a texture a color but I want the texture to show under the tint. For example, I have a picture of a person but I want to tint them a light green and not change the transparency of the actual person itself.
So far I have attempted to use the SpriteBatch method setColor, which takes RGBA values. When I set the alpha value to .5, it renders both the tint and the texture with that alpha value. Is there any way to separate the alpha of the tint from the alpha of the texture?
I know I could draw another texture on top of it, but I don't want two draw passes for the one texture because it would be inefficient. If there's any way to do it in raw OpenGL, that'd be great too.
You could draw it without the alpha, right? The lighter the overlay color is, the less it shows (by default it's Color.WHITE). So if you want to tint it slightly green you could use new Color(.9f, 1f, .9f, 1f); halfway would be new Color(.5f, 1f, .5f, 1f), and full green new Color(0f, 1f, 0f, 1f).
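In practice that's just something like (personTexture, x and y standing in for your own sprite and position):
batch.setColor(new Color(.5f, 1f, .5f, 1f)); // halfway green, alpha left at 1
batch.draw(personTexture, x, y);
batch.setColor(Color.WHITE); // reset so the tint doesn't leak onto other sprites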
The behavior you described (alpha affects the whole sprite's transparency) is defined by the shader.
The simple way to deal with this is in @MennoGouw's answer, but that always darkens the image. If you want to avoid darkening, you must use a custom shader. You can use a shader that acts somewhat like the Overlay blend mode in Photoshop.
Here's an overlay fragment shader you could combine with the vertex shader from SpriteBatch's default shader (look at its source code). You set the tint with the setColor method; to control the tint amount, you blend the tint color toward white. This method also preserves alpha for fading sprites in and out if you need to:
Color tmpColor = new Color(); // reusable instance, to avoid allocating every frame
tmpColor.set(tintColor).lerp(Color.WHITE, 1f - tintAmount);
tmpColor.a = transparencyAmount;
batch.setColor(tmpColor);
And the fragment shader:
#ifdef GL_ES
#define LOWP lowp
precision mediump float;
#else
#define LOWP
#endif
varying vec2 v_texCoords;
varying LOWP vec4 v_color;
uniform sampler2D u_texture;
const vec3 one = vec3(1.0);
void main()
{
vec4 baseColor = texture2D(u_texture, v_texCoords);
vec3 multiplyColor = 2.0 * baseColor.rgb * v_color.rgb;
vec3 screenColor = one - 2.0 * (one - baseColor.rgb)*(one - v_color.rgb);
gl_FragColor = vec4(mix(multiplyColor, screenColor, step(0.5, baseColor.rgb)), v_color.a * baseColor.a);
}
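Wiring it up might look like this (a sketch; "overlay.frag" is the fragment shader above saved to a file, and "default.vert" is a copy of SpriteBatch's default vertex shader):
ShaderProgram.pedantic = false;
ShaderProgram overlayShader = new ShaderProgram(Gdx.files.internal("default.vert").readString(), Gdx.files.internal("overlay.frag").readString());
batch.setShader(overlayShader);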
I found the simple solution to be:
float light = .5f; //between 0 and 1
batch.setColor(light, light, light, 1);
batch.draw(...);
batch.setColor(Color.WHITE);

Setting a custom shader messes up the scale and position of a Sprite

I am using a custom shader in Cocos2d-x 3.1, trying to accomplish a special effect on a sprite used as a background (it has to cover the whole screen).
If I do not use the shader, the sprite is perfectly scaled and positioned, but when I do use it, it shows up way smaller and in the bottom left.
Here is how I load the sprite:
this->background_image = Sprite::create(image_name->GetText());
// Add background shader
if (this->background_image)
{
const GLchar *shaderSource = (const GLchar*) CCString::createWithContentsOfFile("OverlayShader.fsh")->getCString();
GLProgram * p = new GLProgram();
p->initWithByteArrays(ccPositionTextureA8Color_vert, shaderSource);
p->link();
p->updateUniforms();
this->background_image->setGLProgram(p);
}
// Classroom will be stretched to cover all of the game screen
Size bgImgSize = this->background_image->getContentSize();
Size windowSize = Director::getInstance()->getWinSize();
float xScaleFactor = windowSize.width/bgImgSize.width;
float yScaleFactor = (windowSize.height-MARGIN_SPACE+10)/bgImgSize.height;
this->background_image->setScale(xScaleFactor, yScaleFactor);
this->background_image->setPosition(Vec2(windowSize.width/2.0f,windowSize.height/2.0f + ((MARGIN_SPACE-10)/2.0)));
this->background_image->retain();
And this is the shader I'm trying to use (a simple one; once it works I'll change it to a Photoshop-overlay-style one):
varying vec4 v_fragmentColor;
varying vec2 v_texCoord;
void main()
{
vec4 v_orColor = v_fragmentColor * texture2D(CC_Texture0, v_texCoord);
float gray = dot(v_orColor.rgb, vec3(0.299, 0.587, 0.114));
gl_FragColor = vec4(gray, gray, gray, v_orColor.a);
}
My question is: what am I doing wrong? The first thing that comes to mind is that the attribute pointers used by the vertex shader are not correct, but I am now using the default vertex shader.
I found the solution in another post, so I'll just quote it and link to that post:
Found the solution. The vert shader should not use the MVP matrix so I
loaded ccPositionTextureColor_noMVP_vert instead of
ccPositionTextureA8Color_vert.
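Applied to the loading code above, that's a one-line change:
p->initWithByteArrays(ccPositionTextureColor_noMVP_vert, shaderSource);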
Weird y-position offset using custom frag shader (Cocos2d-x)

Scene2D fadeIn fadeOut breaks when using shaders

I'm using a simple shader to do a vignette transition effect in my game on each screen load (picked up from this book). I use this transition effect in my menu screens. All is well except that the fadeIn/fadeOut animations I apply to the buttons of the menu have no effect with this shader added; in fact, no alpha value has any effect on any actor on the screen when I use this shader.
Here's the vertex shader:
...
gl_Position = u_projTrans * a_position;
v_texCoord = a_texCoord0;
v_color = a_color;
And the fragment shader:
...
vec4 texColor = texture2D(u_texture, v_texCoord);
...
gl_FragColor = vec4(texColor.r, texColor.g, texColor.b, texColor.a);
How can I fix the alpha issue so actors render their alpha properly with the shader?
Maybe the fragment shader below would work? But how do I set the alpha for every actor on the screen?
uniform float ALPHA;
...
gl_FragColor = vec4(texColor.r, texColor.g, texColor.b, texColor.a * ALPHA);
Fade animations utilize the alpha of the vertex color. This is passed into SpriteBatch which passes it to the vertex shader as a_color. Your vertex shader already passes the color to the fragment shader as v_color, so in your fragment shader, you need to multiply the final alpha by this fade value:
Fragment shader:
varying LOWP vec4 v_color;
//...
gl_FragColor = vec4(texColor.r, texColor.g, texColor.b, texColor.a * v_color.a);

I get glitches and crashes trying to use WebGL for drawing sprites

I am converting my sprite drawing function from canvas 2D to WebGL.
As I am new to WebGL (and OpenGL too), I learned from this tutorial http://games.greggman.com/game/webgl-image-processing/ and copied many lines from it, plus some other ones I found.
At last I got it working, but there are some issues. For some reason, some images are never drawn though other ones are, then I get big random black squares on the screen, and finally it makes Firefox crash...
I am tearing my hair out trying to solve these problems, but I am just lost... I have to ask for some help.
Please, someone have a look at my code and tell me if you see where I made errors.
The vertex shader and fragment shader:
<script id="2d-vertex-shader" type="x-shader/x-vertex">
attribute vec2 a_position;
attribute vec2 a_texCoord;
uniform vec2 u_resolution;
uniform vec2 u_translation;
uniform vec2 u_rotation;
varying vec2 v_texCoord;
void main()
{
// Rotate the position
vec2 rotatedPosition = vec2(
a_position.x * u_rotation.y + a_position.y * u_rotation.x,
a_position.y * u_rotation.y - a_position.x * u_rotation.x);
// Add in the translation.
vec2 position = rotatedPosition + u_translation;
// convert the rectangle from pixels to 0.0 to 1.0
vec2 zeroToOne = a_position / u_resolution;
// convert from 0->1 to 0->2
vec2 zeroToTwo = zeroToOne * 2.0;
// convert from 0->2 to -1->+1 (clipspace)
vec2 clipSpace = zeroToTwo - 1.0;
gl_Position = vec4(clipSpace * vec2(1, -1), 0, 1);
// pass the texCoord to the fragment shader
// The GPU will interpolate this value between points
v_texCoord = a_texCoord;
}
</script>
<script id="2d-fragment-shader" type="x-shader/x-fragment">
precision mediump float;
// our texture
uniform sampler2D u_image;
// the texCoords passed in from the vertex shader.
varying vec2 v_texCoord;
void main()
{
// Look up a color from the texture.
gl_FragColor = texture2D(u_image, v_texCoord);
}
</script>
I use several layered canvases to avoid wasting resources redrawing the big background and foreground every frame when they never change. My canvases are in liste_canvas[] and contexts are in liste_ctx[]; c is the id ("background"/"game"/"foreground"/"infos"). Here is their creation code:
// Get A WebGL context
liste_canvas[c] = document.createElement("canvas") ;
document.getElementById('game_div').appendChild(liste_canvas[c]);
liste_ctx[c] = liste_canvas[c].getContext('webgl',{premultipliedAlpha:false}) || liste_canvas[c].getContext('experimental-webgl',{premultipliedAlpha:false});
liste_ctx[c].viewport(0, 0, game.res_w, game.res_h);
// setup a GLSL program
liste_ctx[c].vertexShader = createShaderFromScriptElement(liste_ctx[c], "2d-vertex-shader");
liste_ctx[c].fragmentShader = createShaderFromScriptElement(liste_ctx[c], "2d-fragment-shader");
liste_ctx[c].program = createProgram(liste_ctx[c], [liste_ctx[c].vertexShader, liste_ctx[c].fragmentShader]);
liste_ctx[c].useProgram(liste_ctx[c].program);
And here is my sprite drawing function.
My images are stored in a list too, sprites[], with a string name as id.
They store their origin, which is not necessarily their real center, as .orgn_x and .orgn_y.
function draw_sprite( id_canvas , d_sprite , d_x , d_y , d_rotation , d_scale , d_opacity )
{
if( id_canvas=="" ){ id_canvas = "game" ; }
if( !d_scale ){ d_scale = 1 ; }
if( !d_rotation ){ d_rotation = 0 ; }
if( render_mode == "webgl" )
{
var c = id_canvas ; // declared locally; without 'var' this leaks into the global scope
// look up where the vertex data needs to go.
var positionLocation = liste_ctx[c].getAttribLocation(liste_ctx[c].program, "a_position");
var texCoordLocation = liste_ctx[c].getAttribLocation(liste_ctx[c].program, "a_texCoord");
// provide texture coordinates for the rectangle.
var texCoordBuffer = liste_ctx[c].createBuffer();
liste_ctx[c].bindBuffer(liste_ctx[c].ARRAY_BUFFER, texCoordBuffer);
liste_ctx[c].bufferData(liste_ctx[c].ARRAY_BUFFER, new Float32Array([
0.0, 0.0,
1.0, 0.0,
0.0, 1.0,
0.0, 1.0,
1.0, 0.0,
1.0, 1.0]), liste_ctx[c].STATIC_DRAW);
liste_ctx[c].enableVertexAttribArray(texCoordLocation);
liste_ctx[c].vertexAttribPointer(texCoordLocation, 2, liste_ctx[c].FLOAT, false, 0, 0);
// Create a texture.
var texture = liste_ctx[c].createTexture();
liste_ctx[c].bindTexture(liste_ctx[c].TEXTURE_2D, texture);
// Set the parameters so we can render any size image.
liste_ctx[c].texParameteri(liste_ctx[c].TEXTURE_2D, liste_ctx[c].TEXTURE_WRAP_S, liste_ctx[c].CLAMP_TO_EDGE);
liste_ctx[c].texParameteri(liste_ctx[c].TEXTURE_2D, liste_ctx[c].TEXTURE_WRAP_T, liste_ctx[c].CLAMP_TO_EDGE);
liste_ctx[c].texParameteri(liste_ctx[c].TEXTURE_2D, liste_ctx[c].TEXTURE_MIN_FILTER, liste_ctx[c].LINEAR);
liste_ctx[c].texParameteri(liste_ctx[c].TEXTURE_2D, liste_ctx[c].TEXTURE_MAG_FILTER, liste_ctx[c].LINEAR);
// Upload the image into the texture.
liste_ctx[c].texImage2D(liste_ctx[c].TEXTURE_2D, 0, liste_ctx[c].RGBA, liste_ctx[c].RGBA, liste_ctx[c].UNSIGNED_BYTE, sprites[d_sprite] );
// set the resolution
var resolutionLocation = liste_ctx[c].getUniformLocation(liste_ctx[c].program, "u_resolution");
liste_ctx[c].uniform2f(resolutionLocation, liste_canvas[c].width, liste_canvas[c].height);
// Create a buffer and put a single clipspace rectangle in it (2 triangles)
var buffer = liste_ctx[c].createBuffer();
liste_ctx[c].bindBuffer(liste_ctx[c].ARRAY_BUFFER, buffer);
liste_ctx[c].enableVertexAttribArray(positionLocation);
liste_ctx[c].vertexAttribPointer(positionLocation, 2, liste_ctx[c].FLOAT, false, 0, 0);
// then I calculate the coordinates of the four points of the rectangle
// taking their origin and scale into account
// I cut this part as it is large and has no importance here
// and at last, we draw
liste_ctx[c].bufferData(liste_ctx[c].ARRAY_BUFFER, new Float32Array([
topleft_x , topleft_y ,
topright_x , topright_y ,
bottomleft_x , bottomleft_y ,
bottomleft_x , bottomleft_y ,
topright_x , topright_y ,
bottomright_x , bottomright_y ]), liste_ctx[c].STATIC_DRAW);
// draw
liste_ctx[c].drawArrays(liste_ctx[c].TRIANGLES, 0, 6);
}
}
By the way, I did not find any way to port ctx.globalAlpha to WebGL. If someone knows how I could add it to my code, I would be thankful for that too.
Please help. Thanks.
I don't know why things are crashing, but here are a few comments.
Only create buffers and textures once.
Currently the code is creating buffers and textures every time you call draw_sprite. Instead, you should create them just once at initialization time and then use them later. Similarly, you should look up the attribute and uniform locations at initialization time and then use them when you draw.
It's possible Firefox is crashing because it's running out of memory, since you're creating new buffers and new textures every time you call draw_sprite.
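In other words, hoist the one-time setup out of draw_sprite, roughly like this (a sketch reusing the names from the code above):
// Once, at init time (per context):
var gl = liste_ctx[c];
var positionLocation = gl.getAttribLocation(gl.program, "a_position");
var texCoordLocation = gl.getAttribLocation(gl.program, "a_texCoord");
var positionBuffer = gl.createBuffer();
var texCoordBuffer = gl.createBuffer();
// ...and create/upload one texture per sprite image here, not per draw call.
// Then each draw_sprite call only binds and updates:
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.DYNAMIC_DRAW); // vertices = your computed quad corners
gl.drawArrays(gl.TRIANGLES, 0, 6);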
I believe it's more common to make a single buffer with a unit square in it and then use matrix math to move that square where you want it. See http://games.greggman.com/game/webgl-2d-matrices/ for some help with matrix math.
If you go that route, you only need to do all the buffer-related work once.
Even if you don't use matrix math, you can still add translation and scale to your shader; then you just need one buffer with a unit rectangle in it, as in:
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
0, 0,
1, 0,
0, 1,
0, 1,
1, 0,
1, 1]), gl.STATIC_DRAW)
After that, just translate it where you want it and scale it to the size you want it drawn.
In fact, if you go the matrix route it would be really easy to simulate the 2d context's matrix functions ctx.translate, ctx.rotate, ctx.scale etc...
The code might be easier to follow, and type, if you pulled the context into a local variable.
Instead of stuff like
liste_ctx[c].bindBuffer(liste_ctx[c].ARRAY_BUFFER, buffer);
liste_ctx[c].enableVertexAttribArray(positionLocation);
liste_ctx[c].vertexAttribPointer(positionLocation, 2, liste_ctx[c].FLOAT, false, 0, 0);
You could do this
var gl = liste_ctx[c];
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(positionLocation, 2, gl.FLOAT, false, 0, 0);
Storing things on the context is going to get tricky
This code
liste_ctx[c].vertexShader = createShaderFromScriptElement(liste_ctx[c], "2d-vertex-shader");
liste_ctx[c].fragmentShader = createShaderFromScriptElement(liste_ctx[c], "2d-fragment-shader");
liste_ctx[c].program = createProgram(liste_ctx[c], [liste_ctx[c].vertexShader, liste_ctx[c].fragmentShader]);
makes it look like you're only going to have a single vertex shader, a single fragment shader, and a single program. Maybe you are, but it's pretty common in WebGL to have several shaders and programs.
For globalAlpha, you first need to turn on blending:
gl.enable(gl.BLEND);
And you need to tell it how to blend. To match the canvas 2D context, you need to use pre-multiplied alpha math, so:
gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA);
Then you need to multiply the color the shader draws by an alpha value. For example:
<script id="2d-fragment-shader" type="x-shader/x-fragment">
precision mediump float;
// our texture
uniform sampler2D u_image;
// global alpha
uniform float u_globalAlpha;
// the texCoords passed in from the vertex shader.
varying vec2 v_texCoord;
void main()
{
// Look up a color from the texture.
vec4 color = texture2D(u_image, v_texCoord);
// Multiply the color by u_globalAlpha
gl_FragColor = color * u_globalAlpha;
}
</script>
Then you'll need to set u_globalAlpha. At init time, look up its location:
var globalAlphaLocation = gl.getUniformLocation(program, "u_globalAlpha");
And at draw time, set it:
gl.uniform1f(globalAlphaLocation, someValueFrom0to1);
Personally, I usually use a vec4 and call it u_colorMult:
<script id="2d-fragment-shader" type="x-shader/x-fragment">
precision mediump float;
// our texture
uniform sampler2D u_image;
// colorMult
uniform vec4 u_colorMult;
// the texCoords passed in from the vertex shader.
varying vec2 v_texCoord;
void main()
{
// Look up a color from the texture.
gl_FragColor = texture2D(u_image, v_texCoord) * u_colorMult;
}
</script>
Then I can tint my sprites. For example, to make a sprite draw in red, just use:
gl.uniform4fv(colorMultLocation, [1, 0, 0, 1]);
It also means I can easily draw in solid colors: create a 1x1 solid white texture, and any time I want to draw in a solid color, I just bind that texture and set u_colorMult to the color I want to draw in.
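Creating that 1x1 white texture is just:
var whiteTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, whiteTex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA, gl.UNSIGNED_BYTE, new Uint8Array([255, 255, 255, 255])); // one opaque white pixel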