I get glitches and crashes trying to use WebGL for drawing sprites

I am converting my sprite drawing function from canvas 2d to WebGL.
As I am new to WebGL (and OpenGL too), I learned from this tutorial http://games.greggman.com/game/webgl-image-processing/ and copied many lines from it, plus some other ones I found.
At last I got it working, but there are some issues. For some reason, some images are never drawn though other ones are, then I get big random black squares on the screen, and finally it makes Firefox crash...
I am tearing my hair out trying to solve these problems, so I have to ask for some help.
Please have a look at my code and tell me if you see where I made errors.
The vertex shader and fragment shader:
<script id="2d-vertex-shader" type="x-shader/x-vertex">
attribute vec2 a_position;
attribute vec2 a_texCoord;
uniform vec2 u_resolution;
uniform vec2 u_translation;
uniform vec2 u_rotation;
varying vec2 v_texCoord;
void main()
{
// Rotate the position
vec2 rotatedPosition = vec2(
a_position.x * u_rotation.y + a_position.y * u_rotation.x,
a_position.y * u_rotation.y - a_position.x * u_rotation.x);
// Add in the translation.
vec2 position = rotatedPosition + u_translation;
// convert the rectangle from pixels to 0.0 to 1.0
vec2 zeroToOne = a_position / u_resolution;
// convert from 0->1 to 0->2
vec2 zeroToTwo = zeroToOne * 2.0;
// convert from 0->2 to -1->+1 (clipspace)
vec2 clipSpace = zeroToTwo - 1.0;
gl_Position = vec4(clipSpace * vec2(1, -1), 0, 1);
// pass the texCoord to the fragment shader
// The GPU will interpolate this value between points
v_texCoord = a_texCoord;
}
</script>
<script id="2d-fragment-shader" type="x-shader/x-fragment">
precision mediump float;
// our texture
uniform sampler2D u_image;
// the texCoords passed in from the vertex shader.
varying vec2 v_texCoord;
void main()
{
// Look up a color from the texture.
gl_FragColor = texture2D(u_image, v_texCoord);
}
</script>
I use several layered canvases to avoid wasting resources redrawing the big background and foreground at every frame when they never change. So my canvases are in liste_canvas[] and their contexts are in liste_ctx[], where c is the id ("background"/"game"/"foreground"/"infos"). Here is their creation code:
// Get A WebGL context
liste_canvas[c] = document.createElement("canvas") ;
document.getElementById('game_div').appendChild(liste_canvas[c]);
liste_ctx[c] = liste_canvas[c].getContext('webgl',{premultipliedAlpha:false}) || liste_canvas[c].getContext('experimental-webgl',{premultipliedAlpha:false});
liste_ctx[c].viewport(0, 0, game.res_w, game.res_h);
// setup a GLSL program
liste_ctx[c].vertexShader = createShaderFromScriptElement(liste_ctx[c], "2d-vertex-shader");
liste_ctx[c].fragmentShader = createShaderFromScriptElement(liste_ctx[c], "2d-fragment-shader");
liste_ctx[c].program = createProgram(liste_ctx[c], [liste_ctx[c].vertexShader, liste_ctx[c].fragmentShader]);
liste_ctx[c].useProgram(liste_ctx[c].program);
And here is my sprite drawing function.
My images are stored in a list too, sprites[], with a string name as id.
They store their origin, which is not necessarily their real center, as .orgn_x and .orgn_y.
function draw_sprite( id_canvas , d_sprite , d_x , d_y , d_rotation , d_scale , d_opacity )
{
    if( id_canvas=="" ){ id_canvas = "game" ; }
    if( !d_scale ){ d_scale = 1 ; }
    if( !d_rotation ){ d_rotation = 0 ; }
    if( render_mode == "webgl" )
    {
        c = id_canvas ;

        // look up where the vertex data needs to go.
        var positionLocation = liste_ctx[c].getAttribLocation(liste_ctx[c].program, "a_position");
        var texCoordLocation = liste_ctx[c].getAttribLocation(liste_ctx[c].program, "a_texCoord");

        // provide texture coordinates for the rectangle.
        var texCoordBuffer = liste_ctx[c].createBuffer();
        liste_ctx[c].bindBuffer(liste_ctx[c].ARRAY_BUFFER, texCoordBuffer);
        liste_ctx[c].bufferData(liste_ctx[c].ARRAY_BUFFER, new Float32Array([
            0.0, 0.0,
            1.0, 0.0,
            0.0, 1.0,
            0.0, 1.0,
            1.0, 0.0,
            1.0, 1.0]), liste_ctx[c].STATIC_DRAW);
        liste_ctx[c].enableVertexAttribArray(texCoordLocation);
        liste_ctx[c].vertexAttribPointer(texCoordLocation, 2, liste_ctx[c].FLOAT, false, 0, 0);

        // Create a texture.
        var texture = liste_ctx[c].createTexture();
        liste_ctx[c].bindTexture(liste_ctx[c].TEXTURE_2D, texture);

        // Set the parameters so we can render any size image.
        liste_ctx[c].texParameteri(liste_ctx[c].TEXTURE_2D, liste_ctx[c].TEXTURE_WRAP_S, liste_ctx[c].CLAMP_TO_EDGE);
        liste_ctx[c].texParameteri(liste_ctx[c].TEXTURE_2D, liste_ctx[c].TEXTURE_WRAP_T, liste_ctx[c].CLAMP_TO_EDGE);
        liste_ctx[c].texParameteri(liste_ctx[c].TEXTURE_2D, liste_ctx[c].TEXTURE_MIN_FILTER, liste_ctx[c].LINEAR);
        liste_ctx[c].texParameteri(liste_ctx[c].TEXTURE_2D, liste_ctx[c].TEXTURE_MAG_FILTER, liste_ctx[c].LINEAR);

        // Upload the image into the texture.
        liste_ctx[c].texImage2D(liste_ctx[c].TEXTURE_2D, 0, liste_ctx[c].RGBA, liste_ctx[c].RGBA, liste_ctx[c].UNSIGNED_BYTE, sprites[d_sprite] );

        // set the resolution
        var resolutionLocation = liste_ctx[c].getUniformLocation(liste_ctx[c].program, "u_resolution");
        liste_ctx[c].uniform2f(resolutionLocation, liste_canvas[c].width, liste_canvas[c].height);

        // Create a buffer and put a single clipspace rectangle in it (2 triangles)
        var buffer = liste_ctx[c].createBuffer();
        liste_ctx[c].bindBuffer(liste_ctx[c].ARRAY_BUFFER, buffer);
        liste_ctx[c].enableVertexAttribArray(positionLocation);
        liste_ctx[c].vertexAttribPointer(positionLocation, 2, liste_ctx[c].FLOAT, false, 0, 0);

        // then I calculate the coordinates of the four points of the rectangle
        // taking their origin and scale into account
        // I cut this part as it is large and has no importance here

        // and at last, we draw
        liste_ctx[c].bufferData(liste_ctx[c].ARRAY_BUFFER, new Float32Array([
            topleft_x , topleft_y ,
            topright_x , topright_y ,
            bottomleft_x , bottomleft_y ,
            bottomleft_x , bottomleft_y ,
            topright_x , topright_y ,
            bottomright_x , bottomright_y ]), liste_ctx[c].STATIC_DRAW);

        // draw
        liste_ctx[c].drawArrays(liste_ctx[c].TRIANGLES, 0, 6);
    }
}
By the way, I did not find any way to port ctx.globalAlpha to WebGL. If someone knows how I could add it to my code, I would be thankful for that too.
Please help. Thanks.

I don't know why things are crashing, but here are just a few random comments.
Only create buffers and textures once.
Currently the code is creating buffers and textures every time you call draw_sprite. Instead you should be creating them at initialization time just once and then using the created buffers and textures later. Similarly you should look up the attribute and uniform locations at initialization time and then use them when you draw.
It's possible Firefox is crashing because it's running out of memory, since you're creating new buffers and new textures every time you call draw_sprite.
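For example, a minimal sketch of that split; the names here (gl, program, spriteImage, vertexData) are placeholders, not from the question's code:

// --- at init time, once ---
var positionBuffer = gl.createBuffer();
var texCoordBuffer = gl.createBuffer();
var spriteTexture = gl.createTexture();          // one texture per sprite image
gl.bindTexture(gl.TEXTURE_2D, spriteTexture);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, spriteImage);
var positionLocation = gl.getAttribLocation(program, "a_position");    // look up once
var resolutionLocation = gl.getUniformLocation(program, "u_resolution");

// --- at draw time, per sprite ---
gl.bindTexture(gl.TEXTURE_2D, spriteTexture);    // re-bind, don't re-create
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.bufferData(gl.ARRAY_BUFFER, vertexData, gl.DYNAMIC_DRAW); // update contents only
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(positionLocation, 2, gl.FLOAT, false, 0, 0);
gl.drawArrays(gl.TRIANGLES, 0, 6);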
I believe it's more common to make a single buffer with a unit square in it and then use matrix math to move that square where you want it. See http://games.greggman.com/game/webgl-2d-matrices/ for some help with matrix math.
If you go that route then you only need to call all the buffer-related stuff once.
Even if you don't use matrix math you can still add translation and scale to your shader, then just make one buffer with a unit rectangle (as in
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
    0, 0,
    1, 0,
    0, 1,
    0, 1,
    1, 0,
    1, 1]), gl.STATIC_DRAW)
After that, just translate it where you want it and scale it to the size you want it drawn.
In fact, if you go the matrix route it would be really easy to simulate the 2d context's matrix functions ctx.translate, ctx.rotate, ctx.scale etc...
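Here is a sketch of that non-matrix version; u_scale is an assumed uniform name, alongside the question's existing u_translation:

attribute vec2 a_position;   // unit-square positions in the 0..1 range

uniform vec2 u_resolution;
uniform vec2 u_translation;  // sprite position in pixels
uniform vec2 u_scale;        // sprite size in pixels

void main()
{
    // stretch the unit square to the sprite's size, then move it into place
    vec2 position = a_position * u_scale + u_translation;

    // pixels -> clipspace, same as in the question's shader
    vec2 clipSpace = (position / u_resolution) * 2.0 - 1.0;
    gl_Position = vec4(clipSpace * vec2(1, -1), 0, 1);
}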
The code might be easier to follow, and type, if you pulled the context into a local variable.
Instead of stuff like
liste_ctx[c].bindBuffer(liste_ctx[c].ARRAY_BUFFER, buffer);
liste_ctx[c].enableVertexAttribArray(positionLocation);
liste_ctx[c].vertexAttribPointer(positionLocation, 2, liste_ctx[c].FLOAT, false, 0, 0);
You could do this
var gl = liste_ctx[c];
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(positionLocation, 2, gl.FLOAT, false, 0, 0);
Storing things on the context is going to get tricky.
This code
liste_ctx[c].vertexShader = createShaderFromScriptElement(liste_ctx[c], "2d-vertex-shader");
liste_ctx[c].fragmentShader = createShaderFromScriptElement(liste_ctx[c], "2d-fragment-shader");
liste_ctx[c].program = createProgram(liste_ctx[c], [liste_ctx[c].vertexShader, liste_ctx[c].fragmentShader]);
Makes it look like you're going to have only a single vertex shader, a single fragment shader, and a single program. Maybe you are, but it's pretty common in WebGL to have several shaders and programs.
For globalAlpha first you need to turn on blending.
gl.enable(gl.BLEND);
And you need to tell it how to blend. To be the same as the canvas 2d context, you need to use pre-multiplied alpha math, so
gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA);
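Note that textures uploaded from normal images are not pre-multiplied by default; if you want the texture data itself to match this blend mode, one option (an assumption about your setup, not something the question's code already does) is to ask WebGL to pre-multiply at upload time:

gl.pixelStorei(gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL, true);  // before texImage2D
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);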
Then you need to multiply the color the shader draws by an alpha value. For example
<script id="2d-fragment-shader" type="x-shader/x-fragment">
precision mediump float;
// our texture
uniform sampler2D u_image;
// global alpha
uniform float u_globalAlpha;
// the texCoords passed in from the vertex shader.
varying vec2 v_texCoord;
void main()
{
// Look up a color from the texture.
vec4 color = texture2D(u_image, v_texCoord);
// Multiply the color by u_globalAlpha
gl_FragColor = color * u_globalAlpha;
}
</script>
Then you'll need to set u_globalAlpha. At init time, look up its location
var globalAlphaLocation = gl.getUniformLocation(program, "u_globalAlpha");
And at draw time set it
gl.uniform1f(globalAlphaLocation, someValueFrom0to1);
Personally, I usually use a vec4 and call it u_colorMult:
<script id="2d-fragment-shader" type="x-shader/x-fragment">
precision mediump float;

// our texture
uniform sampler2D u_image;

// colorMult
uniform vec4 u_colorMult;

// the texCoords passed in from the vertex shader.
varying vec2 v_texCoord;

void main()
{
    // Look up a color from the texture.
    gl_FragColor = texture2D(u_image, v_texCoord) * u_colorMult;
}
</script>
Then I can tint my sprites. For example, to make the sprite draw in red, just use
gl.uniform4fv(colorMultLocation, [1, 0, 0, 1]);
It also means I can easily draw in solid colors. Create a 1x1 pixel solid white texture. Anytime I want to draw in a solid color I just bind that texture and set u_colorMult to the color I want to draw in.
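A minimal sketch of that 1x1 white texture (whiteTexture is a placeholder name):

var whiteTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, whiteTexture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA, gl.UNSIGNED_BYTE,
              new Uint8Array([255, 255, 255, 255]));  // one opaque white pixel

// later, to draw a solid red rectangle:
gl.bindTexture(gl.TEXTURE_2D, whiteTexture);
gl.uniform4fv(colorMultLocation, [1, 0, 0, 1]);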

Related

Convert OpenGL HQX shader to LibGDX

I was getting into shaders for LibGDX and noticed there are some attributes that are only being used in LibGDX.
The standard vertex and fragment shaders from https://github.com/libgdx/libgdx/wiki/Shaders work perfectly and get applied to my SpriteBatch.
When I try to use an HQX shader like https://github.com/SupSuper/OpenXcom/blob/master/bin/common/Shaders/HQ2x.OpenGL.shader I get a lot of errors.
Probably because I need to send some LibGDX-dependent variables to the shader, but I can't find out which those should be.
I'd like to use these shaders on desktops with large screens so the game keeps looking great on those screens.
I used this code to load the shader:
try {
    shaderProgram = new ShaderProgram(Gdx.files.internal("vertex.glsl").readString(), Gdx.files.internal("fragment.glsl").readString());
    shaderProgram.pedantic = false;
    System.out.println("Shader Log:");
    System.out.println(shaderProgram.getLog());
} catch(Exception ex) { }
The Shader Log outputs:
No errors.
Thanks in advance.
This is a post processing shader, so your flow should go like this:
Draw your scene to a FBO at pixel perfect resolution using SpriteBatch's default shader.
Draw the FBO's texture to the screen's frame buffer using the upscaling shader. You can do this with SpriteBatch if you modify the shader to match the attributes and uniforms that SpriteBatch uses. (You could alternatively create a simple mesh with the attribute names that the shader expects, but SpriteBatch is probably easiest.)
First of all, we are not using a typical shader with SpriteBatch so you need to call ShaderProgram.pedantic = false; somewhere before loading anything.
Now you need a FrameBuffer at the right size. It should be sized for your sprites to be pixel perfect (one pixel of texture scales to one pixel of world). Something like this:
public void resize (int width, int height){
    float ratio = (float)width / (float)height;
    int gameWidth = (int)(GAME_HEIGHT * ratio); // width that matches the screen's aspect ratio

    boolean needNewFrameBuffer = false;
    if (frameBuffer != null && (frameBuffer.getWidth() != gameWidth || frameBuffer.getHeight() != GAME_HEIGHT)){
        frameBuffer.dispose();
        needNewFrameBuffer = true;
    }
    if (frameBuffer == null || needNewFrameBuffer)
        frameBuffer = new FrameBuffer(Format.RGBA8888, gameWidth, GAME_HEIGHT);

    camera.viewportWidth = gameWidth;
    camera.viewportHeight = GAME_HEIGHT;
    camera.update();
}
Then you can draw to the frame buffer as if it's your screen. And after that, you draw the frame buffer's texture to the screen.
public void render (){
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

    frameBuffer.begin();
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
    batch.setProjectionMatrix(camera.combined);
    batch.setShader(null); //use default shader
    batch.begin();
    //draw your game
    batch.end();
    frameBuffer.end();

    batch.setShader(upscaleShader);
    batch.begin();
    upscaleShader.setUniformf("rubyTextureSize", frameBuffer.getWidth(), frameBuffer.getHeight()); //this is the uniform in your shader. I assume it's wanting the scene size in pixels
    batch.draw(frameBuffer.getColorBufferTexture(), -1, 1, 2, -2); //full screen quad for no projection matrix, with Y flipped as needed for frame buffer textures
    batch.end();
}
There are also some changes you need to make to your shader so it will work with OpenGL ES, and because SpriteBatch is wired for specific attribute and uniform names:
At the top of your vertex shader, add this to define your vertex attributes and varyings (which your linked shader doesn't need because it's relying on built-in variables that aren't available in GL ES):
attribute vec4 a_position;
attribute vec2 a_texCoord;
varying vec2 v_texCoord[5];
Then in the vertex shader, change the gl_Position line to
gl_Position = a_position; //since we can't rely on built-in variables
and replace all occurrences of gl_TexCoord with v_texCoord for the same reason.
In the fragment shader, to be compatible with OpenGL ES, you need to declare precision. You also need to declare the same varying, so add this to the top:
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoord[5];
As with the vertex shader, replace all occurrences of gl_TexCoord with v_texCoord. And also replace all occurrences of rubyTexture with u_texture, which is the texture name that SpriteBatch uses.
I think that's everything. I didn't actually test this and I'm going off of memory, but hopefully it gets you close.
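Putting those replacements together, the adapted vertex shader should be shaped roughly like this (just a sketch; the elided part is the original HQ2x code with the renames applied):

attribute vec4 a_position;
attribute vec2 a_texCoord;
varying vec2 v_texCoord[5];

void main()
{
    gl_Position = a_position;  // no built-in matrices in GL ES
    // ... original HQ2x body here, computing v_texCoord[0..4]
    // from a_texCoord instead of writing gl_TexCoord[0..4] ...
}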

Why is WebGL faster than Canvas?

If both use hardware acceleration (the GPU) to execute code, why is WebGL so much faster than Canvas?
I mean, I want to know what happens at a low level: the chain from the code to the processor.
Do Canvas/WebGL communicate directly with the drivers and then with the video card?
Canvas is slower because it's generic and therefore is hard to optimize to the same level that you can optimize WebGL. Let's take a simple example, drawing a solid circle with arc.
Canvas actually runs on top of the GPU as well, using the same APIs as WebGL. So, what does canvas have to do when you draw a circle? The minimum code to draw a circle in JavaScript using canvas 2d is
ctx.beginPath();
ctx.arc(x, y, radius, startAngle, endAngle);
ctx.fill();
You can imagine that internally the simplest implementation is:
beginPath creates a buffer (gl.bufferData)
arc generates the points for triangles that make a circle and uploads with gl.bufferData.
fill calls gl.drawArrays or gl.drawElements
But wait a minute... knowing what we know about how GL works, canvas can't generate the points at step 2, because if we call stroke instead of fill we need a different set of points for a solid circle (fill) vs an outline of a circle (stroke). So, what really happens is something more like:
beginPath creates or resets some internal buffer
arc generates the points that make a circle into the internal buffer
fill takes the points in that internal buffer, generates the correct set of triangles for the points in that internal buffer into a GL buffer, uploads them with gl.bufferData, calls gl.drawArrays or gl.drawElements
What happens if we want to draw 2 circles? The same steps are likely repeated.
Let's compare that to what we would do in WebGL. Of course in WebGL we'd have to write our own shaders (Canvas has its shaders as well). We'd also have to create a buffer and fill it with the triangles for a circle, (note we already saved time as we skipped the intermediate buffer of points). We then can call gl.drawArrays or gl.drawElements to draw our circle. And if we want to draw a second circle? We just update a uniform and call gl.drawArrays again skipping all the other steps.
const m4 = twgl.m4;
const gl = document.querySelector('canvas').getContext('webgl');

const vs = `
attribute vec4 position;
uniform mat4 u_matrix;
void main() {
  gl_Position = u_matrix * position;
}
`;
const fs = `
precision mediump float;
uniform vec4 u_color;
void main() {
  gl_FragColor = u_color;
}
`;

const program = twgl.createProgram(gl, [vs, fs]);
const positionLoc = gl.getAttribLocation(program, 'position');
const colorLoc = gl.getUniformLocation(program, 'u_color');
const matrixLoc = gl.getUniformLocation(program, 'u_matrix');

const positions = [];
const radius = 50;
const numEdgePoints = 64;
for (let i = 0; i < numEdgePoints; ++i) {
  const angle0 = (i    ) * Math.PI * 2 / numEdgePoints;
  const angle1 = (i + 1) * Math.PI * 2 / numEdgePoints;
  // make a triangle
  positions.push(
    0, 0,
    Math.cos(angle0) * radius,
    Math.sin(angle0) * radius,
    Math.cos(angle1) * radius,
    Math.sin(angle1) * radius,
  );
}

const buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(positions), gl.STATIC_DRAW);
gl.enableVertexAttribArray(positionLoc);
gl.vertexAttribPointer(positionLoc, 2, gl.FLOAT, false, 0, 0);
gl.useProgram(program);

const projection = m4.ortho(0, gl.canvas.width, 0, gl.canvas.height, -1, 1);

function drawCircle(x, y, color) {
  const mat = m4.translate(projection, [x, y, 0]);
  gl.uniform4fv(colorLoc, color);
  gl.uniformMatrix4fv(matrixLoc, false, mat);
  gl.drawArrays(gl.TRIANGLES, 0, numEdgePoints * 3);
}

drawCircle( 50, 75, [1, 0, 0, 1]);
drawCircle(150, 75, [0, 1, 0, 1]);
drawCircle(250, 75, [0, 0, 1, 1]);
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas></canvas>
Some devs might look at that and think Canvas caches the buffer so it can just reuse the points on the 2nd draw call. It's possible that's true, but I kind of doubt it. Why? Because of the genericness of the canvas API. fill, the function that does all the real work, doesn't know what's in the internal buffer of points. You can call arc, then moveTo, lineTo, then arc again, then call fill. All of those points will be in the internal buffer of points when we get to fill.
const ctx = document.querySelector('canvas').getContext('2d');
ctx.beginPath();
ctx.moveTo(50, 30);
ctx.lineTo(100, 150);
ctx.arc(150, 75, 30, 0, Math.PI * 2);
ctx.fill();
<canvas></canvas>
In other words, fill needs to always look at all the points. Another thing, I suspect arc tries to optimize for size. If you call arc with a radius of 2 it probably generates less points than if you call it with a radius of 2000. It's possible canvas caches the points but given the hit rate would likely be small it seems unlikely.
In any case, the point is WebGL lets you take control at a lower level, allowing you to skip steps that canvas can't skip. It also lets you reuse data that canvas can't reuse.
In fact if we know we want to draw 10000 animated circles we even have other options in WebGL. We could generate the points for 10000 circles which is a valid option. We could also use instancing. Both of those techniques would be vastly faster than canvas since in canvas we'd have to call arc 10000 times and one way or another it would have to generate points for 10000 circles every single frame instead of just once at the beginning and it would have to call gl.drawXXX 10000 times instead of just once.
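A rough sketch of the instancing option (in WebGL 1 it lives behind the ANGLE_instanced_arrays extension; offsetLoc and offsetBuffer are hypothetical names for a per-circle center attribute):

const ext = gl.getExtension('ANGLE_instanced_arrays');
// a second buffer holds one x,y center per circle
gl.bindBuffer(gl.ARRAY_BUFFER, offsetBuffer);
gl.enableVertexAttribArray(offsetLoc);
gl.vertexAttribPointer(offsetLoc, 2, gl.FLOAT, false, 0, 0);
ext.vertexAttribDivisorANGLE(offsetLoc, 1);  // advance once per instance, not per vertex
// one draw call renders all 10000 circles from the single circle geometry
ext.drawArraysInstancedANGLE(gl.TRIANGLES, 0, numEdgePoints * 3, 10000);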
Of course the converse is that canvas is easy. Drawing the circle took 3 lines of code. In WebGL, because you need to set up and write shaders, it probably takes at least 60 lines of code. In fact the example above is about 60 lines, not including the code to compile and link shaders (~10 lines). On top of that canvas supports transforms, patterns, gradients, masks, etc., all options we'd have to add with lots more lines of code in WebGL. So canvas is basically trading ease of use for speed over WebGL.
Canvas does not execute a pipeline of processing layers that turns sets of vertices and indices into triangles which are then given textures and lighting, all in hardware, as OpenGL/WebGL does. This is the root cause of such speed differences. The Canvas counterparts to such formulations are all done on the CPU, with only the final rendering sent to the graphics hardware. The speed differences are particularly evident when a massive number of such vertices are synthesized/animated on Canvas versus WebGL.
Alas, we are on the cusp of hearing the public announcement of the modern replacement for OpenGL: Vulkan, whose remit includes exposing general-purpose compute in a more pedestrian way than OpenCL/CUDA, as well as baking in use of multi-core processors, which might just shift Canvas-like processing onto hardware.

GLSL ES - Mapping texture from rectangular to polar coordinates with repeating

I need to warp a rectangular texture into a texture with polar coordinates. To shed some light on my problem, I am going to illustrate it:
I have the image:
and I have to deform it using shader to something like this:
then I'm going to map it to a plane.
How can I do this? Any help will be appreciated!
That is not particularly hard. You just need to convert your texture coordinates to polar coordinates, and use the radius for the texture's s direction and the azimuth angle for the t direction.
Assuming you want to texture a quad that way, and also assuming you use standard texcoords for this, the lower left vertex will have (0,0) and the upper right one (1,1) as texture coords.
So in the fragment shader, you just need to convert the interpolated texcoords (using tc for this) to polar coordinates. Since the center will be at (0.5, 0.5), we have to offset this first.
vec2 x = tc - vec2(0.5, 0.5);
float radius = length(x);
float angle = atan(x.y, x.x);
Now all you need to do is to map the range back to the [0,1] texture space. The maximum radius here will be 0.5, so you simply can use 2*radius as the s coordinate, and angle will be in [-pi,pi], so you should map that to [0,1] for the t coordinate.
UPDATE1
There are a few details I left out so far. From your image it is clear that you do not want the inner circle to be mapped to the texture. But this can easily be incorporated. I just assume two radii here: r_inner, which is the radius of the inner circle, and r_outer, which is the radius onto which you want to map the outer part. Let me sketch out a simple fragment shader for that:
#version ...
precision ...

varying vec2 tc; // texcoords from vertex shader
uniform sampler2D tex;

#define PI 3.14159265358979323844

void main ()
{
    const float r_inner = 0.25;
    const float r_outer = 0.5;

    vec2 x = tc - vec2(0.5);
    float radius = length(x);
    float angle = atan(x.y, x.x);

    vec2 tc_polar; // the new polar texcoords
    // map radius so that for r=r_inner -> 0 and r=r_outer -> 1
    tc_polar.s = (radius - r_inner) / (r_outer - r_inner);
    // map angle from [-PI,PI] to [0,1]
    tc_polar.t = angle * 0.5 / PI + 0.5;

    // texture mapping
    gl_FragColor = texture2D(tex, tc_polar);
}
Now there is still one detail missing. The mapping generated above produces texcoords which are outside of the [0,1] range for any position where you have black in your image, but the texture sampling will not automatically give black there. The easiest solution would be to just use the GL_CLAMP_TO_BORDER mode for GL_TEXTURE_WRAP_S (the default border color will be (0,0,0,0), so you might not need to specify it, or you can set GL_TEXTURE_BORDER_COLOR explicitly to (0,0,0,1) if you work with alpha blending and don't want any transparency that way). That way, you will get the black color for free. Other options would be using GL_CLAMP_TO_EDGE and adding a black pixel column at both the left and right ends of the texture. Another way would be to add a branch to the shader and check for tc_polar.s being below 0 or above 1, but I wouldn't recommend that for this use case.
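Note that OpenGL ES 2.0 / WebGL has no GL_CLAMP_TO_BORDER, so there the GL_CLAMP_TO_EDGE route (with black edge columns baked into the image) is the practical one. A minimal WebGL-style sketch, assuming tex is the texture in question:

gl.bindTexture(gl.TEXTURE_2D, tex);
// out-of-range radii then sample the black edge columns of the image
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
// the angle axis wraps around, so let t repeat (needs a power-of-two texture in WebGL 1)
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.REPEAT);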
For those who want a more flexible shader that does the same:
uniform float Angle;        // range 2pi / 100000.0 to 1.0 (rounded down), exponential
uniform float AngleMin;     // range -3.2 to 3.2
uniform float AngleWidth;   // range 0.0 to 6.4
uniform float Radius;       // range -10000.0 to 1.0
uniform float RadiusMin;    // range 0.0 to 2.0
uniform float RadiusWidth;  // range 0.0 to 2.0
uniform vec2 Center;        // range: -1.0 to 3.0
uniform sampler2D Texture;

void main()
{
    // Normalised texture coords
    vec2 texCoord = gl_TexCoord[0].xy;

    // Shift origin to texture centre (with offset)
    vec2 normCoord;
    normCoord.x = 2.0 * texCoord.x - Center.x;
    normCoord.y = 2.0 * texCoord.y - Center.y;

    // Convert Cartesian to Polar coords
    float r = length(normCoord);
    float theta = atan(normCoord.y, normCoord.x);

    // The actual effect
    r = (r < RadiusMin) ? r : (r > RadiusMin + RadiusWidth) ? r : ceil(r / Radius) * Radius;
    theta = (theta < AngleMin) ? theta : (theta > AngleMin + AngleWidth) ? theta : floor(theta / Angle) * Angle;

    // Convert Polar back to Cartesian coords
    normCoord.x = r * cos(theta);
    normCoord.y = r * sin(theta);

    // Shift origin back to bottom-left (taking offset into account)
    texCoord.x = normCoord.x / 2.0 + (Center.x / 2.0);
    texCoord.y = normCoord.y / 2.0 + (Center.y / 2.0);

    // Output
    gl_FragColor = texture2D(Texture, texCoord);
}
Source: polarpixellate glsl.
Shadertoy example

LibGDX - overlay texture above another texture using shader

I'm trying to mix two different textures (a scene and clouds), which are obtained from FBOs, and draw them on a quad.
uniform sampler2D u_texture;
uniform sampler2D u_texture2;
uniform vec2 u_res;
void main(void)
{
    vec2 texCoord = gl_FragCoord.xy / u_res.xy;
    vec4 sceneColor = texture2D(u_texture, texCoord);
    vec4 addColor = texture2D(u_texture2, texCoord);
    gl_FragColor = sceneColor + addColor;
}
glBlendFunc is
Gdx.gl20.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
I tried every combination of glBlendFunc, and the combination above was the best one.
Creating FBOs:
fbClouds = new FrameBuffer(Format.RGBA8888, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), true);
fbScene = new FrameBuffer(Format.RGB565, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), true);
fbMix = new FrameBuffer(Format.RGB565, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), true);
Creating clouds:
fbClouds.begin();
Gdx.gl.glClearColor(0, 0, 0, 0); // to make it transparent
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
modelBatch.begin(cam);
for (Integer e : drawOrder) {
    if (isVisible(cam, clouds[e])) {
        drawLightning(e, modelBatch);
        modelBatch.render(clouds[e], cloudShader);
        modelBatch.flush();
    }
}
modelBatch.end();
fbClouds.end();
render code:
Gdx.gl20.glDisable(GL20.GL_BLEND);
//Gdx.gl20.glEnable(GL20.GL_BLEND);
//Gdx.gl20.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
fbMix.begin();
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
mixShader.begin();
fbScene.getColorBufferTexture().bind(1);
mixShader.setUniformi("u_texture", 1);
fbClouds.getColorBufferTexture().bind(0);
mixShader.setUniformi("u_texture2", 0);
mixShader.setUniformf("u_res", Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
quad.render(mixShader, GL20.GL_TRIANGLES);
mixShader.end();
fbMix.end();
So I get an unexpected result (the clouds are absolutely white, though they should be grey):
If I use modelBatch to render the clouds directly, the result is as it should be:
What is the right way to mix two textures without losing color?
The blend function you use to draw the two FBO's to the screen should be irrelevant, because nothing shows through them, right? So blending should be turned off before you draw the FBO's, or you're wasting GPU cycles mixing the FBO's with your clear color.
The reason it turns white is that you are adding gray to blue without darkening the blue first. Normally when you draw a transparent object to the screen, you use a blend function like this: GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA. That means you are multiplying the sprite's color by its transparency (effectively darkening transparent colors) and you multiply the background by the inverse of the sprite's alpha (thereby darkening the pixels that will be added to the sprite's opaque pixels so they won't be too bright).
In your case, you want to emulate the same thing inside your fragment shader, since you are trying to blend two textures inside your shader before outputing them to the screen.
So if your cloud FBO had an alpha channel, you could do this in your fragment shader and you'd be good to go:
void main()
{
    vec2 texCoord = gl_FragCoord.xy / u_res.xy;
    vec4 sceneColor = texture2D(u_texture, texCoord);
    vec4 addColor = texture2D(u_texture2, texCoord);
    gl_FragColor = addColor*addColor.a + sceneColor*(1.0 - addColor.a);
}
However, your cloud's FBO does not have an alpha channel so you need to change something.
One thing you could do is make your FBO color texture use RGBA4444 so it has an alpha channel, and then carefully draw your clouds so they also write to the alpha channel. This would be kind of complicated, because you'd have to use a separated blend function, where you select two different blend functions for the RGB and the A channels separately. I haven't done this before. Although it should be possible, I haven't even tried this method before because I think the 4-bit colors would look pretty lousy.
Alternatively, if your clouds are all going to be monochrome, you can encode your alpha information into one of the color channels. To do this you will need to customize the fragment shader you use to draw the clouds to the FBO. It would look something like this:
vec4 textureColor = texture2D(u_texture, v_texCoord);
gl_FragColor = vec4(textureColor.r * textureColor.a, textureColor.a, 0, textureColor.a);
What this does is put the cloud's monochrome color in the R channel with alpha pre-multiplied, and it puts the alpha in the G channel. We want to pre-multiply the alpha so we can simply add the encoded cloud sprite onto the scene. This is because when you draw something in front of an already-drawn sprite in an area that was translucent in the already-drawn sprite, you want to brighten the G-encoded alpha to make the pixel more opaque in the final FBO image. Since we are using pre-multiplied alpha, draw the clouds using the blend function GL_ONE, GL_ONE_MINUS_SRC_ALPHA.
(This is a slight approximation because the G-encoded alpha of the destination is getting darkened a bit by the second part of the blend function, but I looked at the math and it seems acceptable. The approximation results in slightly more transparent clouds.)
So now the cloud FBO would look like a bunch of yellow if you drew it to screen as is. We just need to make a slight adjustment to our fragment shader above to use the encoded data:
void main()
{
    vec2 texCoord = gl_FragCoord.xy / u_res.xy;
    vec4 sceneColor = texture2D(u_texture, texCoord);
    vec4 addColor = texture2D(u_texture2, texCoord);
    gl_FragColor = vec4(addColor.r*addColor.g) + sceneColor*(1.0 - addColor.g);
}
If you want to tint your clouds something other than pure gray, you can add a uniform color tint:
uniform vec4 u_cloudTint; // declared alongside the other uniforms

void main()
{
    vec2 texCoord = gl_FragCoord.xy / u_res.xy;
    vec4 sceneColor = texture2D(u_texture, texCoord);
    vec4 addColor = texture2D(u_texture2, texCoord);
    gl_FragColor = u_cloudTint*vec4(addColor.r*addColor.g) + sceneColor*(1.0 - addColor.g);
}

WebGL: drawArrays: attribs not setup correctly

Here's my vertex shader:
attribute vec3 aVertexPosition;
attribute vec4 aVertexColor;
attribute float type;

uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;

varying vec4 vColor;

void main(void) {
    gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
    vColor = aVertexColor;
    if (type > 0.0) {
    } else {
    }
}
What I want to do is pretty simple: just capture a float value named type and use it for logic operations.
The problem is, when I try to use it in JavaScript, these errors come up:
shaderProgram.textureCoordAttribute = gl.getAttribLocation(shaderProgram, "type");
gl.enableVertexAttribArray(shaderProgram.textureCoordAttribute);
WebGL: INVALID_OPERATION: drawArrays: attribs not setup correctly main.js:253
WebGL: INVALID_OPERATION: drawArrays: attribs not setup correctly main.js:267
WebGL: INVALID_OPERATION: drawElements: attribs not setup correctly
The output of getAttribLocation is meaningful; all of the locations are equal to or greater than 0.
================= UPDATE ===================
Here's my whole project code:
https://gist.github.com/royguo/5873503
Explanation:
index.html Shaders script are here.
main.js Start the WebGL application and draw scene.
shaders.js Load shaders and bind attributes.
buffers.js Init vertex and color buffers.
utils.js Common used utils.
Here is a link to a gist with the files I updated to get the type attribute working.
If you search for //ADDED CODE you should be able to view every change I had to make to get it working.
In addition to enabling the objectTypeAttribute you have to create an array buffer for each object you are drawing:
triangleObjectTypeBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, triangleObjectTypeBuffer);
objectTypes = [
1.0, 1.0, 0.0
];
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(objectTypes), gl.STATIC_DRAW);
triangleObjectTypeBuffer.itemSize = 1;
triangleObjectTypeBuffer.numItems = 3;
And bind that array buffer for each object before you draw the object:
gl.bindBuffer(gl.ARRAY_BUFFER, triangleObjectTypeBuffer);
gl.vertexAttribPointer(shaderProgram.objectTypeAttribute, triangleObjectTypeBuffer.itemSize, gl.FLOAT, false, 0, 0);
You probably already tried this and accidentally went wrong somewhere along the way.
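For reference, the rule behind "attribs not setup correctly": every attribute you enable with enableVertexAttribArray must also have a buffer attached via bindBuffer + vertexAttribPointer before the draw call. A minimal per-draw sketch (the position and color buffer names are hypothetical, matching the style of the code above):

// position attribute
gl.bindBuffer(gl.ARRAY_BUFFER, trianglePositionBuffer);
gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute, 3, gl.FLOAT, false, 0, 0);
// color attribute
gl.bindBuffer(gl.ARRAY_BUFFER, triangleColorBuffer);
gl.vertexAttribPointer(shaderProgram.vertexColorAttribute, 4, gl.FLOAT, false, 0, 0);
// type attribute -- omitting this while it is enabled triggers the error
gl.bindBuffer(gl.ARRAY_BUFFER, triangleObjectTypeBuffer);
gl.vertexAttribPointer(shaderProgram.objectTypeAttribute, 1, gl.FLOAT, false, 0, 0);
gl.drawArrays(gl.TRIANGLES, 0, 3);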