GLSL ES - Mapping texture from rectangular to polar coordinates with repeating - libgdx

I need to warp a rectangular texture to a texture with polar coordinates. To shed some light on my problem, let me illustrate it:
I have this image:
and I have to deform it with a shader into something like this:
then I'm going to map it to a plane.
How can I do this? Any help will be appreciated!

That is not particularly hard. You just need to convert your texture coordinates to polar coordinates, using the radius for the texture's s direction and the azimuth angle for the t direction.
I'll assume you want to texture a quad this way, with standard texcoords: the lower-left vertex gets (0,0) and the upper-right one (1,1).
In the fragment shader, you just need to convert the interpolated texcoords (called tc here) to polar coordinates. Since the center is at (0.5, 0.5), we have to subtract that offset first.
vec2 x=tc - vec2(0.5,0.5);
float radius=length(x);
float angle=atan(x.y, x.x);
Now all you need to do is to map the range back to the [0,1] texture space. The maximum radius here will be 0.5, so you simply can use 2*radius as the s coordinate, and angle will be in [-pi,pi], so you should map that to [0,1] for the t coordinate.
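To make that mapping concrete, here is a small JavaScript sketch of the same arithmetic on the CPU side (the function name is mine, purely for illustration):

```javascript
// Mirror of the shader math: convert quad texcoords (0..1 on each axis)
// into polar texcoords, with the center at (0.5, 0.5).
function toPolarTexcoord(tcX, tcY) {
  const x = tcX - 0.5;
  const y = tcY - 0.5;
  const radius = Math.hypot(x, y);        // 0 at the center, 0.5 at the edge midpoints
  const angle = Math.atan2(y, x);         // like GLSL atan(y, x), in [-PI, PI]
  return {
    s: 2.0 * radius,                      // radius [0, 0.5] -> s [0, 1]
    t: angle * 0.5 / Math.PI + 0.5,       // angle [-PI, PI] -> t [0, 1]
  };
}
```

For example, the middle of the quad's right edge maps to s = 1, t = 0.5.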
UPDATE1
There are a few details I left out so far. From your image it is clear that you do not want the inner circle to be mapped to the texture. But this can easily be incorporated. I just assume two radii here: r_inner, which is the radius of the inner circle, and r_outer, which is the radius onto which you want to map the outer part. Let me sketch out a simple fragment shader for that:
#version ...
precision ...
varying vec2 tc; // texcoords from vertex shader
uniform sampler2D tex;
#define PI 3.14159265358979323844
void main ()
{
const float r_inner = 0.25;
const float r_outer = 0.5;
vec2 x = tc - vec2(0.5);
float radius = length(x);
float angle = atan(x.y, x.x);
vec2 tc_polar; // the new polar texcoords
// map radius so that for r=r_inner -> 0 and r=r_outer -> 1
tc_polar.s = (radius - r_inner) / (r_outer - r_inner);
// map angle from [-PI,PI] to [0,1]
tc_polar.t = angle * 0.5 / PI + 0.5;
// texture mapping
gl_FragColor = texture2D(tex, tc_polar);
}
Now there is still one detail missing. The mapping above generates texcoords outside the [0,1] range for any position where you have black in your image, but texture sampling will not automatically give black there. The easiest solution would be to use the GL_CLAMP_TO_BORDER mode for GL_TEXTURE_WRAP_S (the default border color is (0,0,0,0), so you might not need to specify it, or you can set GL_TEXTURE_BORDER_COLOR explicitly to (0,0,0,1) if you work with alpha blending and don't want any transparency). That way, you get the black color for free. Note, however, that GL_CLAMP_TO_BORDER is not available in OpenGL ES 2.0, so with libgdx on GLES the next option is more practical: use GL_CLAMP_TO_EDGE and add a black pixel column at both the left and right ends of the texture. Another way would be to add a branch to the shader and check whether tc_polar.s is below 0 or above 1, but I wouldn't recommend that for this use case.

For those who want a more flexible shader that does the same:
uniform float Angle; // range 2pi / 100000.0 to 1.0 (rounded down), exponential
uniform float AngleMin; // range -3.2 to 3.2
uniform float AngleWidth; // range 0.0 to 6.4
uniform float Radius; // range -10000.0 to 1.0
uniform float RadiusMin; // range 0.0 to 2.0
uniform float RadiusWidth; // range 0.0 to 2.0
uniform vec2 Center; // range: -1.0 to 3.0
uniform sampler2D Texture;
void main()
{
// Normalised texture coords
vec2 texCoord = gl_TexCoord[0].xy;
// Shift origin to texture centre (with offset)
vec2 normCoord;
normCoord.x = 2.0 * texCoord.x - Center.x;
normCoord.y = 2.0 * texCoord.y - Center.y;
// Convert Cartesian to Polar coords
float r = length(normCoord);
float theta = atan(normCoord.y, normCoord.x);
// The actual effect
r = (r < RadiusMin) ? r : (r > RadiusMin + RadiusWidth) ? r : ceil(r / Radius) * Radius;
theta = (theta < AngleMin) ? theta : (theta > AngleMin + AngleWidth) ? theta : floor(theta / Angle) * Angle;
// Convert Polar back to Cartesian coords
normCoord.x = r * cos(theta);
normCoord.y = r * sin(theta);
// Shift origin back to bottom-left (taking offset into account)
texCoord.x = normCoord.x / 2.0 + (Center.x / 2.0);
texCoord.y = normCoord.y / 2.0 + (Center.y / 2.0);
// Output
gl_FragColor = texture2D(Texture, texCoord);
}
Source: polarpixellate glsl.
Shadertoy example
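The core of this shader is the two ternary chains: a value is left untouched outside the configured band and snapped to a grid inside it. Here is a CPU-side sketch of the radius case (function and parameter names are mine, for illustration only):

```javascript
// r is unchanged outside [radiusMin, radiusMin + radiusWidth];
// inside the band it is snapped up to the next multiple of the cell size,
// mirroring ceil(r / Radius) * Radius in the shader.
function snapRadius(r, cellSize, radiusMin, radiusWidth) {
  if (r < radiusMin || r > radiusMin + radiusWidth) return r;
  return Math.ceil(r / cellSize) * cellSize;
}
```

The angle is treated the same way, except with floor() instead of ceil(). Quantizing both polar coordinates like this is what produces the "polar pixellate" cells.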

Related

Function for Creating Spiral Stripes with Orientation 45 Degrees to Radial?

I am trying to reconstruct the spiral pattern in the depicted image for a neuroscience experiment. Basically, the pattern has the properties that:
1) Every part of the spiral has local orientation 45 degrees to radial
2) The thickness of each arm of the spiral increases in direct proportion with the radius.
Ideally I would like to be able to parametrically vary the number of arms of the spiral as needed. You can ignore the blank circle in the middle and the circular boundaries, those are very easy to add.
Does anybody know if there is a function in terms of the number of spiral arms and local orientation that would be able to reconstruct this spiral pattern? For what it's worth I'm coding in Matlab, although if someone has the mathematical formula I can implement it myself no problem.
Your spiral image does not satisfy your property 1, as can be seen by overlaying the spiral with a flipped copy (the angles at the outer edge are more perpendicular to the radial direction than 45deg, and more parallel at the inner edge):
As I commented, a logarithmic spiral can satisfy both properties. I implemented it in GLSL using Fragmentarium, here is the code:
#include "Progressive2D.frag"
#group Spiral
uniform int Stripes; slider[1,20,100]
const float pi = 3.141592653589793;
vec2 cLog(vec2 z)
{
return vec2(log(length(z)), atan(z.y, z.x));
}
vec3 color(vec2 p)
{
float t = radians(45.0);
float c = cos(t);
float s = sin(t);
mat2 m = mat2(c, -s, s, c);
vec2 q = m * cLog(p);
return vec3(float
( mod(float(Stripes) * q.y / (sqrt(2.0) * pi), 1.0) < 0.5
|| length(p) < 0.125
|| length(p) > 0.875
));
}
And the output:
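As a numeric sanity check (my own sketch, independent of the Fragmentarium shader above), the logarithmic spiral r = exp(θ) really does cross every radial line at a constant 45°:

```javascript
// Angle, in degrees, between the spiral's tangent and the radial direction
// at parameter theta, for r(theta) = exp(theta).
function spiralRadialAngle(theta) {
  const r = Math.exp(theta);
  const p = [r * Math.cos(theta), r * Math.sin(theta)];
  // Tangent: d/dtheta of (r cos(theta), r sin(theta)), using dr/dtheta = r.
  const tangent = [
    r * (Math.cos(theta) - Math.sin(theta)),
    r * (Math.sin(theta) + Math.cos(theta)),
  ];
  const dot = p[0] * tangent[0] + p[1] * tangent[1];
  const cosA = dot / (Math.hypot(p[0], p[1]) * Math.hypot(tangent[0], tangent[1]));
  return Math.acos(cosA) * 180 / Math.PI;
}
```

Evaluating this at any theta gives 45, which is why rotating cLog(p) by 45° in the shader produces stripes with exactly the requested local orientation.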

In Starling, how do you transform Filters to match the target Sprite's rotation & position?

Let's say your Starling display-list is as follows:
Stage
|___MainApp
|______Canvas (filter's target)
Then, you decide your MainApp should be rotated 90 degrees and offset a bit:
mainApp.rotation = Math.PI * 0.5;
mainApp.x = stage.stageWidth;
But all of a sudden, the filter keeps on applying itself to the target (canvas) in the angle it was originally (as if the MainApp was still at 0 degrees).
(notice in the GIF how the Blur's strong horizontal value continues to only apply horizontally although the parent object turned 90 degrees).
What would need to be changed to apply the filter to the target object before it gets its parent's transform? That way (I'm assuming) the filter's result would be transformed by the parent objects.
Any guess as to how this could be done?
https://github.com/bigp/StarlingShaderIssue
(PS: the filter I'm actually using is custom-made, but this BlurFilter example shows the same issue I'm having with the custom one. If there's any patching-up to do in the shader code, at least it wouldn't necessarily have to be done on the built-in BlurFilter specifically).
I solved this myself with numerous trial and error attempts over the course of several hours.
Since I only needed the shader to run in either at 0 or 90 degrees (not actually tweened like the gif demo shown in the question), I created a shader with two specialized sets of AGAL instructions.
Without going into too much detail, the rotated version basically requires a few extra instructions to flip the x and y fields in the vertex and fragment shader (either by moving them with mov or by calculating the mul or div result directly into the x or y field).
For example, compare the 0 deg vertex shader...
_vertexShader = [
"m44 op, va0, vc0", // 4x4 matrix transform to output space
"mov posOriginal, va1", // pass texture positions to fragment program
"mul posScaled, va1, viewportScale", // pass displacement positions (scaled)
].join("\n");
... with the 90 deg vertex shader:
_vertexShader = [
"m44 op, va0, vc0", // 4x4 matrix transform to output space
"mov posOriginal, va1", // pass texture positions to fragment program
//Calculate the rotated vertex "displacement" UVs
"mov temp1, va1",
"mov temp2, va1",
"mul temp2.y, temp1.x, viewportScale.y", //Flip X to Y, and scale with viewport Y
"mul temp2.x, temp1.y, viewportScale.x", //Flip Y to X, and scale with viewport X
"sub temp2.y, 1.0, temp2.y", //Invert the UV for the Y axis.
"mov posScaled, temp2",
].join("\n");
You can ignore the special aliases in the AGAL example; they're essentially posOriginal = v0 and posScaled = v1 varyings, and viewportScale = vc4 constants. I do a string-replace to change them back to their respective registers & fields.
Just a human-readable trick I use to avoid going insane. \☻/
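The string-replace step might look something like this in JavaScript (the v0/v1/vc4 targets are the ones named above; the temp-to-vt mapping is my assumption):

```javascript
// Swap the human-readable aliases back to real AGAL registers
// before handing the source to the assembler.
function resolveAliases(agal) {
  const aliases = {
    posOriginal: "v0",      // varying 0
    posScaled: "v1",        // varying 1
    viewportScale: "vc4",   // vertex constant 4
    temp1: "vt1",           // assumed temporary registers
    temp2: "vt2",
  };
  return agal.replace(/\b(posOriginal|posScaled|viewportScale|temp1|temp2)\b/g,
                      (name) => aliases[name]);
}
```

The word-boundary regex leaves field accessors like ".y" intact, so "temp2.y" becomes "vt2.y".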
The part that I struggled with the most was calculating the correct scale to adjust the UVs (with proper detection of Stage / Viewport resizes and render-texture size shifts).
Eventually, this is what I came up with in the AS3 code:
var pt:Texture = _passTexture,
dt:RenderTexture = _displacement.texture,
notReady:Boolean = pt == null,
star:Starling = Starling.current;
var finalScaleX:Number, viewRatioX:Number = star.viewPort.width / star.stage.stageWidth;
var finalScaleY:Number, viewRatioY:Number = star.viewPort.height / star.stage.stageHeight;
if (notReady) {
finalScaleX = finalScaleY = 1.0;
} else if (isRotated) {
//NOTE: Notice how the native width is divided by the height, instead of the same side. Weird, but it works!
finalScaleY = pt.nativeWidth / dt.nativeHeight / _imageRatio / paramScaleX / viewRatioX; //Eureka!
finalScaleX = pt.nativeHeight / dt.nativeWidth / _imageRatio / paramScaleY / viewRatioY; //Eureka x2!
} else {
finalScaleX = pt.nativeWidth / dt.nativeWidth / _imageRatio / viewRatioX / paramScaleX;
finalScaleY = pt.nativeHeight / dt.nativeHeight / _imageRatio / viewRatioY / paramScaleY;
}
Hopefully these extracted pieces of code can be helpful to others with similar shader issues.
Good luck!

Libgdx - Transparent color over texture

I am attempting to tint a texture a color but I want the texture to show under the tint. For example, I have a picture of a person but I want to tint them a light green and not change the transparency of the actual person itself.
So far I have attempted to use the SpriteBatch method setColor which takes rgba values. When I set the alpha value to .5 it will render the tinting and the texture with that alpha value. Is there any way to separate the alpha values of the tint and the texture?
I know I could draw another texture on top of it but I don't want to have two draw passes for the one texture because it will be inefficient. If there's anyway to do it in raw OpenGL that'd be great too.
You could draw it without the alpha, right? The lighter the overlay color is, the less it shows (by default it's Color.WHITE). So if you want to tint it slightly green you could use new Color(.9f, 1f, .9f, 1f), halfway would be new Color(.5f, 1f, .5f, 1f), and full green new Color(.0f, 1f, .0f, 1f).
The behavior you described (alpha affects the whole sprite's transparency) is defined by the shader.
The simple way to deal with this is in @MennoGouw's answer, but that approach always darkens the image. If you want to avoid darkening, you must use a custom shader. You can use a shader that acts somewhat like the Overlay blend mode in Photoshop.
Here's an overlay fragment shader you could combine with the vertex shader from SpriteBatch's default shader (look at its source code). Here you can set the tint with the setColor method. To control the tint, you need to blend toward white. This method allows alpha to be preserved for fading sprites in and out if you need to.
tmpColor.set(tintColor).lerp(Color.WHITE, 1f - tintAmount);
tmpColor.a = transparencyAmount;
batch.setColor(tmpColor);
-
#ifdef GL_ES
#define LOWP lowp
precision mediump float;
#else
#define LOWP
#endif
varying vec2 v_texCoords;
varying LOWP vec4 v_color;
uniform sampler2D u_texture;
const vec3 one = vec3(1.0);
void main()
{
vec4 baseColor = texture2D(u_texture, v_texCoords);
vec3 multiplyColor = 2.0 * baseColor.rgb * v_color.rgb;
vec3 screenColor = one - 2.0 * (one - baseColor.rgb)*(one - v_color.rgb);
gl_FragColor = vec4(mix(multiplyColor, screenColor, step(0.5, baseColor.rgb)), v_color.a * baseColor.a);
}
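To see what that fragment shader computes per channel, here is an equivalent CPU-side sketch (a plain JavaScript stand-in, not libgdx code): below 0.5 the base is multiplied by twice the tint, at or above 0.5 it is screened, so a 0.5 gray tint leaves the texture unchanged.

```javascript
// Per-channel overlay blend, mirroring the mix/step in the shader above.
function overlayChannel(base, tint) {
  const multiply = 2.0 * base * tint;
  const screen = 1.0 - 2.0 * (1.0 - base) * (1.0 - tint);
  return base < 0.5 ? multiply : screen;  // step(0.5, base) picks the branch
}
```

This is why lerping the tint toward white fades the effect out: a white tint only brightens, it never darkens the sprite.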
I found the simple solution to be
float light = .5f; //between 0 and 1
batch.setColor(light, light, light, 1);
batch.draw(...);
batch.setColor(Color.WHITE);

I get glitches and crashes trying to use WebGL for drawing sprites

I am converting my sprite drawing function from canvas 2D to WebGL.
As I am new to WebGL (and OpenGL too), I learned from this tutorial http://games.greggman.com/game/webgl-image-processing/ and I copied many lines from it, plus some other ones I found.
At last I got it working, but there are some issues. For some reason, some images are never drawn though other ones are, then I get big random black squares on the screen, and finally it makes Firefox crash...
I am tearing my hair out trying to solve these problems, but I am just lost... I have to ask for some help.
Please someone have a look at my code and tell me if you see where I made errors.
The vertex shader and fragment shader :
<script id="2d-vertex-shader" type="x-shader/x-vertex">
attribute vec2 a_position;
attribute vec2 a_texCoord;
uniform vec2 u_resolution;
uniform vec2 u_translation;
uniform vec2 u_rotation;
varying vec2 v_texCoord;
void main()
{
// Rotate the position
vec2 rotatedPosition = vec2(
a_position.x * u_rotation.y + a_position.y * u_rotation.x,
a_position.y * u_rotation.y - a_position.x * u_rotation.x);
// Add in the translation.
vec2 position = rotatedPosition + u_translation;
// convert the rectangle from pixels to 0.0 to 1.0
vec2 zeroToOne = a_position / u_resolution;
// convert from 0->1 to 0->2
vec2 zeroToTwo = zeroToOne * 2.0;
// convert from 0->2 to -1->+1 (clipspace)
vec2 clipSpace = zeroToTwo - 1.0;
gl_Position = vec4(clipSpace * vec2(1, -1), 0, 1);
// pass the texCoord to the fragment shader
// The GPU will interpolate this value between points
v_texCoord = a_texCoord;
}
</script>
<script id="2d-fragment-shader" type="x-shader/x-fragment">
precision mediump float;
// our texture
uniform sampler2D u_image;
// the texCoords passed in from the vertex shader.
varying vec2 v_texCoord;
void main()
{
// Look up a color from the texture.
gl_FragColor = texture2D(u_image, v_texCoord);
}
</script>
I use several layered canvases to avoid wasting resources redrawing the big background and foreground at every frame when they never change. My canvases are in liste_canvas[] and contexts are in liste_ctx[], where c is the id ("background"/"game"/"foreground"/"infos"). Here is their creation code:
// Get A WebGL context
liste_canvas[c] = document.createElement("canvas") ;
document.getElementById('game_div').appendChild(liste_canvas[c]);
liste_ctx[c] = liste_canvas[c].getContext('webgl',{premultipliedAlpha:false}) || liste_canvas[c].getContext('experimental-webgl',{premultipliedAlpha:false});
liste_ctx[c].viewport(0, 0, game.res_w, game.res_h);
// setup a GLSL program
liste_ctx[c].vertexShader = createShaderFromScriptElement(liste_ctx[c], "2d-vertex-shader");
liste_ctx[c].fragmentShader = createShaderFromScriptElement(liste_ctx[c], "2d-fragment-shader");
liste_ctx[c].program = createProgram(liste_ctx[c], [liste_ctx[c].vertexShader, liste_ctx[c].fragmentShader]);
liste_ctx[c].useProgram(liste_ctx[c].program);
And here is my sprite drawing function.
My images are stored in a list too, sprites[], with a string name as id.
They store their origin, which is not necessarily their real center, as .orgn_x and .orgn_y.
function draw_sprite( id_canvas , d_sprite , d_x , d_y , d_rotation , d_scale , d_opacity )
{
if( id_canvas=="" ){ id_canvas = "game" ; }
if( !d_scale ){ d_scale = 1 ; }
if( !d_rotation ){ d_rotation = 0 ; }
if( render_mode == "webgl" )
{
c = id_canvas ;
// look up where the vertex data needs to go.
var positionLocation = liste_ctx[c].getAttribLocation(liste_ctx[c].program, "a_position");
var texCoordLocation = liste_ctx[c].getAttribLocation(liste_ctx[c].program, "a_texCoord");
// provide texture coordinates for the rectangle.
var texCoordBuffer = liste_ctx[c].createBuffer();
liste_ctx[c].bindBuffer(liste_ctx[c].ARRAY_BUFFER, texCoordBuffer);
liste_ctx[c].bufferData(liste_ctx[c].ARRAY_BUFFER, new Float32Array([
0.0, 0.0,
1.0, 0.0,
0.0, 1.0,
0.0, 1.0,
1.0, 0.0,
1.0, 1.0]), liste_ctx[c].STATIC_DRAW);
liste_ctx[c].enableVertexAttribArray(texCoordLocation);
liste_ctx[c].vertexAttribPointer(texCoordLocation, 2, liste_ctx[c].FLOAT, false, 0, 0);
// Create a texture.
var texture = liste_ctx[c].createTexture();
liste_ctx[c].bindTexture(liste_ctx[c].TEXTURE_2D, texture);
// Set the parameters so we can render any size image.
liste_ctx[c].texParameteri(liste_ctx[c].TEXTURE_2D, liste_ctx[c].TEXTURE_WRAP_S, liste_ctx[c].CLAMP_TO_EDGE);
liste_ctx[c].texParameteri(liste_ctx[c].TEXTURE_2D, liste_ctx[c].TEXTURE_WRAP_T, liste_ctx[c].CLAMP_TO_EDGE);
liste_ctx[c].texParameteri(liste_ctx[c].TEXTURE_2D, liste_ctx[c].TEXTURE_MIN_FILTER, liste_ctx[c].LINEAR);
liste_ctx[c].texParameteri(liste_ctx[c].TEXTURE_2D, liste_ctx[c].TEXTURE_MAG_FILTER, liste_ctx[c].LINEAR);
// Upload the image into the texture.
liste_ctx[c].texImage2D(liste_ctx[c].TEXTURE_2D, 0, liste_ctx[c].RGBA, liste_ctx[c].RGBA, liste_ctx[c].UNSIGNED_BYTE, sprites[d_sprite] );
// set the resolution
var resolutionLocation = liste_ctx[c].getUniformLocation(liste_ctx[c].program, "u_resolution");
liste_ctx[c].uniform2f(resolutionLocation, liste_canvas[c].width, liste_canvas[c].height);
// Create a buffer and put a single clipspace rectangle in it (2 triangles)
var buffer = liste_ctx[c].createBuffer();
liste_ctx[c].bindBuffer(liste_ctx[c].ARRAY_BUFFER, buffer);
liste_ctx[c].enableVertexAttribArray(positionLocation);
liste_ctx[c].vertexAttribPointer(positionLocation, 2, liste_ctx[c].FLOAT, false, 0, 0);
// then I calculate the coordinates of the four points of the rectangle
// taking their origin and scale into account
// I cut this part as it is large and has no importance here
// and at last, we draw
liste_ctx[c].bufferData(liste_ctx[c].ARRAY_BUFFER, new Float32Array([
topleft_x , topleft_y ,
topright_x , topright_y ,
bottomleft_x , bottomleft_y ,
bottomleft_x , bottomleft_y ,
topright_x , topright_y ,
bottomright_x , bottomright_y ]), liste_ctx[c].STATIC_DRAW);
// draw
liste_ctx[c].drawArrays(liste_ctx[c].TRIANGLES, 0, 6);
}
}
By the way, I did not find any way to port ctx.globalAlpha to WebGL. If someone knows how I could add it to my code, I would be thankful for that too.
Please help. Thanks.
I don't know why things are crashing but just a few random comments.
Only create buffers and textures once.
Currently the code is creating buffers and textures every time you call draw_sprite. Instead you should be creating them at initialization time just once and then using the created buffers and textures later. Similarly you should look up the attribute and uniform locations at initialization time and then use them when you draw.
It's possible firefox is crashing because it's running out of memory since you're creating new buffers and new textures every time you call draw_sprite
I believe it's more common to make a single buffer with a unit square in it and then use matrix math to move that square where you want it. See http://games.greggman.com/game/webgl-2d-matrices/ for some help with matrix math.
If you go that route then you only need to call all the buffer related stuff once.
Even if you don't use matrix math you can still add translation and scale to your shader, then just make one buffer with a unit rectangle (as in
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
0, 0,
1, 0,
0, 1,
0, 1,
1, 0,
1, 1]), gl.STATIC_DRAW)
After that, just translate it to where you want it and scale it to the size you want it drawn.
In fact, if you go the matrix route it would be really easy to simulate the 2d context's matrix functions ctx.translate, ctx.rotate, ctx.scale etc...
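For reference, here is a minimal sketch of that unit-quad-plus-matrix idea (3x3 row-major matrices; the helper names are mine, not from any WebGL API):

```javascript
// Multiply two 3x3 row-major matrices: r = a * b.
function mul3(a, b) {
  const r = new Array(9).fill(0);
  for (let i = 0; i < 3; i++)
    for (let j = 0; j < 3; j++)
      for (let k = 0; k < 3; k++)
        r[i * 3 + j] += a[i * 3 + k] * b[k * 3 + j];
  return r;
}
const translation = (tx, ty) => [1, 0, tx, 0, 1, ty, 0, 0, 1];
const scaling = (sx, sy) => [sx, 0, 0, 0, sy, 0, 0, 0, 1];
// Transform a 2D point by a 3x3 matrix (w assumed 1).
function apply(m, x, y) {
  return [m[0] * x + m[1] * y + m[2], m[3] * x + m[4] * y + m[5]];
}
// Place a unit quad at (100, 50), 32 pixels on a side:
const m = mul3(translation(100, 50), scaling(32, 32));
```

Uploading a matrix like m as a uniform lets one static unit-square buffer serve every sprite, so no per-draw bufferData calls are needed.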
The code might be easier to follow, and type, if you pulled the context into a local variable.
Instead of stuff like
liste_ctx[c].bindBuffer(liste_ctx[c].ARRAY_BUFFER, buffer);
liste_ctx[c].enableVertexAttribArray(positionLocation);
liste_ctx[c].vertexAttribPointer(positionLocation, 2, liste_ctx[c].FLOAT, false, 0, 0);
You could do this
var gl = liste_ctx[c];
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(positionLocation, 2, gl.FLOAT, false, 0, 0);
Storing things on the context is going to get tricky
This code
liste_ctx[c].vertexShader = createShaderFromScriptElement(liste_ctx[c], "2d-vertex-shader");
liste_ctx[c].fragmentShader = createShaderFromScriptElement(liste_ctx[c], "2d-fragment-shader");
liste_ctx[c].program = createProgram(liste_ctx[c], [liste_ctx[c].vertexShader, liste_ctx[c].fragmentShader]);
Makes it look like you're going to only have a single vertex shader, a single fragment shader, and a single program. Maybe you are, but it's pretty common in WebGL to have several shaders and programs.
For globalAlpha first you need to turn on blending.
gl.enable(gl.BLEND);
And you need to tell it how to blend. To be the same as the canvas 2d context you
need to use pre-multiplied alpha math so
gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA);
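Numerically, that blend state computes out = src + dst * (1 - srcAlpha) per channel, with the source color already premultiplied by its alpha. A quick JavaScript model of it:

```javascript
// Model of gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA): src and dst are
// [r, g, b, a] arrays, and src is premultiplied by its alpha.
function blendPremultiplied(src, dst) {
  return src.map((s, i) => s + dst[i] * (1.0 - src[3]));
}
```

A fully opaque source replaces the destination outright, while a half-alpha source contributes half of its (premultiplied) color and keeps half of what is underneath.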
Then you need to multiply the color the shader draws by an alpha value. For example
<script id="2d-fragment-shader" type="x-shader/x-fragment">
precision mediump float;
// our texture
uniform sampler2D u_image;
// global alpha
uniform float u_globalAlpha;
// the texCoords passed in from the vertex shader.
varying vec2 v_texCoord;
void main()
{
// Look up a color from the texture.
vec4 color = texture2D(u_image, v_texCoord);
// Multiply the color by u_globalAlpha
gl_FragColor = color * u_globalAlpha;
}
</script>
Then you'll need to set u_globalAlpha. At init time, look up its location
var globalAlphaLocation = gl.getUniformLocation(program, "u_globalAlpha");
And at draw time set it
gl.uniform1f(globalAlphaLocation, someValueFrom0to1);
Personally I usually use a vec4 and call it u_colorMult
<script id="2d-fragment-shader" type="x-shader/x-fragment">
precision mediump float;
// our texture
uniform sampler2D u_image;
// colorMult
uniform vec4 u_colorMult;
// the texCoords passed in from the vertex shader.
varying vec2 v_texCoord;
void main()
{
// Look up a color from the texture.
gl_FragColor = texture2D(u_image, v_texCoord) * u_colorMult;
}
</script>
Then I can tint my sprites; for example, to make a sprite draw in red I just use
gl.uniform4fv(colorMultLocation, [1, 0, 0, 1]);
It also means I can easily draw in solid colors. Create a 1x1 pixel solid white texture. Anytime I want to draw in a solid color I just bind that texture and set u_colorMult to the color I want to draw in.

WebGL on Chrome for Windows : warning X3206 when cast float to int

I think I found a strange bug in the Windows version of Chrome's WebGL implementation. Linking a shader with a float-to-int cast causes a "warning X3206: implicit truncation of vector type" error. I have tried many ways to avoid it, with no luck.
for example :
int i;
vec3 u = vec3(1.5, 2.5, 3.5);
float z = u.z;
i = int(u.z); // warning X3206: implicit truncation of vector type
i = int(z); // warning X3206: implicit truncation of vector type
The strange thing is that this vertex program works perfectly in the Linux version on the same computer (same graphics card). Is it a driver issue? (I have tested on two Windows versions with two different graphics cards, with the same result.) Another strange thing (to me): X3206 is ordinarily a DirectX error (?!), so what is the relation with WebGL?
Here is the complete shader I use that causes the warning:
#define MATRIX_ARRAY_SIZE 48
/* vertex attributes */
attribute vec4 p;
attribute vec3 n;
attribute vec3 u;
attribute vec3 t;
attribute vec3 b;
attribute vec4 c;
attribute vec4 i;
attribute vec4 w;
/* enable vertex weight */
uniform bool ENw;
/* enable comput tangent */
uniform bool ENt;
/* eye view matrix */
uniform mat4 MEV;
/* transform matrices */
uniform mat4 MXF[MATRIX_ARRAY_SIZE];
/* transform normal matrices */
uniform mat3 MNR[MATRIX_ARRAY_SIZE];
/* varying fragment shader */
varying vec4 Vp;
varying vec3 Vn;
varying vec2 Vu;
varying vec3 Vt;
varying vec3 Vb;
varying vec4 Vc;
void main(void) {
/* Position et Normal transform */
if(ENw) { /* enable vertex weight */
Vp = vec4(0.0, 0.0, 0.0, 0.0);
Vn = vec3(0.0, 0.0, 0.0);
Vp += (MXF[int(i.x)] * p) * w.x;
Vn += (MNR[int(i.x)] * n) * w.x;
Vp += (MXF[int(i.y)] * p) * w.y;
Vn += (MNR[int(i.y)] * n) * w.y;
Vp += (MXF[int(i.z)] * p) * w.z;
Vn += (MNR[int(i.z)] * n) * w.z;
Vp += (MXF[int(i.w)] * p) * w.w;
Vn += (MNR[int(i.w)] * n) * w.w;
} else {
Vp = MXF[0] * p;
Vn = MNR[0] * n;
}
/* Tangent et Binormal transform */
if(ENt) { /* enable comput tangent */
vec3 Cz = cross(Vn, vec3(0.0, 0.0, 1.0));
vec3 Cy = cross(Vn, vec3(0.0, 1.0, 0.0));
if(length(Cz) > length(Cy)) {
Vt = Cz;
} else {
Vt = Cy;
}
Vb = cross(Vn, Vt);
} else {
Vt = t;
Vb = b;
}
/* Texcoord et color */
Vu = u.xy;
Vc = c;
gl_PointSize = u.z;
gl_Position = MEV * Vp;
}
If someone found an elegant workaround...
The problem is you're running out of uniforms.
48 mat3s + 49 mat4s + 2 bools = 1218 values, which divided by 4 means at least 305 uniform vectors needed
On my GPU gl.getParameter(gl.MAX_VERTEX_UNIFORM_VECTORS) only returns 254.
Note that 306 uniform vectors is for a perfectly optimizing GLSL compiler. For an un-optimized compiler it might internally use 3 vec4s for mat3 and a full vec4 for each bool making it need more uniform vectors.
That seems to be the case: if I lower MATRIX_ARRAY_SIZE to 35 it works on my machine, and 36 fails.
35 mat3s each using 3 vectors + 36 mat4s each using 4 vectors + 2 bools each using 1 vector = 251 vectors required, which fits in 254. One more, 36, requires 258 vectors, which is more than my GPU driver supports, which is why it fails.
Note 128 is the minimum number of vertex uniform vectors required to be supported which means if you want it to work everywhere you'd need to set MATRIX_ARRAY_SIZE to 17. On the other hand I don't know what uniforms you're using in your fragment shader. Alternatively you could query the number of uniform vectors supported and modify your shader source at runtime.
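A small helper makes the budget easy to check for any array size (this assumes the un-optimized packing described above: 3 vectors per mat3, 4 per mat4, 1 per bool, with N mat3s, N + 1 mat4s and 2 bools in this shader):

```javascript
// Uniform vectors this vertex shader needs for MATRIX_ARRAY_SIZE = n,
// assuming no packing optimization by the GLSL compiler.
function uniformVectorsNeeded(n) {
  return n * 3        // MNR: n mat3s, 3 vec4 rows each
       + (n + 1) * 4  // MXF (n mat4s) plus MEV, 4 vec4 rows each
       + 2;           // ENw and ENt, one vector each
}
```

Comparing this against gl.getParameter(gl.MAX_VERTEX_UNIFORM_VECTORS) at runtime would tell you the largest array size the current driver can accept.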
Here's a sample that works for me
http://jsfiddle.net/greggman/474Et/2/
Change the 35 at the top back to 48 and it will generate the same error message.
It sucks that the error message is cryptic.
Chrome's and Firefox's WebGL in Windows is implemented with ANGLE, which in turn uses DirectX as the underlying API. Then, it doesn't come as a surprise that certain DirectX restrictions/warnings/errors rise when using WebGL there.
And you indeed are truncating a float type; apply floor() or ceil() before the int() cast to make the rounding explicit, obtain more predictable results, and silence the warning.
This should be fixed in ANGLE revision 1557. It will take a while for this fix to become available in mainstream Chrome.