'Constructor' has too many arguments [vertex shader] - html

I am working with WebGL and am writing up the vertex shader in my .html file that goes along with my .js file for my program. This mainly deals with lighting.
The error I receive is: Vertex shader failed to compile. The error log is:
ERROR: 0:29: 'constructor' : too many arguments
ERROR: 0:32: 'dot' : no matching overloaded function found
Lines 29 and 32 correspond to the marked lines in the code below (see comments).
Here is my vertex shader code
<script id="vertex-shader" type="x-shader/x-vertex">
attribute vec4 a_Position;
attribute vec4 a_Color;
attribute vec3 a_Normal;
attribute vec2 a_Texture;
uniform mat4 u_MvpMatrix;
uniform mat3 u_NormalMatrix;
uniform vec3 uAmbientColor; // all 3 of these passed in .js
uniform vec3 uLightPosition; //
uniform vec3 uLightColor; //
varying vec4 v_Color;
varying vec2 v_Texture;
varying vec3 vLightWeighting;
void
main()
{
vec4 eyeLightPos = vec4(0.0, 200.0, 0.0, 1.0); // Line 29***
vec4 eyePosition = u_MvpMatrix * vec4(a_Position, 1.0); // vertex position in the eye space
vec3 normal = normalize(u_NormalMatrix * a_Normal);
float nDotL = max(dot(normal, eyeLightPos), 0.0); // Line 32***
v_Color = vec4(a_Color.rgb * nDotL, a_Color.a);
v_Texture = a_Texture;
////*************
vec3 eyeLightVector = normalize(eyeLightPos.xyz - eyePosition.xyz);
vec3 eyeViewVector = -normalize(eyePosition.xyz); // eye position is at (0, 0, 0) in the eye space
vec3 eyeReflectVector = -reflect(eyeLightVector, normal);
float shininess = 16.0;
float specular = pow(max(dot(eyeViewVector, eyeReflectVector), 0.0), shininess);
vLightWeighting = uAmbientColor + uLightColor * nDotL + uLightColor * specular;
}
</script>
Why is this happening? Let me know if you'd like to see anything else.

You most probably marked the wrong line for 29. The error happens two lines below:
vec4 eyePosition = u_MvpMatrix * vec4(a_Position, 1.0);
The problem is that a_Position is already a vec4, so vec4(a_Position, 1.0) tries to call a constructor of the form vec4(vec4, float), which does not exist. Maybe you wanted to pass only the first three components of a_Position, in which case the code would be:
vec4 eyePosition = u_MvpMatrix * vec4(a_Position.xyz, 1.0);
The second error is a type mismatch: in the dot call, normal is a vec3 but eyeLightPos is a vec4. The dot function is only defined for two arguments of the same type, so use the light position's xyz components instead:
float nDotL = max(dot(normal, eyeLightPos.xyz), 0.0);

vec4 eyePosition = u_MvpMatrix * vec4(a_Position);
a_Position already has 4 components, so vec4(a_Position, 1.0) hands the constructor 5 values.
Either drop the extra argument, vec4(a_Position);, or take only the first three components, vec4(a_Position.xyz, 1.0);.

Related

Passing too many arguments works sometimes, why?

I was using this site for testing: http://glslsandbox.com/
This shows the color red:
#ifdef GL_ES
precision mediump float;
#endif
void main( void ) {
vec4 c = vec4(1.0, 0.0, 0.0, 1.0);
gl_FragColor = c;
}
I can change the color line in different ways, sometimes it compiles and sometimes not:
vec4 c = vec4(1.0, vec2(0.0), vec4(1.0)); // works
vec4 c = vec4(vec2(1.0), vec2(0.0), 0.0); // doesn't compile
vec4 c = vec4(1.0, vec2(0.0), vec2(1.0)); // works
vec4 c = vec4(1.0, vec4(0.0), 0.0); // doesn't compile
vec4 c = vec4(vec4(1.0), vec4(0.0)); // doesn't compile
Why does passing too many arguments work sometimes and sometimes not?
See OpenGL Shading Language 4.60 Specification (HTML) - 5.4.2. Vector and Matrix Constructors:
[...] The arguments will be consumed left to right, and each argument will have all its components consumed, in order, before any components from the next argument are consumed. [...]
In these cases, there must be enough components provided in the arguments to provide an initializer for every component in the constructed value. It is a compile-time error to provide extra arguments beyond this last used argument.
Hence, the following is allowed:
vec4 c = vec4(1.0, 2.0, 3.0, vec4(4.0));
vec3 c = vec3(vec4(4.0));
However, the following is not allowed, because the first three arguments already fill the vec3, so the final argument vec4(4.0) is not consumed at all ("It is a compile-time error to provide extra arguments beyond this last used argument."):
vec3 c = vec3(1.0, 2.0, 3.0, vec4(4.0));
The reason a partially consumed last argument is allowed is that it should be possible to construct a smaller vector (or matrix) from a larger one. For instance:
vec4 v4;
vec3 v3 = vec3(v4);
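The consumption rule can be checked mechanically. Here is a small plain-JavaScript sketch (not part of any GLSL toolchain; the function name and the idea of passing component counts are just illustration) that mirrors the rule for the multi-argument case; the single-scalar splat like vec4(1.0) is a special case and is not modeled:

```javascript
// Mirrors the GLSL rule for multi-argument vector constructors:
// arguments are consumed left to right; the last argument may be only
// partially consumed, but an argument that contributes no components at
// all is a compile-time error.
// `sizes` lists the component count of each constructor argument.
function constructorIsValid(targetSize, sizes) {
  let consumed = 0;
  for (const size of sizes) {
    if (consumed >= targetSize) return false; // argument entirely unused
    consumed += size;
  }
  return consumed >= targetSize; // enough components for every slot
}
```

For example, vec4(1.0, vec2(0.0), vec4(1.0)) corresponds to constructorIsValid(4, [1, 2, 4]) and is valid, while vec4(vec4(1.0), vec4(0.0)) corresponds to constructorIsValid(4, [4, 4]) and is not.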

GLSL ES - Mapping texture from rectangular to polar coordinates with repeating

I need to warp a rectangular texture into polar coordinates. To shed some light on my problem, let me illustrate it:
I have the image:
and I have to deform it using a shader into something like this:
then I'm going to map it to a plane.
How can I do this? Any help will be appreciated!
That is not particularly hard. You just need to convert your texture coordinates to polar coordinates, and use the radius for the texture's s direction and the azimuth angle for the t direction.
I'm assuming you want to texture a quad that way, and that you use standard texcoords, so the lower-left vertex has (0,0) and the upper-right one (1,1) as texture coordinates.
So in the fragment shader, you just need to convert the interpolated texcoords (called tc here) to polar coordinates. Since the center will be at (0.5, 0.5), we have to offset that first.
vec2 x=tc - vec2(0.5,0.5);
float radius=length(x);
float angle=atan(x.y, x.x);
Now all you need to do is map those ranges back to [0,1] texture space. The maximum radius here is 0.5, so you can simply use 2*radius as the s coordinate; angle is in [-pi,pi], so map that to [0,1] for the t coordinate.
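Translated to plain JavaScript (a sketch for checking the math; toPolarTexcoord is just an illustrative name, with the texcoord passed as two numbers), the mapping looks like:

```javascript
// Convert standard [0,1] texcoords to polar texcoords, assuming the
// circle is centred at (0.5, 0.5). Returns [s, t] with s = scaled radius
// and t = angle remapped from [-PI, PI] to [0, 1].
function toPolarTexcoord(tcX, tcY) {
  const x = tcX - 0.5;
  const y = tcY - 0.5;
  const radius = Math.hypot(x, y);
  const angle = Math.atan2(y, x);        // in [-PI, PI]
  const s = 2.0 * radius;                // max radius 0.5 -> s in [0, 1]
  const t = angle * 0.5 / Math.PI + 0.5; // [-PI, PI] -> [0, 1]
  return [s, t];
}
```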
UPDATE1
There are a few details I left out so far. From your image it is clear that you do not want the inner circle to be mapped to the texture, but this can easily be incorporated. I just assume two radii here: r_inner, the radius of the inner circle, and r_outer, the radius onto which you want to map the outer part. Let me sketch out a simple fragment shader for that:
#version ...
precision ...
varying vec2 tc; // texcoords from vertex shader
uniform sampler2D tex;
#define PI 3.14159265358979323844
void main ()
{
const float r_inner = 0.25;
const float r_outer = 0.5;
vec2 x = tc - vec2(0.5);
float radius = length(x);
float angle = atan(x.y, x.x);
vec2 tc_polar; // the new polar texcoords
// map radius so that for r=r_inner -> 0 and r=r_outer -> 1
tc_polar.s = ( radius - r_inner) / (r_outer - r_inner);
// map angle from [-PI,PI] to [0,1]
tc_polar.t = angle * 0.5 / PI + 0.5;
// texture mapping
gl_FragColor = texture2D(tex, tc_polar);
}
Now there is still one detail missing. The mapping above generates texcoords outside the [0,1] range for any position where your image is black, but texture sampling will not automatically return black there.
The easiest solution is to use the GL_CLAMP_TO_BORDER mode for GL_TEXTURE_WRAP_S. The default border color is (0,0,0,0), so you might not need to specify it, or you can set GL_TEXTURE_BORDER_COLOR explicitly to (0,0,0,1) if you work with alpha blending and don't want any transparency. That way, you get the black color for free.
Other options would be using GL_CLAMP_TO_EDGE and adding a black pixel column at both the left and right edges of the texture, or adding a branch to the shader that checks for tc_polar.s below 0 or above 1, but I wouldn't recommend the branch for this use case.
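If border clamping is not available (WebGL 1, for instance, has no CLAMP_TO_BORDER), the same "black outside the ring" behaviour can be emulated with a range check before the lookup. A minimal JavaScript sketch of that logic, where sample stands in for the actual texture fetch and is purely illustrative:

```javascript
// Return black for polar s-coordinates outside [0,1], otherwise defer to
// the real texture lookup. `sample` is a placeholder fetch function.
function sampleWithBorder(sample, s, t) {
  if (s < 0.0 || s > 1.0) {
    return [0, 0, 0, 1]; // opaque black "border" colour
  }
  return sample(s, t);
}
```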
For those who want a more flexible shader that does the same:
uniform float Angle; // range 2pi / 100000.0 to 1.0 (rounded down), exponential
uniform float AngleMin; // range -3.2 to 3.2
uniform float AngleWidth; // range 0.0 to 6.4
uniform float Radius; // range -10000.0 to 1.0
uniform float RadiusMin; // range 0.0 to 2.0
uniform float RadiusWidth; // range 0.0 to 2.0
uniform vec2 Center; // range: -1.0 to 3.0
uniform sampler2D Texture;
void main()
{
// Normalised texture coords
vec2 texCoord = gl_TexCoord[0].xy;
// Shift origin to texture centre (with offset)
vec2 normCoord;
normCoord.x = 2.0 * texCoord.x - Center.x;
normCoord.y = 2.0 * texCoord.y - Center.y;
// Convert Cartesian to Polar coords
float r = length(normCoord);
float theta = atan(normCoord.y, normCoord.x);
// The actual effect
r = (r < RadiusMin) ? r : (r > RadiusMin + RadiusWidth) ? r : ceil(r / Radius) * Radius;
theta = (theta < AngleMin) ? theta : (theta > AngleMin + AngleWidth) ? theta : floor(theta / Angle) * Angle;
// Convert Polar back to Cartesian coords
normCoord.x = r * cos(theta);
normCoord.y = r * sin(theta);
// Shift origin back to bottom-left (taking offset into account)
texCoord.x = normCoord.x / 2.0 + (Center.x / 2.0);
texCoord.y = normCoord.y / 2.0 + (Center.y / 2.0);
// Output
gl_FragColor = texture2D(Texture, texCoord);
}
Source: polarpixellate glsl.
Shadertoy example

I get glitches and crashes trying to use WebGL for drawing sprites

I am converting my sprite drawing function from canvas 2d to webgl.
As I am new to WebGL (and to OpenGL too), I learned from this tutorial, http://games.greggman.com/game/webgl-image-processing/, and copied many lines from it, plus some others I found.
At last I got it working, but there are some issues. For some reason, some images are never drawn though others are, then I get big random black squares on the screen, and finally it makes Firefox crash...
I am tearing my hair out trying to solve these problems, but I am just lost... I have to ask for some help.
Please someone have a look at my code and tell me if you see where I made errors.
The vertex shader and fragment shader :
<script id="2d-vertex-shader" type="x-shader/x-vertex">
attribute vec2 a_position;
attribute vec2 a_texCoord;
uniform vec2 u_resolution;
uniform vec2 u_translation;
uniform vec2 u_rotation;
varying vec2 v_texCoord;
void main()
{
// Rotate the position
vec2 rotatedPosition = vec2(
a_position.x * u_rotation.y + a_position.y * u_rotation.x,
a_position.y * u_rotation.y - a_position.x * u_rotation.x);
// Add in the translation.
vec2 position = rotatedPosition + u_translation;
// convert the rectangle from pixels to 0.0 to 1.0
vec2 zeroToOne = a_position / u_resolution;
// convert from 0->1 to 0->2
vec2 zeroToTwo = zeroToOne * 2.0;
// convert from 0->2 to -1->+1 (clipspace)
vec2 clipSpace = zeroToTwo - 1.0;
gl_Position = vec4(clipSpace * vec2(1, -1), 0, 1);
// pass the texCoord to the fragment shader
// The GPU will interpolate this value between points
v_texCoord = a_texCoord;
}
</script>
<script id="2d-fragment-shader" type="x-shader/x-fragment">
precision mediump float;
// our texture
uniform sampler2D u_image;
// the texCoords passed in from the vertex shader.
varying vec2 v_texCoord;
void main()
{
// Look up a color from the texture.
gl_FragColor = texture2D(u_image, v_texCoord);
}
</script>
I use several layered canvases to avoid wasting resources redrawing the big background and foreground on every frame when they never change. My canvases are in liste_canvas[] and contexts in liste_ctx[], where c is the id ("background"/"game"/"foreground"/"infos"). Here is their creation code:
// Get A WebGL context
liste_canvas[c] = document.createElement("canvas") ;
document.getElementById('game_div').appendChild(liste_canvas[c]);
liste_ctx[c] = liste_canvas[c].getContext('webgl',{premultipliedAlpha:false}) || liste_canvas[c].getContext('experimental-webgl',{premultipliedAlpha:false});
liste_ctx[c].viewport(0, 0, game.res_w, game.res_h);
// setup a GLSL program
liste_ctx[c].vertexShader = createShaderFromScriptElement(liste_ctx[c], "2d-vertex-shader");
liste_ctx[c].fragmentShader = createShaderFromScriptElement(liste_ctx[c], "2d-fragment-shader");
liste_ctx[c].program = createProgram(liste_ctx[c], [liste_ctx[c].vertexShader, liste_ctx[c].fragmentShader]);
liste_ctx[c].useProgram(liste_ctx[c].program);
And here is my sprite drawing function.
My images are stored in a list too, sprites[], with a string name as id.
They store their origin, which is not necessarily their real center, as .orgn_x and .orgn_y.
function draw_sprite( id_canvas , d_sprite , d_x , d_y , d_rotation , d_scale , d_opacity )
{
if( id_canvas=="" ){ id_canvas = "game" ; }
if( !d_scale ){ d_scale = 1 ; }
if( !d_rotation ){ d_rotation = 0 ; }
if( render_mode == "webgl" )
{
c = id_canvas ;
// look up where the vertex data needs to go.
var positionLocation = liste_ctx[c].getAttribLocation(liste_ctx[c].program, "a_position");
var texCoordLocation = liste_ctx[c].getAttribLocation(liste_ctx[c].program, "a_texCoord");
// provide texture coordinates for the rectangle.
var texCoordBuffer = liste_ctx[c].createBuffer();
liste_ctx[c].bindBuffer(liste_ctx[c].ARRAY_BUFFER, texCoordBuffer);
liste_ctx[c].bufferData(liste_ctx[c].ARRAY_BUFFER, new Float32Array([
0.0, 0.0,
1.0, 0.0,
0.0, 1.0,
0.0, 1.0,
1.0, 0.0,
1.0, 1.0]), liste_ctx[c].STATIC_DRAW);
liste_ctx[c].enableVertexAttribArray(texCoordLocation);
liste_ctx[c].vertexAttribPointer(texCoordLocation, 2, liste_ctx[c].FLOAT, false, 0, 0);
// Create a texture.
var texture = liste_ctx[c].createTexture();
liste_ctx[c].bindTexture(liste_ctx[c].TEXTURE_2D, texture);
// Set the parameters so we can render any size image.
liste_ctx[c].texParameteri(liste_ctx[c].TEXTURE_2D, liste_ctx[c].TEXTURE_WRAP_S, liste_ctx[c].CLAMP_TO_EDGE);
liste_ctx[c].texParameteri(liste_ctx[c].TEXTURE_2D, liste_ctx[c].TEXTURE_WRAP_T, liste_ctx[c].CLAMP_TO_EDGE);
liste_ctx[c].texParameteri(liste_ctx[c].TEXTURE_2D, liste_ctx[c].TEXTURE_MIN_FILTER, liste_ctx[c].LINEAR);
liste_ctx[c].texParameteri(liste_ctx[c].TEXTURE_2D, liste_ctx[c].TEXTURE_MAG_FILTER, liste_ctx[c].LINEAR);
// Upload the image into the texture.
liste_ctx[c].texImage2D(liste_ctx[c].TEXTURE_2D, 0, liste_ctx[c].RGBA, liste_ctx[c].RGBA, liste_ctx[c].UNSIGNED_BYTE, sprites[d_sprite] );
// set the resolution
var resolutionLocation = liste_ctx[c].getUniformLocation(liste_ctx[c].program, "u_resolution");
liste_ctx[c].uniform2f(resolutionLocation, liste_canvas[c].width, liste_canvas[c].height);
// Create a buffer and put a single clipspace rectangle in it (2 triangles)
var buffer = liste_ctx[c].createBuffer();
liste_ctx[c].bindBuffer(liste_ctx[c].ARRAY_BUFFER, buffer);
liste_ctx[c].enableVertexAttribArray(positionLocation);
liste_ctx[c].vertexAttribPointer(positionLocation, 2, liste_ctx[c].FLOAT, false, 0, 0);
// then I calculate the coordinates of the four points of the rectangle
// taking their origin and scale into account
// I cut this part as it is large and has no importance here
// and at last, we draw
liste_ctx[c].bufferData(liste_ctx[c].ARRAY_BUFFER, new Float32Array([
topleft_x , topleft_y ,
topright_x , topright_y ,
bottomleft_x , bottomleft_y ,
bottomleft_x , bottomleft_y ,
topright_x , topright_y ,
bottomright_x , bottomright_y ]), liste_ctx[c].STATIC_DRAW);
// draw
liste_ctx[c].drawArrays(liste_ctx[c].TRIANGLES, 0, 6);
}
}
By the way, I did not find any way to port ctx.globalAlpha to WebGL. If someone knows how I could add it to my code, I would be thankful for that too.
Please help. Thanks.
I don't know why things are crashing, but here are just a few random comments.
Only create buffers and textures once.
Currently the code creates buffers and textures every time you call draw_sprite. Instead, you should create them just once at initialization time and then use them later. Similarly, you should look up the attribute and uniform locations at initialization time and then use them when you draw.
It's possible Firefox is crashing because it's running out of memory, since you're creating new buffers and new textures every time you call draw_sprite.
I believe it's more common to make a single buffer with a unit square in it and then use matrix math to move that square where you want it. See http://games.greggman.com/game/webgl-2d-matrices/ for some help with matrix math.
If you go that route then you only need to call all the buffer related stuff once.
Even if you don't use matrix math, you can still add translation and scale to your shader; then just make one buffer with a unit rectangle, as in
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
0, 0,
1, 0,
0, 1,
0, 1,
1, 0,
1, 1]), gl.STATIC_DRAW)
After that, just translate it where you want it and scale it to the size you want it drawn.
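The effect of that translate-and-scale step can be sketched in plain JavaScript (placeUnitQuad is an illustrative name, not from the tutorial); in a shader the equivalent would be something like a_position * u_scale + u_translation:

```javascript
// Place a unit quad (interleaved x,y pairs) by scaling it to the sprite's
// size in pixels and translating it to the sprite's position.
function placeUnitQuad(unitQuad, scaleX, scaleY, transX, transY) {
  const out = new Float32Array(unitQuad.length);
  for (let i = 0; i < unitQuad.length; i += 2) {
    out[i]     = unitQuad[i]     * scaleX + transX; // x
    out[i + 1] = unitQuad[i + 1] * scaleY + transY; // y
  }
  return out;
}
```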
In fact, if you go the matrix route it would be really easy to simulate the 2d context's matrix functions ctx.translate, ctx.rotate, ctx.scale etc...
The code might be easier to follow, and to type, if you pulled the context into a local variable.
Instead of stuff like
liste_ctx[c].bindBuffer(liste_ctx[c].ARRAY_BUFFER, buffer);
liste_ctx[c].enableVertexAttribArray(positionLocation);
liste_ctx[c].vertexAttribPointer(positionLocation, 2, liste_ctx[c].FLOAT, false, 0, 0);
You could do this
var gl = liste_ctx[c];
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(positionLocation, 2, gl.FLOAT, false, 0, 0);
Storing things on the context is going to get tricky
This code
liste_ctx[c].vertexShader = createShaderFromScriptElement(liste_ctx[c], "2d-vertex-shader");
liste_ctx[c].fragmentShader = createShaderFromScriptElement(liste_ctx[c], "2d-fragment-shader");
liste_ctx[c].program = createProgram(liste_ctx[c], [liste_ctx[c].vertexShader, liste_ctx[c].fragmentShader]);
makes it look like you're only going to have a single vertex shader, a single fragment shader, and a single program. Maybe you are, but it's pretty common in WebGL to have several shaders and programs.
For globalAlpha first you need to turn on blending.
gl.enable(gl.BLEND);
And you need to tell it how to blend. To match the canvas 2d context, you need to use pre-multiplied alpha math, so
gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA);
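Per channel, that blend setting computes result = src + dst * (1 - srcAlpha), assuming the shader outputs premultiplied colours. A quick JavaScript sketch of the math (the function name is illustrative):

```javascript
// One-pixel version of gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA):
// src and dst are [r, g, b, a] with premultiplied-alpha colours.
function blendPremultiplied(src, dst) {
  const oneMinusA = 1 - src[3];
  return src.map((channel, i) => channel + dst[i] * oneMinusA);
}
```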
Then you need to multiply the color the shader draws by an alpha value. For example
<script id="2d-fragment-shader" type="x-shader/x-fragment">
precision mediump float;
// our texture
uniform sampler2D u_image;
// global alpha
uniform float u_globalAlpha;
// the texCoords passed in from the vertex shader.
varying vec2 v_texCoord;
void main()
{
// Look up a color from the texture.
vec4 color = texture2D(u_image, v_texCoord);
// Multiply the color by u_globalAlpha
gl_FragColor = color * u_globalAlpha;
}
</script>
Then you'll need to set u_globalAlpha. At init time, look up its location:
var globalAlphaLocation = gl.getUniformLocation(program, "u_globalAlpha");
And at draw time set it
gl.uniform1f(globalAlphaLocation, someValueFrom0to1);
Personally I usually use a vec4 and call it u_colorMult
<script id="2d-fragment-shader" type="x-shader/x-fragment">
precision mediump float;
// our texture
uniform sampler2D u_image;
// colorMult
uniform vec4 u_colorMult;
// the texCoords passed in from the vertex shader.
varying vec2 v_texCoord;
void main()
{
// Look up a color from the texture.
gl_FragColor = texture2D(u_image, v_texCoord) * u_colorMult;
}
</script>
Then I can tint my sprites. For example, to make a sprite draw in red, just use
gl.uniform4fv(colorMultLocation, [1, 0, 0, 1]);
It also means I can easily draw in solid colors: create a 1x1 solid white texture, and any time I want to draw in a solid color I just bind that texture and set u_colorMult to the color I want to draw in.
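The effect of u_colorMult is just a component-wise multiply, which is easy to verify on the CPU side (applyColorMult is an illustrative name):

```javascript
// Component-wise tint: what `texture2D(...) * u_colorMult` does per pixel.
// A globalAlpha of a, with premultiplied alpha, is the special case
// colorMult = [a, a, a, a].
function applyColorMult(texel, colorMult) {
  return texel.map((channel, i) => channel * colorMult[i]);
}
```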

WebGL: drawArrays: attribs not setup correctly

Here's my vertex shader:
attribute vec3 aVertexPosition;
attribute vec4 aVertexColor;
attribute float type;
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
varying vec4 vColor;
void main(void) {
gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
vColor = aVertexColor;
if(type > 0.0) {
} else {
}
}
What I want to do is pretty simple: just capture a float value named type and use it for logic operations.
The problem is, when I try to use it from JavaScript, these errors come up:
shaderProgram.textureCoordAttribute = gl.getAttribLocation(shaderProgram, "type");
gl.enableVertexAttribArray(shaderProgram.textureCoordAttribute);
WebGL: INVALID_OPERATION: drawArrays: attribs not setup correctly main.js:253
WebGL: INVALID_OPERATION: drawArrays: attribs not setup correctly main.js:267
WebGL: INVALID_OPERATION: drawElements: attribs not setup correctly
The output of getAttribLocation is meaningful; all of the locations are greater than or equal to 0.
================= UPDATE ===================
Here's my whole project code:
https://gist.github.com/royguo/5873503
Explanation:
index.html Shaders script are here.
main.js Start the WebGL application and draw scene.
shaders.js Load shaders and bind attributes.
buffers.js Init vertex and color buffers.
utils.js Common used utils.
Here is a link to a gist with the files I updated to get the type attribute working.
If you search for //ADDED CODE you should be able to view every change I had to make to get it working.
In addition to enabling the objectTypeAttribute you have to create an array buffer for each object you are drawing:
triangleObjectTypeBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, triangleObjectTypeBuffer);
objectTypes = [
1.0, 1.0, 0.0
];
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(objectTypes), gl.STATIC_DRAW);
triangleObjectTypeBuffer.itemSize = 1;
triangleObjectTypeBuffer.numItems = 3;
And bind that array buffer for each object before you draw the object:
gl.bindBuffer(gl.ARRAY_BUFFER, triangleObjectTypeBuffer);
gl.vertexAttribPointer(shaderProgram.objectTypeAttribute, triangleObjectTypeBuffer.itemSize, gl.FLOAT, false, 0, 0);
You probably already tried this and accidentally went wrong somewhere along the way.

WebGL on Chrome for Windows : warning X3206 when cast float to int

I think I found a strange bug in the Windows version of Chrome's WebGL implementation. Linking a shader that casts a float to an int causes a "warning X3206: implicit truncation of vector type" error. I have tried many ways to avoid it, with no luck.
For example:
int i;
vec3 u = vec3(1.5, 2.5, 3.5);
float z = u.z;
i = int(u.z); // warning X3206: implicit truncation of vector type
i = int(z); // warning X3206: implicit truncation of vector type
The strange thing is that this vertex program works perfectly in the Linux version on the same computer (same graphics card). Is it a driver issue? (I have tested on two Windows versions with two different graphics cards, with the same result.) Another thing that is strange (to me): X3206 is ordinarily a DirectX error (?!), so what is the relation to WebGL?
Here is the complete shader I use that causes the warning:
#define MATRIX_ARRAY_SIZE 48
/* vertex attributes */
attribute vec4 p;
attribute vec3 n;
attribute vec3 u;
attribute vec3 t;
attribute vec3 b;
attribute vec4 c;
attribute vec4 i;
attribute vec4 w;
/* enable vertex weight */
uniform bool ENw;
/* enable comput tangent */
uniform bool ENt;
/* eye view matrix */
uniform mat4 MEV;
/* transform matrices */
uniform mat4 MXF[MATRIX_ARRAY_SIZE];
/* transform normal matrices */
uniform mat3 MNR[MATRIX_ARRAY_SIZE];
/* varying fragment shader */
varying vec4 Vp;
varying vec3 Vn;
varying vec2 Vu;
varying vec3 Vt;
varying vec3 Vb;
varying vec4 Vc;
void main(void) {
/* Position et Normal transform */
if(ENw) { /* enable vertex weight */
Vp = vec4(0.0, 0.0, 0.0, 0.0);
Vn = vec3(0.0, 0.0, 0.0);
Vp += (MXF[int(i.x)] * p) * w.x;
Vn += (MNR[int(i.x)] * n) * w.x;
Vp += (MXF[int(i.y)] * p) * w.y;
Vn += (MNR[int(i.y)] * n) * w.y;
Vp += (MXF[int(i.z)] * p) * w.z;
Vn += (MNR[int(i.z)] * n) * w.z;
Vp += (MXF[int(i.w)] * p) * w.w;
Vn += (MNR[int(i.w)] * n) * w.w;
} else {
Vp = MXF[0] * p;
Vn = MNR[0] * n;
}
/* Tangent et Binormal transform */
if(ENt) { /* enable comput tangent */
vec3 Cz = cross(Vn, vec3(0.0, 0.0, 1.0));
vec3 Cy = cross(Vn, vec3(0.0, 1.0, 0.0));
if(length(Cz) > length(Cy)) {
Vt = Cz;
} else {
Vt = Cy;
}
Vb = cross(Vn, Vt);
} else {
Vt = t;
Vb = b;
}
/* Texcoord et color */
Vu = u.xy;
Vc = c;
gl_PointSize = u.z;
gl_Position = MEV * Vp;
}
If someone has found an elegant workaround...
The problem is you're running out of uniforms.
48 mat3s + 49 mat4s + 2 bools = 1218 values / 4 = at least 305 uniform vectors needed
On my GPU gl.getParameter(gl.MAX_VERTEX_UNIFORM_VECTORS) only returns 254.
Note that 306 uniform vectors is for a perfectly optimizing GLSL compiler. For an un-optimized compiler it might internally use 3 vec4s for mat3 and a full vec4 for each bool making it need more uniform vectors.
That seems to be the case here, since if I lower MATRIX_ARRAY_SIZE to 35 it works on my machine, and 36 fails.
35 mat3s each using 3 vectors + 36 mat4s each using 4 vectors + 2 bools each using 1 vector = 251 vectors required, which fits. Raise MATRIX_ARRAY_SIZE to 36 and it requires 258 vectors, which is more than my GPU driver supports, which is why it fails.
Note that 128 is the minimum number of vertex uniform vectors an implementation is required to support, which means that if you want it to work everywhere you'd need to set MATRIX_ARRAY_SIZE to 17. On the other hand, I don't know what uniforms you're using in your fragment shader. Alternatively, you could query the number of uniform vectors supported and modify your shader source at runtime.
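The counting above can be sketched as a small JavaScript helper (assuming the pessimistic packing of 3 vec4 rows per mat3, 4 per mat4 and 1 per bool; actual packing is driver-dependent, so treat the exact totals as estimates):

```javascript
// Estimate the vertex uniform vectors this shader needs for a given
// MATRIX_ARRAY_SIZE, assuming an un-optimizing compiler.
function vertexUniformVectors(matrixArraySize) {
  const mat3Vectors = matrixArraySize * 3;       // MNR[MATRIX_ARRAY_SIZE]
  const mat4Vectors = (matrixArraySize + 1) * 4; // MXF[...] plus MEV
  const boolVectors = 2;                         // ENw and ENt
  return mat3Vectors + mat4Vectors + boolVectors;
}
```

With the GLSL ES minimum of 128 vertex uniform vectors, vertexUniformVectors(17) gives 125, which still fits.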
Here's a sample that works for me
http://jsfiddle.net/greggman/474Et/2/
Change the 35 at the top back to 48 and it will generate the same error message.
It sucks that the error message is cryptic.
Chrome's and Firefox's WebGL on Windows is implemented with ANGLE, which in turn uses DirectX as the underlying API. So it doesn't come as a surprise that certain DirectX restrictions/warnings/errors surface when using WebGL there.
And you are indeed truncating a float: use floor() or ceil() on the value before the int() cast to make the rounding explicit and silence the warning.
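The distinction matters for negative values, since GLSL's int() drops the fractional part (rounds toward zero) while floor() rounds toward negative infinity. The same two operations in JavaScript:

```javascript
// int(x) in GLSL behaves like truncation toward zero, not like floor().
const truncated = Math.trunc(-1.7); // what int(-1.7) would give: -1
const floored = Math.floor(-1.7);   // floor first, then cast: -2
```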
This should be fixed in ANGLE revision 1557. It will take a while for this fix to become available in mainstream Chrome.