Inigo Quilez's website has a page of 3D ray-surface intersectors for use with signed distance functions, one of which is for a basic 3D box:
// axis aligned box centered at the origin, with size boxSize
vec2 boxIntersection( in vec3 ro, in vec3 rd, vec3 boxSize, out vec3 outNormal )
{
vec3 m = 1.0/rd; // can precompute if traversing a set of aligned boxes
vec3 n = m*ro; // can precompute if traversing a set of aligned boxes
vec3 k = abs(m)*boxSize;
vec3 t1 = -n - k;
vec3 t2 = -n + k;
float tN = max( max( t1.x, t1.y ), t1.z );
float tF = min( min( t2.x, t2.y ), t2.z );
if( tN>tF || tF<0.0) return vec2(-1.0); // no intersection
outNormal = (tN>0.0) ? step(vec3(tN),t1) : // ro outside the box
step(t2,vec3(tF)); // ro inside the box
outNormal *= -sign(rd);
return vec2( tN, tF );
}
In addition to calculating the intersection, it also calculates the surface normal at the point of intersection. Calculating the normal is the part I'm interested in. I would like to understand how it does this, but the page doesn't break down the math at all, and I haven't found any other sources that do (also, not understanding the process limits my understanding of what to search for).
Some parts make sense conceptually, like the calculation of near/far intersection points for the ray. But the calculation of the constants that feed into it is lost on me. For example, it appears to compute ray_origin/ray_direction, dividing a position by a (probably normalized) direction. What sort of result does that produce? I'd have assumed it would be nonsensical.
I'd like to understand how this works, rather than treat it like a black box.
I found another version, modified to only return the normal; it shares many of the same calculations and may be easier to explain. I just wish I knew how it worked.
vec3 boxNormal( in vec3 ro, in vec3 rd, vec3 boxSize)
{
vec3 m = 1.0/rd;
vec3 n = m*ro;
vec3 k = abs(m)*boxSize;
vec3 t1 = -n - k;
return -sign(rd)*step(t1.yzx,t1.xyz)*step(t1.zxy,t1.xyz);
}
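Update: here is my current best reading of the short version, annotated line by line, in case it helps someone confirm or correct it:
vec3 boxNormalAnnotated( in vec3 ro, in vec3 rd, vec3 boxSize )
{
// Solving ro + t*rd = +-boxSize per axis gives t = (+-boxSize - ro)/rd, so
// ro/rd is not really "a position divided by a direction": each component of
// n is just the ray parameter t at which that coordinate of the ray hits zero.
vec3 m = 1.0/rd;
vec3 n = m*ro; // per-axis t at which the ray crosses each coordinate plane
vec3 k = abs(m)*boxSize; // half-width of each slab, measured in t
vec3 t1 = -n - k; // per-axis entry time into each slab (the smaller root)
// The ray enters the box at the latest of the three slab entries, so the hit
// face lies on the axis where t1 is the componentwise maximum. The product of
// the two step() calls is 1.0 only in that component, and -sign(rd) makes the
// face normal point back against the ray.
return -sign(rd)*step(t1.yzx,t1.xyz)*step(t1.zxy,t1.xyz);
}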
I am trying to reconstruct the spiral pattern in the depicted image for a neuroscience experiment. Basically, the pattern has the properties that:
1) Every part of the spiral has local orientation 45 degrees to radial
2) The thickness of each arm of the spiral increases in direct proportion with the radius.
Ideally I would like to be able to parametrically vary the number of arms of the spiral as needed. You can ignore the blank circle in the middle and the circular boundaries, those are very easy to add.
Does anybody know if there is a function in terms of the number of spiral arms and local orientation that would be able to reconstruct this spiral pattern? For what it's worth I'm coding in Matlab, although if someone has the mathematical formula I can implement it myself no problem.
Your spiral image does not satisfy your property 1, as can be seen by overlaying the spiral with a flipped copy: the angles at the outer edge are more perpendicular to the radial direction than 45 degrees, and more parallel at the inner edge.
As I commented, a logarithmic spiral can satisfy both properties. I implemented it in GLSL using Fragmentarium, here is the code:
#include "Progressive2D.frag"
#group Spiral
uniform int Stripes; slider[1,20,100]
const float pi = 3.141592653589793;
vec2 cLog(vec2 z)
{
return vec2(log(length(z)), atan(z.y, z.x));
}
vec3 color(vec2 p)
{
float t = radians(45.0);
float c = cos(t);
float s = sin(t);
mat2 m = mat2(c, -s, s, c);
vec2 q = m * cLog(p);
return vec3(float
( mod(float(Stripes) * q.y / (sqrt(2.0) * pi), 1.0) < 0.5
|| length(p) < 0.125
|| length(p) > 0.875
));
}
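To see why this produces the requested pattern, here is a quick sketch of the math behind the code:
// cLog maps the plane to log-polar coordinates (log(r), theta). There, a
// 45-degree logarithmic spiral theta - log(r) = const is a straight diagonal
// line, and the 45-degree rotation m turns those lines horizontal, giving
// q.y = (theta - log(r)) / sqrt(2). The stripe test is therefore equivalent to
// mod(float(Stripes) * (theta - log(r)) / (2.0 * pi), 1.0) < 0.5
// which yields exactly Stripes arms whose thickness grows in proportion to r.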
And the output:
I am working with WebGL and am writing up the vertex shader in my .html file that goes along with my .js file for my program. This mainly deals with lighting.
The error I receive is: Vertex shader failed to compile. The error log is: ERROR: 0:29: 'constructor' : too many arguments
ERROR: 0:32: 'dot' : no matching overloaded function found
Lines 29 and 32 correspond to the marked lines in the code below (see comments).
Here is my vertex shader code
<script id="vertex-shader" type="x-shader/x-vertex">
attribute vec4 a_Position;
attribute vec4 a_Color;
attribute vec3 a_Normal;
attribute vec2 a_Texture;
uniform mat4 u_MvpMatrix;
uniform mat3 u_NormalMatrix;
uniform vec3 uAmbientColor; // all 3 of these passed in .js
uniform vec3 uLightPosition; //
uniform vec3 uLightColor; //
varying vec4 v_Color;
varying vec2 v_Texture;
varying vec3 vLightWeighting;
void
main()
{
vec4 eyeLightPos = vec4(0.0, 200.0, 0.0, 1.0); // Line 29***
vec4 eyePosition = u_MvpMatrix * vec4(a_Position, 1.0); // vertex position in the eye space
vec3 normal = normalize(u_NormalMatrix * a_Normal);
float nDotL = max(dot(normal, eyeLightPos), 0.0); // Line 32***
v_Color = vec4(a_Color.rgb * nDotL, a_Color.a);
v_Texture = a_Texture;
////*************
vec3 eyeLightVector = normalize(eyeLightPos.xyz - eyePosition.xyz);
vec3 eyeViewVector = -normalize(eyePosition.xyz); // eye position is at (0, 0, 0) in the eye space
vec3 eyeReflectVector = -reflect(eyeLightVector, normal);
float shininess = 16.0;
float specular = pow(max(dot(eyeViewVector, eyeReflectVector), 0.0), shininess);
vLightWeighting = uAmbientColor + uLightColor * nDotL + uLightColor * specular;
}
</script>
Why is this happening? Let me know if you'd like to see anything else.
You most probably marked the wrong line for 29. The error happens two lines below:
vec4 eyePosition = u_MvpMatrix * vec4(a_Position, 1.0);
The problem is that a_Position is already a vec4, so you are trying to call a constructor of the form vec4(vec4, float), which does not exist. Maybe you wanted to pass only the first three components of a_Position, in which case the code would be:
vec4 eyePosition = u_MvpMatrix * vec4(a_Position.xyz, 1.0);
The second error is a type mismatch: in the dot call, normal is a vec3 but eyeLightPos is a vec4. The dot function is only defined for two parameters of the same type.
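Putting both fixes together, the affected lines might become something like this (a sketch; lightDir is a name I am introducing, and using the direction to the light instead of the raw light position is my assumption about the intent):
vec4 eyePosition = u_MvpMatrix * vec4(a_Position.xyz, 1.0); // vec4(vec3, float) exists
vec3 normal = normalize(u_NormalMatrix * a_Normal);
// dot() now receives two vec3 arguments of matching type:
vec3 lightDir = normalize(eyeLightPos.xyz - eyePosition.xyz);
float nDotL = max(dot(normal, lightDir), 0.0);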
vec4 eyePosition = u_MvpMatrix * vec4(a_Position);
a_Position is already a vec4, so vec4(a_Position, 1.0) tries to construct a vector with five components.
Either drop the extra argument, vec4(a_Position), or keep only the first three components, vec4(a_Position.xyz, 1.0).
I need to warp a rectangular texture into a polar-coordinate texture. To shed some light on my problem, I am going to illustrate it:
I have the image:
and I have to deform it with a shader into something like this:
then I'm going to map it to a plane.
How can I do this? Any help will be appreciated!
That is not particularly hard. You just need to convert your texture coordinates to polar coordinates, using the radius for the texture's s direction and the azimuth angle for the t direction.
I am assuming you want to texture a quad that way, and that you use standard texcoords for this, so the lower left vertex has (0,0) and the upper right one (1,1) as texture coords.
So in the fragment shader, you just need to convert the interpolated texcoords (called tc here) to polar coordinates. Since the center will be at (0.5, 0.5), we have to offset this first.
vec2 x=tc - vec2(0.5,0.5);
float radius=length(x);
float angle=atan(x.y, x.x);
Now all you need to do is map the range back to the [0,1] texture space. The maximum radius here will be 0.5, so you can simply use 2*radius as the s coordinate, and angle will be in [-pi,pi], so you should map that to [0,1] for the t coordinate.
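In shader code, that basic mapping might look like this (a sketch; PI and tex are assumed to be declared as in the shader below):
vec2 tc_polar;
tc_polar.s = 2.0 * radius; // radius in [0, 0.5] -> s in [0, 1]
tc_polar.t = angle / (2.0 * PI) + 0.5; // angle in [-pi, pi] -> t in [0, 1]
gl_FragColor = texture2D(tex, tc_polar);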
UPDATE1
There are a few details I left out so far. From your image it is clear that you do not want the inner circle to be mapped to the texture, but this can easily be incorporated. I just assume two radii here: r_inner, the radius of the inner circle, and r_outer, the radius onto which you want to map the outer part. Let me sketch out a simple fragment shader for that:
#version ...
precision ...
varying vec2 tc; // texcoords from vertex shader
uniform sampler2D tex;
#define PI 3.14159265358979323844
void main ()
{
const float r_inner=0.25;
const float r_outer=0.5;
vec2 x = tc - vec2(0.5);
float radius = length(x);
float angle = atan(x.y, x.x);
vec2 tc_polar; // the new polar texcoords
// map radius so that for r=r_inner -> 0 and r=r_outer -> 1
tc_polar.s = ( radius - r_inner) / (r_outer - r_inner);
// map angle from [-PI,PI] to [0,1]
tc_polar.t = angle * 0.5 / PI + 0.5;
// texture mapping
gl_FragColor = texture2D(tex, tc_polar);
}
Now there is still one detail missing. The mapping generated above produces texcoords outside of the [0,1] range for any position where you have black in your image, but the texture sampling will not automatically give black there. The easiest solution is to use the GL_CLAMP_TO_BORDER mode for GL_TEXTURE_WRAP_S. The default border color is (0,0,0,0), so you might not need to specify it, or you can set GL_TEXTURE_BORDER_COLOR explicitly to (0,0,0,1) if you work with alpha blending and don't want any transparency that way. That way, you get the black color for free. Another option is GL_CLAMP_TO_EDGE with a black pixel column added at both the left and right ends of the texture. Yet another way is to add a branch to the shader and check whether tc_polar.s is below 0 or above 1, but I wouldn't recommend that for this use case.
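For completeness, the in-shader branch mentioned at the end would look roughly like this (a sketch; as said above, the wrap-mode solution is the nicer one for this use case):
if (tc_polar.s < 0.0 || tc_polar.s > 1.0)
gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); // plain black outside the mapped ring
else
gl_FragColor = texture2D(tex, tc_polar);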
For those who want a more flexible shader that does the same:
uniform float Angle; // range 2pi / 100000.0 to 1.0 (rounded down), exponential
uniform float AngleMin; // range -3.2 to 3.2
uniform float AngleWidth; // range 0.0 to 6.4
uniform float Radius; // range -10000.0 to 1.0
uniform float RadiusMin; // range 0.0 to 2.0
uniform float RadiusWidth; // range 0.0 to 2.0
uniform vec2 Center; // range: -1.0 to 3.0
uniform sampler2D Texture;
void main()
{
// Normalised texture coords
vec2 texCoord = gl_TexCoord[0].xy;
// Shift origin to texture centre (with offset)
vec2 normCoord;
normCoord.x = 2.0 * texCoord.x - Center.x;
normCoord.y = 2.0 * texCoord.y - Center.y;
// Convert Cartesian to Polar coords
float r = length(normCoord);
float theta = atan(normCoord.y, normCoord.x);
// The actual effect
r = (r < RadiusMin) ? r : (r > RadiusMin + RadiusWidth) ? r : ceil(r / Radius) * Radius;
theta = (theta < AngleMin) ? theta : (theta > AngleMin + AngleWidth) ? theta : floor(theta / Angle) * Angle;
// Convert Polar back to Cartesian coords
normCoord.x = r * cos(theta);
normCoord.y = r * sin(theta);
// Shift origin back to bottom-left (taking offset into account)
texCoord.x = normCoord.x / 2.0 + (Center.x / 2.0);
texCoord.y = normCoord.y / 2.0 + (Center.y / 2.0);
// Output
gl_FragColor = texture2D(Texture, texCoord);
}
Source: polarpixellate glsl.
Shadertoy example
I'm using AS3 to program some collision detection for a Flash game and am having trouble figuring out how to bounce a ball off of a line. I keep track of a vector that represents the ball's 2D velocity, and I'm trying to reflect it over the vector that is perpendicular to the line the ball is colliding with (a.k.a. the normal). My problem is that I don't know how to find the new, reflected vector. I figured you could use Math.atan2 to find the angle between the normal and the ball's vector, but I'm not sure how to expand that into a solution.
Vector algebra - You want the "bounce" vector:
vec1 is the ball's motion vector and vec2 is the surface/line vector:
// 1. Find the dot product of vec1 and vec2
// Note: dx and dy are vx and vy divided by the vector's length (magnitude)
var dpA:Number = vec1.vx * vec2.dx + vec1.vy * vec2.dy;
// 2. Project vec1 over vec2
var prA_vx:Number = dpA * vec2.dx;
var prA_vy:Number = dpA * vec2.dy;
// 3. Find the dot product of vec1 and vec2's normal
// (left or right normal depending on line's direction, let's say left)
var dpB:Number = vec1.vx * vec2.leftNormal.dx + vec1.vy * vec2.leftNormal.dy;
// 4. Project vec1 over vec2's left normal
var prB_vx:Number = dpB * vec2.leftNormal.dx;
var prB_vy:Number = dpB * vec2.leftNormal.dy;
// 5. Add the first projection prA to the reverse of the second -prB
var new_vx:Number = prA_vx - prB_vx;
var new_vy:Number = prA_vy - prB_vy;
Assign those velocities to your ball's motion vector and let it bounce.
PS:
vec.leftNormal --> vx = vec.vy; vy = -vec.vx;
vec.rightNormal --> vx = -vec.vy; vy = vec.vx;
The mirror reflection of any vector v from a line/(hyper-)surface with normal n in any dimension can be computed using projection tensors. The parallel projection of v on n is:
v|| = (v . n) n = v . nn
Here nn is the outer (or tensor) product of the normal with itself; in Cartesian coordinates it is a matrix with elements nn[i,j] = n[i]*n[j]. The perpendicular projection is just the difference between the original vector and its parallel projection: v - v||. When the vector is reflected, its parallel projection is reversed while the perpendicular projection is retained. So the reflected vector is:
v' = -v|| + (v - v||) = v - 2 v|| = v . (I - 2 nn) = v . R( n ), where
R( n ) = I - 2 nn
(I is the identity tensor, which in Cartesian coordinates is simply the diagonal identity matrix diag(1, 1, ...))
R is called the reflection tensor. In Cartesian coordinates it is a real symmetric matrix with components R[i,j] = delta[i,j] - 2*n[i]*n[j], where delta[i,j] = 1 if i == j and 0 otherwise. It is also symmetric with respect to n:
R( -n ) = I - 2(-n)(-n) = I - 2 nn = R( n )
Hence it doesn't matter if one uses the outward facing or the inward facing normal n - the result would be the same.
In two dimensions and Cartesian coordinates, R (the matrix representation of R) becomes:
    [ R00 R01 ]   [ 1.0-2.0*n.x*n.x      -2.0*n.x*n.y ]
R = [         ] = [                                   ]
    [ R10 R11 ]   [    -2.0*n.x*n.y  1.0-2.0*n.y*n.y  ]
The components of the reflected vector are then computed as a row-vector-matrix product:
v1.x = v.x*R00 + v.y*R10
v1.y = v.x*R01 + v.y*R11
or after expansion:
k = 2.0*(v.x*n.x + v.y*n.y)
v1.x = v.x - k*n.x
v1.y = v.y - k*n.y
In three dimensions:
k = 2.0*(v.x*n.x + v.y*n.y + v.z*n.z)
v1.x = v.x - k*n.x
v1.y = v.y - k*n.y
v1.z = v.z - k*n.z
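Incidentally, for anyone doing this in a shader: GLSL's built-in reflect implements exactly this expansion (n must be normalized):
// reflect(I, N) is specified as I - 2.0 * dot(N, I) * N
vec3 v1 = reflect(v, n);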
Finding the exact point where the ball will hit the line/wall is more involved - see here.
Calculate two components of the vector.
One component will be the projection of your vector onto the reflecting surface, the other will be the projection onto the surface's normal (which you say you already have). Use dot products to get the projections, then sum the surface component with the reflected normal component. You'll have your answer.
You can even calculate the second component A2 as being the original vector minus the first component, so: A2 = A - A1. And then the vector you want is A1 plus the reflected A2 (which is simply -A2 since its perpendicular to your surface) or:
Ar = A1-A2
or
Ar = 2A1 - A which is the same as Ar = -(2A2 - A)
If [Ax,Ay] is your ball's velocity and [Wx,Wy] is a unit vector representing the wall:
A1x = (Ax*Wx+Ay*Wy)*Wx;
A1y = (Ax*Wx+Ay*Wy)*Wy;
Arx = 2*A1x - Ax;
Ary = 2*A1y - Ay;
I think I found a strange bug in the Windows version of Chrome's WebGL implementation. Linking a shader that casts a float to int causes a "warning X3206: implicit truncation of vector type" error. I have tried many ways to avoid it, with no luck.
For example:
int i;
vec3 u = vec3(1.5, 2.5, 3.5);
float z = u.z;
i = int(u.z); // warning X3206: implicit truncation of vector type
i = int(z); // warning X3206: implicit truncation of vector type
The strange thing is that this vertex program works perfectly in the Linux version on the same computer (same graphics card). Is it a driver issue? (I have tested on two Windows versions with two different graphics cards, with the same result.) Another strange thing (to me): X3206 is ordinarily a DirectX error (?!), so what is the relation to WebGL?
Here is the complete shader I use that causes the warning:
#define MATRIX_ARRAY_SIZE 48
/* vertex attributes */
attribute vec4 p;
attribute vec3 n;
attribute vec3 u;
attribute vec3 t;
attribute vec3 b;
attribute vec4 c;
attribute vec4 i;
attribute vec4 w;
/* enable vertex weight */
uniform bool ENw;
/* enable computed tangent */
uniform bool ENt;
/* eye view matrix */
uniform mat4 MEV;
/* transform matrices */
uniform mat4 MXF[MATRIX_ARRAY_SIZE];
/* transform normal matrices */
uniform mat3 MNR[MATRIX_ARRAY_SIZE];
/* varying fragment shader */
varying vec4 Vp;
varying vec3 Vn;
varying vec2 Vu;
varying vec3 Vt;
varying vec3 Vb;
varying vec4 Vc;
void main(void) {
/* Position and Normal transform */
if(ENw) { /* enable vertex weight */
Vp = vec4(0.0, 0.0, 0.0, 0.0);
Vn = vec3(0.0, 0.0, 0.0);
Vp += (MXF[int(i.x)] * p) * w.x;
Vn += (MNR[int(i.x)] * n) * w.x;
Vp += (MXF[int(i.y)] * p) * w.y;
Vn += (MNR[int(i.y)] * n) * w.y;
Vp += (MXF[int(i.z)] * p) * w.z;
Vn += (MNR[int(i.z)] * n) * w.z;
Vp += (MXF[int(i.w)] * p) * w.w;
Vn += (MNR[int(i.w)] * n) * w.w;
} else {
Vp = MXF[0] * p;
Vn = MNR[0] * n;
}
/* Tangent and Binormal transform */
if(ENt) { /* enable computed tangent */
vec3 Cz = cross(Vn, vec3(0.0, 0.0, 1.0));
vec3 Cy = cross(Vn, vec3(0.0, 1.0, 0.0));
if(length(Cz) > length(Cy)) {
Vt = Cz;
} else {
Vt = Cy;
}
Vb = cross(Vn, Vt);
} else {
Vt = t;
Vb = b;
}
/* Texcoord and color */
Vu = u.xy;
Vc = c;
gl_PointSize = u.z;
gl_Position = MEV * Vp;
}
If someone found an elegant workaround...
The problem is you're running out of uniforms.
48 mat3s + 49 mat4s + 2 bools = 1218 values / 4 = at least 305 uniform vectors needed
On my GPU gl.getParameter(gl.MAX_VERTEX_UNIFORM_VECTORS) only returns 254.
Note that 305 uniform vectors is the count for a perfectly optimizing GLSL compiler. An un-optimized compiler might internally use 3 vec4s for each mat3 and a full vec4 for each bool, making it need even more uniform vectors.
That seems to be the case, since if I lower MATRIX_ARRAY_SIZE to 35 it works on my machine, and 36 fails.
35 mat3s each using 3 vectors + 36 mat4s each using 4 vectors + 2 bools each using 1 vector = 251 vectors required, which fits. One more matrix, MATRIX_ARRAY_SIZE = 36, requires 258, which is 4 more than my GPU driver supports, which is why it fails.
Note that 128 is the minimum number of vertex uniform vectors required to be supported, which means that if you want it to work everywhere you'd need to set MATRIX_ARRAY_SIZE to 17. On the other hand, I don't know what uniforms you're using in your fragment shader. Alternatively, you could query the number of uniform vectors supported and modify your shader source at runtime.
Here's a sample that works for me
http://jsfiddle.net/greggman/474Et/2/
Change the 35 at the top back to 48 and it will generate the same error message.
It sucks that the error message is cryptic.
Chrome's and Firefox's WebGL on Windows is implemented with ANGLE, which in turn uses DirectX as the underlying API. So it doesn't come as a surprise that certain DirectX restrictions/warnings/errors show up when using WebGL there.
And you indeed are truncating a float type; apply floor() or ceil() first to make the rounding explicit and obtain more predictable results with no warnings.
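Applied to the skinning shader above, that could look like the following sketch (idx is a name I am introducing, and nearest-index rounding is my assumption about the intent; whether the warning actually disappears may depend on the ANGLE revision, see below):
int idx = int(floor(i.x + 0.5)); // make the rounding explicit before the cast
Vp += (MXF[idx] * p) * w.x;
Vn += (MNR[idx] * n) * w.x;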
This should be fixed in ANGLE revision 1557. It will take a while for this fix to become available in mainstream Chrome.