The issue I'm having is that in developing an HTML5 canvas app I need to use a lot of transformations (i.e. translate, rotate, scale), and therefore a lot of calls to context.save() and context.restore(). Performance drops very quickly even when drawing very little, because save() and restore() are called so many times per iteration of the loop. Is there an alternative to these methods that still lets me use the transformations? Thank you!
Animation and Game performance tips.
Avoid save/restore
Use setTransform as that will negate the need for save and restore.
There are many reasons that save and restore will slow things down, and these depend on the current GPU and 2D context state. If you have the current fill and/or stroke style set to a large pattern, or you have a complex font/gradient, or you are using filters (if available), then the save and restore process can take longer than rendering the image.
When writing animations and games, performance is everything; for me it is about sprite counts. The more sprites I can draw per frame (60th of a second), the more FX I can add, the more detailed the environment, and the better the game.
I leave the state open ended; that is, I do not keep detailed track of the current 2D context state. This way I never have to use save and restore.
ctx.setTransform rather than ctx.transform
Because the transform functions transform, rotate, scale, and translate multiply the current transform, they are seldom used, as I do not know what the current transform state is.
To deal with the unknown I use setTransform, which completely replaces the current transformation matrix. This also lets me set the scale and translation in one call without needing to know the current state.
ctx.setTransform(scaleX,0,0,scaleY,posX,posY); // scale and translate in one call
I could also add the rotation, but the JavaScript code to find the x and y axis vectors (the first 4 numbers in setTransform) is slower than calling rotate.
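For reference, baking the rotation into the one call would look like the sketch below. It is equivalent to setTransform(scaleX,0,0,scaleY,posX,posY) followed by rotate(r), but as noted, computing the sin/cos in JavaScript tends to be slower than just calling rotate:

var cr = Math.cos(r);
var sr = Math.sin(r);
// x axis vector, y axis vector, then origin
ctx.setTransform(cr * scaleX, sr * scaleY, -sr * scaleX, cr * scaleY, posX, posY);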
Sprites and rendering them
Below is an expanded sprite function. It draws a sprite from a sprite sheet; the sprite has x and y scales, position, center, and rotation, and as I always use alpha it sets alpha as well.
// image is the image. Must have an array of sprites
// image.sprites = [{x:0,y:0,w:10,h:10},{x:20,y:0,w:30,h:40},....]
// where the position and size of each sprite is kept
// spriteInd is the index of the sprite
// x,y position to draw the sprite center at
// cx,cy location of the sprite center within the sprite (I also keep this in the sprite list for some situations)
// sx,sy x and y scales
// r rotation in radians
// a alpha value
function drawSprite(image, spriteInd, x, y, cx, cy, sx, sy, r, a){
    var spr = image.sprites[spriteInd];
    var w = spr.w;
    var h = spr.h;
    ctx.setTransform(sx, 0, 0, sy, x, y); // set scale and position
    ctx.rotate(r);
    ctx.globalAlpha = a;
    ctx.drawImage(image, spr.x, spr.y, w, h, -cx, -cy, w, h); // render the subimage
}
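Called from the main loop it might be used like this (a hypothetical sketch; it assumes ctx, canvas, and a loaded spriteSheet with a sprites array filled in as described above):

function mainLoop(time){
    ctx.setTransform(1, 0, 0, 1, 0, 0); // reset the transform so the clear covers everything
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    var spr = spriteSheet.sprites[3];
    // spin sprite 3 about its center at (200, 150), full scale and alpha
    drawSprite(spriteSheet, 3, 200, 150, spr.w / 2, spr.h / 2, 1, 1, time / 1000, 1);
    requestAnimationFrame(mainLoop);
}
requestAnimationFrame(mainLoop);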
On just an average machine you can render 1000+ sprites at full frame rate with that function. On Firefox (at the time of writing) I am getting 2000+ with it (sprites randomly selected from a 1024 by 2048 sprite sheet, max sprite size 256 by 256).
But I have well over 15 such functions, each with the minimum functionality to do what I want. If a sprite is never rotated or scaled (e.g. for UI), then:
function drawSprite(image, spriteInd, x, y, a){
    var spr = image.sprites[spriteInd];
    var w = spr.w;
    var h = spr.h;
    ctx.setTransform(1, 0, 0, 1, x, y); // set position (no scale)
    ctx.globalAlpha = a;
    ctx.drawImage(image, spr.x, spr.y, w, h, 0, 0, w, h); // render the subimage
}
Or the simplest version, for gameplay sprites, particles, bullets, etc.:
function drawSprite(image, spriteInd, x, y, s, r, a){
    var spr = image.sprites[spriteInd];
    var w = spr.w;
    var h = spr.h;
    ctx.setTransform(s, 0, 0, s, x, y); // set scale and position
    ctx.rotate(r);
    ctx.globalAlpha = a;
    ctx.drawImage(image, spr.x, spr.y, w, h, -w / 2, -h / 2, w, h); // render the subimage
}
And if it is a background image:
function drawSprite(image){
    // scale so the image covers the canvas (canvasWidth and canvasHeight are globals)
    var s = Math.max(canvasWidth / image.width, canvasHeight / image.height);
    ctx.setTransform(s, 0, 0, s, 0, 0); // set scale and position
    ctx.globalAlpha = 1;
    ctx.drawImage(image, 0, 0); // render the image
}
It is common that the playfield can be zoomed, panned, and rotated. For this I maintain a closure transform state (all the globals above are closed-over variables and part of the render object).
// all coords are relative to the global transform
function drawGlobalSprite(image, spriteInd, x, y, cx, cy, sx, sy, r, a){
    var spr = image.sprites[spriteInd];
    var w = spr.w;
    var h = spr.h;
    // m1 to m6 are the global transform
    ctx.setTransform(m1, m2, m3, m4, m5, m6); // set playfield
    ctx.transform(sx, 0, 0, sy, x, y); // set scale and position
    ctx.rotate(r);
    ctx.globalAlpha = a * globalAlpha; // globalAlpha is a real global alpha
    ctx.drawImage(image, spr.x, spr.y, w, h, -cx, -cy, w, h); // render the subimage
}
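The global transform itself can be rebuilt once per frame from the pan, zoom, and rotation. A minimal sketch, assuming a uniform zoom and the m1 to m6 names used above:

function setGlobalTransform(panX, panY, zoom, rotation){
    var cr = Math.cos(rotation);
    var sr = Math.sin(rotation);
    m1 = cr * zoom;  // x axis vector
    m2 = sr * zoom;
    m3 = -sr * zoom; // y axis vector, 90 degrees from the x axis
    m4 = cr * zoom;
    m5 = panX;       // origin
    m6 = panY;
}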
All the above are about as fast as you can get for practical game sprite rendering.
General tips
Never use any of the vector-type rendering methods (unless you have the spare frame time), like fill, stroke, fillText, arc, rect, moveTo, lineTo, as they are an instant slowdown. If you need to render text, create an offscreen canvas, render to it once, and display it as a sprite or image.
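For example, text can be rendered once to an offscreen canvas up front and then drawn each frame like any other image (a sketch; the font, size, and text are placeholders):

// render the text once, outside the main loop
var textCanvas = document.createElement("canvas");
textCanvas.width = 256;  // power-of-two sizes, see the next section
textCanvas.height = 32;
var tctx = textCanvas.getContext("2d");
tctx.font = "24px sans-serif";
tctx.fillStyle = "white";
tctx.fillText("Score: 0", 0, 24);

// inside the main loop it is just a cheap drawImage
ctx.setTransform(1, 0, 0, 1, 10, 10);
ctx.globalAlpha = 1;
ctx.drawImage(textCanvas, 0, 0);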
Image sizes and GPU RAM
When creating content, always use the power-of-two rule for image sizes. GPUs handle images in sizes that are powers of 2 (2, 4, 8, 16, 32, 64, 128, ...), so the width and height each have to be a power of two; e.g. 1024 by 512 or 2048 by 128 are good sizes.
When you do not use these sizes the 2D context does not care; what it does is expand the image to fit the nearest power of two. So if I have an image that is 300 by 300, to fit it on the GPU it has to be expanded to the nearest power, which is 512 by 512. The actual memory footprint is then over 2.5 times greater than the pixels you are able to display. When the GPU runs out of local memory it starts swapping memory with mainboard RAM, and when this happens your frame rate drops to unusable.
Ensuring that you size images so that you do not waste RAM means you can pack a lot more into your game before you hit the RAM wall (which for smaller devices is not much at all).
GC is a major frame thief
One last optimisation is to make sure that the GC (garbage collector) has little to nothing to do. Within the main loop, avoid using new (reuse an object rather than dereference it and create another), and avoid pushing to and popping from arrays (keep their lengths from shrinking); keep a separate count of active items instead. Create custom iterator and push functions that are item-context aware (they know whether an array item is active or not). When you push, you don't create a new item unless there are no inactive items; when an item becomes inactive, leave it in the array and reuse it later when one is needed.
There is a simple strategy that I call a fast stack that is beyond the scope of this answer, but it can handle thousands of transient (short-lived) game objects with ZERO GC load. Some of the better game engines use a similar approach (pool arrays that provide a pool of inactive items).
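A minimal sketch of the pool idea (not the exact fast stack, just the general shape):

var bullets = [];     // the pool; its length never shrinks
var bulletCount = 0;  // active items sit at the front of the array

function spawnBullet(x, y){
    if (bulletCount === bullets.length) {
        bullets.push({x: 0, y: 0}); // allocate only when there are no inactive items
    }
    var b = bullets[bulletCount++]; // otherwise reuse an inactive item
    b.x = x;
    b.y = y;
}

function killBullet(i){
    // swap the dead item behind the active count; it stays pooled for reuse
    var tmp = bullets[i];
    bullets[i] = bullets[--bulletCount];
    bullets[bulletCount] = tmp;
}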
GC should be less than 5% of your game activity; if not, you need to find where you are needlessly creating and dereferencing.
I've been trying to work with more complicated shaders, and have run into issues with the coordinate systems used by the vertex shader and texture sampler. In short: they don't seem to make any sense, and when trying to test them I end up getting inconsistent results. To make matters worse, the internet has little in the way of documentation, and most of the information I've found seems to expect me to know how this works already. I was hoping someone could clarify the following:
1. The vertex shaders pass an (x, y, z) representing a location on the render target. What are acceptable values for x, y, and z?
2. How do x and y correspond to the width and height of the back buffer (assuming that it's the render target)?
3. How do x and y correspond to the width and height of an output texture (assuming that it's the render target)?
4. When x=0 and y=0, where does the vertex sit, location-wise?
5. The texture samplers sample a texture at a (u, v) coordinate. What are acceptable values for u and v?
6. How do u and v correspond with the width and height of the texture being sampled?
7. How do AGAL's wrap, clamp, and repeat flags alter sampling, and what is the default behavior when one isn't given?
8. When sampling at u=0 and v=0, which pixel is returned, location-wise?
EDIT:
From my tests, I believe the answers are:
1. Unsure
2. -1 is left/bottom, 1 is right/top
3. Unsure
4. At the center of the output
5. Unsure
6. 0 is left/bottom, 1 is right/top
7. Unsure
8. The far bottom-left of the texture
1. You normally use a coordinate system of your own and then multiply the position of each vertex by the MVP (model-view-projection) matrix to get NDC coordinates that can be fed to the GPU as the output of the vertex shader. There is a nice article explaining all that for Stage3D.
2. Correct. And z is in the range [0, 1].
3. Rendering to a render target is the same as rendering to the backbuffer; you output NDC from your vertex shader, so the real size of the texture is irrelevant.
4. Yup, center of the screen.
5. Normally it's [0, 1], but you can use values outside that range, and then the output depends on the texture wrap mode (like repeat or clamp) set on the sampler.
6. (0, 0) is left/top, (1, 1) is right/bottom.
7. The default is repeat. These modes decide what you get when you sample using a coordinate outside the range [0, 1]. With repeat, (1.5, 1.5) will result in (0.5, 0.5), while (1.0, 1.0) will be the result if the mode is set to clamp.
8. The top-left pixel of the texture.
I've been able to apply smooth animation to my sprite and control it using the accelerometer. My sprite is fixed to move left and right along the x-axis.
From here, I need to figure out how to create an infinite vertical wavy line for the sprite to attempt to trace. The aim of my game is for the user to control the sprite's left/right movement with the accelerometer in an attempt to trace the never-ending wavy line as best they can, while the sprite and camera both move vertically to simulate moving along the line. It would be ideal if the line were randomly generated.
I've researched about splines, planes, bezier curves etc, but I can't find anything that seems to relate close enough to what I'm trying to achieve.
I'm just seeking some guidance as to what methods I could possibly use to achieve this. Any ideas?
You could use a sum of 4 to 5 sine waves (each with a different amplitude, wavelength, and phase difference). All 3 of those parameters could be random.
The resulting curve would be very smooth (since it is purely sinusoidal), yet it would look random (its period is the LCM of the 4 to 5 random wavelengths, which is a huge number).
So the curve won't repeat for a long time, yet it is not hard on memory. As for computational complexity, you can always tune it by changing the number of sine terms to fit your FPS budget.
It should look like this.
It's really easy to implement too (even I could generate the above image, haha).
Hope this helps. Maths rocks. :D
(The basic idea here is a finite Fourier series which I think should be ideal for your use case)
Edit:
You can create each term like this and assign random values to all terms.
public class SineTerm {

    private float amplitude;
    private float waveLength;
    private float phaseDifference;

    public SineTerm(float amplitude, float waveLength, float phaseDifference) {
        this.amplitude = amplitude;
        this.waveLength = waveLength;
        this.phaseDifference = phaseDifference;
    }

    public float evaluate(float x) {
        return amplitude * (float) Math.sin(2 * Math.PI * x / waveLength + phaseDifference);
    }
}
Now create an array of SineTerms and add up all the values returned by evaluate(x), using one coordinate of the sprite as input. Use the output as the other coordinate of the sprite, and you should be good to go.
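In use it boils down to something like this (sketched in JavaScript to match the rest of this page; the term count and random ranges are arbitrary):

// build 5 random sine terms: [amplitude, wavelength, phase]
var terms = [];
for (var i = 0; i < 5; i++) {
    terms.push([
        20 + Math.random() * 40,     // amplitude in pixels
        100 + Math.random() * 400,   // wavelength in pixels
        Math.random() * Math.PI * 2  // phase difference
    ]);
}

// x offset of the wavy line for a given distance travelled along y
function lineX(y){
    var x = 0;
    for (var i = 0; i < terms.length; i++) {
        var t = terms[i];
        x += t[0] * Math.sin(2 * Math.PI * y / t[1] + t[2]);
    }
    return x;
}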
The real trick would be in tuning those random numbers.
Good luck.
I have a 3D ball in the browser; now I want to dig a hole in it to see its back. How can I make this possible?
For example, I want the white triangle part of the cube to be transparent (I mean we could see the background behind the cube).
I've tried changing the alpha in the fragment shader (the area in the code is a square, not a triangle, but that doesn't matter):
<script id="shader-fs-alpha" type="x-shader/x-fragment">
    precision mediump float;
    varying vec4 vColor;
    uniform float uAlpha;
    void main(void) {
        if (gl_FragCoord.x < 350.0 && gl_FragCoord.x > 300.0
            && gl_FragCoord.y < 300.0 && gl_FragCoord.y > 230.0) {
            gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0);
        } else {
            gl_FragColor = vec4(vColor.rgb, 1.0);
        }
    }
</script>
This actually works, but the area turns white (not transparent), so I then tried to enable blending, but that made the whole cube transparent.
So now I'm wondering if there's a way to enable blending in the fragment shader so that I could disable it in the else block.
Here's my whole project https://gist.github.com/royguo/5873503:
index.html : Shader script here.
buffers.js : All objects here.
shaders.js : Init shaders.
You should use the stencil buffer. Two steps: one, draw the triangle into the stencil; two, draw the cube with the stencil test enabled.
// you can use this code multiple times without clearing the stencil
// just increment mask_id
mask_id = 1;
// allow to draw inside or outside mask
invert_mask = false;
1) Activate the stencil buffer and configure it: no testing, and write the mask_id value
glEnable(GL_STENCIL_TEST);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glStencilFunc(GL_ALWAYS, mask_id, 0);
2) Remove color & depth writing
glDepthMask(GL_FALSE);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
3) here draw your triangle :)
4) Re-enable depth & color writing for the cube
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);
5) Configure stencil for the cube: no writing and test stencil value
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glStencilFunc(invert_mask ? GL_NOTEQUAL : GL_EQUAL, mask_id, 0xff);
6) here draw your cube :)
7) cleanup: remove stencil test
glDisable(GL_STENCIL_TEST);
Don't forget to request a stencil buffer when you create your WebGL context (see WebGLContextAttributes, the second parameter of getContext).
This code runs well with OpenGL ES 2 on the iOS platform and doesn't depend on any extension. WebGL uses the same API.
The only small defect: no MSAA on the triangle borders. There is no free lunch :)
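For completeness, in WebGL the same steps look roughly like this (a sketch; it assumes mask_id and invert_mask as above, and a context created with stencil: true):

var gl = canvas.getContext("webgl", { stencil: true });

// 1) + 2) write mask_id into the stencil, with color & depth writes off
gl.enable(gl.STENCIL_TEST);
gl.stencilOp(gl.KEEP, gl.KEEP, gl.REPLACE);
gl.stencilFunc(gl.ALWAYS, mask_id, 0);
gl.depthMask(false);
gl.colorMask(false, false, false, false);
// 3) draw your triangle here

// 4) + 5) re-enable writes and test against mask_id
gl.colorMask(true, true, true, true);
gl.depthMask(true);
gl.stencilOp(gl.KEEP, gl.KEEP, gl.KEEP);
gl.stencilFunc(invert_mask ? gl.NOTEQUAL : gl.EQUAL, mask_id, 0xff);
// 6) draw your cube here

// 7) cleanup
gl.disable(gl.STENCIL_TEST);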
You could render the part of the scene without the object (or the object's insides) to a texture and then project that texture onto a surface in front of the object. Just make sure that the surface faces the camera directly (is parallel to the x-y plane in camera space) and that the texture is rendered with the same center and perspective as the screen.
Use Learning WebGL: Rendering to Textures if you require help doing this. You should also check the lighting options to make sure the surface itself isn't affected by the lighting that your object gets.
Also: official WebGL reference sheet
Try adding these lines:
glEnable (GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
It worked for me with OpenGL ES.
Or remove some triangles from the ball mesh, or apply a texture to it that has transparent parts, turn on blending, and turn off any type of culling so you don't lose back-facing triangles.
EDIT:
Here's a tutorial on blending in WebGL: http://learningwebgl.com/blog/?p=859
You have multiple modes to choose from for how two pixels will be blended, and by adjusting alpha values you can get anything you want.
Eventually, you apply a texture that has alpha set to 1.0 everywhere except those parts where it should be "hollow", and then enable blending. I hope I made myself clear.
TIP: Instead of reading documentation (which can sometimes be more confusing than helpful), try this: http://www.nihilogic.dk/labs/webgl_cheat_sheet/WebGL_Cheat_Sheet.htm. It lists all the functions in a very nice manner.
I am trying to find the "left" border of my WebGL viewport because I would like to draw a number of debug items there (an axis mini-map like most modeling programs have).
I can certainly get the width and height of the canvas containing the WebGL viewport.
I would really like to know how to go about converting 2D canvas coordinates to 3D coordinates. What would be the best approach to find the left border in the 3D viewport?
Anyone looking into this should read http://webglfactory.blogspot.com/2011/05/how-to-convert-world-to-screen.html or take a look at gluProject() and gluUnProject().
To clarify datenwolf's answer, the coordinate mapping between your 3D space and 2D canvas is exactly what you want it to be. You control it with gl.viewport and the matrices that you pass to your shader.
gl.viewport simply blocks out a rectangle of pixels on your canvas that you are drawing to. Most of the time this matches the dimensions of your canvas exactly, but there are some scenarios where you only want to draw to part of it. (Split-screen gaming, for example.) The area of your canvas that you're drawing to will be referred to as the viewport from here on out. You can assume it means the same thing as "canvas" if you'd like.
At its simplest, the viewport always has an implicit coordinate system from -1 to 1 on both the X and Y axes. This is the space that the gl_Position output by your vertex shader operates in. If you output a vertex at (-1, -1) it will be in the bottom left corner of your viewport; a vertex at (1, 1) will be in the top right. (Yes, I'm ignoring depth for now.) Using this, you could construct geometry designed to map to that space and draw it without any matrix transforms at all, as in the sketch below, but that can be a bit awkward.
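For example, a quad covering the left half of the viewport can be specified directly in clip space and drawn with a pass-through vertex shader (a sketch, no matrices involved):

// two triangles covering the left half of the viewport, in clip space
var leftHalf = new Float32Array([
    -1, -1,   0, -1,   0,  1,  // first triangle
    -1, -1,   0,  1,  -1,  1   // second triangle
]);
// drawn with: gl_Position = vec4(a_position, 0.0, 1.0);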
To make life easier, we use projection matrices. A projection matrix is simply one that transforms your geometry from some arbitrary 3D space into that -1 to 1 space required by the viewport. The most common one is a perspective matrix. How you create it will look a bit different depending on the library you use, but typically it's something like this:
var fov = 45;
var aspectRatio = canvas.width/canvas.height;
var near = 1.0;
var far = 1024.0;
var projectionMat = mat4.perspective(fov, aspectRatio, near, far);
I'm not going to get into what all those values mean, but you can clearly see that we're using the canvas width and height to help set up this projection. That keeps the scene from looking stretched or squashed regardless of the canvas size. What it all boils down to, however, is that taking any 3D point in space and multiplying it by this matrix will produce a point that maps to that -1 to 1 space, taking into account distance from the 'camera' and everything else. (It may actually fall outside of those bounds, but that simply means it's off camera.) It's what makes our 3D scenes look 3D.
It's also possible to create a projection matrix specifically for drawing 2D geometry, though. This is called an orthographic matrix, and the setup typically looks something like this:
var left = 0;
var top = 0;
var right = canvas.width;
var bottom = canvas.height;
var near = 1.0;
var far = 1024.0;
var projectionMat = mat4.ortho(left, right, bottom, top, near, far);
This matrix is different from the perspective matrix in that it ignores the z component of your position entirely. Instead, this matrix transforms flat coordinates, like pixels, into the -1 to 1 range. As such, your scenes don't look 3D, but it's easier to control exactly where things appear on screen. So, using the matrix above, if we give it a vertex at (16, 16, 0) it will appear at (16, 16) on our canvas (assuming the viewport has the same dimensions as the canvas). As such, when you want to draw things like flat UI elements, this is the type of matrix you want!
The nice part is that because these are just values that you pass to a shader, you can use completely different matrices from one draw call to the next. Typically you'll draw all of your 3D geometry with a perspective matrix, then all of your UI with an orthographic matrix.
Apologies if that was a bit rambling. I've never been terribly good at explaining all that math-y stuff.
I am trying to find the "left" border of my WebGL view port because I would like to draw a number of debug information there. (an axis mini map like most modeling programs have) I certainly can get the width and height of the canvas containing the WebGL viewport.
Just switch the viewport and projection for those parts. You can change them anytime.
See http://games.greggman.com/game/webgl-fundamentals
Basically, if you want to draw in 2D, use a 2D shader; don't try messing with a 3D shader.
WebGL draws in clipspace so all you need to do is convert from pixels to clip space.
attribute vec2 a_position;
uniform vec2 u_resolution;

void main() {
    // convert positions from pixels to 0.0 to 1.0
    vec2 zeroToOne = a_position / u_resolution;

    // convert from 0->1 to 0->2
    vec2 zeroToTwo = zeroToOne * 2.0;

    // convert from 0->2 to -1->+1 (clipspace)
    vec2 clipSpace = zeroToTwo - 1.0;

    gl_Position = vec4(clipSpace * vec2(1, -1), 0, 1);
}
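On the JavaScript side you then supply the canvas size and pixel coordinates (a sketch; program, buffer, and attribute setup are omitted):

// tell the shader the canvas size so it can convert pixels to clip space
var resolutionLoc = gl.getUniformLocation(program, "u_resolution");
gl.uniform2f(resolutionLoc, canvas.width, canvas.height);

// a 20x20 pixel quad in the top-left corner, as two triangles
var pixels = new Float32Array([
     0,  0,   20,  0,    0, 20,
     0, 20,   20,  0,   20, 20
]);
gl.bufferData(gl.ARRAY_BUFFER, pixels, gl.STATIC_DRAW);
gl.drawArrays(gl.TRIANGLES, 0, 6);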
Why not just use some overlaid HTML?
<html>
  <head>
    <style>
      #container {
        position: relative;
      }
      #debugInfo {
        position: absolute;
        top: 0px;
        left: 0px;
        background-color: rgba(0,0,0,0.7);
        padding: 1em;
        z-index: 2;
        color: white;
      }
    </style>
  </head>
  <body>
    <div id="container">
      <canvas></canvas>
      <div id="debugInfo">There be info here!</div>
    </div>
  </body>
</html>
You can then update it with:
var debugInfo = document.getElementById("debugInfo");
debugInfo.innerHTML = "some info";