If both use hardware acceleration (the GPU) to execute code, why is WebGL so much faster than Canvas?
I want to understand why at a low level: the chain from the code down to the processor.
What happens? Do Canvas and WebGL communicate directly with the drivers and then with the video card?
Canvas is slower because it's generic and therefore hard to optimize to the same level that you can optimize WebGL. Let's take a simple example: drawing a solid circle with arc.
Canvas actually runs on top of the GPU as well, using the same APIs as WebGL. So, what does canvas have to do when you draw a circle? The minimum code to draw a circle in JavaScript using canvas 2d is
ctx.beginPath();
ctx.arc(x, y, radius, startAngle, endAngle);
ctx.fill();
You can imagine internally the simplest implementation is
beginPath creates a buffer (gl.bufferData)
arc generates the points for triangles that make a circle and uploads with gl.bufferData.
fill calls gl.drawArrays or gl.drawElements
But wait a minute... knowing how GL works, canvas can't actually generate the points at step 2, because if we call stroke instead of fill we need a different set of points for a solid circle (fill) than for an outline of a circle (stroke). So what really happens is something more like
beginPath creates or resets some internal buffer
arc generates the points that make a circle into the internal buffer
fill takes the points in that internal buffer, generates the correct set of triangles from them into a GL buffer, uploads them with gl.bufferData, and calls gl.drawArrays or gl.drawElements
What happens if we want to draw 2 circles? The same steps are likely repeated.
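Put as code, the repeated work might look something like this. The comments about internal steps are assumptions based on the description above, not the actual browser implementation.

const ctx = document.querySelector('canvas').getContext('2d');

function drawCircle2D(x, y, radius) {
  ctx.beginPath();                        // reset the internal point buffer
  ctx.arc(x, y, radius, 0, Math.PI * 2);  // generate the points for this circle
  ctx.fill();                             // tessellate, upload, and draw
}

// each call repeats all three steps; nothing obvious is shared between them
drawCircle2D( 50, 75, 50);
drawCircle2D(150, 75, 50);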
Let's compare that to what we would do in WebGL. Of course in WebGL we'd have to write our own shaders (Canvas has its own shaders as well). We'd also have to create a buffer and fill it with the triangles for a circle (note we already saved time, as we skipped the intermediate buffer of points). We then can call gl.drawArrays or gl.drawElements to draw our circle. And if we want to draw a second circle? We just update a uniform and call gl.drawArrays again, skipping all the other steps.
const m4 = twgl.m4;
const gl = document.querySelector('canvas').getContext('webgl');
const vs = `
attribute vec4 position;
uniform mat4 u_matrix;
void main() {
gl_Position = u_matrix * position;
}
`;
const fs = `
precision mediump float;
uniform vec4 u_color;
void main() {
gl_FragColor = u_color;
}
`;
const program = twgl.createProgram(gl, [vs, fs]);
const positionLoc = gl.getAttribLocation(program, 'position');
const colorLoc = gl.getUniformLocation(program, 'u_color');
const matrixLoc = gl.getUniformLocation(program, 'u_matrix');
const positions = [];
const radius = 50;
const numEdgePoints = 64;
for (let i = 0; i < numEdgePoints; ++i) {
const angle0 = (i ) * Math.PI * 2 / numEdgePoints;
const angle1 = (i + 1) * Math.PI * 2 / numEdgePoints;
// make a triangle
positions.push(
0, 0,
Math.cos(angle0) * radius,
Math.sin(angle0) * radius,
Math.cos(angle1) * radius,
Math.sin(angle1) * radius,
);
}
const buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(positions), gl.STATIC_DRAW);
gl.enableVertexAttribArray(positionLoc);
gl.vertexAttribPointer(positionLoc, 2, gl.FLOAT, false, 0, 0);
gl.useProgram(program);
const projection = m4.ortho(0, gl.canvas.width, 0, gl.canvas.height, -1, 1);
function drawCircle(x, y, color) {
const mat = m4.translate(projection, [x, y, 0]);
gl.uniform4fv(colorLoc, color);
gl.uniformMatrix4fv(matrixLoc, false, mat);
gl.drawArrays(gl.TRIANGLES, 0, numEdgePoints * 3);
}
drawCircle( 50, 75, [1, 0, 0, 1]);
drawCircle(150, 75, [0, 1, 0, 1]);
drawCircle(250, 75, [0, 0, 1, 1]);
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas></canvas>
Some devs might look at that and think Canvas caches the buffer so it can just reuse the points on the 2nd draw call. It's possible that's true, but I kind of doubt it. Why? Because of the genericness of the canvas API. fill, the function that does all the real work, doesn't know what's in the internal buffer of points. You can call arc, then moveTo, lineTo, then arc again, then call fill. All of those points will be in the internal buffer of points when we get to fill.
const ctx = document.querySelector('canvas').getContext('2d');
ctx.beginPath();
ctx.moveTo(50, 30);
ctx.lineTo(100, 150);
ctx.arc(150, 75, 30, 0, Math.PI * 2);
ctx.fill();
<canvas></canvas>
In other words, fill needs to always look at all the points. Another thing: I suspect arc tries to optimize for size. If you call arc with a radius of 2 it probably generates fewer points than if you call it with a radius of 2000. It's possible canvas caches the points, but given that the hit rate would likely be small, it seems unlikely.
In any case, the point is WebGL lets you take control at a lower level, allowing you to skip steps that canvas can't skip. It also lets you reuse data that canvas can't reuse.
In fact, if we know we want to draw 10000 animated circles we even have other options in WebGL. We could generate the points for 10000 circles, which is a valid option. We could also use instancing. Both of those techniques would be vastly faster than canvas, since in canvas we'd have to call arc 10000 times, and one way or another it would have to generate points for 10000 circles every single frame instead of just once at the beginning, and it would have to call gl.drawXXX 10000 times instead of just once.
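A minimal sketch of the instancing idea in WebGL1, building on the circle example above. It assumes the ANGLE_instanced_arrays extension is available and that the vertex shader has been given a per-instance attribute named offset; both of those are assumptions for illustration, not part of the example above.

const ext = gl.getExtension('ANGLE_instanced_arrays');
const numCircles = 10000;
const offsetLoc = gl.getAttribLocation(program, 'offset'); // hypothetical per-instance attribute

// one (x, y) offset per circle, generated and uploaded once
const offsets = new Float32Array(numCircles * 2);
for (let i = 0; i < numCircles; ++i) {
  offsets[i * 2 + 0] = Math.random() * gl.canvas.width;
  offsets[i * 2 + 1] = Math.random() * gl.canvas.height;
}
const offsetBuf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, offsetBuf);
gl.bufferData(gl.ARRAY_BUFFER, offsets, gl.STATIC_DRAW);
gl.enableVertexAttribArray(offsetLoc);
gl.vertexAttribPointer(offsetLoc, 2, gl.FLOAT, false, 0, 0);
ext.vertexAttribDivisorANGLE(offsetLoc, 1); // advance once per circle, not per vertex

// every frame: all 10000 circles in a single draw call
ext.drawArraysInstancedANGLE(gl.TRIANGLES, 0, numEdgePoints * 3, numCircles);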
Of course the converse is that canvas is easy. Drawing the circle took 3 lines of code. In WebGL, because you need to set up and write shaders, it probably takes at least 60 lines of code. In fact the example above is about 60 lines, not including the code to compile and link shaders (~10 lines). On top of that, canvas supports transforms, patterns, gradients, masks, etc., all options we'd have to add with lots more lines of code in WebGL. So canvas basically trades speed for ease of use relative to WebGL.
Canvas does not execute a hardware pipeline that turns sets of vertices and indices into triangles and then textures and lights them, the way OpenGL/WebGL does. That is the root cause of the speed difference: the Canvas counterparts to those stages are all done on the CPU, with only the final rendering sent to the graphics hardware. The difference is particularly evident when a massive number of vertices is synthesized/animated on Canvas versus WebGL.
We are also on the cusp of the public announcement of the modern replacement for OpenGL: Vulkan, whose remit includes exposing general-purpose compute in a more approachable way than OpenCL/CUDA, as well as baking in the use of multi-core processors, which might just shift Canvas-like processing onto hardware.
Let's say your Starling display-list is as follows:
Stage
|___MainApp
|______Canvas (filter's target)
Then, you decide your MainApp should be rotated 90 degrees and offset a bit:
mainApp.rotation = Math.PI * 0.5;
mainApp.x = stage.stageWidth;
But all of a sudden, the filter keeps applying itself to the target (canvas) at the angle it was originally (as if the MainApp were still at 0 degrees).
(notice in the GIF how the Blur's strong horizontal value continues to only apply horizontally although the parent object turned 90 degrees).
What would need to be changed to apply the filter to the target object before it gets its parent's transform? That way (I'm assuming) the filter's result would be transformed by the parent objects.
Any guess as to how this could be done?
https://github.com/bigp/StarlingShaderIssue
(PS: the filter I'm actually using is custom-made, but this BlurFilter example shows the same issue I'm having with the custom one. If there's any patching-up to do in the shader code, at least it wouldn't necessarily have to be done on the built-in BlurFilter specifically).
I solved this myself with numerous trial and error attempts over the course of several hours.
Since I only needed the shader to run in either at 0 or 90 degrees (not actually tweened like the gif demo shown in the question), I created a shader with two specialized sets of AGAL instructions.
Without going into too much detail, the rotated version basically requires a few extra instructions to flip the x and y fields in the vertex and fragment shaders (either by moving them with mov or by directly calculating the mul or div result into the x or y field).
For example, compare the 0 deg vertex shader...
_vertexShader = [
"m44 op, va0, vc0", // 4x4 matrix transform to output space
"mov posOriginal, va1", // pass texture positions to fragment program
"mul posScaled, va1, viewportScale", // pass displacement positions (scaled)
].join("\n");
... with the 90 deg vertex shader:
_vertexShader = [
"m44 op, va0, vc0", // 4x4 matrix transform to output space
"mov posOriginal, va1", // pass texture positions to fragment program
//Calculate the rotated vertex "displacement" UVs
"mov temp1, va1",
"mov temp2, va1",
"mul temp2.y, temp1.x, viewportScale.y", //Flip X to Y, and scale with viewport Y
"mul temp2.x, temp1.y, viewportScale.x", //Flip Y to X, and scale with viewport X
"sub temp2.y, 1.0, temp2.y", //Invert the UV for the Y axis.
"mov posScaled, temp2",
].join("\n");
You can ignore the special aliases in the AGAL example; they're essentially variants (posOriginal = v0, posScaled = v1, and viewportScale = vc4 constants), and I do a string-replace to change them back to their respective registers and fields.
Just a human-readable trick I use to avoid going insane. \☻/
The part that I struggled with the most was calculating the correct scale to adjust the UVs (with proper detection of Stage/Viewport resizes and render-texture size shifts).
Eventually, this is what I came up with in the AS3 code:
var pt:Texture = _passTexture,
dt:RenderTexture = _displacement.texture,
notReady:Boolean = pt == null,
star:Starling = Starling.current;
var finalScaleX:Number, viewRatioX:Number = star.viewPort.width / star.stage.stageWidth;
var finalScaleY:Number, viewRatioY:Number = star.viewPort.height / star.stage.stageHeight;
if (notReady) {
finalScaleX = finalScaleY = 1.0;
} else if (isRotated) {
//NOTE: Notice how the native width is divided by the height, instead of the same side. Weird, but it works!
finalScaleY = pt.nativeWidth / dt.nativeHeight / _imageRatio / paramScaleX / viewRatioX; //Eureka!
finalScaleX = pt.nativeHeight / dt.nativeWidth / _imageRatio / paramScaleY / viewRatioY; //Eureka x2!
} else {
finalScaleX = pt.nativeWidth / dt.nativeWidth / _imageRatio / viewRatioX / paramScaleX;
finalScaleY = pt.nativeHeight / dt.nativeHeight / _imageRatio / viewRatioY / paramScaleY;
}
Hopefully these extracted pieces of code can be helpful to others with similar shader issues.
Good luck!
I am converting my sprite drawing function from canvas 2d to webgl.
As I am new to webgl (and openGL too), I learned from this tutorial http://games.greggman.com/game/webgl-image-processing/ and copied many lines from it, plus some other ones I found.
At last I got it working, but there are some issues. For some reason, some images are never drawn though others are; then I get big random black squares on the screen, and finally it makes Firefox crash...
I am tearing my hair out trying to solve these problems, but I am just lost... I have to ask for some help.
Could someone please have a look at my code and tell me where I made errors?
The vertex shader and fragment shader :
<script id="2d-vertex-shader" type="x-shader/x-vertex">
attribute vec2 a_position;
attribute vec2 a_texCoord;
uniform vec2 u_resolution;
uniform vec2 u_translation;
uniform vec2 u_rotation;
varying vec2 v_texCoord;
void main()
{
// Rotate the position
vec2 rotatedPosition = vec2(
a_position.x * u_rotation.y + a_position.y * u_rotation.x,
a_position.y * u_rotation.y - a_position.x * u_rotation.x);
// Add in the translation.
vec2 position = rotatedPosition + u_translation;
// convert the rectangle from pixels to 0.0 to 1.0
vec2 zeroToOne = a_position / u_resolution;
// convert from 0->1 to 0->2
vec2 zeroToTwo = zeroToOne * 2.0;
// convert from 0->2 to -1->+1 (clipspace)
vec2 clipSpace = zeroToTwo - 1.0;
gl_Position = vec4(clipSpace * vec2(1, -1), 0, 1);
// pass the texCoord to the fragment shader
// The GPU will interpolate this value between points
v_texCoord = a_texCoord;
}
</script>
<script id="2d-fragment-shader" type="x-shader/x-fragment">
precision mediump float;
// our texture
uniform sampler2D u_image;
// the texCoords passed in from the vertex shader.
varying vec2 v_texCoord;
void main()
{
// Look up a color from the texture.
gl_FragColor = texture2D(u_image, v_texCoord);
}
</script>
I use several layered canvases to avoid wasting resources redrawing the big background and foreground every frame when they never change. My canvases are in liste_canvas[] and the contexts are in liste_ctx[]; c is the id ("background"/"game"/"foreground"/"infos"). Here is their creation code:
// Get A WebGL context
liste_canvas[c] = document.createElement("canvas") ;
document.getElementById('game_div').appendChild(liste_canvas[c]);
liste_ctx[c] = liste_canvas[c].getContext('webgl',{premultipliedAlpha:false}) || liste_canvas[c].getContext('experimental-webgl',{premultipliedAlpha:false});
liste_ctx[c].viewport(0, 0, game.res_w, game.res_h);
// setup a GLSL program
liste_ctx[c].vertexShader = createShaderFromScriptElement(liste_ctx[c], "2d-vertex-shader");
liste_ctx[c].fragmentShader = createShaderFromScriptElement(liste_ctx[c], "2d-fragment-shader");
liste_ctx[c].program = createProgram(liste_ctx[c], [liste_ctx[c].vertexShader, liste_ctx[c].fragmentShader]);
liste_ctx[c].useProgram(liste_ctx[c].program);
And here is my sprite drawing function.
My images are stored in a list too, sprites[], with a string name as id.
They store their origin, which is not necessarily their real center, as .orgn_x and .orgn_y.
function draw_sprite( id_canvas , d_sprite , d_x , d_y , d_rotation , d_scale , d_opacity )
{
if( id_canvas=="" ){ id_canvas = "game" ; }
if( !d_scale ){ d_scale = 1 ; }
if( !d_rotation ){ d_rotation = 0 ; }
if( render_mode == "webgl" )
{
c = id_canvas ;
// look up where the vertex data needs to go.
var positionLocation = liste_ctx[c].getAttribLocation(liste_ctx[c].program, "a_position");
var texCoordLocation = liste_ctx[c].getAttribLocation(liste_ctx[c].program, "a_texCoord");
// provide texture coordinates for the rectangle.
var texCoordBuffer = liste_ctx[c].createBuffer();
liste_ctx[c].bindBuffer(liste_ctx[c].ARRAY_BUFFER, texCoordBuffer);
liste_ctx[c].bufferData(liste_ctx[c].ARRAY_BUFFER, new Float32Array([
0.0, 0.0,
1.0, 0.0,
0.0, 1.0,
0.0, 1.0,
1.0, 0.0,
1.0, 1.0]), liste_ctx[c].STATIC_DRAW);
liste_ctx[c].enableVertexAttribArray(texCoordLocation);
liste_ctx[c].vertexAttribPointer(texCoordLocation, 2, liste_ctx[c].FLOAT, false, 0, 0);
// Create a texture.
var texture = liste_ctx[c].createTexture();
liste_ctx[c].bindTexture(liste_ctx[c].TEXTURE_2D, texture);
// Set the parameters so we can render any size image.
liste_ctx[c].texParameteri(liste_ctx[c].TEXTURE_2D, liste_ctx[c].TEXTURE_WRAP_S, liste_ctx[c].CLAMP_TO_EDGE);
liste_ctx[c].texParameteri(liste_ctx[c].TEXTURE_2D, liste_ctx[c].TEXTURE_WRAP_T, liste_ctx[c].CLAMP_TO_EDGE);
liste_ctx[c].texParameteri(liste_ctx[c].TEXTURE_2D, liste_ctx[c].TEXTURE_MIN_FILTER, liste_ctx[c].LINEAR);
liste_ctx[c].texParameteri(liste_ctx[c].TEXTURE_2D, liste_ctx[c].TEXTURE_MAG_FILTER, liste_ctx[c].LINEAR);
// Upload the image into the texture.
liste_ctx[c].texImage2D(liste_ctx[c].TEXTURE_2D, 0, liste_ctx[c].RGBA, liste_ctx[c].RGBA, liste_ctx[c].UNSIGNED_BYTE, sprites[d_sprite] );
// set the resolution
var resolutionLocation = liste_ctx[c].getUniformLocation(liste_ctx[c].program, "u_resolution");
liste_ctx[c].uniform2f(resolutionLocation, liste_canvas[c].width, liste_canvas[c].height);
// Create a buffer and put a single clipspace rectangle in it (2 triangles)
var buffer = liste_ctx[c].createBuffer();
liste_ctx[c].bindBuffer(liste_ctx[c].ARRAY_BUFFER, buffer);
liste_ctx[c].enableVertexAttribArray(positionLocation);
liste_ctx[c].vertexAttribPointer(positionLocation, 2, liste_ctx[c].FLOAT, false, 0, 0);
// then I calculate the coordinates of the four points of the rectangle
// taking their origin and scale into account
// I cut this part as it is large and has no importance here
// and at last, we draw
liste_ctx[c].bufferData(liste_ctx[c].ARRAY_BUFFER, new Float32Array([
topleft_x , topleft_y ,
topright_x , topright_y ,
bottomleft_x , bottomleft_y ,
bottomleft_x , bottomleft_y ,
topright_x , topright_y ,
bottomright_x , bottomright_y ]), liste_ctx[c].STATIC_DRAW);
// draw
liste_ctx[c].drawArrays(liste_ctx[c].TRIANGLES, 0, 6);
}
}
I did not find any way to port ctx.globalAlpha to WebGL, by the way. If someone knows how I could add it to my code, I would be thankful for that too.
Please help. Thanks.
I don't know why things are crashing, but here are a few random comments.
Only create buffers and textures once.
Currently the code is creating buffers and textures every time you call draw_sprite. Instead you should be creating them at initialization time just once and then using the created buffers and textures later. Similarly you should look up the attribute and uniform locations at initialization time and then use them when you draw.
It's possible Firefox is crashing because it's running out of memory, since you're creating new buffers and new textures every time you call draw_sprite.
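For example, a rough sketch of the "create once, reuse every frame" pattern might look like this (the texture cache and function names here are mine, for illustration; they are not from your code):

var gl = liste_ctx[c];
var textures = {};

// at init time: create the texture once per sprite image
function initSpriteTexture(name, image) {
  var tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
  textures[name] = tex;
}

// at draw time: only bind what was created at init
function useSpriteTexture(name) {
  gl.bindTexture(gl.TEXTURE_2D, textures[name]);
}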
I believe it's more common to make a single buffer with a unit square in it and then use matrix math to move that square where you want it. See http://games.greggman.com/game/webgl-2d-matrices/ for some help with matrix math.
If you go that route then you only need to call all the buffer related stuff once.
Even if you don't use matrix math you can still add translation and scale to your shader, then just make one buffer with a unit rectangle (as in
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
0, 0,
1, 0,
0, 1,
0, 1,
1, 0,
1, 1]), gl.STATIC_DRAW)
After that, just translate it to where you want it and scale it to the size you want it drawn.
In fact, if you go the matrix route it would be really easy to simulate the 2d context's matrix functions ctx.translate, ctx.rotate, ctx.scale, etc.
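For illustration, a minimal sketch of what that might look like, using the same [a, b, c, d, e, f] layout as ctx.setTransform (these helpers are hand-rolled for the example, not from the linked article):

// current transform, in ctx.setTransform order: [a, b, c, d, e, f]
var current = [1, 0, 0, 1, 0, 0];   // identity
var stack = [];

function save()    { stack.push(current.slice()); }
function restore() { current = stack.pop(); }

// current = current * m  (apply m in the current local space)
function multiply(m) {
  var a = current, b = m;
  current = [
    a[0] * b[0] + a[2] * b[1],
    a[1] * b[0] + a[3] * b[1],
    a[0] * b[2] + a[2] * b[3],
    a[1] * b[2] + a[3] * b[3],
    a[0] * b[4] + a[2] * b[5] + a[4],
    a[1] * b[4] + a[3] * b[5] + a[5]
  ];
}

function translate(x, y) { multiply([1, 0, 0, 1, x, y]); }
function rotate(angle) {
  var c = Math.cos(angle), s = Math.sin(angle);
  multiply([c, s, -s, c, 0, 0]);
}
function scale(sx, sy) { multiply([sx, 0, 0, sy, 0, 0]); }

At draw time you would expand current into a mat3 (or mat4) and upload it as a uniform, so the unit quad lands wherever the accumulated transform puts it.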
The code might be easier to follow, and type, if you pulled the context into a local variable.
Instead of stuff like
liste_ctx[c].bindBuffer(liste_ctx[c].ARRAY_BUFFER, buffer);
liste_ctx[c].enableVertexAttribArray(positionLocation);
liste_ctx[c].vertexAttribPointer(positionLocation, 2, liste_ctx[c].FLOAT, false, 0, 0);
You could do this
var gl = liste_ctx[c];
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(positionLocation, 2, gl.FLOAT, false, 0, 0);
Storing things on the context is going to get tricky
This code
liste_ctx[c].vertexShader = createShaderFromScriptElement(liste_ctx[c], "2d-vertex-shader");
liste_ctx[c].fragmentShader = createShaderFromScriptElement(liste_ctx[c], "2d-fragment-shader");
liste_ctx[c].program = createProgram(liste_ctx[c], [liste_ctx[c].vertexShader, liste_ctx[c].fragmentShader]);
Makes it look like you're only going to have a single vertex shader, a single fragment shader, and a single program. Maybe you are, but it's pretty common in WebGL to have several shaders and programs.
For globalAlpha first you need to turn on blending.
gl.enable(gl.BLEND);
And you need to tell it how to blend. To be the same as the canvas 2d context you need to use pre-multiplied alpha math, so
gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA);
Then you need to multiply the color the shader draws by an alpha value. For example
<script id="2d-fragment-shader" type="x-shader/x-fragment">
precision mediump float;
// our texture
uniform sampler2D u_image;
// global alpha
uniform float u_globalAlpha;
// the texCoords passed in from the vertex shader.
varying vec2 v_texCoord;
void main()
{
// Look up a color from the texture.
vec4 color = texture2D(u_image, v_texCoord);
// Multiply the color by u_globalAlpha
gl_FragColor = color * u_globalAlpha;
}
</script>
Then you'll need to set u_globalAlpha. At init time, look up its location
var globalAlphaLocation = gl.getUniformLocation(program, "u_globalAlpha");
And at draw time set it
gl.uniform1f(globalAlphaLocation, someValueFrom0to1);
Personally I usually use a vec4 and call it u_colorMult
<script id="2d-fragment-shader" type="x-shader/x-fragment">
precision mediump float;
// our texture
uniform sampler2D u_image;
// colorMult
uniform vec4 u_colorMult;
// the texCoords passed in from the vertex shader.
varying vec2 v_texCoord;
void main()
{
// Look up a color from the texture.
gl_FragColor = texture2D(u_image, v_texCoord) * u_colorMult;
}
</script>
Then I can tint my sprites. For example, to make the sprite draw in red, just use
gl.uniform4fv(colorMultLocation, [1, 0, 0, 1]);
It also means I can easily draw in solid colors. Create a 1x1 pixel solid white texture. Anytime I want to draw in a solid color I just bind that texture and set u_colorMult to the color I want to draw in.
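A sketch of that 1x1 white texture trick (variable names here are illustrative, not from the original code):

// create the 1x1 solid white texture once at init time
var whiteTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, whiteTex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA, gl.UNSIGNED_BYTE,
              new Uint8Array([255, 255, 255, 255]));

// to draw a solid red quad: bind the white texture and set u_colorMult to red
gl.bindTexture(gl.TEXTURE_2D, whiteTex);
gl.uniform4fv(colorMultLocation, [1, 0, 0, 1]);
// ...then draw the quad as usual with gl.drawArrays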
I want to draw an image using an HTML5 canvas, translate the image, and then change the image but keep the transformations I've made. Is this possible?
Here's some pseudo-code to illustrate my problem:
// initially draw an image and translate it
var context = canvas.getContext("2d");
context.putImageData(someData, 0, 0);
context.translate(200, 10);
// then later somewhere else in code
// this should be drawn at 200/10
var context = canvas.getContext("2d");
context.putImageData(someOtherData, ?, ?);
I thought this would be possible with some save/restore calls, but I have not succeeded yet. How can I achieve this?
The problem here is that putImageData is not affected by the transformation matrix.
This is by the Spec's orders:
The current path, transformation matrix, shadow attributes, global alpha, the clipping region, and global composition operator must not affect the getImageData() and putImageData() methods.
What you can do is putImageData to an in-memory canvas, and then
inMemCtx.putImageData(imgdata, 0, 0);
context.drawImage(inMemoryCanvas, x, y)
onto your regular canvas, and the translation will be applied to the drawImage call.
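Putting it together, a sketch might look like this (imgdata and the 200/10 translation come from the question's pseudo-code):

// create the in-memory canvas once and size it to the ImageData
var inMemoryCanvas = document.createElement('canvas');
inMemoryCanvas.width = imgdata.width;
inMemoryCanvas.height = imgdata.height;
var inMemCtx = inMemoryCanvas.getContext('2d');

inMemCtx.putImageData(imgdata, 0, 0);     // ignores any transform, as per the spec

var context = canvas.getContext('2d');
context.translate(200, 10);
context.drawImage(inMemoryCanvas, 0, 0);  // drawImage honours the translation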
Put the image and its associated data in a custom data structure. For instance:
var obj = {
imgData: someData,
position: [0, 0],
translate: function (x, y) {
this.position[0] = x;
this.position[1] = y;
}
};
Now you can do successive updates to both imgData and its position. Use obj.position[0] and obj.position[1] as xy-coordinates when drawing.
obj.translate(200, 10);
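Then the later draw from the question's pseudo-code becomes something like this (a sketch; someOtherData and context are the question's names):

// later: swap the image data but keep the stored position
obj.imgData = someOtherData;
context.putImageData(obj.imgData, obj.position[0], obj.position[1]);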
Drawing a line on the HTML5 canvas is quite straightforward using the context.moveTo() and context.lineTo() functions.
I'm not quite sure if it's possible to draw a dot, i.e. color a single pixel. The lineTo function won't draw a single-pixel line (obviously).
Is there a method to do this?
For performance reasons, don't draw a circle if you can avoid it. Just draw a rectangle with a width and height of one:
ctx.fillRect(10,10,1,1); // fill in the pixel at (10,10)
If you are planning to draw a lot of pixels, it's a lot more efficient to use the image data of the canvas to do pixel drawing.
var canvas = document.getElementById("myCanvas");
var canvasWidth = canvas.width;
var canvasHeight = canvas.height;
var ctx = canvas.getContext("2d");
var canvasData = ctx.getImageData(0, 0, canvasWidth, canvasHeight);
// That's how you define the value of a pixel
function drawPixel (x, y, r, g, b, a) {
var index = (x + y * canvasWidth) * 4;
canvasData.data[index + 0] = r;
canvasData.data[index + 1] = g;
canvasData.data[index + 2] = b;
canvasData.data[index + 3] = a;
}
// That's how you update the canvas, so that your
// modifications are taken into consideration
function updateCanvas() {
ctx.putImageData(canvasData, 0, 0);
}
Then, you can use it this way:
drawPixel(1, 1, 255, 0, 0, 255);
drawPixel(1, 2, 255, 0, 0, 255);
drawPixel(1, 3, 255, 0, 0, 255);
updateCanvas();
For more information, you can take a look at this Mozilla blog post : http://hacks.mozilla.org/2009/06/pushing-pixels-with-canvas/
It seems strange, but although HTML5 canvas supports drawing lines, circles, rectangles, and many other basic shapes, it does not have anything suitable for drawing a basic point. The only way to do so is to simulate a point with whatever you have.
So basically there are 3 possible solutions:
draw point as a line
draw point as a polygon
draw point as a circle
Each of them has its drawbacks.
Line
function point(x, y, canvas){
canvas.beginPath();
canvas.moveTo(x, y);
canvas.lineTo(x+1, y+1);
canvas.stroke();
}
Keep in mind that we are drawing in the south-east direction, and if the point is at the edge there can be a problem. But you can also draw in any other direction.
Rectangle
function point(x, y, canvas){
canvas.strokeRect(x,y,1,1);
}
or, in a faster way, using fillRect, because the render engine will just fill one pixel:
function point(x, y, canvas){
canvas.fillRect(x,y,1,1);
}
Circle
One of the problems with circles is that it is harder for an engine to render them:
function point(x, y, canvas){
canvas.beginPath();
canvas.arc(x, y, 1, 0, 2 * Math.PI, true);
canvas.stroke();
}
The same idea as with the rectangle can be achieved with fill.
function point(x, y, canvas){
canvas.beginPath();
canvas.arc(x, y, 1, 0, 2 * Math.PI, true);
canvas.fill();
}
Problems with all these solutions:
it is hard to keep track of all the points you are going to draw.
when you zoom in, it looks ugly
If you are wondering what the best way to draw a point is, I would go with the filled rectangle. You can see my jsperf here with comparison tests.
In my Firefox this trick works:
function SetPixel(canvas, x, y)
{
canvas.beginPath();
canvas.moveTo(x, y);
canvas.lineTo(x+0.4, y+0.4);
canvas.stroke();
}
The small offset is not visible on screen, but it forces the rendering engine to actually draw a point.
The above claim that "If you are planning to draw a lot of pixels, it's a lot more efficient to use the image data of the canvas to do pixel drawing" seems to be quite wrong, at least with Chrome 31.0.1650.57 m, or depending on your definition of "a lot of pixels". I would have preferred to comment directly on the respective post, but unfortunately I don't have enough Stack Overflow points yet.
I think that I am drawing "a lot of pixels" and therefore I first followed the respective advice. For good measure, I later changed my implementation to a simple ctx.fillRect(..) for each drawn point; see http://www.wothke.ch/webgl_orbittrap/Orbittrap.htm
Interestingly it turns out the silly ctx.fillRect() implementation in my example is actually at least twice as fast as the ImageData based double buffering approach.
At least for my scenario it seems that the built-in ctx.getImageData/ctx.putImageData is in fact unbelievably SLOW. (It would be interesting to know the percentage of pixels that need to be touched before an ImageData based approach might take the lead..)
Conclusion: if you need to optimize performance, you have to profile YOUR code and act on YOUR findings.
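For what it's worth, a rough way to profile the two approaches yourself might look like this (a sketch; the canvas id and the 300x300 size are assumptions, and results will vary by browser and by how many pixels you touch):

var canvas = document.getElementById('myCanvas');
var ctx = canvas.getContext('2d');
var w = 300, h = 300;

// approach 1: one fillRect per pixel
var t0 = performance.now();
for (var i = 0; i < w * h; i++) {
  ctx.fillRect(i % w, (i / w) | 0, 1, 1);
}
var t1 = performance.now();

// approach 2: write into an ImageData buffer, then putImageData once
var imageData = ctx.getImageData(0, 0, w, h);
var data = imageData.data;
var t2 = performance.now();
for (var j = 0; j < w * h; j++) {
  data[j * 4 + 0] = 255;  // red
  data[j * 4 + 3] = 255;  // opaque
}
ctx.putImageData(imageData, 0, 0);
var t3 = performance.now();

console.log('fillRect: ' + (t1 - t0).toFixed(1) + ' ms, ' +
            'ImageData: ' + (t3 - t2).toFixed(1) + ' ms');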
This should do the job
//get a reference to the canvas
var ctx = $('#canvas')[0].getContext("2d");
//draw a dot
ctx.beginPath();
ctx.arc(20, 20, 10, 0, Math.PI*2, true);
ctx.closePath();
ctx.fill();
I want to draw a 3D ball or sphere on an HTML5 canvas. I want to understand the algorithm for drawing a 3D sphere. Can anyone share this with me?
You will need to model a sphere and give it varying colors so that as it rotates you can see that it is not just a circle but is actually being rendered.
Otherwise, a sphere in space with no point of reference around it looks like a flat circle if it is all one solid color.
To start with, you will want to try drawing a circle with rectangles, as that is the main primitive you have.
Once you understand how to do that, or have created a new primitive such as a triangle using the Path method and built a circle from it, you are ready to move to 3D.
3D is just a trick: you take your model, probably generated by an equation, determine which parts will be seen, flatten it, and then display it.
You will also want to change the color of each triangle based on how far it is from the light source, as well as on the angle of that part to the light source.
This is where you can start to do optimizations: if you do this pixel by pixel, you are ray tracing. If you use larger blocks, a point light source, and an object that rotates but does not move around, you can precalculate how the color changes for each triangle; then it is just a matter of changing colors to simulate the rotation.
The algorithm will depend on what simplifications you want to make, so as you gain experience come back and ask, showing what you have done so far.
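To make the "flatten then shade" idea concrete, here is a minimal sketch: project a 3D point to 2D with a simple perspective divide, and compute Lambert shading from a surface normal and a light direction. All the helper names and numbers here are illustrative, not part of the article quoted below.

// project a 3D point {x, y, z} onto a canvas of the given size
function project(p, width, height, viewerDistance) {
  var factor = viewerDistance / (viewerDistance + p.z);
  return {
    x: p.x * factor + width / 2,
    y: -p.y * factor + height / 2
  };
}

// brightness of a surface facing `normal`, lit from `lightDir`
// (both vectors assumed normalized; clamp to 0 so back faces go dark)
function lambert(normal, lightDir) {
  var d = normal.x * lightDir.x + normal.y * lightDir.y + normal.z * lightDir.z;
  return Math.max(0, d);
}

// e.g. screen position of the point (0, 0, 50) and brightness of a face
// pointing straight at a light coming from +z
var p2d = project({x: 0, y: 0, z: 50}, 300, 300, 200);
var shade = lambert({x: 0, y: 0, z: 1}, {x: 0, y: 0, z: 1}); // 1.0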
Here is an example of doing it, and below I copied the 3D sphere part, but please look at the entire article.
function Sphere3D(radius) {
this.point = new Array();
this.color = "rgb(100,0,255)";
this.radius = (typeof(radius) == "undefined") ? 20.0 : radius;
this.radius = (typeof(radius) != "number") ? 20.0 : radius;
this.numberOfVertexes = 0;
// Loop from 0 to 360 degrees with a pitch of 10 degrees ...
for(var alpha = 0; alpha <= 6.28; alpha += 0.17) {
var p = this.point[this.numberOfVertexes] = new Point3D();
p.x = Math.cos(alpha) * this.radius;
p.y = 0;
p.z = Math.sin(alpha) * this.radius;
this.numberOfVertexes++;
}
// Loop from 0 to 90 degrees with a pitch of 10 degrees ...
// (direction = 1)
// Loop from 0 to 90 degrees with a pitch of 10 degrees ...
// (direction = -1)
for(var direction = 1; direction >= -1; direction -= 2) {
for(var beta = 0.17; beta < 1.445; beta += 0.17) {
var radius = Math.cos(beta) * this.radius;
var fixedY = Math.sin(beta) * this.radius * direction;
for(var alpha = 0; alpha < 6.28; alpha += 0.17) {
p = this.point[this.numberOfVertexes] = new Point3D();
p.x = Math.cos(alpha) * radius;
p.y = fixedY;
p.z = Math.sin(alpha) * radius;
this.numberOfVertexes++;
}
}
}
}
You can try the three.js library, which abstracts away a lot of code from core WebGL programming. Include the three.js library in your HTML from the three.js site.
You can use the canvas renderer for the Safari browser; WebGL works in Chrome.
Please find the JS fiddle for the sphere below:
var camera, scene, material, mesh, geometry, renderer
function drawSphere() {
init();
animate();
}
function init() {
// camera
scene = new THREE.Scene()
camera = new THREE.PerspectiveCamera(50, window.innerWidth / window.innerHeight, 1, 1000);
camera.position.z = 300;
scene.add(camera);
// sphere object
var radius = 50,
segments = 10,
rings = 10;
geometry = new THREE.SphereGeometry(radius, segments, rings);
material = new THREE.MeshNormalMaterial({
color: 0x002288
});
mesh = new THREE.Mesh(geometry, material);
// scene
scene.add(mesh);
// renderer
renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
}
function animate() {
requestAnimationFrame(animate);
render();
}
function render() {
mesh.rotation.x += .01;
mesh.rotation.y += .02;
renderer.render(scene, camera);
}
// call the function
drawSphere();
Update: This code is quite old and limited. There are libraries for doing 3D spheres now: http://techslides.com/d3-globe-with-canvas-webgl-and-three-js/
Over ten years ago I wrote a Java applet to render a textured sphere by actually doing the math to work out where the surface of the sphere was in the scene (not using triangles).
I've rewritten it in JavaScript for canvas and I've got a demo rendering the earth as a sphere:
(source: haslers.info)
I get around 22 fps on my machine, which is about as fast as the Java version it was based on renders at, if not a little faster!
Now, it's a long time since I wrote the Java code, and it was quite obtuse, so I don't really remember exactly how it works; I've just ported it to JavaScript. However, this is from a slow version of the code, and I'm not sure if the faster version was due to optimisations in the Java methods I used to manipulate pixels or to speedups in the math it does to work out which pixel to render from the texture. I was also corresponding at the time with someone who had a similar applet that was much faster than mine, but again I don't know if any of the speed improvements they had would be possible in JavaScript, as they may have relied on Java libraries. (I never saw their code, so I don't know how they did it.)
So it may be possible to improve on the speed. But this works well as a proof of concept.
I'll have a go at converting my faster version some time to see if I can get any speed improvements into the JavaScript version.
Well, an image of a sphere will always have a circular shape on your screen, so the only thing that matters is the shading. This will be determined by where you place your light source.
As for algorithms, ray tracing is the simplest, but also the slowest by far — so you probably wouldn't want to use it to do anything very complicated in a <CANVAS> (especially given the lack of graphics acceleration available in that environment), but it might be fast enough if you just wanted to do a single sphere.
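If you only want one sphere, you can also fake the shading entirely with a radial gradient instead of ray tracing. A simple sketch (the light is assumed to be up and to the left; the offsets and colors are illustrative):

var ctx = document.querySelector('canvas').getContext('2d');
var cx = 100, cy = 100, r = 80;

// offset the gradient's inner circle toward the light source for a highlight
var grad = ctx.createRadialGradient(cx - r * 0.35, cy - r * 0.35, r * 0.1,
                                    cx, cy, r);
grad.addColorStop(0, '#9cf');   // highlight
grad.addColorStop(1, '#025');   // shadowed edge

ctx.fillStyle = grad;
ctx.beginPath();
ctx.arc(cx, cy, r, 0, Math.PI * 2);
ctx.fill();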