I want to draw an image using an HTML5 canvas, translate it, and then change the image while keeping the transformations I've made. Is this possible?
Here's some pseudo-code to illustrate my problem:
// initially draw an image and translate it
var context = canvas.getContext("2d");
context.putImageData(someData, 0, 0);
context.translate(200, 10);
// then later somewhere else in code
// this should be drawn at (200, 10)
var context = canvas.getContext("2d");
context.putImageData(someOtherData, ?, ?);
I thought this would be possible with some save/restore calls, but I haven't succeeded yet. How can I achieve this?
The problem here is that putImageData is not affected by the transformation matrix.
This is what the spec mandates:
The current path, transformation matrix, shadow attributes, global alpha, the clipping region, and global composition operator must not affect the getImageData() and putImageData() methods.
What you can do is putImageData to an in-memory canvas and then drawImage that in-memory canvas onto your regular canvas; the translation will be applied to the drawImage call:
inMemCtx.putImageData(imgdata, 0, 0);
context.drawImage(inMemoryCanvas, x, y);
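A fuller minimal sketch of that approach (the variable names are illustrative, not from the original code):
// create an in-memory canvas the same size as the visible one
var inMemoryCanvas = document.createElement("canvas");
inMemoryCanvas.width = canvas.width;
inMemoryCanvas.height = canvas.height;
var inMemCtx = inMemoryCanvas.getContext("2d");
var context = canvas.getContext("2d");
context.translate(200, 10);
// putImageData ignores the transform, but that is fine here...
inMemCtx.putImageData(someOtherData, 0, 0);
// ...because drawImage does respect it, so this lands at (200, 10)
context.drawImage(inMemoryCanvas, 0, 0);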
Put the image and its associated data in a custom data structure. For instance:
var obj = {
    imgData: someData,
    position: [0, 0],
    translate: function (x, y) {
        this.position[0] = x;
        this.position[1] = y;
    }
};
Now you can do successive updates to both imgData and its position. Use obj.position[0] and obj.position[1] as xy-coordinates when drawing.
obj.translate(200, 10);
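For example, a redraw after that call might look like this (a sketch; putImageData itself ignores transforms, so the stored position is passed explicitly):
context.putImageData(obj.imgData, obj.position[0], obj.position[1]);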
If both use hardware acceleration (the GPU) to execute code, why is WebGL so much faster than Canvas?
I want to understand this at a low level: the chain from the code down to the processor.
What happens? Do Canvas/WebGL communicate directly with the drivers and then with the video card?
Canvas is slower because it's generic and therefore hard to optimize to the same level that you can optimize WebGL. Let's take a simple example, drawing a solid circle with arc.
Canvas actually runs on top of the GPU as well, using the same APIs as WebGL. So what does canvas have to do when you draw a circle? The minimum code to draw a circle in JavaScript using canvas 2d is
ctx.beginPath();
ctx.arc(x, y, radius, startAngle, endAngle);
ctx.fill();
You can imagine internally the simplest implementation is:
1. beginPath creates a buffer (gl.bufferData)
2. arc generates the points for triangles that make a circle and uploads them with gl.bufferData
3. fill calls gl.drawArrays or gl.drawElements
But wait a minute ... knowing what we know about how GL works, canvas can't generate the points at step 2, because if we call stroke instead of fill we need a different set of points for a solid circle (fill) than for an outline of a circle (stroke). So what really happens is something more like:
1. beginPath creates or resets some internal buffer
2. arc generates the points that make a circle into the internal buffer
3. fill takes the points in that internal buffer, generates the correct set of triangles into a GL buffer, uploads them with gl.bufferData, and calls gl.drawArrays or gl.drawElements
What happens if we want to draw 2 circles? The same steps are likely repeated.
Let's compare that to what we would do in WebGL. Of course in WebGL we'd have to write our own shaders (Canvas has its shaders as well). We'd also have to create a buffer and fill it with the triangles for a circle, (note we already saved time as we skipped the intermediate buffer of points). We then can call gl.drawArrays or gl.drawElements to draw our circle. And if we want to draw a second circle? We just update a uniform and call gl.drawArrays again skipping all the other steps.
const m4 = twgl.m4;
const gl = document.querySelector('canvas').getContext('webgl');
const vs = `
attribute vec4 position;
uniform mat4 u_matrix;
void main() {
gl_Position = u_matrix * position;
}
`;
const fs = `
precision mediump float;
uniform vec4 u_color;
void main() {
gl_FragColor = u_color;
}
`;
const program = twgl.createProgram(gl, [vs, fs]);
const positionLoc = gl.getAttribLocation(program, 'position');
const colorLoc = gl.getUniformLocation(program, 'u_color');
const matrixLoc = gl.getUniformLocation(program, 'u_matrix');
const positions = [];
const radius = 50;
const numEdgePoints = 64;
for (let i = 0; i < numEdgePoints; ++i) {
  const angle0 = (i    ) * Math.PI * 2 / numEdgePoints;
  const angle1 = (i + 1) * Math.PI * 2 / numEdgePoints;
  // make a triangle
  positions.push(
    0, 0,
    Math.cos(angle0) * radius,
    Math.sin(angle0) * radius,
    Math.cos(angle1) * radius,
    Math.sin(angle1) * radius,
  );
}
const buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(positions), gl.STATIC_DRAW);
gl.enableVertexAttribArray(positionLoc);
gl.vertexAttribPointer(positionLoc, 2, gl.FLOAT, false, 0, 0);
gl.useProgram(program);
const projection = m4.ortho(0, gl.canvas.width, 0, gl.canvas.height, -1, 1);
function drawCircle(x, y, color) {
  const mat = m4.translate(projection, [x, y, 0]);
  gl.uniform4fv(colorLoc, color);
  gl.uniformMatrix4fv(matrixLoc, false, mat);
  gl.drawArrays(gl.TRIANGLES, 0, numEdgePoints * 3);
}
drawCircle( 50, 75, [1, 0, 0, 1]);
drawCircle(150, 75, [0, 1, 0, 1]);
drawCircle(250, 75, [0, 0, 1, 1]);
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas></canvas>
Some devs might look at that and think canvas caches the buffer so it can just reuse the points on the 2nd draw call. It's possible that's true, but I kind of doubt it. Why? Because of the genericness of the canvas API. fill, the function that does all the real work, doesn't know what's in the internal buffer of points. You can call arc, then moveTo, then lineTo, then arc again, and then call fill. All of those points will be in the internal buffer of points when we get to fill.
const ctx = document.querySelector('canvas').getContext('2d');
ctx.beginPath();
ctx.moveTo(50, 30);
ctx.lineTo(100, 150);
ctx.arc(150, 75, 30, 0, Math.PI * 2);
ctx.fill();
<canvas></canvas>
In other words, fill always needs to look at all the points. Another thing: I suspect arc tries to optimize for size. If you call arc with a radius of 2 it probably generates fewer points than if you call it with a radius of 2000. It's possible canvas caches the points, but given the hit rate would likely be small it seems unlikely.
In any case, the point is WebGL lets you take control at a lower level, allowing you to skip steps that canvas can't skip. It also lets you reuse data that canvas can't reuse.
In fact if we know we want to draw 10000 animated circles we even have other options in WebGL. We could generate the points for 10000 circles which is a valid option. We could also use instancing. Both of those techniques would be vastly faster than canvas since in canvas we'd have to call arc 10000 times and one way or another it would have to generate points for 10000 circles every single frame instead of just once at the beginning and it would have to call gl.drawXXX 10000 times instead of just once.
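As a rough sketch of the instancing idea, using the ANGLE_instanced_arrays extension (WebGL 1) and assuming the shader above gained an attribute vec2 offset that it adds to the position; numCircles and the offsets data are also assumptions for illustration:
const ext = gl.getExtension('ANGLE_instanced_arrays');
// one x/y offset per circle, uploaded once
const offsets = new Float32Array(numCircles * 2); // filled with circle positions elsewhere
const offsetBuf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, offsetBuf);
gl.bufferData(gl.ARRAY_BUFFER, offsets, gl.STATIC_DRAW);
const offsetLoc = gl.getAttribLocation(program, 'offset'); // assumed attribute
gl.enableVertexAttribArray(offsetLoc);
gl.vertexAttribPointer(offsetLoc, 2, gl.FLOAT, false, 0, 0);
ext.vertexAttribDivisorANGLE(offsetLoc, 1); // advance once per instance, not per vertex
// draw every circle with a single call
ext.drawArraysInstancedANGLE(gl.TRIANGLES, 0, numEdgePoints * 3, numCircles);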
Of course the converse is that canvas is easy. Drawing the circle took 3 lines of code. In WebGL, because you need to set up and write shaders, it probably takes at least 60 lines of code. In fact the example above is about 60 lines, not including the code to compile and link shaders (~10 lines). On top of that, canvas supports transforms, patterns, gradients, masks, etc., all options we'd have to add with lots more lines of code in WebGL. So canvas is basically trading ease of use for speed over WebGL.
Canvas does not execute a pipeline that turns sets of vertices and indices into triangles which are then textured and lit entirely in hardware, as OpenGL/WebGL does. This is the root cause of the speed difference. The Canvas counterparts of those operations are all done on the CPU, with only the final rendering sent to the graphics hardware. The speed difference is particularly evident when a massive number of such vertices are synthesized/animated on Canvas versus WebGL.
We are also on the cusp of hearing the public announcement of the modern replacement for OpenGL: Vulkan, whose remit includes exposing general-purpose compute in a more pedestrian way than OpenCL/CUDA, as well as baking in use of multi-core processors, which might just shift Canvas-like processing onto hardware.
I suppose this doesn't work because canvas draws a bitmap of a vector (and a bitmap is not a path).
Even if it did work, the bitmap likely always has a rectangular perimeter.
Is there any way to leverage something like isPointInPath when using drawImage?
example:
The top canvas is drawn using drawImage and isPointInPath does not work.
The bottom canvas is drawn using arc and isPointInPath works.
a link to my proof
** EDIT **
I draw a circle on one canvas, and use isPointInPath to see if the mouse pointer is inside the circle (bottom canvas in my example).
I also "copy" the bottom canvas to the top canvas using drawImage. Notice that isPointInPath will not work on the top canvas (most likely due to reasons I mentioned above). Is there a work-around I can use for this that will work for ANY kind of path (or bitmap)?
A canvas context has this hidden thing called the current path. ctx.beginPath, ctx.lineTo etc create this path.
When you call ctx.stroke() or ctx.fill() the canvas strokes or fills that path.
Even after it is stroked or filled, the path is still present in the context.
This path is the only thing that isPointInPath tests.
If you want to test if something is in an image you have drawn or a rectangle that was drawn with ctx.fillRect(), that is not possible using built in methods.
Typically you'd want to use an is-point-in-rectangle function that you write yourself (or get from someone else).
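For example, a minimal point-in-rectangle test (a sketch; it assumes you track the drawn image's x, y, width and height yourself, since the canvas does not remember them for you):
function isPointInRect(px, py, rx, ry, rw, rh) {
    return px >= rx && px <= rx + rw && py >= ry && py <= ry + rh;
}
// e.g. after ctx.drawImage(img, 50, 60):
// isPointInRect(mouseX, mouseY, 50, 60, img.width, img.height)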
If you're looking for how to do pixel-perfect (instead of just the image rectangle) hit detection for an image there are various methods of doing that discussed here: Pixel perfect 2D mouse picking with Canvas
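One common variant of those methods (just a sketch, and it only works if the canvas is not tainted by a cross-origin image) is to test the alpha of the pixel under the cursor with getImageData:
function isOpaqueAt(ctx, x, y) {
    return ctx.getImageData(x, y, 1, 1).data[3] > 0;
}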
You could try reimplementing ctx.drawImage() to always draw a box behind the image itself, like so (JSFiddle example):
ctx.customDrawImage = function(image, x, y){
    this.drawImage(image, x, y);
    this.rect(x, y, image.width, image.height);
};
var img1 = new Image();
img1.onload = function(){
    var x = y = 0;
    ctx.drawImage(img1, x, y);
    console.log(ctx.isPointInPath(x + 1, y + 1)); // false: plain drawImage adds nothing to the path
    x = 1.25 * img1.width;
    ctx.customDrawImage(img1, x, y);
    console.log(ctx.isPointInPath(x + 1, y + 1)); // true: the rect was added to the path
};
img1.src = "myImage.png"; // example source; assigning src triggers onload
Note: you might get side effects like the rectangle appearing over the image, or bleeding through from behind if you are not careful.
For me, isPointInPath failed after the canvas was moved. So I used:
mouseClientX -= gCanvasElement.offsetLeft;
mouseClientY -= gCanvasElement.offsetTop;
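For context, a sketch of the kind of click handler this correction assumes (gCanvasElement, ctx and the path setup come from the surrounding code):
gCanvasElement.addEventListener("click", function (e) {
    var mouseClientX = e.clientX - gCanvasElement.offsetLeft;
    var mouseClientY = e.clientY - gCanvasElement.offsetTop;
    if (ctx.isPointInPath(mouseClientX, mouseClientY)) {
        // the click landed inside the current path
    }
});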
I had some more challenges because my canvas element could be rescaled. So first, when I draw the figures (arcs in my case), I save each one in an array together with a name and draw it:
if (this.coInit == false)
{
    let co = new TempCO();
    co.name = sensor.Name;
    co.path = new Path2D();
    co.path.arc(c.X, c.Y, this.radius, 0, 2 * Math.PI);
    this.coWithPath.push(co);
}
let coWP = this.coWithPath.find(c=>c.name == sensor.Name);
this.ctx.fillStyle = color;
this.ctx.fill(coWP.path);
Then in the mouse event, I loop over the items and check if the click event is in a path. But I also need to rescale the mouse coordinates according to the resized canvas:
getCursorPosition(event) {
    const rect = this.ctx.canvas.getBoundingClientRect();
    const x = ((event.clientX - rect.left) / rect.width) * this.canvasWidth;
    const y = ((event.clientY - rect.top) / rect.height) * this.canvasHeight;
    this.coWithPath.forEach(c => {
        if (this.ctx.isPointInPath(c.path, x, y))
        {
            console.log("arc is hit", c);
            // Switch light
        }
    });
}
So I get the current size of the canvas and rescale the point to the original size. Now it works!
This is what the TempCO class looks like:
export class TempCO
{
    path: Path2D;
    name: string;
}
Drawing a line on the HTML5 canvas is quite straightforward using the context.moveTo() and context.lineTo() functions.
I'm not quite sure if it's possible to draw a dot, i.e. color a single pixel. The lineTo function won't draw a single-pixel line (obviously).
Is there a method to do this?
For performance reasons, don't draw a circle if you can avoid it. Just draw a rectangle with a width and height of one:
ctx.fillRect(10,10,1,1); // fill in the pixel at (10,10)
If you are planning to draw a lot of pixels, it's a lot more efficient to use the image data of the canvas to do pixel drawing.
var canvas = document.getElementById("myCanvas");
var canvasWidth = canvas.width;
var canvasHeight = canvas.height;
var ctx = canvas.getContext("2d");
var canvasData = ctx.getImageData(0, 0, canvasWidth, canvasHeight);
// That's how you define the value of a pixel
function drawPixel(x, y, r, g, b, a) {
    var index = (x + y * canvasWidth) * 4;
    canvasData.data[index + 0] = r;
    canvasData.data[index + 1] = g;
    canvasData.data[index + 2] = b;
    canvasData.data[index + 3] = a;
}
// That's how you update the canvas, so that your
// modifications are taken into account
function updateCanvas() {
    ctx.putImageData(canvasData, 0, 0);
}
Then you can use it in this way:
drawPixel(1, 1, 255, 0, 0, 255);
drawPixel(1, 2, 255, 0, 0, 255);
drawPixel(1, 3, 255, 0, 0, 255);
updateCanvas();
For more information, you can take a look at this Mozilla blog post : http://hacks.mozilla.org/2009/06/pushing-pixels-with-canvas/
It may seem strange, but even though HTML5 canvas supports drawing lines, circles, rectangles and many other basic shapes, it does not have anything suitable for drawing a basic point. The only way to do so is to simulate a point with whatever you have.
So basically there are 3 possible solutions:
draw point as a line
draw point as a polygon
draw point as a circle
Each of them has their drawbacks.
Line
function point(x, y, canvas){
    canvas.beginPath();
    canvas.moveTo(x, y);
    canvas.lineTo(x + 1, y + 1);
    canvas.stroke();
}
Keep in mind that we are drawing in the south-east direction, and if the point is at the edge there can be a problem. But you can also draw in any other direction.
Rectangle
function point(x, y, canvas){
    canvas.strokeRect(x, y, 1, 1);
}
or, in a faster way, using fillRect, because the render engine will just fill one pixel:
function point(x, y, canvas){
    canvas.fillRect(x, y, 1, 1);
}
Circle
One of the problems with circles is that it is harder for an engine to render them:
function point(x, y, canvas){
    canvas.beginPath();
    canvas.arc(x, y, 1, 0, 2 * Math.PI, true);
    canvas.stroke();
}
The same idea as with the rectangle can be achieved with fill:
function point(x, y, canvas){
    canvas.beginPath();
    canvas.arc(x, y, 1, 0, 2 * Math.PI, true);
    canvas.fill();
}
Problems with all these solutions:
it is hard to keep track of all the points you are going to draw.
when you zoom in, it looks ugly
If you are wondering what the best way to draw a point is, I would go with a filled rectangle. You can see my jsperf here with comparison tests.
In my Firefox this trick works:
function SetPixel(canvas, x, y)
{
    canvas.beginPath();
    canvas.moveTo(x, y);
    canvas.lineTo(x + 0.4, y + 0.4);
    canvas.stroke();
}
The small offset is not visible on screen, but it forces the rendering engine to actually draw a point.
The claim above that "if you are planning to draw a lot of pixels, it's a lot more efficient to use the image data of the canvas to do pixel drawing" seems to be quite wrong, at least with Chrome 31.0.1650.57 m, and depending on your definition of "a lot of pixels". I would have preferred to comment directly on the respective post, but unfortunately I don't have enough Stack Overflow points yet.
I think that I am drawing "a lot of pixels", so I first followed the respective advice. For good measure I later changed my implementation to a simple ctx.fillRect(..) call for each drawn point; see http://www.wothke.ch/webgl_orbittrap/Orbittrap.htm
Interestingly it turns out the silly ctx.fillRect() implementation in my example is actually at least twice as fast as the ImageData based double buffering approach.
At least for my scenario it seems that the built-in ctx.getImageData/ctx.putImageData is in fact unbelievably SLOW. (It would be interesting to know the percentage of pixels that need to be touched before an ImageData based approach might take the lead..)
Conclusion: if you need to optimize performance, you have to profile YOUR code and act on YOUR findings.
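A rough way to profile your own case (just a sketch; the exact numbers will vary a lot by browser and scenario):
var N = 100000;
var t0 = performance.now();
for (var i = 0; i < N; i++) {
    ctx.fillRect((i * 7) % canvas.width, (i * 13) % canvas.height, 1, 1);
}
console.log("fillRect:", performance.now() - t0, "ms");

t0 = performance.now();
var data = ctx.getImageData(0, 0, canvas.width, canvas.height);
for (var i = 0; i < N; i++) {
    var idx = (((i * 7) % canvas.width) + ((i * 13) % canvas.height) * canvas.width) * 4;
    data.data[idx] = 255;     // red channel
    data.data[idx + 3] = 255; // fully opaque
}
ctx.putImageData(data, 0, 0);
console.log("ImageData:", performance.now() - t0, "ms");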
This should do the job
//get a reference to the canvas
var ctx = $('#canvas')[0].getContext("2d");
//draw a dot
ctx.beginPath();
ctx.arc(20, 20, 10, 0, Math.PI*2, true);
ctx.closePath();
ctx.fill();
I am using HTML5 canvas as follows:
Display an image that fills the canvas area.
Display a black text label over the image.
On click of the text label highlight it by drawing a filled red rect + white text.
I have that part all working fine. Now what I want to do is remove the red rect and restore the image background that was originally behind it. I'm new to canvas and have read a fair amount, however I can't see how to do this. That said I am sure it must be quite simple.
I think there are some ways...
Redraw all stuff after the click release
This is simple but not really efficient.
Redraw only the altered part
Use drawImage with 9 arguments to redraw only the altered part of the background image, then redraw the black text over it.
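For example, if the altered region is the rectangle at (20, 30) sized 160x40 and img is the background image drawn at the canvas origin at its natural size, a sketch would be:
// source rect in the image, then destination rect on the canvas
ctx.drawImage(img, 20, 30, 160, 40, 20, 30, 160, 40);
// ...then redraw the black text label on top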
Save image data before click and then restore it
This uses getImageData and putImageData of the 2D context. (Not sure that it's widely implemented though.)
Here the specification:
ImageData getImageData(in double sx, in double sy, in double sw, in double sh);
void putImageData(in ImageData imagedata, in double dx, in double dy, in optional double dirtyX, in double dirtyY, in double dirtyWidth, in double dirtyHeight);
So for instance if the altered part is in the rect from (20,30) to (180,70) pixels, simply do:
var ctx = canvas.getContext("2d");
var saved_rect = ctx.getImageData(20, 30, 160, 40);
// highlight the image part ...
// restore the altered part
ctx.putImageData(saved_rect, 20, 30);
Use two superposed canvases
The second canvas, positioned over the first, will hold the red rect and the white text, and will be cleared when you want to "restore" the original image.
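A minimal sketch, assuming two stacked canvas elements (the ids and coordinates are illustrative) where the overlay is absolutely positioned on top of the background via CSS:
var overlayCtx = document.getElementById("overlay").getContext("2d");
// highlight: draw the red rect + white text on the overlay only
overlayCtx.fillStyle = "red";
overlayCtx.fillRect(20, 30, 160, 40);
overlayCtx.fillStyle = "white";
overlayCtx.fillText("Label", 30, 50);
// "restore": just clear the overlay; the image on the canvas below is untouched
overlayCtx.clearRect(0, 0, overlayCtx.canvas.width, overlayCtx.canvas.height);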
For another Stack Overflow question I created an example showing how to save and restore a section of a canvas. In summary:
function copyCanvasRegionToBuffer( canvas, x, y, w, h, bufferCanvas ){
    if (!bufferCanvas) bufferCanvas = document.createElement('canvas');
    bufferCanvas.width  = w;
    bufferCanvas.height = h;
    bufferCanvas.getContext('2d').drawImage( canvas, x, y, w, h, 0, 0, w, h );
    return bufferCanvas;
}
function draw(e){
    ctx.drawImage(img, 0, 0);
    if(e){
        ctx.fillStyle = 'red';
        ctx.fillRect(5, 5, 50, 15);
        ctx.fillStyle = 'white';
    } else {
        ctx.fillStyle = 'black';
    }
    ctx.fillText('Label', 10, 17);
}
draw();
document.onclick=draw;
Is there a way to apply a colorTransform to a BitmapData in a circle rather than in a rectangle?
Instead of erasing rectangular parts of an image by reducing the alpha channel as in the code below, I'd like to do it in circles.
_bitmap.colorTransform(new Rectangle(mouseX-d/2, mouseY-d/2, d, d),
new ColorTransform(1, 1, 1, .5, 0, 0, 0, 1));
I do have some code which loops through the pixels, extracts the alpha value and uses setPixel, but it seems significantly slower than the colorTransform function.
Try creating a circle using the drawing API (flash.display.Graphics) and then drawing that onto the bitmap data with BlendMode.ERASE. That might solve your problem, if I understand it correctly.
var circle : Shape = new Shape;
circle.graphics.beginFill(0xffcc00, 1);
circle.graphics.drawEllipse(-50, -50, 100, 100);
// Create a transformation matrix for the draw() operation, with
// a translation matching the mouse position.
var mtx : Matrix = new Matrix();
mtx.translate(mouseX, mouseY);
// Draw circle at mouse position with the ERASE blend mode, to
// set affected pixels to alpha=0.
myBitmap.draw(circle, mtx, null, BlendMode.ERASE);
I'm not 100% sure that the ERASE blend mode works satisfyingly with the draw() command, but I can't see why it shouldn't. Please let me know how it works out!