webgl: how to clone the canvas as-is

I have a webgl canvas. It is being continuously updated (simulation).
Now I want to freeze the current content of the canvas. I am continuously getting updates for the simulation which I need to keep feeding to the visualizer. So my idea of achieving this is to clone the exact state of the current webgl canvas on to a new one, and hide the current one, which continues to get updated. Then I can remove the frozen one and the live simulation is being shown again.
I haven't been able to achieve this, and examples I've found on the web, like this one: Any way to clone HTML5 canvas element with its content?
only apply to 2D canvases.
Google search didn't help much either.
This one:
how to copy another canvas data on the canvas with getContex('webgl')?
seemed promising but I haven't been able to figure out how to apply it.

Cloning the canvas seems to me a heavy and awkward solution.
The simplest way to achieve what you want is to prevent the frame buffer from being presented (swapped, then cleared) to the HTML canvas. To do so, simply avoid calling gl.clear, gl.drawArrays or gl.drawElements during your loop.
For example suppose you have two functions, one running your simulation, the other your GL draw:
function simulate() {
    // update simulation here
}

function draw() {
    gl.clearColor(0.0, 0.0, 0.0, 1.0);
    gl.clear(gl.COLOR_BUFFER_BIT|gl.DEPTH_BUFFER_BIT);
    // do drawing stuff here
    gl.drawArrays(gl.TRIANGLES, 0, 12345);
    // etc...
}
From this point, if you want to "freeze" the canvas content, you simply have to stop calling the "draw" function within your global loop. For example:
function loop() {
    simulate();
    if (!freeze) draw();
    requestAnimationFrame(loop);
}
You can use other methods to achieve the same effect. For example, you can render your scene to a texture, then draw that texture on the canvas. This way you also control when the texture is cleared and redrawn, while it is still being shown on the canvas.
However, implementing the render-to-texture method requires heavier modifications to your code: you will need an additional shader to draw the texture on screen, and you will have to spend some time working with framebuffer and renderbuffer objects.
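For reference, a minimal sketch of such a render-to-texture setup might look like the following. It assumes an existing WebGL context gl and a freeze flag; drawScene and drawFullScreenQuad are placeholders for your own scene pass and a simple textured-quad pass, not functions from any library:
var fbWidth = gl.canvas.width, fbHeight = gl.canvas.height;

// Texture that will receive the rendered scene
var sceneTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, sceneTexture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, fbWidth, fbHeight, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

// Depth renderbuffer so depth testing still works off-screen
var depthBuffer = gl.createRenderbuffer();
gl.bindRenderbuffer(gl.RENDERBUFFER, depthBuffer);
gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16, fbWidth, fbHeight);

// Framebuffer tying both together
var framebuffer = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, sceneTexture, 0);
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, depthBuffer);

function renderFrame() {
    if (!freeze) {
        // Render the scene into the texture only while not frozen
        gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
        drawScene(); // placeholder: your usual clear + draw calls
    }
    // Always present the (possibly stale) texture to the canvas
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    drawFullScreenQuad(sceneTexture); // placeholder: textured-quad pass
    requestAnimationFrame(renderFrame);
}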

Related

ActionScript 3: Drawing lines and bitmaps the right way

I'm just getting started with Flash/ActionScript and it seems to be the general consensus to create Sprites, Bitmaps, MovieClips, etc for various objects in order to represent pictures and other graphics.
However, the way I'm used to writing games and whatnot in other languages is to just loop repeatedly and each frame use something similar to the Graphics object to redraw the scene on the main Sprite. Is this how it's also done in Flash, and is it good practice? I can do it this way, but I'm wondering if there's some Flash ecosystem standard instead.
Here's an example of the way I'm used to:
public class MyApp extends Sprite
{
    public function MyApp()
    {
        var t:Timer = new Timer(20);
        t.addEventListener(TimerEvent.TIMER, update);
        t.start();
    }

    public function update(e:TimerEvent)
    {
        this.graphics.clear();
        //Rendering code and updating of objects.
    }
}
Is this acceptable?
Well, it depends.
In Flash, you have the option of relying on the Flash Player's vector rasterizer and rendering system, which will figure out all the redrawing for you. For instance, you can draw once to a Sprite then simply apply transforms to the sprite (set x, y, width, height, rotation, scaleX, scaleY, transform.matrix, transform.colorTransform, etc). Any of these objects could be a vector shape or a bitmap, and you can also use cacheAsBitmap and cacheAsBitmapMatrix for even more redraw optimization. The Flash Player will only redraw areas that change, on the frame that they change. I would consider this the traditional "Flash way".
Using the Graphics API is just a programmatic way to create vector shape data. Think of it as a code alternative to drawing in the Flash IDE. You could draw using Graphics once when the object is created, or if you needed to change the actual shape (ie not just the transform) you are correct that you would clear() and redraw it. However, ideally you would not be doing that a lot. If you find yourself redrawing the shape a lot, you might want to move to a pre-rendered sprite-sheet approach. In that case you use BitmapData to more quickly copy pre-drawn pixel data to a Bitmap object. This is generally faster than relying on the vector rasterizer to render your Graphics commands, as long as you use the fast pixel methods like copyPixels(). This is probably closer to the sort of rendering systems you are used to in other platforms that don't have a vector rasterizer built in.
Lastly, it's worth noting that the newest (and fastest) way to render objects in Flash is completely different than all that. It's called Stage3D and it uses a completely different rendering pipeline than the vector rasterizer. It's powered by GPU rendering APIs, so it's blazing fast (great for games) but has no vector rasterizing abilities. It can be used for both 3D and 2D. It's a bit more involved to work with, but there are some useful frameworks to make it easier, most notably the Starling 2D framework.
Hope that helps.
The "Flash way" is to use the ENTER_FRAME event instead of a timer to drive drawing. Do your calculations whenever you want, but let Flash draw your scene.
It works the same way in ActionScript.
public class App extends Sprite // adding "my" to identifier names doesn't add any information, so there's no real point in doing it
{
    public function App()
    {
        addEventListener(Event.ENTER_FRAME, update); // "each frame"
    }

    private function update(e:Event):void // not just parameters of functions have a type, but also their return value
    {
        graphics.clear(); // no need for "this" here
        //Rendering code and updating of objects.
    }
}
Keep in mind that the Graphics API is vector based and as such will only draw so many things before dropping performance.
Sprite is a general purpose container, not to be confused with what the term "sprite" stands for in a sprite sheet.
What you are probably referring to when saying "main Sprite" is some rectangular region of pixels that you can manipulate. In this case, a BitmapData is what you want, which is displayed with a Bitmap object.
BitmapData does not offer a graphics property. Essentially, drawing vectors and manipulating pixels are treated separately in AS3. If you want to draw a line in a BitmapData object, you'd first have to draw the line as a vector into a Sprite (or better, a Shape, if all you want to do is draw on it) using its graphics property, then use BitmapData's draw() to set its pixels according to the drawn line.

Why cloning canvas? Need explanation on tutorial

I've followed the tutorial here: http://hashrocket.com/blog/posts/using-tiled-and-canvas-to-render-game-screens to create a Tiled map on canvas. I've made some improvements to the solution, but the rendering stuff remained the same:
var self = this,
    layer = self._canvas.canvas.cloneNode( false );
layer = layer.getContext( "2d" );
Basically, I have a reference to the canvas element somewhere, and here I'm cloning it (just like in the tutorial). Next I do some logic and draw tiles on that clone:
layer.drawImage( ... );
Finally, after all the tile drawing is done, the clone is painted onto the main canvas:
self._canvas.drawImage( layer.canvas, 0, 0 );
My question is: why? When I ran the same algorithm on the main canvas instead of the layer, the rendered image was the same. Is there some logic behind it? The only thing that came to my mind is that we could somehow avoid rendering the layer to the canvas if an error is caught. The tutorial only mentions that "we'll set up a scratch canvas to render to for a slight performance improvement".
You're drawing on a back buffer. This prevents the browser from trying to render the canvas to screen while drawing, and aside from the potential performance improvement also prevents potential flickering. (This applies mostly to double buffering, but this method is quite similar)
About buffering and canvas
A) As the scratch layer is memory-only, there is no need for the browser to try to update its content on every monitor refresh; it is drawn only once to the main canvas, which is then updated as a whole.
B) If you moved things around (which is typical when tiling) using drawImage() with offsets/clipping from the canvas onto itself, the browser would have to create a temporary bitmap, copy the content over, copy it back to a different position, and finally destroy the temporary bitmap. Drawing from a separate scratch canvas avoids all of that.
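As an illustration, here is a minimal sketch of this scratch-canvas (back buffer) pattern, assuming a visible canvas with the id "game" and a drawTiles() function of your own that draws every tile (both names are placeholders):
// Visible canvas and its context
var mainCanvas = document.getElementById('game'); // assumed id
var mainCtx = mainCanvas.getContext('2d');

// Memory-only scratch canvas of the same size (never added to the DOM)
var scratch = mainCanvas.cloneNode(false);
var scratchCtx = scratch.getContext('2d');

function render() {
    // Do all the per-tile drawing off-screen
    scratchCtx.clearRect(0, 0, scratch.width, scratch.height);
    drawTiles(scratchCtx); // placeholder for your tile loop
    // Present the finished frame to the visible canvas in a single call
    mainCtx.drawImage(scratch, 0, 0);
}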

Kinetic JS scale shapes on mouse move

Is it possible to create an application using Kinetic.js where you create a shape, then scale it as you move the mouse around? I couldn't find anything in the Kinetic APIs, so I am mixing in "raw" jQuery. In particular, I use the $("canvas").last().mousemove() function, but it turns out this is actually very slow.
Here is the JSFiddle.
Any tips on making it faster?
I don't think Kinetic.js has support for layer.on("mousemove", fn); it seems to only apply to shapes.
Yes. You would do something like this:
$('#container').on('mousemove', function(evt) {
    shape.setScale(someValue);
    layer.batchDraw();
});
In other words, attach a mousemove listener to the container div element (the one you pass into the Kinetic stage), set the shape scale with the setScale() method, and use batchDraw() instead of draw() so that the draw operation hooks into the KineticJS animation engine for much better performance. Otherwise, if you use draw(), you'll be redrawing the entire layer on every mousemove event, which can fire many times per second depending on the browser.

Canvas, negative coordinates: Is it bad to draw paths that start off canvas, and continue on?

I only want to show a portion of a shape drawn on a canvas.
My line is essentially this, and it works fine:
ctx.fillRect( xPosition, rectHeight - offsetV , rectWidth, rectHeight);
The second variable there is going to be negative. So my question is: is it bad practice (or am I setting myself up for errors down the road) to draw a path that starts off the canvas (with a negative coordinate) and then continues onto the canvas?
No problem at all. If you have a very large number of drawn objects, you can (like GameAlchemist said) skip drawing the objects that lie off-canvas; see the sketch after the list below. If you use the canvas like an explorable map (zooming the context in/out, translating the whole context), that kind of culling can cost more than simply letting the canvas clip for you, and it is more complicated.
I have some experience with drawing objects outside the canvas. You can run into problems if you put calculations and other non-drawing work inside the draw function.
Important:
- Keep the canvas draw function clean (drawing code only).
- If your app has no need for constant updates, call the update only when needed.
- Clear the canvas only within (0, 0, canvas.width, canvas.height).
- Set styles (stroke, fill, font, etc.) only when they actually need to change.
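For the culling mentioned above, here is a minimal sketch. It assumes the drawn objects are stored as plain objects with x, y, width and height fields (names chosen here for illustration only):
function drawVisible(ctx, shapes) {
    var cw = ctx.canvas.width, ch = ctx.canvas.height;
    for (var i = 0; i < shapes.length; i++) {
        var s = shapes[i];
        // Skip objects that lie entirely outside the canvas rectangle
        if (s.x + s.width < 0 || s.y + s.height < 0 || s.x > cw || s.y > ch) continue;
        ctx.fillRect(s.x, s.y, s.width, s.height);
    }
}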

Draw shapes on HTML5 Canvas...with video

I've been Googling around a bit for an answer and haven't found a definitive one either way: is it possible to play a video using an HTML5 canvas, and also allow the user to draw on this video? The use case, for some context, is to play a video on infinite loop so the user can draw multiple boxes over specific areas to indicate regions of interest.
As a bonus (:P), if I can figure out how to do this on its own, any hints as to how this could be done within Drupal? I'm already looking at the Canvas Field module, but if you have any hints on this point too (though the first one is the priority), that'd be awesome!
You can draw html5 video elements onto a canvas. The drawImage method accepts a video element in the first parameter just like an image element. This will take the current "frame" of the video element and render it onto the canvas. To get fluid playback of the video you will need to draw the video to the canvas repeatedly.
You can then draw on the canvas normally, making sure you redraw everything after each update of the video frame.
Here is a demo of video on canvas
Here is an in-depth look into video and the canvas
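To make that concrete, here is a minimal sketch. The element ids ("source", "overlay") and the box coordinates are just assumptions for the example:
var video = document.getElementById('source'); // assumed id of the <video> element
var canvas = document.getElementById('overlay'); // assumed id of a same-sized canvas
var ctx = canvas.getContext('2d');

function paint() {
    // Copy the current video frame onto the canvas
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    // Redraw the user's boxes on top of the fresh frame
    ctx.strokeStyle = 'red';
    ctx.strokeRect(40, 30, 120, 80); // example region of interest
    if (!video.paused && !video.ended) {
        requestAnimationFrame(paint);
    }
}

video.addEventListener('play', paint);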
I recently received this request from a client to provide this feature, and it had to be CMS-friendly. The technique involves three big ideas:
a drawing function
repeatedly calling upon the same drawing function
using requestAnimationFrame to paint the next frame
Assuming you have a video element already, you'd take the following steps:
Hide the video element
Create a canvas element whose height/width match the video element, and store it somewhere
Get the context of the canvas element with canvas.getContext('2d') and also store that somewhere
Create a drawing function
In that drawing function, use context.drawImage(src, x, y), where src is the edited version of the current frame of the video
In that drawing function, use recursion to call itself again
I can give you two examples of this being done (and usable for content management systems)
The first is here: https://jsfiddle.net/yywL381w/19/
A company called SDL makes a tool called Media Manager that hosts videos. What you see is a jQuery plugin that takes its parameters from data-* attributes, makes a request to the Media Manager REST API, creates a video, and adds effects based entirely on data-* attributes. That plugin could easily be tweaked to work with videos called from other sources. You can look at the repo for it for more details on usage.
Another example is here: http://codepen.io/paceaux/pen/egLOeR
That is not a jQuery plugin; it's an ES6 class instead. You can create an image/video and apply a cropping effect with this:
let imageModule = new ImageCanvasModule(module);
imageModule.createCanvas();
imageModule.drawOnCanvas();
imageModule.hideOriginal();
You'll observe, in the ImageCanvasModule class, this method:
drawFrame () {
    if (this.isVideo && this.media.paused) return false;
    let x = 0;
    let width = this.media.offsetWidth;
    let y = 0;

    this.imageFrames[this.module.dataset.imageFrame](this.backContext);
    this.backContext.drawImage(this.media, x, y, width, this.canvas.height);
    this.context.drawImage(this.backCanvas, 0, 0);

    if (this.isVideo) {
        window.requestAnimationFrame(()=>{
            this.drawFrame();
        });
    }
}
The class has created a second canvas to use for drawing. That canvas isn't visible; it's just there to save the browser some heartache.
The "manipulation" that is content manageable is this.imageFrames[this.module.dataset.imageFrame](this.backContext);
The "frame" is an attribute stored on the image/video (which could be output by a template in the CMS). This gets the name of the imageFrame and runs the matching function. It also sends in the context (so I can toggle between drawing on the back canvas or the main canvas if needed).
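For illustration only, such an imageFrames lookup might be shaped like this; the wedgeTop name comes from the sample markup at the end, but the drawing commands are hypothetical and not taken from the actual plugin:
// Hypothetical map of named frame-drawing functions, assigned to this.imageFrames
const imageFrames = {
    wedgeTop(context) {
        // Example manipulation: clip a wedge shape off the top of the frame
        context.beginPath();
        context.moveTo(0, 0);
        context.lineTo(context.canvas.width, 40);
        context.lineTo(context.canvas.width, 0);
        context.closePath();
        context.clip();
    }
};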
Then this.backContext.drawImage(this.media, x, y, width, this.canvas.height); draws the image onto the back context.
Finally, this appears on the main canvas via this.context.drawImage(this.backCanvas, 0, 0), where I take the back canvas and draw it onto the main canvas. So the visible canvas undergoes the fewest manipulations possible.
And at the end, because this is a video, we want to draw a new frame. So we have the function call itself:
if (this.isVideo) {
    window.requestAnimationFrame(()=>{
        this.drawFrame();
    });
}
This whole setup allows us to use the CMS to output data-* attributes containing the type of frame the user wants drawn around the image. The JavaScript then produces a "canvasified" version of that image or video. Sample markup might look like:
<video muted loop autoplay data-image-frame="wedgeTop">