Why clone the canvas? Need an explanation of a tutorial technique

I've followed the tutorial here: http://hashrocket.com/blog/posts/using-tiled-and-canvas-to-render-game-screens to create a Tiled map on a canvas. I've made some improvements to the solution, but the rendering code remained the same:
var self = this,
    layer = self._canvas.canvas.cloneNode( false ); // clone the canvas element, not its pixels
layer = layer.getContext( "2d" );                   // from here on, "layer" is the clone's 2D context
Basically, I have a reference to the HTML canvas somewhere, and here I'm cloning it (just like in the tutorial). Next I run some logic and draw tiles on that clone:
layer.drawImage( ... );
Finally, after all the tile drawing is done, the clone is painted onto the main canvas:
self._canvas.drawImage( layer.canvas, 0, 0 );
My question is: why? When I ran the same algorithm directly on the main canvas instead of the layer, the rendered image was the same. Is there some logic behind it? The only thing that came to my mind is that, on a caught error, we could somehow avoid rendering the layer to the canvas. The tutorial only mentions that "we'll set up a scratch canvas to render to for a slight performance improvement".

You're drawing on a back buffer. This prevents the browser from trying to render the canvas to the screen while you are still drawing, and besides the potential performance improvement it also prevents potential flickering. (This applies mostly to double buffering, but this method is quite similar.)
About buffering and canvas

A) As the scratch layer is memory-only, the browser does not need to update its content on each monitor refresh; it is drawn to the main canvas only once, and the main canvas is then updated as a whole.
B) If you were to move things around (typical when tiling) by calling drawImage() with offsets/clipping on the same canvas that is both source and destination, the browser would have to create a temporary bitmap, copy the content over, copy it back to a different position, and finally destroy the temporary bitmap. With a separate scratch canvas, that overhead disappears.
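Putting the whole pattern together, here is a minimal sketch; the canvas id and the drawTiles() routine are illustrative, not from the tutorial:
var mainCtx = document.getElementById('game').getContext('2d');

// cloneNode(false) copies the element and its width/height attributes, but no pixels.
var scratch = mainCtx.canvas.cloneNode(false);
var scratchCtx = scratch.getContext('2d');

// Draw every tile into the in-memory buffer; nothing hits the screen yet.
drawTiles(scratchCtx);

// A single blit shows the finished layer all at once.
mainCtx.drawImage(scratch, 0, 0);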

Related

webgl: how to clone the canvas as-is

I have a webgl canvas. It is being continuously updated (simulation).
Now I want to freeze the current content of the canvas. I am continuously getting updates for the simulation, which I need to keep feeding to the visualizer. So my idea is to clone the exact state of the current WebGL canvas onto a new one and hide the current one, which continues to be updated. Then I can remove the frozen one, and the live simulation is shown again.
I haven't been able to achieve this, and the examples I've found on the web, like this one: Any way to clone HTML5 canvas element with its content?
only apply to 2D canvases.
A Google search didn't help much either.
This one:
how to copy another canvas data on the canvas with getContex('webgl')?
seemed promising, but I haven't been able to figure out how to apply it.
Cloning the canvas appears to me to be a heavy and weird solution.
The simplest way to achieve what you want is to prevent the frame buffer from being presented (swapped, then cleared) to the HTML canvas. To do so, simply avoid calling gl.clear, gl.drawArrays or gl.drawElements during your loop.
For example, suppose you have two functions, one running your simulation and the other doing your GL drawing:
function simulate() {
    // update the simulation here
}

function draw() {
    gl.clearColor(0.0, 0.0, 0.0, 1.0);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
    // do the drawing here
    gl.drawArrays(gl.TRIANGLES, 0, 12345);
    // etc...
}
From this point, if you want to "freeze" the canvas content, simply stop calling the draw function within your global loop. For example:
function loop() {
    simulate();
    if (!freeze) draw();
    requestAnimationFrame(loop);
}
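As a usage sketch, the flag could be flipped from a UI control; the button id here is hypothetical:
var freeze = false;
document.getElementById('freezeButton').addEventListener('click', function () {
    freeze = !freeze; // the simulation keeps running; only rendering pauses
});
loop();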
You may use other methods to achieve the same effect. For example, you can draw your scene to a texture, then draw the texture onto the canvas. This way, you can also control when the texture is cleared and redrawn, while it is still rendered to the canvas.
However, to implement the render-to-texture method, you will have to make some heavier modifications to your code: you'll need an additional shader to draw the texture on screen, and you'll have to spend some time working with framebuffer and renderbuffer objects.
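For reference, a minimal sketch of the render-to-texture setup, assuming an existing WebGL context gl and a drawScene() function; the full-screen-quad shader needed to display the texture afterwards is omitted:
// Create a texture the size of the drawing buffer to render into.
var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.drawingBufferWidth,
              gl.drawingBufferHeight, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

// Attach it to a framebuffer object. (A depth renderbuffer would also be
// needed here if the scene relies on depth testing.)
var fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                        gl.TEXTURE_2D, tex, 0);

// While the FBO is bound, draw calls render into the texture, not the canvas.
drawScene();

// Unbind to target the canvas again; drawing a full-screen quad that samples
// tex now shows the frozen frame, no matter how often the simulation updates.
gl.bindFramebuffer(gl.FRAMEBUFFER, null);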

Transparency issues with 3d particles and 3d models, libgdx

I have some strange issues with transparency and 3D particles. A short video to illustrate:
https://youtu.be/ZHKI1X3MjhY
As you can see, I have a 3D particle effect, fire burning. Inside it is a 3D model with no alpha blending, and it shows just fine. In the far distance there is a small skeleton (with blending and alpha test turned on) and it also shows just fine through the fire. Then I turn the camera to look at the warrior skeleton and it just disappears; instead you see what is behind him. I turn the camera again and the mage skeleton also vanishes, but you can see the trees a bit further away just fine, and they have the exact same settings for blending and alpha test. If I move the character, say, 20 yards away, it also starts showing through the fire effect.
So it seems to have something to do with the distance from the 3D particle effect...
The 3D particle batch is an extended BillboardParticleBatch, like this:
protected Renderable allocRenderable() {
    BlendingAttribute ba = new BlendingAttribute(GL20.GL_SRC_ALPHA, GL20.GL_ONE, 1f);
    Renderable r = super.allocRenderable();
    r.material = new Material(ba,
            // new DepthTestAttribute(GL20.GL_LEQUAL, 0.0f, 0.5f, true),
            // r.material.set(new FloatAttribute(FloatAttribute.AlphaTest, 0.0f),
            TextureAttribute.createDiffuse(texture));
    return r;
}
All the characters and trees are created with the following attributes:
if (alpha) {
    FloatAttribute floatAttribute = new FloatAttribute(FloatAttribute.AlphaTest, 0.5f);
    BlendingAttribute blendingAttribute = new BlendingAttribute(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA, 1f);
    for (int i = 0; i < bulletEntity.modelInstance.materials.size; i++) {
        bulletEntity.modelInstance.materials.get(i).set(blendingAttribute);
        bulletEntity.modelInstance.materials.get(i).set(floatAttribute);
    }
}
The models are drawn first, then the particles. I tried changing the order, but it made no difference. I have tried a lot of different setups for alpha test, depth test and the blending attribute, but I cannot find anything that works.
EDIT
I removed the blending attribute from the 3D models and now it looks as it should with regard to the particle effect. However, I need most materials on my character models to have blending set...
Does anyone have any clue why this happens when I enable blending?
I also tried using the BillboardParticleBatch without extending it, in case I had made a mistake there, but the effect is then even worse: all models with blending enabled appear in front of the particle effect even though they stand behind it.
ModelBatch sorts your render calls (check this link; really, it is a must-read) to avoid the kind of incorrect behavior you're experiencing. The actual sorting and rendering happen at the call to ModelBatch#end. By default it uses the DefaultRenderableSorter, which, because it isn't aware of your scene, might not exactly fit your needs.
The DefaultRenderableSorter tries to guess the location of each model based on its transformation matrix. Based on that location and the camera's location, it sorts the render calls so that:
First, all opaque objects are rendered front to back (whatever is behind an opaque object isn't visible anyway, so this reduces unneeded calls to the fragment shader).
Second, all transparent objects are rendered back to front (as soon as a transparent object is rendered, everything rendered after it that is behind it will not be visible).
To decide whether an object is transparent, the BlendingAttribute#blended member is used. (So you could, if you really wanted to, set that member to false to force the object to be treated (sorted) as if it were opaque.)
So the order in which you call ModelBatch#render is not necessarily the order in which the calls are actually executed. If you want to force rendering of whatever you've added to the batch so far, call ModelBatch#flush(). Of course, doing this a lot defeats some of the purpose of ModelBatch in the first place.
Instead, you could implement your own RenderableSorter, which has more knowledge about your scene and can therefore do a better job of sorting than the default implementation. (However, if flush() works for you and there's no other issue, then just flushing might be the easiest solution.)
That said, there are various other solutions you could try as well. For example, the regions of the particles that are fully transparent might as well be discarded by the fragment shader. Try adding a FloatAttribute.AlphaTest with a value of 0.5f to the particles. If that messes with your blending, then gradually reduce the value, e.g. towards 0.05f.
You could also add a DepthTestAttribute with depthMask set to false (new DepthTestAttribute(false)). This prevents the particles from writing to the depth buffer (but it might also cause other things to show in front of the particles).

HTML Canvas: How to address latency between user interaction and draw events?

I'm working on a game that allows players to click on a card and drag it across the screen in any direction. There are 64 overlapping 100x80 cards on an 800x800 canvas at any one time, and each one is a procedural draw. As some of you probably suspect, canvas doesn't like redrawing that entire surface for every move. To work around this, I'm using a buffer canvas to draw the card, then painting that buffer canvas onto the main canvas with drawImage(). To ensure there is no drawing buildup, I first clear the region of the canvas where I plan to drawImage() using clearRect().
The problem I'm experiencing is that, because the (x, y) coordinates used for clearRect() and drawImage() come from the location of the mouse, if the user moves too fast the coordinates will differ between the last drawImage() and the clearRect() of the next draw sequence. The result is residual draw from the last sequence, proportionate to how fast the card is being dragged.
I tried keeping the (x, y) coordinates from the last drawImage() and using those (instead of the current mouse location) for the clearRect() in the next sequence. However, now instead of residual draw we have residual clear (erase).
Any thoughts on how I can address this?
NOTE: My animation rate is handled using requestAnimationFrame, not setInterval().
Assuming your canvas is static during the drag-and-drop operation, a pretty easy way to get a good increase in performance is to cache the rendering.
In other words, when a drag-and-drop operation begins, save the current canvas into another one and stop all rendering except for what is involved in dragging the card. Now, whenever you need to repaint, simply repaint from the copy canvas. Since you're basically just copying from one canvas to another, it should be pretty fast.
On each processing cycle, you would take the last drawn position of the dragged card, fill that region with data from the copy, then redraw the dragged card at its new position.
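A minimal sketch of that cycle; mainCtx and cardCanvas are assumed names, and the 100x80 card size comes from the question:
var cache = document.createElement('canvas');
cache.width = mainCtx.canvas.width;
cache.height = mainCtx.canvas.height;
var cacheCtx = cache.getContext('2d');

var lastX = 0, lastY = 0; // where the card was drawn last frame

function beginDrag() {
    // Snapshot the scene once (ideally redrawn without the dragged card).
    cacheCtx.clearRect(0, 0, cache.width, cache.height);
    cacheCtx.drawImage(mainCtx.canvas, 0, 0);
}

function dragFrame(mouseX, mouseY) {
    // Repair the card's previous spot from the cache, then draw it anew.
    mainCtx.drawImage(cache, lastX, lastY, 100, 80, lastX, lastY, 100, 80);
    mainCtx.drawImage(cardCanvas, mouseX, mouseY);
    lastX = mouseX;
    lastY = mouseY;
}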
Another approach you could try is to use some kind of placeholder for the drag. For example, consider using a same-sized DIV which you display while dragging. This has the benefit of not touching the canvas at all while dragging, and should therefore also perform better.
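That placeholder idea could look roughly like this; the card size matches the question, everything else is an assumption:
var ghost = document.createElement('div'); // same-sized stand-in for the card
ghost.style.cssText = 'position:absolute; width:100px; height:80px; display:none;';
document.body.appendChild(ghost);

document.addEventListener('mousemove', function (e) {
    if (ghost.style.display === 'none') return;
    ghost.style.left = e.pageX + 'px';
    ghost.style.top = e.pageY + 'px';
});

function startDrag(cardImageUrl) {
    ghost.style.background = 'url(' + cardImageUrl + ')';
    ghost.style.display = 'block';
}

function endDrag() {
    ghost.style.display = 'none';
    // ...and only now paint the card at its final spot on the canvas.
}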

html canvas pixel buffer

I don't know the correct term, but in GTK I believe it was called a pixel buffer. You could copy all or part of the drawing area to a pixbuf, then later dump the pixbuf back to the screen rather than rendering the entire thing all over again. I am implementing a menu bar that drops down and occludes everything underneath it. However, it takes a few seconds to draw the entire canvas, so I was wondering whether there is a proper way to copy everything that will be occluded by the drop-down menu and then, when the menu is closed, redraw it to the screen. I imagine this can be done with the context.getImageData() function, but I have read that it is extremely inefficient.
It is true, getImageData() is far too inefficient. But there's a better starting point for specifically what you're trying to do:
The canvas context's drawImage() method accepts not only an image but also another canvas. So construct a temp canvas that is never added to the page:
// only exists in JavaScript, not on the page
var tempcanvas = document.createElement('canvas');
tempcanvas.width = normalCanvas.width;   // match the visible canvas, width to width
tempcanvas.height = normalCanvas.height; // and height to height
var tempcanvasContext = tempcanvas.getContext('2d');
Then you can call tempcanvasContext.drawImage(normalCanvas, 0, 0) to take a snapshot of the current canvas right before the drop-down menu is created. When the drop-down menu disappears, call normalcanvasContext.drawImage(tempcanvas, 0, 0) to draw it back.
I hope this gives a good general idea; it should be much faster than getImageData(). You can make it even more efficient by copying only the exact portion of the screen you need to redraw.
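For that last optimization, the nine-argument form of drawImage() copies just the rectangle the menu will occlude; the menu bounds below are hypothetical, the canvas names are the ones from above:
var menuX = 0, menuY = 0, menuW = 300, menuH = 200;

function openMenu() {
    // Save only the region the drop-down is about to cover.
    tempcanvasContext.drawImage(normalCanvas,
        menuX, menuY, menuW, menuH, // source rect on the visible canvas
        0, 0, menuW, menuH);        // destination rect on the temp canvas
    // ...then draw the drop-down menu over that region.
}

function closeMenu() {
    // Put the saved pixels back when the menu closes.
    normalcanvasContext.drawImage(tempcanvas,
        0, 0, menuW, menuH,
        menuX, menuY, menuW, menuH);
}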

Drag objects in canvas

I'm looking for an easy-to-use method of assigning drag behavior to multiple objects (images, shapes, etc.) on a canvas. Does anyone have a good way, or know of any libraries, for dragging objects around? Thanks
Creating your own mouse events takes a little work; ideally you should either create or use some kind of mini-library. I'm thinking of creating something like this in the near future. Anyway, I created a drag-and-drop demo on jsFiddle showing how to drag images; you can view it here.
You can create draggable images like this:
var myImage = new DragImage(sourcePath, x, y);
Let me know if you have any questions about this. Hope it helps.
EDIT
There was a bug when dragging multiple images. Here is a new version.
Another thing you might want to check out is EaselJS. It's sort of in the style of AS3: mouse events, dragging, etc.
The HTML canvas, unlike SVG or HTML, uses a non-retained (immediate-mode) graphics API. This means that when you draw something (like an image) to the canvas, no knowledge of that thing remains. The only thing left is pixels on the canvas, blended with all the previous pixels. You can't really drag a subset of pixels; for one thing, the pixels that were 'under' them are gone. What you would have to do is:
Track the mousedown event and see if it's in the right location for dragging. (You'll have to keep track of which images/objects are where and perform mouse hit detection.)
As the user drags the mouse, redraw the entire canvas from scratch, drawing the image in a new location each time based on the offset between the current mouse location and the initial mousedown location. (See the sketch below.)
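Here is a minimal sketch of those two steps; the canvas id and object layout are assumptions:
var canvas = document.getElementById('stage');
var ctx = canvas.getContext('2d');
var objects = []; // in back-to-front draw order; each is { img, x, y, w, h }
var dragged = null, offsetX = 0, offsetY = 0;

canvas.addEventListener('mousedown', function (e) {
    // Hit-test from front (end of array) to back.
    for (var i = objects.length - 1; i >= 0; i--) {
        var o = objects[i];
        if (e.offsetX >= o.x && e.offsetX <= o.x + o.w &&
            e.offsetY >= o.y && e.offsetY <= o.y + o.h) {
            dragged = o;
            offsetX = e.offsetX - o.x;
            offsetY = e.offsetY - o.y;
            break;
        }
    }
});

canvas.addEventListener('mousemove', function (e) {
    if (!dragged) return;
    dragged.x = e.offsetX - offsetX;
    dragged.y = e.offsetY - offsetY;
    redraw(); // repaint everything from scratch
});

canvas.addEventListener('mouseup', function () {
    dragged = null;
});

function redraw() {
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    for (var i = 0; i < objects.length; i++) {
        ctx.drawImage(objects[i].img, objects[i].x, objects[i].y);
    }
}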
Some alternatives that I might suggest:
SVG
Pure HTML
Multiple layered canvases, and drag one transparent canvas over another.
The HTML Canvas is good for a lot of things. User interaction with "elements" that appear to be distinct (but are not) is not one of those things.
Update: Here are some examples showing dragging on the canvas:
http://developer.yahoo.com/yui/examples/dragdrop/dd-region.html
http://www.redsquirrel.com/dave/work/interactivecanvas/
http://langexplr.blogspot.com/2008/11/using-canvas-html-element.html
None of these have created a separate library for tracking your shapes for you, however.
KineticJS is one such JavaScript library that you can use for animations.
Here's the link: html5canvastutorials
Canvas and jCanvas
You're definitely going to want to check out jCanvas. It's a super-clean wrapper for canvas that opens a lot of doors without adding code complexity. It makes things like this a breeze.
For example, here's a little sandbox of something close to what you're after, with dragging and redrawing built right in:
Drawing an Arrow Between Two Elements.
I ventured down the road of doing everything with DIVs and jQuery, but it always fell short on interactivity and quality.
Hope that helps others, like me.
JP
As you create new draggable objects, whether they are windows, cards, shapes or images, you can store them in an array of objects that are currently not selected. When you click on one or start dragging it, remove it from that "not selected" array. This way you can control what is allowed to move on a particular mousedown or mousemove event by checking whether an object is still unselected: while a shape is being dragged, the mouse pointer can pass over other shapes without them becoming dragged, because they are still in the "not selected" array.
Creating arrays of the objects you would like to drag also helps with hierarchy. Canvas draws the pixels belonging to the foremost object last, so if the objects are in an array you can simply move an object to a different element of the array, say from objectArray[20] to objectArray[4]; as you iterate through the array and draw the objects stored in its elements, you control which objects appear in front of and which behind.
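A small sketch of that reordering idea; objectArray and the draw loop are illustrative:
// Bring a clicked object to the front by moving it to the end of the array.
function bringToFront(objectArray, obj) {
    var i = objectArray.indexOf(obj);
    if (i !== -1) {
        objectArray.splice(i, 1); // remove it from its current slot
        objectArray.push(obj);    // drawn last, so it now appears on top
    }
}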