Maximum number of canvases (used as layers)? - html

I am writing an HTML5 canvas app in JavaScript. I am using multiple canvas elements as layers to support animation without having to redraw the whole image every frame.
Is there a maximum number of canvas elements that I can layer on top of each other in this way (and still see an appropriate result on all of the HTML5 platforms, of course)?
Thank you.

I imagine you will hit a practical performance ceiling long before you hit any hard specified limit, which sits somewhere between several thousand and 2,147,483,647 depending on the browser and on what you're measuring (the number of elements allowed in the DOM, or the maximum allowable z-index).
This is correlated to another of my favorite answers to pretty much any question that involves the phrase "maximum number" - if you have to ask, you're probably Doing It Wrong™. Taking an approach that is aligned with the intended design is almost always just as possible, and avoids these unpleasant murky questions like "will my user's iPhone melt if I try to render 32,768 canvas elements stacked on top of each other?"

This is a question of the limits of the DOM, which are large. I expect you will hit a performance bottleneck before you hit a hard limit.
The key in your situation, I would say, is to prepare some simple benchmarks/tests that dynamically generate canvases (of arbitrary number), fill them with content, and add them to the DOM. You should be able to construct your tests in such a way that A) if there is a hard limit you will spot it (using identifiable canvas content or exception handling), or B) if there is a performance limit you will spot it (using profiling or timers). Then perform these tests on a variety of browsers to establish your practical "limit".
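For example, a bare-bones version of such a test could look like the following sketch (the container element, canvas size and fill colour are placeholders; swap in your real per-layer drawing code and add try/catch or profiling as needed):

function benchmarkLayers(layerCount) {
  var container = document.getElementById('stage'); // assumed container element
  var start = Date.now();
  for (var i = 0; i < layerCount; i++) {
    var canvas = document.createElement('canvas');
    canvas.width = 800;
    canvas.height = 600;
    canvas.style.position = 'absolute'; // stack the layers on top of each other
    canvas.style.left = '0';
    canvas.style.top = '0';
    canvas.style.zIndex = i;
    var ctx = canvas.getContext('2d');
    if (!ctx) {
      console.log('No 2d context at layer ' + i); // a hard limit would show up here
      break;
    }
    ctx.fillStyle = 'rgba(0, 128, 255, 0.05)';
    ctx.fillRect(0, 0, canvas.width, canvas.height); // identifiable content per layer
    container.appendChild(canvas);
  }
  console.log(layerCount + ' layers in ' + (Date.now() - start) + ' ms');
}

benchmarkLayers(100); // repeat with growing counts, on each target browser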
There are also great resources available here https://developers.facebook.com/html5/build/games/ from the Facebook HTML5 games initiative. Therein are links to articles and open source benchmarking tools that address and test different strategies similar to yours.

Related

Fast and responsive interactive charts/graphs: SVG, Canvas, other?

I am trying to choose the right technology to use for updating a project that basically renders thousands of points in a zoomable, pannable graph. The current implementation, using Protovis, is underperformant. Check it out here:
http://www.planethunters.org/classify
There are about 2000 points when fully zoomed out. Try using the handles on the bottom to zoom in a bit, and drag it to pan around. You will see that it is quite choppy and your CPU usage probably goes up to 100% on one core unless you have a really fast computer. Each change to the focus area calls a redraw to protovis which is pretty darn slow and is worse with more points drawn.
I would like to make some updates to the interface as well as change the underlying visualization technology to be more responsive with animation and interaction. From the following article, it seems like the choice is between another SVG-based library, or a canvas-based one:
http://www.sitepoint.com/how-to-choose-between-canvas-and-svg/
d3.js, which grew out of Protovis, is SVG-based and is supposed to be better at rendering animations. However, I'm dubious as to how much better and what its performance ceiling is. For that reason, I'm also considering a more complete overhaul using a canvas-based library like KineticJS. However, before I get too far into using one approach or another, I'd like to hear from someone who has done a similar web application with this much data and get their opinion.
The most important thing is performance, with a secondary focus on ease of adding other interaction features and programming the animation. There will probably be no more than 2000 points at once, with those small error bars on each one. Zooming in, out, and panning around need to be smooth. If the most recent SVG libraries are decent at this, then perhaps the ease of using d3 will outweigh the increased setup for KineticJS, etc. But if there is a huge performance advantage to using a canvas, especially for people with slower computers, then I would definitely prefer to go that way.
Example of app made by the NYTimes that uses SVG, but still animates acceptably smoothly:
http://www.nytimes.com/interactive/2012/05/17/business/dealbook/how-the-facebook-offering-compares.html . If I can get that performance and not have to write my own canvas drawing code, I would probably go for SVG.
I noticed that some users have used a hybrid of d3.js manipulation combined with canvas rendering. However, I can't find much documentation about this online or get in contact with the OP of that post. If anyone has any experience doing this kind of DOM-to-Canvas (demo, code) implementation, I would like to hear from you as well. It seems to be a good hybrid of being able to manipulate data and having custom control over how to render it (and therefore performance), but I'm wondering if having to load everything into the DOM is still going to slow things down.
I know that there are some existing questions that are similar to this one, but none of them exactly ask the same thing. Thanks for your help.
Follow-up: the implementation I ended up using is at https://github.com/zooniverse/LightCurves
Fortunately, drawing 2000 circles is a pretty easy example to test. So here are four possible implementations, two each of Canvas and SVG:
Canvas geometric zooming
Canvas semantic zooming
SVG geometric zooming
SVG semantic zooming
These examples use D3's zoom behavior to implement zooming and panning. Aside from whether the circles are rendered in Canvas or SVG, the other major distinction is whether you use geometric or semantic zooming.
Geometric zooming means you apply a single transform to the entire viewport: when you zoom in, circles become bigger. Semantic zooming in contrast means you apply transforms to each circle individually: when you zoom in, the circles remain the same size but they spread out. Planethunters.org currently uses semantic zooming, but it might be useful to consider other cases.
Geometric zooming simplifies the implementation: you apply a translate and scale once, and then all the circles are re-rendered. The SVG implementation is particularly simple, updating a single "transform" attribute. The performance of both geometric zooming examples feels more than adequate. For semantic zooming, you'll notice that D3 is significantly faster than Protovis. This is because it's doing a lot less work for each zoom event. (The Protovis version has to recalculate all attributes on all elements.) The Canvas-based semantic zooming is a bit more zippy than SVG, but SVG semantic zooming still feels responsive.
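For reference, the Canvas geometric-zooming variant boils down to something like this sketch (using the version-3 d3.behavior.zoom API of the linked examples; the selector, sizes and random data are placeholders):

var width = 960, height = 500;
var canvas = d3.select('#chart').append('canvas')
    .attr('width', width)
    .attr('height', height)
    .call(d3.behavior.zoom().scaleExtent([1, 8]).on('zoom', redraw))
    .node();
var context = canvas.getContext('2d');

var points = d3.range(2000).map(function() {
  return [Math.random() * width, Math.random() * height];
});

function redraw() {
  var translate = d3.event ? d3.event.translate : [0, 0];
  var scale = d3.event ? d3.event.scale : 1;
  context.clearRect(0, 0, width, height);
  context.save();
  // Geometric zoom: one transform for the whole viewport, then redraw everything.
  context.translate(translate[0], translate[1]);
  context.scale(scale, scale);
  points.forEach(function(p) {
    context.beginPath();
    context.arc(p[0], p[1], 2.5, 0, 2 * Math.PI);
    context.fill();
  });
  context.restore();
}

redraw();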
Yet there is no magic bullet for performance, and these four possible approaches don't begin to cover the full space of possibilities. For example, you could combine geometric and semantic zooming, using the geometric approach for panning (updating the "transform" attribute) and only redrawing individual circles while zooming. You could probably even combine one or more of these techniques with CSS3 transforms to add some hardware acceleration (as in the hierarchical edge bundling example), although that can be tricky to implement and may introduce visual artifacts.
Still, my personal preference is to keep as much in SVG as possible, and use Canvas only for the "inner loop" when rendering is the bottleneck. SVG has so many conveniences for development—such as CSS, data-joins and the element inspector—that it is often premature optimization to start with Canvas. Combining Canvas with SVG, as in the Facebook IPO visualization you linked, is a flexible way to retain most of these conveniences while still eking out the best performance. I also used this technique in Cubism.js, where the special case of time-series visualization lends itself well to bitmap caching.
As these examples show, you can use D3 with Canvas, even though parts of D3 are SVG-specific. See also this force-directed graph and this collision detection example.
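As a rough illustration of that combination (not the exact code of the linked examples), the following sketch lets D3 v3's force layout do the position computation while a plain canvas does all the drawing; the element id, node count and styling are placeholders:

var width = 960, height = 500;
var canvas = document.querySelector('#graph'); // assumed <canvas> element
canvas.width = width;
canvas.height = height;
var context = canvas.getContext('2d');

var nodes = d3.range(200).map(function(i) { return {index: i}; });
var links = d3.range(199).map(function(i) { return {source: i, target: i + 1}; });

var force = d3.layout.force()
    .size([width, height])
    .nodes(nodes)
    .links(links)
    .linkDistance(20)
    .charge(-30)
    .on('tick', tick)
    .start();

function tick() {
  context.clearRect(0, 0, width, height);
  context.strokeStyle = '#999';
  context.beginPath();
  links.forEach(function(d) {           // draw all links in one path
    context.moveTo(d.source.x, d.source.y);
    context.lineTo(d.target.x, d.target.y);
  });
  context.stroke();
  context.fillStyle = 'steelblue';
  nodes.forEach(function(d) {           // then draw the nodes on top
    context.beginPath();
    context.arc(d.x, d.y, 3, 0, 2 * Math.PI);
    context.fill();
  });
}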
I think that in your case the decision between canvas and SVG is not like a decision between »riding a horse« and »driving a Porsche«. For me it is more like deciding on the car's color.
Let me explain:
Assume that, in either framework, the operations
draw a star,
add a star and
remove a star
take linear time. If your choice of framework was a good one, these operations are a bit faster; otherwise a bit slower.
If you go on to assume that the framework itself is fast enough, it becomes obvious that the lack of performance is caused by the sheer number of stars, and handling that is not something any of these frameworks can do for you (at least none that I know of).
What I want to say is that the root of the problem leads to a classic problem of computational geometry, namely range searching, and to one of computer graphics, namely level of detail.
To solve your performance problem you need to implement a good preprocessor which can determine very quickly which stars to display, and which can perhaps cluster stars that are close together, depending on the zoom level. The only thing that keeps your view vivid and fast is keeping the number of stars to draw as low as possible.
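As a minimal sketch of that idea (framework-agnostic; the cell size, viewport shape and data format are made up for illustration), the preprocessing step could look like this:

// Bucket points into a screen-space grid and draw at most one representative
// per cell at the current zoom level.
function visiblePoints(points, viewport, cellSize) {
  var buckets = {};
  var result = [];
  points.forEach(function(p) {
    // Range search: skip anything outside the current viewport.
    if (p.x < viewport.x0 || p.x > viewport.x1 ||
        p.y < viewport.y0 || p.y > viewport.y1) return;
    // Level of detail: collapse points that fall into the same grid cell.
    var key = Math.floor(p.x / cellSize) + ':' + Math.floor(p.y / cellSize);
    if (!buckets[key]) {
      buckets[key] = true;
      result.push(p);
    }
  });
  return result;
}

// Zoomed out: large cells, few representatives. Zoomed in: small cells, more detail.
// var toDraw = visiblePoints(allStars, {x0: 0, x1: 960, y0: 0, y1: 500}, 8);

For real data a spatial index such as a quadtree makes the range search sub-linear, but the principle is the same: cull and cluster before you draw.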
Since you stated that the most important thing is performance, I would tend to use canvas, because it works without DOM operations. It also offers the opportunity to use WebGL, which increases graphics performance a lot.
BTW: did you check paper.js? It uses canvas, but emulates vector graphics.
PS: In this book you can find a very detailed discussion about graphics on the web, the technologies, and the pros and cons of canvas, SVG and DHTML.
I recently worked on a near-realtime dashboard (refresh every 5 seconds) and chose to use charts that render using canvas.
We tried Highcharts (an SVG-based JavaScript charting library) and CanvasJS (a canvas-based JavaScript charting library). Although Highcharts is a fantastic charting API and offers way more features, we decided to use CanvasJS.
We needed to display at least 15 minutes of data per chart (with the option to pick a range of up to two hours).
So for 15 minutes: 900 points (one data point per second) x 2 (line and bar combination chart) x 4 charts = 7,200 points total.
Using the Chrome profiler, with CanvasJS the memory never went above 30 MB, while with Highcharts memory usage exceeded 600 MB.
Also, with a refresh time of 5 seconds, CanvasJS rendering was a lot more responsive than Highcharts.
We used one timer (setInterval, 5 seconds) to make 4 REST API calls to pull the data from the back-end server, which connected to Elasticsearch. Each chart is updated as data is received via jQuery.post().
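For what it's worth, the polling setup boiled down to something like the following sketch (the endpoint URL, container id and response shape are placeholders, not our production code; the CanvasJS calls follow its usual "push dataPoints, then render()" pattern):

var chart = new CanvasJS.Chart('chartContainer', {
  data: [{ type: 'line', dataPoints: [] }]
});
chart.render();

setInterval(function() {
  // One of the four REST calls; the backend aggregates the data from Elasticsearch.
  $.post('/api/metrics/latest', function(response) {
    response.points.forEach(function(p) {
      chart.options.data[0].dataPoints.push({ x: new Date(p.time), y: p.value });
    });
    // Keep roughly 15 minutes (900 points per series) on screen.
    while (chart.options.data[0].dataPoints.length > 900) {
      chart.options.data[0].dataPoints.shift();
    }
    chart.render();
  });
}, 5000);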
That said, for offline reports I would go with Highcharts since it has a more flexible API.
There's also Zing Charts, which claims to use either SVG or canvas, but I haven't looked at it.
Canvas should be considered when performance is really critical, SVG when you need flexibility. Not that canvas frameworks aren't flexible, but it takes a lot more work for a canvas framework to get the same functionality as an SVG framework.
You might also look into Meteor Charts, which is built on top of the uber-fast KineticJS framework: http://meteorcharts.com/
I also found that when we print a page with SVG graphics to PDF, the resulting PDF still contains a vector-based image, while if you print a page with canvas graphics, the image in the resulting PDF file is rasterized.

Overlapping sounds and clipping in StandingWave3

I've been playing with the dynamic audio library standingwave 3. Almost the first thing one notices is that if one tries out the code samples in the developer's guide, namely this code:
// Create a chord of three simultaneous sine tones: A3, E4, A4.
var sequence:ListPerformance = new ListPerformance();
sequence.addSourceAt(0, new SineSource(new AudioDescriptor(), 0.1, 440));
sequence.addSourceAt(0, new SineSource(new AudioDescriptor(), 0.1, 660));
sequence.addSourceAt(0, new SineSource(new AudioDescriptor(), 0.2, 880));
// Play it back.
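// ('player' is assumed to have been created earlier, as in the guide's setup code.)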
var source:IAudioSource = new AudioPerformer(sequence);
player.play(source);
then one gets a really unpleasant sound and trace messages that read "AUDIO CLIPPING". The developer explains in one of the issue reports on github that you need to reduce the gain of samples when you mix them together to avoid this, and that there's no easy way to know dynamically how much reduction is needed.
My question is, how is it that standingwave2 seems to have dealt with this automatically? For instance, the code quoted above did not clip in SW2. Likewise, consider this SW2 example demo: if you increase the sustain and hold (the S/H sliders) and press one of the sequence buttons, multiple tones will overlap without clipping, even though the source doesn't show any apparent sign of changing the gain or the volume of the sine tones; they just get mixed together.
What's going on here - did SW2 have some way of dealing with this automagically, or is there some robust way of generally overlaying arbitrary numbers of sounds dynamically without causing clipping? Thanks!
As no activity seems likely here, I'll note for the ages that apparently SW2 simply returns sine sources at much less than full scale, but it turns out that if you combine enough sources you do get clipping. SW3 returns the sources at full scale so the clipping becomes apparent with fewer sources.
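To make that concrete, here is a plain JavaScript sketch (not StandingWave code) of why per-source gain reduction avoids the clipping:

function mixSines(frequencies, sampleRate, length, gainPerSource) {
  var out = new Float32Array(length); // mixed samples, which must stay within [-1, 1]
  frequencies.forEach(function(freq) {
    for (var i = 0; i < length; i++) {
      out[i] += gainPerSource * Math.sin(2 * Math.PI * freq * i / sampleRate);
    }
  });
  return out;
}

// The sum of three full-scale sources can easily exceed 1.0, which clips:
var clipped = mixSines([440, 660, 880], 44100, 44100, 1.0);
// Scaling each source by 1/N (or simply starting well below full scale, as SW2
// apparently did) keeps the mix inside [-1, 1]:
var clean = mixSines([440, 660, 880], 44100, 44100, 1 / 3);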

how many div's can you have before the dom slows and becomes unstable?

I am developing a jQtouch app and each request done via ajax creates a new div in the document for the loaded content. Only a single div is shown at any one time.
How many divs can I have before the app starts getting unresponsive and slow?
Anyone have any ideas on this?
EDIT: It's an iPad app running on Safari, and it would be fewer than 1000 divs with very basic content.
I've had tens of thousands, maybe even a hundred thousand divs, on screen at once.
Performance is either fine or bad, depending on:
Parsed from HTML, or generated dynamically in JavaScript?
Parsed from HTML means you have a LARGE HTML source, and this can make browsers hang. Generated in JS is surprisingly fast, even on Internet Explorer, which is the slowest of all browsers for JS.
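For example, the dynamic approach can look like the sketch below (class name and content are placeholders); building into a DocumentFragment and appending once also avoids a reflow per div:

function addDivs(count) {
  var fragment = document.createDocumentFragment();
  for (var i = 0; i < count; i++) {
    var div = document.createElement('div');
    div.className = 'panel';           // illustrative class name
    div.textContent = 'Panel ' + i;    // very basic content, as in the question
    fragment.appendChild(div);
  }
  document.body.appendChild(fragment); // a single DOM insertion
}

addDivs(1000);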
To be honest, if you really need an absolute answer to this question, then you might want to reconsider your design.
No answer given here will be right, as it depends upon many factors that are specific to your application. E.g. heavy vs. little CSS use, size of the divs, amount of actual graphics rendering required per div, target browser/platform, number of DOM event listeners etc..
Just because you can doesn't mean that you should! :-)
As others have said, there's really no answer.
However, in this talk about the Google Maps API version 3, the speaker brings up the number ten thousand several times, as a basic threshold for browser unhappiness.
http://code.google.com/apis/maps/documentation/javascript/
Without defining a particular environment, it's not possible to answer your question.
And even then, anything anyone tells you is just a guess. You need to do your own testing on real-world configurations with different browsers and hardware. You'll also need to establish some performance benchmarks to decide what "too slow" even means.
I've been able to add several thousand divs without a problem. Depends on what you'll be doing afterwards, of course, and the memory on the client machine. Everyone else is right about that.
As Harpo said, 10K is probably a good ceiling. At one time, I noticed speed problems starting at about 4K divs, but hardware has improved since then.
And, as Neil N said, adding the divs via scripting is better than having a huge HTML source.
And, to answer Harpo's comment, one way to "break it up" so that JS doesn't lock the page and produce a "page is running slowly" error is to call a timer at the end of each "add a div" routine, and the timer in turn calls your "add a div" function again.
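Something along these lines (the batch size and content are arbitrary):

function addDivsInChunks(total, batchSize) {
  var added = 0;
  function addBatch() {
    var fragment = document.createDocumentFragment();
    for (var i = 0; i < batchSize && added < total; i++, added++) {
      var div = document.createElement('div');
      div.textContent = 'item ' + added;
      fragment.appendChild(div);
    }
    document.body.appendChild(fragment);
    if (added < total) {
      setTimeout(addBatch, 0); // yield so the browser can render and handle events
    }
  }
  addBatch();
}

addDivsInChunks(10000, 200);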
Now, MY question is: is it possible to "paint" so that you don't need to add thousands of divs? This can be done with the canvas tag with some browsers, but I don't think it's possible with VML (the excanvas project) on IE. Or is it? I think VML "paints" by adding new elements to the DOM, at which point you may as well use DIVs, unless it's a simple shape.
Is it possible to alter the source of an image via scripting? (the image in the DOM, of course -- not the original image on the server.)

How much faster is it to use inline/base64 images for a web site than just linking to the hard file?

How much faster is it to use a base64/inline image to display images, as opposed to simply linking to the hard file on the server?
url(data:image/png;base64,.......)
I haven't been able to find any type of performance metrics on this.
I have a few concerns:
You no longer gain the benefit of caching
Isn't base64 A LOT larger in size than the PNG/JPEG file itself?
Let's define "faster" as: the time it takes for a user to see a fully rendered HTML web page.
'Faster' is a hard thing to answer because there are many possible interpretations and situations:
Base64 encoding will expand the image by a third, which will increase bandwidth utilization. On the other hand, including it in the file will remove another GET round trip to the server. So, a pipe with great throughput but poor latency (such as a satellite internet connection) will likely load a page with inlined images faster than if you were using distinct image files. Even on my (rural, slow) DSL line, sites that require many round trips take a lot longer to load than those that are just relatively large but require only a few GETs.
If you do the base64 encoding from the source files with each request, you'll be using up more CPU, thrashing your data caches, etc., which might hurt your server's response time. (Of course you can always use memcached or such to resolve that problem.)
Doing this will of course prevent most forms of caching, which could hurt a lot if the image is viewed often - say, a logo that is displayed on every page, which could normally be cached by the browser (or a proxy cache like squid or whatever) and requested once a month. It will also prevent the many many optimizations web servers have for serving static files using kernel APIs like sendfile(2).
Basically, doing this will help in certain situations, and hurt in others. You need to identify which situations are important to you before you can really figure out if this is a worthwhile trick for you.
I have done a comparison between two HTML pages containing 1800 one-pixel images.
The first page declares the images inline:
<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsQAAA7EAZUrDhsAAAANSURBVBhXYzh8+PB/AAffA0nNPuCLAAAAAElFTkSuQmCC">
In the second one, images reference an external file:
<img src="img/one-gray-px.png">
I found that when loading the same image multiple times, if it is declared inline the browser performs a request for each occurrence (I suppose it base64-decodes it once per image), whereas in the other scenario the image is requested once per document.
The document with inline images loads in about 250ms and the document with linked images does it in 30ms.
(Tested with Chromium 34)
The scenario of an HTML document with multiple instances of the same inline image doesn't make much sense a priori. However, I found that the jQuery lazyload plugin by default sets the src attribute of all the "lazy" images to an inline placeholder. So if the document contains lots of lazy images, a situation like the one described above can happen.
You no longer gain the benefit of caching
Whether that matters would vary according to how much you depend on caching.
The other (perhaps more important) thing is that if there are many images, the browser won't get them simultaneously (i.e. in parallel), but only a few at a time -- so the protocol ends up being chatty. If there's some network end-to-end delay, then many images divided by a few images at a time multiplied by the end-to-end delay per image results in a noticeable time before the last image is loaded.
Isn't base64 A LOT larger in size than the PNG/JPEG file itself?
The file format / image compression algorithm is the same, I take it, i.e. it's PNG.
Using Base64, each 8-bit character carries only 6 bits of data, so binary data is expanded by a ratio of 8 to 6, i.e. it becomes about 33% larger.
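A quick back-of-the-envelope check (the 10 KB figure is just an example):

// Base64 encodes every 3 bytes of input as 4 output characters.
function base64Length(byteCount) {
  return Math.ceil(byteCount / 3) * 4; // ignoring optional line breaks
}

var pngBytes = 10240;                      // a hypothetical 10 KB PNG
var inlineChars = base64Length(pngBytes);  // 13656 characters
console.log(inlineChars / pngBytes);       // ~1.33, i.e. about a third larger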
How much faster is it
Define 'faster'. Do you mean HTTP performance (see below) or rendering performance?
You no longer gain the benefit of caching
Actually, if you're doing this in a CSS file it will still be cached. Of course, any changes to the CSS will invalidate the cache.
In some situations this could be used as a huge performance boost over many HTTP connections. I say some situations because you can likely take advantage of techniques like image sprites for most stuff, but it's always good to have another tool in your arsenal!

How much more efficient is one big image rather than many small images. Facebook style

So I was looking at the Facebook HTML with Firebug, and I chanced upon this image,
and came to the conclusion that Facebook uses this one large image (with tricky image-positioning code) rather than many small ones for its graphical elements. Is this more efficient than storing many small images?
Can anybody give any clues as to why Facebook would do this?
These are called CSS sprites, and yes, they're more efficient - the user only has to download one file, which reduces the number of HTTP requests to load the page. See this page for more info.
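For illustration, using a sprite boils down to showing a fixed-size window onto the big image via background-position. The rules normally live in a stylesheet; setting them from JavaScript here just keeps the example self-contained, and "icons.png" with its 32x32 grid is hypothetical:

var icon = document.createElement('span');
icon.style.display = 'inline-block';
icon.style.width = '32px';
icon.style.height = '32px';
icon.style.background = 'url(icons.png) no-repeat';
icon.style.backgroundPosition = '-64px 0'; // shows the third 32px icon in the strip
document.body.appendChild(icon);

// A rollover just shifts the position within the already-downloaded image,
// so there is no extra request and no flicker:
icon.onmouseover = function() { icon.style.backgroundPosition = '-64px -32px'; };
icon.onmouseout  = function() { icon.style.backgroundPosition = '-64px 0'; };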
The problem with the pro-performance viewpoint is that it always seems to present the "Why" (performance), often without the "How", and never "Why Not".
CSS Sprites do have a positive impact on performance, for reasons that other posters here have gone into in detail. However, they do have a downside: maintainability; removing, changing, and particularly resizing images becomes more difficult - mostly because of the alterations that need to be made to the background-position-riddled stylesheet along with every change to the size of a sprite, or to the shape of the map.
I think it's a minority view, but I firmly believe that maintainability concerns should outweigh performance concerns in the vast majority of cases. If the performance is necessary, then go ahead, but be aware of the cost.
That said, the performance impact is massive - particularly when you're using rollovers and want to avoid that effect you get when you mouseover an image then the browser goes away to request the rollover. It's appropriate to refactor your images into a sprite map once your requirements have settled down - particularly if your site is going to be under heavy traffic (and certainly the big examples people have been pulling out - facebook, amazon, yahoo - all fit that profile).
There are a few cases where you can use them with basically no cost. Any time you're slicing an image, using a single image and background-position tags is probably cheaper. Any time you've got a standard set of icons - especially if they're of uniform size and unlikely to change. Plus, of course, any time when the performance really matters, and you've got the budget to cover the maintenance.
If at all possible, use a tool and document your use of it so that whoever has to maintain your sprites knows about it. http://csssprites.org/ is the only tool I've looked into in any detail, but http://spriteme.org/ looks seriously awesome.
The technique is dubbed "css sprites".
See:
What are the advantages of using CSS Sprites in web applications?
Performance of css sprites
How do CSS sprites speed up a web site?
Since other users have answered this already, here's how to do it, and another way is here.
Opening connections is more costly than simply continuing a transfer. Similarly, the browser only needs to cache one file instead of hundreds.
yet another resource: http://www.smashingmagazine.com/2009/04/27/the-mystery-of-css-sprites-techniques-tools-and-tutorials/
One of the major benefits of CSS sprites is that they add virtually zero server overhead and are handled entirely client side. A huge gain for no server-side performance hit.
Simple answer: you only have to "fetch" one image file, which is then "cut up" for the different views. If you used multiple images, that would be multiple files to download, which simply equates to additional time to download everything.
Cutting the large image up into "sprites" means a single HTTP request, and it also gives a flicker-free approach for "onmouseover" elements (if you reuse the same large image for the mouseover effect).
The CSS sprites technique is a method for reducing the number of image requests by using background positioning.
Best Practices for Speeding Up Your Web Site
CSS Sprites: Image Slicing’s Kiss of Death
Google also does it - I've written a blog post on it here: http://www.stevefenton.co.uk/Content/Blog/Date/200905/Blog/Google-Uses-Image-Sprites/
But the essence of it is that you make a single HTTP request for one big image, rather than 20 small HTTP requests.
If you watch the HTTP requests, they spend more time waiting to start downloading than actually downloading, so it's much faster to do it in one hit: chunky, not chatty!