I developed a small application to test CSS3 and translate3d. The idea is to render several DIVs moving randomly on the screen. It's a kind of particle system; I know I could probably use WebGL or Canvas for better performance, but I also want it to work smoothly on mobile browsers, hence I thought that DOM manipulation would perform better.
You will find the result of a couple of hours of work at this URL.
I'd like to reach the best performance possible to increase the number of DIVs.
But here is my problem: I have a "rendering issue" that I spotted when I used the Timeline in Chrome or Safari. From time to time the whole page is repainted, generating a small lag that is perceptible on Safari on iPhone and on Chrome on Android.
So if one of you is up for the challenge, don't hesitate; I tried many things but couldn't figure out how to avoid this expensive redraw.
BTW, if any of you have extra ideas to optimize this snippet, don't hesitate to reply.
Thanks
---------- UPDATE 1 ----------
Based on Ariya's advice I updated my code (url) and added another test using only top/left.
Based on the FPS counter provided by Chrome I can see that the FPS is more stable using the top/left properties, with almost the same framerate.
Do you have any idea how I could optimize the CSS3 version to get even better performance? I thought that CSS3 with GPU acceleration would be faster; I probably did something wrong.
---------- UPDATE 2 ----------
I updated my code to use requestAnimFrame and only fire it when I need to redraw.
And I found what was killing the performance: the gray gradient background that I defined in the CSS was being redrawn often.
However, top/left still seems better than CSS transitions :( from a pure performance point of view.
When looking at the Timeline profile in Google Chrome's Developer Tools, it's evident that there is a lot of style recalculation. The culprit is this particular line:
lastSheet.insertRule('@-webkit-keyframes '+keyframeName+' { ....
In other words, continuously changing the style sheet is expensive. Since the element animation in this example is just about moving elements around, rather than using keyframe-based animation I would recommend simplifying to a plain transition.
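To illustrate what that simplification looks like (a minimal sketch, not the original code; the class name and helper function are placeholders): declare the transition once in CSS, then just update the transform from JavaScript and let the browser interpolate.

    /* CSS: declare the transition once on the particle elements */
    .particle {
      position: absolute;
      -webkit-transition: -webkit-transform 0.5s linear;
      transition: transform 0.5s linear;
    }

    // JS: move a particle by updating its transform; the browser animates it
    function moveTo(el, x, y) {
      el.style.webkitTransform = 'translate3d(' + x + 'px,' + y + 'px,0)';
      el.style.transform = 'translate3d(' + x + 'px,' + y + 'px,0)';
    }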
Related
I am trying to choose the right technology to use for updating a project that basically renders thousands of points in a zoomable, pannable graph. The current implementation, using Protovis, performs poorly. Check it out here:
http://www.planethunters.org/classify
There are about 2000 points when fully zoomed out. Try using the handles on the bottom to zoom in a bit, and drag it to pan around. You will see that it is quite choppy and your CPU usage probably goes up to 100% on one core unless you have a really fast computer. Each change to the focus area calls a redraw to protovis which is pretty darn slow and is worse with more points drawn.
I would like to make some updates to the interface as well as change the underlying visualization technology to be more responsive with animation and interaction. From the following article, it seems like the choice is between another SVG-based library, or a canvas-based one:
http://www.sitepoint.com/how-to-choose-between-canvas-and-svg/
d3.js, which grew out of Protovis, is SVG-based and is supposed to be better at rendering animations. However, I'm dubious as to how much better and what its performance ceiling is. For that reason, I'm also considering a more complete overhaul using a canvas-based library like KineticJS. However, before I get too far into using one approach or another, I'd like to hear from someone who has done a similar web application with this much data and get their opinion.
The most important thing is performance, with a secondary focus on ease of adding other interaction features and programming the animation. There will probably be no more than 2000 points at once, with those small error bars on each one. Zooming in, out, and panning around need to be smooth. If the most recent SVG libraries are decent at this, then perhaps the ease of using d3 will outweigh the increased setup for KineticJS, etc. But if there is a huge performance advantage to using a canvas, especially for people with slower computers, then I would definitely prefer to go that way.
Here is an example of an app made by the NYTimes that uses SVG but still animates acceptably smoothly:
http://www.nytimes.com/interactive/2012/05/17/business/dealbook/how-the-facebook-offering-compares.html . If I can get that performance and not have to write my own canvas drawing code, I would probably go for SVG.
I noticed that some users have used a hybrid of d3.js manipulation combined with canvas rendering. However, I can't find much documentation about this online or get in contact with the OP of that post. If anyone has any experience doing this kind of DOM-to-Canvas (demo, code) implementation, I would like to hear from you as well. It seems to be a good hybrid of being able to manipulate data and having custom control over how to render it (and therefore performance), but I'm wondering if having to load everything into the DOM is still going to slow things down.
I know that there are some existing questions that are similar to this one, but none of them exactly ask the same thing. Thanks for your help.
Follow-up: the implementation I ended up using is at https://github.com/zooniverse/LightCurves
Fortunately, drawing 2000 circles is a pretty easy example to test. So here are four possible implementations, two each of Canvas and SVG:
Canvas geometric zooming
Canvas semantic zooming
SVG geometric zooming
SVG semantic zooming
These examples use D3's zoom behavior to implement zooming and panning. Aside from whether the circles are rendered in Canvas or SVG, the other major distinction is whether you use geometric or semantic zooming.
Geometric zooming means you apply a single transform to the entire viewport: when you zoom in, circles become bigger. Semantic zooming in contrast means you apply transforms to each circle individually: when you zoom in, the circles remain the same size but they spread out. Planethunters.org currently uses semantic zooming, but it might be useful to consider other cases.
Geometric zooming simplifies the implementation: you apply a translate and scale once, and then all the circles are re-rendered. The SVG implementation is particularly simple, updating a single "transform" attribute. The performance of both geometric zooming examples feels more than adequate. For semantic zooming, you'll notice that D3 is significantly faster than Protovis. This is because it's doing a lot less work for each zoom event. (The Protovis version has to recalculate all attributes on all elements.) The Canvas-based semantic zooming is a bit more zippy than SVG, but SVG semantic zooming still feels responsive.
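As a rough sketch of what the geometric SVG version boils down to (not the exact code from the linked examples; this assumes D3 v3's zoom behavior and a placeholder points array), the circles live in one group and only that group's transform changes on zoom:

    // Geometric zooming: one transform on a container <g>, circles untouched
    var svg = d3.select("body").append("svg").attr("width", 960).attr("height", 500);
    var layer = svg.append("g");

    layer.selectAll("circle")
        .data(points)                      // points = [{x: ..., y: ...}, ...]
      .enter().append("circle")
        .attr("cx", function(d) { return d.x; })
        .attr("cy", function(d) { return d.y; })
        .attr("r", 2.5);

    svg.call(d3.behavior.zoom().on("zoom", function() {
      layer.attr("transform",
          "translate(" + d3.event.translate + ")scale(" + d3.event.scale + ")");
    }));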
Yet there is no magic bullet for performance, and these four possible approaches don't begin to cover the full space of possibilities. For example, you could combine geometric and semantic zooming, using the geometric approach for panning (updating the "transform" attribute) and only redrawing individual circles while zooming. You could probably even combine one or more of these techniques with CSS3 transforms to add some hardware acceleration (as in the hierarchical edge bundling example), although that can be tricky to implement and may introduce visual artifacts.
Still, my personal preference is to keep as much in SVG as possible, and use Canvas only for the "inner loop" when rendering is the bottleneck. SVG has so many conveniences for development—such as CSS, data-joins and the element inspector—that it is often premature optimization to start with Canvas. Combining Canvas with SVG, as in the Facebook IPO visualization you linked, is a flexible way to retain most of these conveniences while still eking out the best performance. I also used this technique in Cubism.js, where the special case of time-series visualization lends itself well to bitmap caching.
As these examples show, you can use D3 with Canvas, even though parts of D3 are SVG-specific. See also this force-directed graph and this collision detection example.
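And a minimal sketch of the D3-with-Canvas pattern (again D3 v3 style, with the same placeholder data): keep the points in a plain array and redraw them on each zoom event rather than maintaining SVG elements.

    var canvas = d3.select("body").append("canvas")
        .attr("width", 960).attr("height", 500).node();
    var ctx = canvas.getContext("2d");

    function draw(translate, scale) {
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      ctx.save();
      ctx.translate(translate[0], translate[1]);
      ctx.scale(scale, scale);
      points.forEach(function(d) {          // points = [{x: ..., y: ...}, ...]
        ctx.beginPath();
        ctx.arc(d.x, d.y, 2.5 / scale, 0, 2 * Math.PI);  // keep dot size constant on screen
        ctx.fill();
      });
      ctx.restore();
    }

    d3.select(canvas).call(d3.behavior.zoom().on("zoom", function() {
      draw(d3.event.translate, d3.event.scale);
    }));

    draw([0, 0], 1);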
I think that in your case the decision between canvas and SVG is not like a decision between »riding a horse« and »driving a Porsche«. For me it is more like the decision about the car's color.
Let me explain:
Assume that, in whichever framework you pick, the operations
draw a star,
add a star and
remove a star
take linear time. So, if your choice of framework was good, it is a bit faster; otherwise, a bit slower.
If you go on to assume that the framework is fast, then it becomes obvious that the lack of performance is caused by the sheer number of stars, and handling them is something neither framework can do for you, at least none that I know of.
What I want to say is that the root of the problem leads to a basic problem of computational geometry, namely range searching, and to one of computer graphics: level of detail.
To solve your performance problem you need to implement a good preprocessor that can find, very quickly, which stars to display and that can perhaps cluster stars which are close together, depending on the zoom level. The only thing that keeps your view responsive and fast is keeping the number of stars to draw as low as possible.
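A minimal sketch of such a preprocessor, purely illustrative (the cell size and data shape are assumptions): bucket the stars into a screen-space grid and draw one representative per occupied cell, so the number of drawn points is bounded by the number of visible cells rather than by the size of the data set.

    // Bucket points into a grid whose cell size depends on the zoom level,
    // then return one representative (the centroid) per occupied cell.
    function cluster(points, cellSize) {
      var cells = {};
      points.forEach(function(p) {
        var key = Math.floor(p.x / cellSize) + "," + Math.floor(p.y / cellSize);
        var c = cells[key] || (cells[key] = { x: 0, y: 0, count: 0 });
        c.x += p.x; c.y += p.y; c.count += 1;
      });
      return Object.keys(cells).map(function(key) {
        var c = cells[key];
        return { x: c.x / c.count, y: c.y / c.count, count: c.count };
      });
    }

    // E.g. coarser cells when zoomed out, finer cells when zoomed in:
    // var visible = cluster(stars, baseCellSize / zoomScale);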
Since you stated that the most important thing is performance, I would tend to use canvas, because it works without DOM operations. It also offers the opportunity to use WebGL, which increases graphics performance a lot.
BTW: did you check paper.js? It uses canvas, but emulates vector graphics.
PS: In this book you can find a very detailed discussion about graphics on the web, the technologies, and the pros and cons of canvas, SVG and DHTML.
I recently worked on a near-realtime dashboard (refresh every 5 seconds) and chose to use charts that render using canvas.
We tried Highcharts (an SVG-based JavaScript charting library) and CanvasJS (a canvas-based JavaScript charting library). Although Highcharts is a fantastic charting API and offers way more features, we decided to use CanvasJS.
We needed to display at least 15 minutes of data per chart (with the option to pick a range of up to two hours).
So for 15 minutes: 900 points (one data point per second) × 2 (line and bar combination chart) × 4 charts = 7,200 points total.
Using the Chrome profiler, with CanvasJS the memory never went above 30 MB, while with Highcharts memory usage exceeded 600 MB.
Also, with a refresh time of 5 seconds, CanvasJS rendering was a lot more responsive than Highcharts.
We used one timer (setInterval, 5 seconds) to make 4 REST API calls to pull the data from the back-end server, which connected to Elasticsearch. Each chart is updated as data is received by jQuery.post().
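A minimal sketch of that polling pattern (the endpoints, chart objects and data format here are placeholders, not our actual code):

    // Poll four endpoints every 5 seconds and update each chart when its data arrives
    var endpoints = ["/api/cpu", "/api/memory", "/api/disk", "/api/network"];

    setInterval(function() {
      endpoints.forEach(function(url, i) {
        $.post(url, { range: "15m" }, function(data) {
          charts[i].options.data[0].dataPoints = data;  // CanvasJS-style data array
          charts[i].render();
        }, "json");
      });
    }, 5000);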
That said, for offline reports I would go with Highcharts, since it has a more flexible API.
There's also Zing charts, which claims to use either SVG or Canvas, but I haven't looked at it.
Canvas should be considered when performance is really critical, SVG when flexibility matters. Not that canvas frameworks aren't flexible, but it takes a lot more work for a canvas framework to get the same functionality as an SVG framework.
You might also look into Meteor Charts, which is built on top of the very fast KineticJS framework: http://meteorcharts.com/
I also found that when we print a page with SVG graphics to PDF, the resulting PDF still contains a vector-based image, while if you print a page with Canvas graphics, the image in the resulting PDF file is rasterized.
For our website, we are developing a "component" that would display images in a similar fashion to Time Machine on Mac OS X. So it will be many images on top of each other, positioned slightly differently and with a smooth animation as you scroll forward and backward.
We have a spike implementation with CSS3 animations but it's not very smooth in Firefox and IE9 is not supported at all (though we may live with it if the other options are even worse).
We are considering implementation in SVG or Canvas but don't have much experience with it so I thought we'd ask first. So:
Requirements:
It must be fast. The animation must be smooth and that is a hard requirement.
It should be supported in as many browsers as possible.
Required browsers are Chrome 20+, Firefox 14+ and IE10+.
We would very much like to have support for IE9 too but can live without it if absolutely necessary.
Opera is nice to have but not necessary.
Options and our current experience / opinion on them:
CSS3 - seems like the "appropriate" technology for the task, but unfortunately the implementation doesn't work so well. Maybe we have inefficiencies in our prototype code, but support also seems to vary quite a bit between browsers at the moment.
SVG - at least it's vector graphics / DOM elements but we don't have any experience with it.
Canvas - we hope it should perform well, as it is used even for games, but we can't quite imagine how all the pixel redrawing would work. Maybe we should use a library like processingjs?
Flash or other plugins - I happen to know Flash quite well and I know that the Time Machine-like effect would be quite an easy task there but we'd rather stay away from plugins at the moment.
Thanks for the advice.
If the size of the component does not have to be very large, but can be limited to say around 800x600 pixels, then it sounds like Canvas should be up to the job.
If you only draw (scaled) bitmaps to the Canvas then performance is very good in my experience, even on iPad2. Performance only really starts to suffer at higher resolutions (1920x1080 and above), so if you use it for a fullscreen feature you need to watch out! Also, fancy features such as drop shadow etc. can slow down performance considerably as well.
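To make that concrete, here is a minimal sketch of the drawing step for this kind of stacked, scaled-bitmap effect (the image list, sizes and scaling factors are placeholder assumptions): each frame clears the canvas and redraws every visible image with drawImage, scaled according to its depth in the stack.

    var canvas = document.getElementById("stack");     // e.g. an 800x600 canvas
    var ctx = canvas.getContext("2d");

    // images: array of preloaded Image objects, index 0 = front-most
    function render(offset) {                          // offset grows as the user scrolls
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      for (var i = images.length - 1; i >= 0; i--) {   // draw back-to-front
        var depth = i - offset;
        if (depth < 0) continue;                       // already scrolled past this image
        var img = images[i];
        var scale = 1 / (1 + depth * 0.15);            // smaller the deeper it sits
        var w = img.width * scale, h = img.height * scale;
        ctx.drawImage(img, (canvas.width - w) / 2,
                           (canvas.height - h) / 2 - depth * 30, w, h);
      }
    }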
Canvas has very few quirks between browsers, so it will almost certainly be less painful than using CSS3 or SVG in terms of getting it to work as expected across a wide range of browsers.
I would recommend whipping together a quick and dirty prototype with Canvas to see if it will meet your first requirement regarding performance.
If you decide to go with Canvas, I would definitely recommend using a library. Since you know Flash quite well you might want to take a look at EaselJS. It has a display list inspired by Flash, and the performance cost of using it is negligible in most cases. You also get basic events for interactivity. Also, if you go with EaselJS, it would be quite simple to port the code to Flash later if you decide to.
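For comparison, roughly the same scene expressed with EaselJS's Flash-like display list (a sketch that assumes preloaded images; the element id and offsets are placeholders):

    var stage = new createjs.Stage("stack");       // canvas element with id="stack"

    images.forEach(function(img, i) {
      var bmp = new createjs.Bitmap(img);
      bmp.x = 100 + i * 10;                         // offset each image slightly
      bmp.y = 80 - i * 10;
      bmp.scaleX = bmp.scaleY = 1 / (1 + i * 0.15);
      stage.addChildAt(bmp, 0);                     // deeper images go behind
    });

    createjs.Ticker.addEventListener("tick", function() {
      // update bmp positions / scales here to animate the scroll effect
      stage.update();
    });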
Are you looking for something like this? It uses SMIL animation so you'd have to integrate something like fakesmile to get IE support.
I'm troubleshooting a performance regression in a large webapp. I recently made some changes to remove an IFRAME and put the contents directly into the original DOM, to make performance better. Indeed, initial load time is better, but I've found a strange problem.
It seems that various layout changes (animation and scrolling) are MUCH slower after this iframe removal. I've narrowed it down enough to know it's not JavaScript.
I've removed all JavaScript that was running on timers and events.
I can see the slow performance when simply setting a class name on an element which has a 1-second CSS transition that changes its style.top and style.left. (It's already absolutely positioned.) This element animates to the new location very slowly; it seems like about 5-10 FPS, whereas with the IFRAME it was 40+ FPS.
So -- I'm wondering if there is some way to measure actual browser layout performance. I see this problem across the board on Safari, IE, Firefox and Chrome -- so any of these would be fine to use (though I prefer Firefox because the problem seems to be most pronounced there).
A good place to start: Speed Tracer and Page Speed. They will show you a lot of useful information about how your layout affects performance and what you can do to improve it. Although Speed Tracer is a Chrome extension, its data will largely reflect performance in other browsers too.
Here's a really interesting test for the browser itself:
Maze Solver: CSS3 Layout Performance Test
Performance on the web is multi-dimensional. In this test we focus on the browser layout engine to exercise the browser's handling of CSS 2.1 and CSS 3 layout constructs. These constructs are used to style HTML, and the layout engine is an important component of overall web browser performance.
Again, this test is for the browser itself, not your code, which, if I understand correctly, is what you're looking for.
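If you also want a rough in-page number rather than a profiler trace, a simple requestAnimationFrame counter run while the transition plays gives a crude FPS estimate (a minimal sketch; nothing here is specific to your markup):

    // Count how many frames the browser manages to paint per second
    var frames = 0, last = Date.now();

    (function tick() {
      frames++;
      var now = Date.now();
      if (now - last >= 1000) {
        console.log("FPS: " + frames);
        frames = 0;
        last = now;
      }
      requestAnimationFrame(tick);
    })();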
I'm a newbie in HTML5/CSS3/jQuery, and I'm making this (not finished yet):
http://catherinearnould.sio4.net/autres/kat/
The problem is that, because of the large canvas with particles, the animations are not as fluid as they could be.
So if you're bored, don't hesitate to have a look at my code and give me some advice to improve the fluidity ^^
Many thanks!
For one, using requestAnimationFrame() instead of setTimeout() is likely to make things smoother. See Paul Irish's blog post "requestAnimationFrame for smart animating".
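The usual pattern from that post looks roughly like this (drawParticles is a placeholder for your existing per-frame drawing code):

    // Shim along the lines of the one in Paul Irish's post: fall back to
    // setTimeout at ~60fps where requestAnimationFrame isn't available
    window.requestAnimFrame = (function() {
      return window.requestAnimationFrame ||
             window.webkitRequestAnimationFrame ||
             window.mozRequestAnimationFrame ||
             function(callback) { window.setTimeout(callback, 1000 / 60); };
    })();

    (function animate() {
      requestAnimFrame(animate);
      drawParticles();   // placeholder for your existing per-frame drawing code
    })();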
The big performance hits are most likely caused by live calculated/rendered CSS attributes such as transparency, shadows and rounded corners.
Also be aware that changes to DOM elements which cause reflow (such as animations) are costly; see http://code.google.com/speed/articles/reflow.html.
I see a big difference just after running this:
$('*').css({backgroundColor:'transparent', opacity:1, boxShadow:'none'});
If you can, replace all (semi-)transparent and rounded graphics with equivalent PNG images.
You could also think of using CSS3 transitions for some of your animations, adding and removing classes on the elements to change their styles rather than doing it with JavaScript (jQuery); use jQuery as a fallback for older browsers and IE (see the sketch after the links below).
http://www.w3.org/TR/css3-transitions/
http://net.tutsplus.com/tutorials/html-css-techniques/css-fundametals-css-3-transitions/
This gives the browser the power to do the rendering, and in some cases, like on iOS, you get hardware acceleration for the rendering.
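As a small illustration of that class-toggling approach (the class names and the feature-detection flag are placeholders, not from your code):

    /* CSS: the transition lives on the element, the end state on a class */
    .panel {
      -webkit-transition: opacity 0.4s ease, -webkit-transform 0.4s ease;
      transition: opacity 0.4s ease, transform 0.4s ease;
    }
    .panel.is-open {
      opacity: 1;
      -webkit-transform: translate3d(0, -20px, 0);
      transform: translate3d(0, -20px, 0);
    }

    // JS: toggle the class and let the browser animate; fall back to jQuery
    if (supportsTransitions) {                                  // placeholder feature test
      $('.panel').addClass('is-open');
    } else {
      $('.panel').animate({ opacity: 1, top: '-=20' }, 400);    // older browsers / IE
    }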
As for the canvas element, I have little experience with it, but I'm interested in the effect you're creating. That said, I think the massive canvas animation at the start is a bit much; there is so much going on already, maybe there is no need for that effect? Just my opinion as a user.
Just imagine building Google Maps for a large building floor plan with 3000 rooms.
I need to display up to 3000 rectangles (the best would be to also be able to render random polygons, but at this point, this is not the biggest issue). Each of them should have events attached to them such as mouseover and click that will have some effects on other dom elements on the page. I also need to be able to zoom in and out.
I know I can do it with SVG (Raphael.js), plain divs rendering or canvas.
I am wondering if anyone has specific recommendations to make for what I am trying to build. It needs to render fast enough (around 1 second or so) on the slowest browsers (IE8, Firefox 3.6 and hopefully IE7, even though I am not dreaming too much...).
Thanks for the help,
Nicolas.
PS: So far, I have found that rendering 3000 rectangles takes up to 7 seconds on IE8 with Raphael.js, which is rather slow. It also seems that rendering plain divs is up to 6 times faster on IE8.
3000 DOM objects with events attached is going to be very painful for some machines to handle. Generally once you get into the "thousands" range the performance of DOM-based solutions (divs, SVG) gets really bad. It is nigh impossible to get good loading times with that many DOM elements.
The performance of excanvas itself is also really bad. The second there is any animation, the performance of excanvas turns awful. Since excanvas merely mimics Canvas by producing VML, it's going to be at least as slow (and almost always slower) than doing just SVG/VML alone.
See my answer here for a detailed analysis: HTML5 Canvas vs. SVG vs. div
I believe that one of the requirements on your list has got to go. The number of objects, the performance, or the platform.
My suggestion to you would be to drop support for the older browsers if possible and go with Canvas.
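To give an idea of what that looks like in practice, here is a minimal sketch (the room data, canvas id and the highlightRoom hook are placeholders): draw the 3000 rectangles once, and handle clicks with a single listener that hit-tests against the data, instead of attaching 3000 event handlers.

    var canvas = document.getElementById("floorplan");
    var ctx = canvas.getContext("2d");

    // rooms: array of {x, y, w, h, id} in canvas coordinates
    function drawAll() {
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      rooms.forEach(function(r) { ctx.strokeRect(r.x, r.y, r.w, r.h); });
    }

    function roomAt(x, y) {                 // linear hit test; fine for ~3000 rects
      for (var i = 0; i < rooms.length; i++) {
        var r = rooms[i];
        if (x >= r.x && x <= r.x + r.w && y >= r.y && y <= r.y + r.h) return r;
      }
      return null;
    }

    canvas.onclick = function(e) {
      var rect = canvas.getBoundingClientRect();
      var room = roomAt(e.clientX - rect.left, e.clientY - rect.top);
      if (room) highlightRoom(room.id);     // placeholder for the page-side effect
    };

    drawAll();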