How to measure complete all-in performance of DOM changes?

I've found lots of information about measuring load time for pages and quite a bit about profiling FPS performance of interactive applications, but this is something slightly different than that.
Say I have a chart rendered in SVG and every click I make causes the chart to render slightly differently. I want to get a sense of the complete time elapsed between the click and the point in time that the pixels on the screen actually change. Is there a way to do this?
Measuring the JavaScript time is straightforward, but that doesn't account for any of the time the browser spends on layout, reflow, paint, etc.
I know that Chrome's Timeline view shows a ton of good information about this, which is great for digging into issues but not so great for taking measurements, because the tool itself affects performance and, more importantly, it's Chrome only. I was hoping there was a browser-independent technique that might work, something akin to how the Navigation Timing API works for page load times.
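For reference, here is roughly what I can measure today, plus a double requestAnimationFrame trick I've been experimenting with to approximate the paint (the chart element and updateChart() are placeholders of mine, and I'm not certain the second frame callback really corresponds to pixels hitting the screen):

    // Approximate the click-to-paint time of a chart update (sketch).
    chartElement.addEventListener('click', function () {
      var start = performance.now();

      updateChart();                              // the synchronous JS work
      var jsTime = performance.now() - start;

      // The first rAF callback fires before the next paint; nesting a second
      // one fires after that frame has (hopefully) been put on screen.
      requestAnimationFrame(function () {
        requestAnimationFrame(function () {
          var total = performance.now() - start;
          console.log('JS: ' + jsTime.toFixed(1) + 'ms, ' +
                      'approx. click-to-paint: ' + total.toFixed(1) + 'ms');
        });
      });
    });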

You may consider using HDMI capture hardware (just Google for it) or a high-speed camera to create a video, which could be analyzed offline.
http://www.webpagetest.org/ supports capturing using software only, but I guess it would be too slow for what you want to measure.

Related

Is Starling worth implementing for my AS3 MMO?

For the past year I have been working on an isometric city builder. So far I have not used any framework apart from a loose PureMVC clone.
I have heard of Starling but only recently have I played with it.
From my research, the performance boost is phenomenal, but it forces me to manage my resources much more tightly.
At the moment, I am exporting building animations one building at a time, as ~16 frames/PNGs. These are cropped, resized, and exported from Photoshop by a script, then imported into Flash and exported as a SWF, to be loaded / preloaded / postloaded on demand.
The frames are way too big to pack into a sprite sheet per building (I believe it's called an atlas).
These PNGs are then blitted between lock() and unlock(), after the buildings and the actors walking around have been sorted, that is.
I am unsure whether to just use a starling.MovieClip for the buildings, where instead of loading the PNGs I would build a MovieClip symbol from its frames, so blitting wouldn't even be necessary. Unless adding blitting on top of Starling would improve performance even more. That would allow richer features such as particle effects, and maybe some lighting.
Google isn't offering me a straight answer, so I am asking here.
Google isn't offering a straight answer because there isn't one. It depends very much on what you've done, how much knowledge you have, and what your goals are.
Using Starling brings benefits as well as drawbacks. Your idea of resources will change completely. If you really have an enormous amount of assets, then uploading them to the GPU will be a really slow process. You must start from there: learn what Starling does, how resources are managed with it, and what you need to change in order to make the transition between the two.
If that turns out not to be too hard or time-consuming a task, you will get some performance improvement. BUT again, it depends on your current code. Your current code really matters here: if it's already well optimized, your gain won't be that large (though there usually still is one).
If you need to switch between graphics regularly, or you need dynamic assets (images, for example), keep in mind that uploading to the GPU is the slowest part of working with Starling and Stage3D.
So again, there is no straight answer. You must think about GPU memory limits, GPU upload time, and asset management. You also need to think about the way your code is built and how much impact the switch will have: if your code depends heavily on MovieClip-like structures, with all those frames and things, it will be hard for you. One of the toughest things I fought with in Stage3D was the UI implementation; Feathers UI is almost the only option, and it will take you a few weeks to get comfortable with it.
On the other hand, Starling performs pretty well, especially on mobile devices. I was able to maintain a stable 45fps in a UI-heavy app with a lot of dynamically loaded content and multiple screens on an old iPhone 4S, which I find great. The latest mobile devices top out at 60fps.
It's up to you to decide, but I'd advise you to have a long-running experimental project to test with, and only then start applying the approach to your regular projects. I took the dive and used it in a regular project with a very tight deadline, and it was a nightmare. Everything worked out great in the end, but I thought I would have a heart attack; the switch is not that easy :)
I would suggest using DMT to rasterize your vector assets into Starling sprites at runtime. It also keeps your display tree, meaning that you'll still have the parent/child relations you had in your Flash assets.
DMT won't duplicate assets, and it rasterizes the vectors into texture atlases only once (the cache is saved).
You can find it here: https://github.com/XTDStudios/DMT

Fast and responsive interactive charts/graphs: SVG, Canvas, other?

I am trying to choose the right technology to use for updating a project that basically renders thousands of points in a zoomable, pannable graph. The current implementation, using Protovis, is underperforming. Check it out here:
http://www.planethunters.org/classify
There are about 2000 points when fully zoomed out. Try using the handles on the bottom to zoom in a bit, and drag it to pan around. You will see that it is quite choppy, and your CPU usage probably goes up to 100% on one core unless you have a really fast computer. Each change to the focus area triggers a redraw in Protovis, which is pretty darn slow and gets worse with more points drawn.
I would like to make some updates to the interface as well as change the underlying visualization technology to be more responsive with animation and interaction. From the following article, it seems like the choice is between another SVG-based library, or a canvas-based one:
http://www.sitepoint.com/how-to-choose-between-canvas-and-svg/
d3.js, which grew out of Protovis, is SVG-based and is supposed to be better at rendering animations. However, I'm dubious as to how much better and what its performance ceiling is. For that reason, I'm also considering a more complete overhaul using a canvas-based library like KineticJS. However, before I get too far into using one approach or another, I'd like to hear from someone who has done a similar web application with this much data and get their opinion.
The most important thing is performance, with a secondary focus on ease of adding other interaction features and programming the animation. There will probably be no more than 2000 points at once, with those small error bars on each one. Zooming in, out, and panning around need to be smooth. If the most recent SVG libraries are decent at this, then perhaps the ease of using d3 will outweigh the increased setup for KineticJS, etc. But if there is a huge performance advantage to using a canvas, especially for people with slower computers, then I would definitely prefer to go that way.
Example of app made by the NYTimes that uses SVG, but still animates acceptably smoothly:
http://www.nytimes.com/interactive/2012/05/17/business/dealbook/how-the-facebook-offering-compares.html . If I can get that performance and not have to write my own canvas drawing code, I would probably go for SVG.
I noticed that some users have used a hybrid of d3.js manipulation combined with canvas rendering. However, I can't find much documentation about this online or get in contact with the OP of that post. If anyone has any experience doing this kind of DOM-to-Canvas (demo, code) implementation, I would like to hear from you as well. It seems to be a good hybrid of being able to manipulate data and having custom control over how to render it (and therefore performance), but I'm wondering if having to load everything into the DOM is still going to slow things down.
I know that there are some existing questions that are similar to this one, but none of them exactly ask the same thing. Thanks for your help.
Follow-up: the implementation I ended up using is at https://github.com/zooniverse/LightCurves
Fortunately, drawing 2000 circles is a pretty easy example to test. So here are four possible implementations, two each of Canvas and SVG:
Canvas geometric zooming
Canvas semantic zooming
SVG geometric zooming
SVG semantic zooming
These examples use D3's zoom behavior to implement zooming and panning. Aside from whether the circles are rendered in Canvas or SVG, the other major distinction is whether you use geometric or semantic zooming.
Geometric zooming means you apply a single transform to the entire viewport: when you zoom in, circles become bigger. Semantic zooming in contrast means you apply transforms to each circle individually: when you zoom in, the circles remain the same size but they spread out. Planethunters.org currently uses semantic zooming, but it might be useful to consider other cases.
Geometric zooming simplifies the implementation: you apply a translate and scale once, and then all the circles are re-rendered. The SVG implementation is particularly simple, updating a single "transform" attribute. The performance of both geometric zooming examples feels more than adequate. For semantic zooming, you'll notice that D3 is significantly faster than Protovis. This is because it's doing a lot less work for each zoom event. (The Protovis version has to recalculate all attributes on all elements.) The Canvas-based semantic zooming is a bit more zippy than SVG, but SVG semantic zooming still feels responsive.
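To make the geometric case concrete, here is a rough sketch using D3 v3's zoom behavior (the selectors are illustrative, not taken from the linked examples):

    // Geometric zooming in SVG: one transform on the container group,
    // so every circle scales and translates with the viewport.
    var zoom = d3.behavior.zoom()
        .scaleExtent([1, 8])
        .on("zoom", function() {
          d3.select("g.plot").attr("transform",
              "translate(" + d3.event.translate + ")scale(" + d3.event.scale + ")");
        });

    d3.select("svg").call(zoom);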
Yet there is no magic bullet for performance, and these four possible approaches don't begin to cover the full space of possibilities. For example, you could combine geometric and semantic zooming, using the geometric approach for panning (updating the "transform" attribute) and only redrawing individual circles while zooming. You could probably even combine one or more of these techniques with CSS3 transforms to add some hardware acceleration (as in the hierarchical edge bundling example), although that can be tricky to implement and may introduce visual artifacts.
Still, my personal preference is to keep as much in SVG as possible, and use Canvas only for the "inner loop" when rendering is the bottleneck. SVG has so many conveniences for development—such as CSS, data-joins and the element inspector—that it is often premature optimization to start with Canvas. Combining Canvas with SVG, as in the Facebook IPO visualization you linked, is a flexible way to retain most of these conveniences while still eking out the best performance. I also used this technique in Cubism.js, where the special case of time-series visualization lends itself well to bitmap caching.
As these examples show, you can use D3 with Canvas, even though parts of D3 are SVG-specific. See also this force-directed graph and this collision detection example.
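To sketch the Canvas side (again, not taken verbatim from the linked examples), redrawing a couple of thousand points on each zoom or pan event is just a loop over the data:

    // Redraw all points on every zoom/pan event (Canvas sketch).
    // "points" is an array of {x, y} in data coordinates; xScale and yScale
    // are ordinary linear scales (illustrative names).
    function draw(context, points, xScale, yScale) {
      context.clearRect(0, 0, context.canvas.width, context.canvas.height);
      context.fillStyle = "steelblue";
      points.forEach(function(d) {
        context.beginPath();
        context.arc(xScale(d.x), yScale(d.y), 2.5, 0, 2 * Math.PI);
        context.fill();
      });
    }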
I think that in your case the decision between canvas and SVG is not like a decision between »riding a horse« and »driving a Porsche«. For me it is more like deciding the car's color.
Let me explain:
Assume that, in whichever framework you pick, the operations
draw a star,
add a star and
remove a star
take time linear in the number of stars. So, if your choice of framework was good, these are a bit faster; otherwise a bit slower.
If you go on to assume that the framework itself is fast, it becomes obvious that the lack of performance is caused by the sheer number of stars, and handling that is something none of the frameworks can do for you, at least none that I know of.
What I want to say is that the root of the problem leads to a basic problem of computational geometry, namely range searching, and to one of computer graphics: level of detail.
To solve your performance problem you need to implement a good preprocessor which can find, very quickly, which stars to display, and which can perhaps cluster stars that are close together, depending on the zoom level. The only thing that keeps your view fluid and fast is keeping the number of stars to draw as low as possible.
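As a minimal sketch of that preprocessing step (the data layout is assumed), you would filter down to the visible range before handing anything to the renderer:

    // Only draw the stars that fall inside the current viewport.
    // For larger data sets, replace this linear filter with an index
    // structure (sorted arrays, a quadtree, etc.) and add clustering.
    function visibleStars(stars, xMin, xMax, yMin, yMax) {
      return stars.filter(function (s) {
        return s.x >= xMin && s.x <= xMax && s.y >= yMin && s.y <= yMax;
      });
    }

    // On every zoom or pan:
    // drawStars(visibleStars(stars, view.xMin, view.xMax, view.yMin, view.yMax));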
Since you stated that the most important thing is performance, I would tend to use canvas, because it works without DOM operations. It also offers the possibility of using WebGL, which increases graphics performance a lot.
BTW: did you check paper.js? It uses canvas, but emulates vector graphics.
PS: In this Book you can find a very detailed discussion about graphics on the web, the technologies, pros and cons of canvas, SVG and DHTML.
I recently worked on a near-realtime dashboard (refresh every 5 seconds) and chose to use charts that render using canvas.
We tried Highcharts (an SVG-based JavaScript charting library) and CanvasJS (a Canvas-based JavaScript charting library). Although Highcharts is a fantastic charting API and offers way more features, we decided to use CanvasJS.
We needed to display at least 15 minutes of data per chart (with an option to pick a range of up to two hours).
So for 15 minutes: 900 points (one data point per second) x 2 (line and bar combination chart) x 4 charts = 7,200 points in total.
Using the Chrome profiler, memory usage with CanvasJS never went above 30 MB, while with Highcharts it exceeded 600 MB.
Also, with a refresh time of 5 seconds, CanvasJS rendering was a lot more responsive than Highcharts.
We used one timer (setInterval, 5 seconds) to make 4 REST API calls that pulled the data from a back-end server connected to Elasticsearch. Each chart was updated as data was received via jQuery.post().
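Roughly, the polling loop looked like the sketch below (simplified; the endpoint and variable names are made up, and the charts are assumed to be CanvasJS-style objects whose dataPoints you replace before calling render()):

    // Poll the back end every 5 seconds and refresh each chart.
    setInterval(function () {
      charts.forEach(function (chart, i) {
        $.post('/api/metrics/' + i, function (response) {
          // response is assumed to be an array of {x, y} points
          chart.options.data[0].dataPoints = response;
          chart.render();
        }, 'json');
      });
    }, 5000);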
That said, for offline reports I would go with Highcharts, since it has a more flexible API.
There's also Zing Charts, which claims to use either SVG or Canvas, but I haven't looked at it.
Canvas should be considered when performance is really critical; SVG when flexibility matters. Not that canvas frameworks aren't flexible, but it takes a lot more work for a canvas framework to provide the same functionality as an SVG framework.
You might also look into Meteor Charts, which is built on top of the uber-fast KineticJS framework: http://meteorcharts.com/
I also found that when you print a page with SVG graphics to PDF, the resulting PDF still contains a vector-based image, while if you print a page with Canvas graphics, the image in the resulting PDF is rasterized.

How many divs can you have before the DOM slows down and becomes unstable?

I am developing a jQTouch app, and each request done via Ajax creates a new div in the document for the loaded content. Only a single div is shown at any one time.
How many divs can I have before the app starts getting slow and unresponsive?
Anyone have any ideas on this?
EDIT: It's an iPad app running in Safari, and it would be fewer than 1000 divs with very basic content.
I've had tens of thousands, maybe even a hundred thousand divs, on screen at once.
Performance is either fine or bad, depending on one thing: are the divs parsed from HTML, or generated dynamically in JavaScript?
Parsed from HTML means you have a LARGE HTML source, and that can make browsers hang. Generating them in JS is surprisingly fast, even in Internet Explorer, which is the slowest of all browsers at running JS.
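For what it's worth, a rough sketch of the "generated dynamically" path, batching the inserts through a document fragment so the live DOM is only touched once (the contents are placeholders):

    // Generate N divs in JS and append them in a single DOM operation.
    function addDivs(container, count) {
      var fragment = document.createDocumentFragment();
      for (var i = 0; i < count; i++) {
        var div = document.createElement('div');
        div.textContent = 'item ' + i;
        fragment.appendChild(div);
      }
      container.appendChild(fragment); // one insertion instead of N
    }

    addDivs(document.body, 10000);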
To be honest, if you really need an absolute answer to this question, then you might want to reconsider your design.
No answer given here will be right, as it depends upon many factors that are specific to your application, e.g. heavy vs. light CSS use, size of the divs, amount of actual graphics rendering required per div, target browser/platform, number of DOM event listeners, etc.
Just because you can doesn't mean that you should! :-)
As others have said, there's really no answer.
However, in this talk about the Google Maps API version 3, the speaker brings up the number ten thousand several times, as a basic threshold for browser unhappiness.
http://code.google.com/apis/maps/documentation/javascript/
Without defining a particular environment, it's not possible to answer your question.
And even then, anything anyone tells you is just a guess. You need to do your own testing on real-world configurations with different browsers and hardware. You'll also need to establish some performance benchmarks to decide what "too slow" even means.
I've been able to add several thousand divs without a problem. Depends on what you'll be doing afterwards, of course, and the memory on the client machine. Everyone else is right about that.
As Harpo said, 10K is probably a good ceiling. At one time, I noticed speed problems starting at about 4K divs, but hardware has improved since then.
And, as Neil N said, adding the divs via scripting is better than having a huge HTML source.
And, to answer Harpo's comment, one way to "break it up" so that JS doesn't lock the page and produce a "page is running slowly" error is to call a timer at the end of each "add a div" routine, and the timer in turn calls your "add a div" function again.
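Something like this sketch (the chunk size is arbitrary):

    // Add divs in small batches, yielding to the browser between batches so
    // the page stays responsive and no "script is running slowly" dialog appears.
    function addDivsInChunks(container, total, chunkSize) {
      var added = 0;
      function addChunk() {
        var limit = Math.min(added + chunkSize, total);
        for (; added < limit; added++) {
          var div = document.createElement('div');
          div.textContent = 'item ' + added;
          container.appendChild(div);
        }
        if (added < total) {
          setTimeout(addChunk, 0); // let the browser breathe, then continue
        }
      }
      addChunk();
    }

    addDivsInChunks(document.body, 10000, 200);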
Now, MY question is: is it possible to "paint" so that you don't need to add thousands of divs? This can be done with the canvas tag with some browsers, but I don't think it's possible with VML (the excanvas project) on IE. Or is it? I think VML "paints" by adding new elements to the DOM, at which point you may as well use DIVs, unless it's a simple shape.
Is it possible to alter the source of an image via scripting? (the image in the DOM, of course -- not the original image on the server.)

Reducing load time, or making the user think the load time is less

I've been working on a website, and we've managed to reduce the total content for a page load from 13.7 MiB to 2.4 MiB, but the page still takes forever to load.
It's a Joomla site (ick), it has a lot of redundant DOM elements (2000+ for the home page), and it makes 60+ HTTP requests per page load, counting all the CSS, JS, and image requests. Unlike Drupal, Joomla won't merge them all on the fly, and they have to be kept separate or else the Joomla components go nuts.
What can I do to improve load time?
Things I've done:
Added background colors to DOM elements that have large images as their background, so the color loads first, then the image
Reduced excessively large images to much smaller file sizes
Reduced DOM elements to ~2000, from ~5000
Loading CSS at the start of the page, and JavaScript at the end
(Not totally possible: Joomla injects its own JavaScript and CSS, and it always does so in the header.)
Minified most javascript
Set up caching and gzipping on the server
Uncached size is 2.4 MB, cached is ~300 KB, but even with so many DOM elements the page takes a good bit of time to render.
What more can I do to improve the load time?
Check out this article.
http://www.smashingmagazine.com/2010/01/06/page-performance-what-to-know-and-what-you-can-do/
If the link gets removed or lost, the tools mentioned are:
YSlow (by Yahoo)
Google's Page Speed
AOL's WebPageTest
Smush.it (Image compression tool)
It sounds like you've done a great job of working around the real problem: those giant graphics. You can probably squeeze some more efficiency out of caching, minifying, etc., but there has to be a way to reduce the size of the images. I worked with a team of some of the pickiest designers on Earth and they never required uncompressed JPEGs. Do you mean images cut out of Photoshop and saved at full quality (10)? If so, the real solution (and I appreciate that you may not be able to accomplish this) is to have a hard conversation where you explain to the design company, "You are not your users." If the purpose of the site is only to impress other visual designers with the fidelity of the imagery, maybe it's OK. If the purpose of the site is to be a portfolio that gains your company work, they need to re-assess who their audience is and what that audience wants. Which, I'm guessing, is not 2-minute load times.
Have you enabled HTTP Compression (gzip) on your web servers? That will reduce the transfer size of all text-based files by 60-90%. And your page will load 6-10x faster.
Search StackOverflow or Google for how to enable it. (It varies per server software: Apache, IIS, etc).
Are all the DOM elements necessary? If they are, is it possible to hide them as the page loads? Essentially, you would have your important, need-to-be-there DOM elements render with the page, and then, once the document has loaded, unhide the rest of the elements as necessary:
$('.hidden').removeClass('hidden')
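For example (a rough sketch, assuming a .hidden class that is defined in your CSS as display: none):

    // Reveal the non-essential elements only after the page has finished loading.
    $(window).on('load', function () {
      $('.hidden').removeClass('hidden');
    });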
Evidently, there are plugins for compressing and combining JS/CSS files:
http://extensions.joomla.org/extensions/site-management/site-performance
http://extensions.joomla.org/extensions/site-management/site-performance/7350
I would say you can't do anything. You're close to the absolute limit.
Once you get to a critical point, you'll have to weigh the effort of further compressing the site against the effort of throwing the most bandwidth-expensive components out the window. For example, you might spend 20 hours compressing your site a further 1%. You might also spend 40 hours throwing the Joomla theme away and starting again, saving 50% (though probably not).
I hope you find an easy answer. I feel your pain, as I am struggling to finish a Drupal project that has been massacred by a designer who hasn't implemented his own code in years and has become institutionalized in a cubicle somewhere near a tuna salad sandwich.

How much more efficient is one big image rather than many small images? (Facebook style)

So I was looking at the Facebook HTML with Firebug, and I chanced upon this image
and came to the conclusion that Facebook uses this one large image (with tricky image-positioning code) rather than many small ones for its graphical elements. Is this more efficient than storing many small images?
Can anybody give any clues as to why Facebook would do this?
These are called CSS sprites, and yes, they're more efficient - the user only has to download one file, which reduces the number of HTTP requests to load the page. See this page for more info.
The problem with the pro-performance viewpoint is that it always seems to present the "Why" (performance), often without the "How", and never "Why Not".
CSS Sprites do have a positive impact on performance, for reasons that other posters here have gone into in detail. However, they do have a downside: maintainability; removing, changing, and particularly resizing images becomes more difficult - mostly because of the alterations that need to be made to the background-position-riddled stylesheet along with every change to the size of a sprite, or to the shape of the map.
I think it's a minority view, but I firmly believe that maintainability concerns should outweigh performance concerns in the vast majority of cases. If the performance is necessary, then go ahead, but be aware of the cost.
That said, the performance impact is massive - particularly when you're using rollovers and want to avoid that effect you get when you mouse over an image and the browser then goes off to request the rollover image. It's appropriate to refactor your images into a sprite map once your requirements have settled down - particularly if your site is going to be under heavy traffic (and certainly the big examples people have been pulling out - Facebook, Amazon, Yahoo - all fit that profile).
There are a few cases where you can use them with basically no cost. Any time you're slicing an image, using a single image and background-position tags is probably cheaper. Any time you've got a standard set of icons - especially if they're of uniform size and unlikely to change. Plus, of course, any time when the performance really matters, and you've got the budget to cover the maintenance.
If at all possible, use a tool and document your use of it so that whoever has to maintain your sprites knows about it. http://csssprites.org/ is the only tool I've looked into in any detail, but http://spriteme.org/ looks seriously awesome.
The technique is dubbed "CSS sprites".
See:
What are the advantages of using CSS Sprites in web applications?
Performance of CSS sprites
How do CSS sprites speed up a web site?
Since other users have answered this already, here's how to do it, and another way is here.
Opening connections is more costly than simply continuing a transfer. Similarly, the browser only needs to cache one file instead of hundreds.
Yet another resource: http://www.smashingmagazine.com/2009/04/27/the-mystery-of-css-sprites-techniques-tools-and-tutorials/
One of the major benefits of CSS sprites is that they add virtually zero server overhead; everything is calculated client-side. A huge gain for no server-side performance hit.
Simple answer: you only have to 'fetch' one image file, which is 'cut up' for the different views. If you used multiple images, you'd have multiple files to download, which simply adds up to more time spent downloading everything.
Cutting up the large image into 'sprites' means a single HTTP request, and it also provides a flicker-free approach for 'onmouseover' elements (if you reuse the same large image for the mouseover effect).
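To illustrate the flicker-free rollover (a rough sketch; the element id, sprite file, and offsets are made up, and normally you'd do the same thing with a CSS :hover rule):

    // Both button states live in the same sprite sheet, so hovering only
    // changes the background offset: no extra HTTP request, no flicker.
    var button = document.getElementById('nav-home');
    button.style.background = "url('sprites.png') 0 0 no-repeat";

    button.addEventListener('mouseover', function () {
      button.style.backgroundPosition = '0 -40px';  // hover state in the sheet
    });
    button.addEventListener('mouseout', function () {
      button.style.backgroundPosition = '0 0';      // normal state
    });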
The CSS sprites technique is a method for reducing the number of image requests by using background positioning.
Best Practices for Speeding Up Your Web Site
CSS Sprites: Image Slicing’s Kiss of Death
Google also does it - I've written a blog post on it here: http://www.stevefenton.co.uk/Content/Blog/Date/200905/Blog/Google-Uses-Image-Sprites/
But the essence of it is that you make a single HTTP request for one big image, rather than 20 small HTTP requests.
If you watch an HTTP request, more time is spent waiting to start the download than actually downloading, so it's much faster to do it in one hit: chunky, not chatty!