So I am making this horizontal scroll site which has a ton of images. I planned on using SVGs for the entire site, but with only 20-30 SVG images of medium-to-high complexity on the page, Chrome is already showing some jank and high paint times during scroll (Firefox is even worse, though Safari does a lot better).
Scroll timeline
View the site (scroll works on Mac only; Windows users can use arrow keys)
My question is: if I were to use PNGs instead of SVGs, would it reduce the paint times and hence the jank? Why is the browser struggling with only around 20-odd SVG images?
As I suspected, the problem turned out to be something completely different. Browsers are more than capable of handling multiple vector images. What they aren't good at (and understandably so) is redrawing those images very often.
Problem
My long horizontal scroll site was quite wide (30,000px). I had a background-color applied to one of the lower z-indexed divs to represent the sky throughout the scroll site. I didn't want the sky to stretch across the entire 30,000px since it essentially didn't change much, so I gave it viewport width and height, with:
position:fixed;
Not a very smart move. It turns out this property was causing my document layer to be repainted on every frame. Initially I thought that was normal for browsers to do on scroll, since Robby Leonardi's site, which I used as a reference, also repainted every frame.
Solution
Thanks to this article by one of the Chrome DevTools developers, I set aside conventional wisdom and made the sky layer
position:absolute;
and stretched it across the entire site width, and boom! The paint rectangles were gone, and the scroll performance was smoother than butter.
Other solutions I tried
Hiding elements not near the viewport to make painting lighter, as suggested by #philipp, but it didn't yield any appreciable difference. It also felt super hacky and wasn't targeting the root cause of the problem.
I tried modularizing my site into scenes and using the translateZ(0) hack on each scene so that only the smaller scenes got repainted instead of the whole document. This actually helped quite a bit, and scroll was decent. Then,
I gave all the SVG images their own layer by using translateZ(0). I started getting around 60 FPS, but again, this wasn't the right way of doing things.
I once had a similar thing. The SVG was 10 or more times as wide as the one shown above, contained ~20k elements and was about 3MB in size. The only thing that brought back performance (it was a jump-and-run game) was an algorithm that could find all elements whose bounding boxes overlapped the viewport. With that I could use display: none; to hide everything that wasn't visible.
That reduced the number of visible elements to ~150 per frame, and the game ran smoothly again.
I used a balanced binary tree (an AVL tree) and a one-dimensional range query, since the height of the viewport was always the same as the image's.
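A minimal sketch of that approach (not the original code, and under assumptions the answer doesn't state: the SVG's children are static and page coordinates map directly to SVG coordinates); a sorted array with a linear pass stands in for the AVL tree's range query:

// Sketch only: cache each element's horizontal bounding box once, then on scroll
// hide whatever falls outside the viewport. A real implementation would answer the
// 1D range query with an interval/AVL tree instead of scanning the whole array.
const svg = document.querySelector('svg');                 // assumed markup
const items = Array.from(svg.children)
  .filter(el => typeof el.getBBox === 'function')          // skip <title> etc.
  .map(el => {
    const box = el.getBBox();                              // static content, so cache once
    return { el, left: box.x, right: box.x + box.width, hidden: false };
  })
  .sort((a, b) => a.left - b.left);

function updateVisibility(viewLeft, viewRight) {
  for (const item of items) {
    const offscreen = item.right < viewLeft || item.left > viewRight;
    if (offscreen !== item.hidden) {                       // touch style only on state changes
      item.el.style.display = offscreen ? 'none' : '';
      item.hidden = offscreen;
    }
  }
}

window.addEventListener('scroll', () =>
  requestAnimationFrame(() =>
    updateVisibility(window.scrollX, window.scrollX + window.innerWidth)));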
Good luck!
[EDIT]
Forgot to leave something like an answer. In my experience, large/huge SVG graphics are a rendering bottleneck, especially if there is a lot of scripting happening. If you do not need any interactivity with the elements of the graphic, so it is nothing more than a large background image, I would recommend using a tile map based on PNG images. That is the standard approach in jump'n'run games with huge »worlds«, and it gains you performance in two ways:
Rendering is faster,
You can »lazy load« tiles via AJAX, depending on visibility, so users don't have to download the »whole world« at startup (see the sketch below).
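A rough sketch of the lazy-loading point; the tile size, the .world container and the tiles/{col}_0.png naming are illustrative assumptions, and an IntersectionObserver fetches each tile only as it nears the viewport:

const TILE = 512;                                          // assumed tile size in px
const world = document.querySelector('.world');            // assumed positioned container

for (let col = 0; col < 60; col++) {                       // ~60 * 512px of world width
  const img = document.createElement('img');
  img.dataset.src = 'tiles/' + col + '_0.png';             // real URL kept in data-src for now
  img.style.cssText = 'position:absolute; top:0; left:' + (col * TILE) + 'px; width:' + TILE + 'px;';
  world.appendChild(img);
}

const observer = new IntersectionObserver(entries => {
  for (const entry of entries) {
    if (entry.isIntersecting && !entry.target.src) {
      entry.target.src = entry.target.dataset.src;         // download the tile only when needed
      observer.unobserve(entry.target);
    }
  }
}, { rootMargin: '512px' });                               // start fetching one tile ahead

world.querySelectorAll('img[data-src]').forEach(img => observer.observe(img));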
Additionally, you could use something like PIXI.js to render with WebGL, which will boost performance drastically, and it comes with support for tile maps and sprite sheets.
If you insist on the advantages of vector graphics (scaling, interactivity), then you need to find a way to hide as many elements as possible to keep the frame rate high.
Related
I have a page that has pretty heavy (mid-weight rather) canvas operations going on. To cater for users on mobile devices and older computers I was thinking I could implement a mechanism that will check if the canvas element is actually visible and decide if the constant calculations and canvas updates (animation running at 30fps) do have to be done or not.
This is working fine, yet when doing a performance test with the Chrome Dev Tools I noticed that even when I disable my visibility check and just let things render all the time the CPU usage of the function in question drops quite a bit when no part of the canvas element(s) is visible (although in theory it should still be performing the same tasks). So: at least on my computer running Chrome 17 it does not make a real difference if I check for the element's actual visibility.
To cut a long story short: Do I need to do this or are browsers smart enough to handle such a case without even telling them (and I can save the visibility checking)?
EDIT:
So I did some "research" on this topic and built this fiddle.
What happens is that it just generates noise at 30 frames per second. Not too pleasing to the eye, but, well... The upper part is just a plain div to block the viewport. When I scroll down and have the canvas element in the viewport, CPU usage tells me it's taking up about 40%, so apparently the browser does have quite a lot to do here. When I scroll back up so that I just have the maroon-colored div in my viewport and profile the CPU usage, it drops to something around 10%. When I scroll back down, usage goes up again.
So when I implement a visibility check like in this modified fiddle, I do see an increase (a tiny one to be honest) in CPU usage instead of a drop (as it has the additional task of checking if the canvas is inside the viewport).
So I am still wondering if this is some side effect of something that I am not aware of (or I am making some major mistake when profiling) or if I can expect browsers to be smart enough to handle such situations?
If anyone could shed a light on that I'd be very thankful!
I think you're confused between whether the logic is running and whether the rendering is happening. Many browsers now hardware-accelerate their canvases so all rendering happens on the GPU, so actual pixel pushing takes no CPU time anyway. However, your tick function has non-trivial code to generate random noise on the CPU, so you're really only concerned with whether the tick function is running. If the canvas is offscreen, it certainly won't be rendered to the display (it's not visible). As for the canvas draw calls, it probably depends on the browser. It could render all draw calls to an off-screen canvas in case you suddenly scroll it back into view, or it could just queue up all the draw calls and not actually do anything with them until you scroll the canvas into view. I'm not sure what each browser does there.
However, you shouldn't use setInterval or setTimeout for animating Canvas. Use the new requestAnimationFrame API. Browsers don't know what you do in a timer call so will always call the timer. requestAnimationFrame on the other hand is designed specifically for visual things, so the browser has the opportunity to not call the tick function, or to reduce the rate it's called at, if the canvas or page is not visible.
As for how browsers actually handle it, I'm not sure. However, you should definitely prefer it since future browsers may be able to better optimise requestAnimationFrame in ways they cannot optimise setInterval or setTimeout. I think modern browsers also reduce the ordinary timers to 1 Hz if the page is not visible, but it's definitely much easier for the browser to optimise requestAnimationFrame, plus some browsers get you V-syncing and other niceness with it.
I'm not certain, though, that requestAnimationFrame will mean your tick function is not called if the canvas is scrolled out of view. So I'd recommend using both requestAnimationFrame and the existing visibility check. That should guarantee you the most efficient rendering.
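A minimal sketch of that combination (the canvas id and the noise drawing are stand-ins for the fiddle's code): requestAnimationFrame drives the loop, and the expensive per-frame work is skipped whenever the canvas is outside the viewport:

const canvas = document.getElementById('noise');           // assumed id
const ctx = canvas.getContext('2d');

function inViewport(el) {
  const r = el.getBoundingClientRect();
  return r.bottom > 0 && r.top < window.innerHeight;
}

function drawNoise() {
  const frame = ctx.createImageData(canvas.width, canvas.height);
  for (let i = 0; i < frame.data.length; i += 4) {
    const v = (Math.random() * 255) | 0;                   // grey noise pixel
    frame.data[i] = frame.data[i + 1] = frame.data[i + 2] = v;
    frame.data[i + 3] = 255;
  }
  ctx.putImageData(frame, 0, 0);
}

function tick() {
  if (inViewport(canvas)) drawNoise();                     // skip the CPU-heavy work offscreen
  requestAnimationFrame(tick);                             // browsers throttle this on hidden tabs
}
requestAnimationFrame(tick);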
From my own experience, the canvas renders whatever you tell it to render, regardless of its position on screen.
For example, if you draw tiles that exceed the canvas size, you will still see the performance drop unless you optimize the script.
Try your function with a performance-demanding animation and see if you still get the same results.
The designer I'm working with has given me a monster of an implementation issue...
Page background is grey, and atop it is a crumpled-paper texture (non-repeating, with painted design elements) for the first 600 pixels (by 1400 pixels across; currently centered as a non-repeating background). At the bottom is another div with more text on it, with a drop shadow, a complex line pattern for the background and ripped edges, hovering slightly above the top div.
Saving the top part as a JPG and the bottom part as a transparent PNG leads to file sizes of over 1 MB.
Saving the top part as a JPG and the bottom part as a JPG doesn't work very well due to the drop shadow. It would technically work to save the bottom part as a slice with elements of the top part underneath the drop shadow, but it would have to line up pixel-perfect always or else look crappy. And at that point, I might as well save the whole site as one big image...
If the bottom part had a solid colour for the background, I could set each edge to have a different transparent PNG. However, the line pattern on the bottom part means that this wouldn't work.
My question is ultimately:
How the heck do people do ripped edges these days without making their site one big image?
Thanks!
Screengrab:
CSS3 does provide a border-image property, which should be able to help you with the ripped border effect (although even then, it would help if it was a repeating image).
See here for the W3 specification.
However, it may not be much use to you, because browser support for this feature isn't great -- IE doesn't support it at all (not even IE9), and while most other browsers do support it, they all currently have gaps in their support and require a vendor prefix on the CSS property.
See CanIUse.com for a full browser support table for it.
To be honest, I think you should just go back to your designer and ask him to make it easier to work with -- he's probably just created something he thinks looks good, but is unaware of the limitations of the design he's put together; if you explain the issue to him, he may well be able to produce something a bit more usable for you.
There's really not a whole lot you can do here.
Page edges are ideally seamlessly repeated via repeat-y, and in your case it looks like the texture is one big image. You're either going to have to settle for sub-par performance or present the designer with your issues.
Check the archive of this blog for a good example.
You either have to fix the background images in place, using the entire image (or the top image AND the bottom image) and making the background non-scrolling, or you have to get him to design a pattern that can repeat and then use a smaller PNG.
Clearly, your designer has a print background....
Ok, there are ways that will most likely work in theory. But theoretical isn't always practical. I suspect your desire is to have cross-browser capability, as all of us should. So start by throwing most new CSS3 tricks out, thanks to legacy IE. Forget box-shadow, forget crazy PNG transparencies without hacks, etc.
What you're left with is doing a gigantic .jpg background. That will load....eventually.....
In this case, you can see the storm on the horizon, so run for cover. Go back to the designer, explain why this is about as smart an idea as texture layered over gradients, and help them understand why our buddies at Microsoft have made this virtually impossible. Just like a fully-flashed out site, it can be done somehow....but it's probably not the best use of time and resources. The web isn't print, it's dynamic...and when you put something "on a page" it's not going to stay put as it would in Illustrator, nor can you guarantee that your user is going to experience it in 100# glossy with a metallic overlay. Yes, I was a designer before I was a developer.....
Sound like a cop-out? Maybe it is. But I've been in your shoes, building sites for credit cards. My team was forced to waste thousands of dollars of the bank's money trying to make sites work with designs that probably shouldn't have been done on the web, thanks to print designers doing double-duty, getting designs approved prior to talking to the tech team....after, of course, we presented management with the options. Ultimately, it got the boss fired for going over budget.
Although this is atypical, I would recommend cutting a big square-shaped hole in the center of the image so that you're left with only the edges and a transparent center, and saving that as a PNG. Then save the center part itself as a JPEG and place the JPEG directly on top of the PNG in the correct position.
This way, the majority of the very large PNG will contain very little data and have a very small file size. The rest of the data would then, obviously, be JPEG and therefore smaller.
I'm rendering images using HTML5 canvas and I'm facing serious performance issues when resizing multiple (hundreds of) canvases at once.
Are there any tricks to get resizing as smooth as in the MobileMe gallery?
Thanks
We don't have much information about your procedures, but the slowness most likely resides in the fact that you have hundreds(!) of canvases, versus MobileMe's 20-something.
You'll notice as you scroll down in MobileMe that the number of canvases does not increase. There are only ever as many canvases as the page needs; or rather, when you scroll down, the canvases you can no longer see are no longer there (so to speak).
The only other place for optimization is in your redrawing code, since when you resize a canvas you need to redraw it as well. But first, try to reduce the number of canvases you are using.
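As a hedged sketch of that idea (the .cell markup and the pool size are assumptions, not MobileMe's actual code): keep a small pool of canvases and hand them out only to the gallery cells that are currently visible, redrawing on each reassignment:

const cells = Array.from(document.querySelectorAll('.cell'));   // placeholder cells laid out in CSS
const pool = Array.from({ length: 30 }, () => document.createElement('canvas'));

function inViewport(el) {
  const r = el.getBoundingClientRect();
  return r.bottom > 0 && r.top < window.innerHeight;
}

function assignCanvases() {
  pool.forEach(c => c.remove());                           // reclaim every pooled canvas
  let next = 0;
  for (const cell of cells) {
    if (!inViewport(cell) || next >= pool.length) continue;
    const canvas = pool[next++];
    canvas.width = cell.clientWidth;                       // resizing clears the canvas...
    canvas.height = cell.clientHeight;
    cell.appendChild(canvas);
    const ctx = canvas.getContext('2d');
    ctx.fillRect(0, 0, canvas.width, canvas.height);       // ...so redraw; stand-in for your real drawing code
  }
}

window.addEventListener('scroll', () => requestAnimationFrame(assignCanvases));
window.addEventListener('resize', () => requestAnimationFrame(assignCanvases));
assignCanvases();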
I have a design I'm creating in CSS, and it has started to sort of, er, lazy scroll. By that I mean the scrollbar lags a bit when you are scrolling. What are common causes of this so that I can debug it from my site?
EDIT:
The document has very little content (not even a paragraph), so not much at all. No flash, two images.
EDIT:
I feel so stupid. An improperly formatted background property was causing the issue. Thanks nonetheless, everyone.
It's likely to be due to heavy processing requirements via CSS.
CSS does affect scrolling in every browser. I have seen this scenario many times (the worst case is with SVG). It usually hits browsers like Chrome hard because of their anti-aliasing.
There was a great website that detailed the heaviest through to the safest CSS properties to use for effects; sorry, I don't have the link. From my experience, I would say to consider:
Gradients: the more you use, or the larger the area they cover, the heavier the rendering calculations get. Abusing stops and extra colors adds to the mayhem.
Border-radius: it is usually clipping off its internal content, whatever that may be. I've noticed differences when it's excluded.
Opacity: can be the main issue when coupled with other CSS effects. In certain scenarios I've found great improvements when removing opacity or reducing its usage, since it's not just transparency it drives; in some browsers it also affects text anti-aliasing.
Images: the way images can affect scrolling should be obvious, though I've found that resizing an image away from its native resolution can become a more noticeable factor.
Background-size: properties like background-size draw huge power in certain situations; a workaround could be to scrap the div, replace it with an <img>, and overlay that with a blank div containing the text/content.
Animations, transitions & translations: obvious power eaters if abused, especially animations that loop continuously or resize with the browser via percentages.
Bear in mind that someone on a low-spec Celeron PC will have a terrible experience on a site that already lags on your reasonably/high-powered PC or Mac.
I'm working on a site that will be showing a lot of information via divs. It's basically a chart made out of divs. The way it's set up, I've got an outer div with a set size, and you can scroll its contents around (click and drag, like Google Maps).
From here I can see two ways of going: have one large inner div that is moved around with all the chart divs within it, which would be by far the easiest approach, or a tiled approach where I break the large inner div up into smaller divs and dynamically add/remove them as needed.
The chart itself is potentially 1999425014 pixels square. Each point is made up of 6 divs and there are 100,000+ points.
What would be the best way to move forward?
I believe most modern browsers render sections of pages only as needed. If the entire page fits in RAM, only the section that is currently visible is actually drawn. I conclude this because large documents do not slow down my computer when I use other programs, but there is a bit of lag when I scroll across one. Plus, when the window is repainted through the operating system's APIs, only the pixels that are visible should be rendered. That's the only way it would make sense to design it, anyway.
Surely you can fit as many as you want on a page without worrying about real-time performance hits, unless you're talking about scrolling. Rendering the document should take less time than retrieving it out of system memory and determining where the scroll bars are.
Best of luck, and merry Christmas.
-Tom