Chrome render layer - creation conditions?

I am experiencing slowness with an animation in my website.
After some investigation I found out (via the DevTools timeline tab) that the problem is that the entire page is being re-painted instead of just the animated div.
I checked the "Show composited layer borders" option, and found out that sometimes the animated div is in another render layer.
But I can't find a consistent behavior:
When the div is not in another layer - the animation is slow.
When the div is in another layer, sometimes the animation is fast and sometimes it is slow, depending on the presence of other elements in the page (a div with position:fixed, a marquee, etc). These other elements appear to be totally unrelated to the animated div in the DOM tree but obviously have an effect on the rendering of the page during the animation.
I found a few articles (1, 2, 3, 4, 5) that suggest possible ways to "force" Chrome to render an element in another render layer but most of them are old (things might have changed).
Also, they generally don't address how elements can affect each other with regard to the render layers.
How does Chrome decide which element to put in which layer?
How can I find out what was decided in my case? (i.e. debug the render layers)
How can different elements affect each other with regard to the render layer?
How can an animation of an element that is in another layer cause a re-paint of the whole page? (In some cases this happens.)
How can I ensure my animation renders fast? i.e., force the element into another layer and make sure the animation doesn't cause a re-paint of the entire page.
And lastly - how can I stay on top of changes to the browser's rendering algorithm so that these problems don't return in the future?

OK so I finally found a solution to my problem.
This SO answer explains how to enable the "Layers panel" in Chrome DevTools. That panel lets you see in real time (even during animations) which elements are in which layers on the page.
Also, each layer has properties that tell you why Chrome decided to create a layer for it.
Using this tool I was able to determine that one of my elements which is an overlay of the whole page (to mask the page when there's a modal div) sometimes gets its own layer and sometimes not.
The reason it got a layer only when some other elements like a marquee were present on the page was that Chrome detected that "the element may overlap other composited elements".
When these "other composited elements" are not there, this overlay element does not get its own layer. And when I open the modal, there's also an animation on the opacity of the overlay div. Since it was not in a separate layer, it caused the entire page to re-paint itself on each frame (this is sometimes referred to as a "paint storm").
I solved the problem by making sure the overlay div will always get its own layer, by adding -webkit-backface-visibility: hidden to the style of this div.
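For reference, a minimal sketch of the fix (the `.overlay` class name is mine; `will-change: opacity` is the modern, standards-based way to request the same layer promotion):

```css
/* Force the overlay into its own compositor layer so animating
   its opacity doesn't repaint the whole page. */
.overlay {
  -webkit-backface-visibility: hidden; /* the hack used above */
  backface-visibility: hidden;
  will-change: opacity;                /* modern, explicit hint */
}
```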

Related

Visible browser reflow

I'm trying to make a window based application for web browsers. The number of windows is considerably high, so I'm storing them as HTML files (one per window) that I asynchronously retrieve from the server according to user interaction.
To add a window to the main page, I first add the link elements (CSS) of the downloaded document to its head section, and then I append the content of the body section to a certain div. When a window is closed, I just remove these elements.
This approach seems to be working nicely, but I can see that sometimes when I add a window, its elements are visible out of position with no style, and after a brief moment they are correctly painted.
I don't have a strong background in web programming, but I suspect this might be related to what it is called "browser reflow". Does it mean that it is taking too much time to repaint everything? Is it possible to just hide these "unstyled" elements until it is safe to show them?
Any guidance would be appreciated.
Some time away from the computer seems to have relaxed my mind. I was erroneously assuming that adding new link elements to the head section would load the CSS files immediately. Obviously, the browser needs to retrieve them from the server first. So, the DOM elements I'm adding don't show their style because the CSS files have not been downloaded yet. I think this is the right answer.
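A minimal sketch of how one could wait for the injected stylesheets before revealing a window (the `whenStylesheetsLoaded` helper is my own name, not from the original post):

```javascript
// Resolve once every injected <link rel="stylesheet"> has fired `load`
// (or `error`, so a single broken stylesheet can't block the window forever).
function whenStylesheetsLoaded(links) {
  return Promise.all(links.map(function (link) {
    return new Promise(function (resolve) {
      link.addEventListener('load', resolve);
      link.addEventListener('error', resolve); // don't hang on failures
    });
  }));
}

// Usage sketch: keep the window hidden until its CSS has arrived.
// windowEl.style.visibility = 'hidden';
// whenStylesheetsLoaded(addedLinkElements).then(function () {
//   windowEl.style.visibility = 'visible';
// });
```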

HTML5 fullscreen display: alternating between two canvases

We are facing the following challenge: we are creating a behavioral experimentation library, which needs to be able both to show random shapes and to display forms.
For the shape drawing part we use pixi.js, and even though we know it can also use Canvas2D, we prefer it to use WebGL as its rendering engine, which uses the 3D context of the canvas. Pixi however doesn't really have the ability to draw form elements on the canvas, so we decided to use Zebra/Zebkit for this, but Zebkit can only draw to a 2D context.
According to many sources, it's impossible to use the 2D and 3D contexts simultaneously on a single canvas, or to switch between the 2D and 3D contexts after the canvas has been initialized. We therefore decided to create 2 separate canvases: one with a 3D context to use with pixi.js, and one with a 2D context to use with Zebra/Zebkit. When necessary, we switch the canvases by showing one and hiding the other.
This works quite well when the canvases are integrated in the web page, but less well when we want to display the experiment fullscreen. It is very difficult to switch from one canvas to the other in fullscreen, because you can only choose one DOM element at a time to display full screen, and weird stuff happens when you start hiding the fullscreen element to show another. My question is: what would be the best approach to tackle this problem? I already have several in mind:
Put both canvases in a container div, and display this container fullscreen instead of the canvases itself. I don't know if this is possible, or if this will have any negative side effects compared to showing a canvas in fullscreen directly.
Render the Zebkit canvas on top of the pixi canvas by making sure it is on top of the overlay item, as suggested in How do I make a DIV visible on top of an HTML5 fullscreen video?. This seems very hacky though, and I suspect inconsistency issues between the various browsers that are around already.
Use the method described in How do I make a DIV visible on top of an HTML5 fullscreen video? to render normal HTML form elements on the pixi canvas. I predict there will be some resolution/rendering problems to tackle though, because you don't have the degree of control over the pixel raster that you have with canvas items.
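For option 1, a small sketch of the vendor-prefix handling that fullscreening a container div required at the time (the helper name and element ids are hypothetical):

```javascript
// Pick the first fullscreen-request method the browser exposes
// (the method name was vendor-prefixed in older browsers).
function fullscreenMethodOf(el) {
  const candidates = ['requestFullscreen', 'webkitRequestFullscreen',
                      'mozRequestFullScreen', 'msRequestFullscreen'];
  return candidates.find(function (name) {
    return typeof el[name] === 'function';
  }) || null;
}

// Usage sketch: fullscreen the container once, then switch canvases
// by toggling `display` inside it, without re-entering fullscreen.
// const container = document.getElementById('experiment'); // hypothetical id
// const method = fullscreenMethodOf(container);
// if (method) container[method]();
// pixiCanvas.style.display = 'none';
// zebkitCanvas.style.display = 'block';
```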

Are svg images harder to render for a browser than bitmaps?

So I am making this horizontal scroll site which has a ton of images. I planned on using SVGs for the entire site, but with only 20-30 SVG images of medium to high complexity on the page, Chrome already seems to be showing some jank and high paint times on scroll (and Firefox is even worse, though Safari seems to do a lot better).
Scroll timeline
View the site (scrolling works on Mac only; Windows users can use the arrow keys)
My question is: if I were to use PNGs instead of SVGs, would it reduce the paint times and hence the jank? Why is the browser struggling with only around 20-odd SVG images?
As I suspected, the problem turned out to be something completely different. Browsers are more than capable of handling multiple vector images. But what they aren't good at (and understandably so) is redrawing those images very often.
Problem
My long horizontal scroll site was quite wide (30,000px). I had a background-color applied to one of the lower z-indexed divs to represent the sky throughout the site. I didn't want the sky to stretch the entire 30,000px since it essentially didn't change much, so I gave it viewport width and height, with:
position:fixed;
Not a very smart move. It turns out this property was causing my document layer to be repainted on every frame. Initially I thought it was normal for browsers to do so on scroll, since Robby Leonardi's site, which I used as reference, also repainted every frame.
Solution
Thanks to this article by one of the Chrome DevTools developers, I set aside conventional wisdom and made the sky layer
position:absolute;
and stretched it across the entire site width, and boom! The paint rectangles were gone. And the scroll performance was smoother than butter.
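The change boils down to something like this (the `.sky` class name is hypothetical; 30,000px matches the site width mentioned above):

```css
/* Before: repainted the document layer on every scroll frame */
/* .sky { position: fixed; width: 100vw; height: 100vh; } */

/* After: painted once, then scrolled as part of the document */
.sky {
  position: absolute;
  width: 30000px;   /* full site width */
  height: 100vh;
}
```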
Other solutions I tried
Hiding elements not near the viewport to make painting lighter, as suggested by #philipp, didn't yield any appreciable difference. It also felt super-hacky and wasn't targeting the root cause of the problem.
I tried modularizing my site into scenes and using the translateZ(0) hack on each scene so that only the smaller scenes get repainted instead of the whole document. This actually helped quite a bit, and scrolling was decent. Then,
I gave all the SVG images their own layer using translateZ(0). I started getting an FPS of around 60, but again, this wasn't the right way of doing things.
I once had a similar thing. The SVG was 10 or more times as wide as the one shown above; it contained ~20k elements and was about 3MB in size. The only thing that brought back performance (since it was a jump and run game) was an algorithm that found all elements whose bounding box overlapped the viewport. With this I could use display: none; to hide everything that was invisible.
That reduced the amount of visible elements to ~150 per frame and the game ran fluently again.
I used a balanced binary tree (avl tree) and a one dimensional range query, since the height of the viewport was always the same as the image.
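The culling idea can be sketched without the AVL tree; a plain scan over the bounding boxes shows the same principle (function and field names are mine):

```javascript
// Return indices of elements whose horizontal bounding box overlaps
// the viewport [viewLeft, viewRight]. The answer above used an AVL tree
// for the range query; this linear scan illustrates the same culling idea.
function visibleElements(boxes, viewLeft, viewRight) {
  const visible = [];
  for (let i = 0; i < boxes.length; i++) {
    const b = boxes[i];
    if (b.right >= viewLeft && b.left <= viewRight) visible.push(i);
  }
  return visible;
}

// Usage sketch: hide everything outside the result set.
// elements.forEach(function (el, i) {
//   el.style.display = visibleSet.has(i) ? '' : 'none';
// });
```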
Good luck!
[EDIT]
Forgot to leave something like an answer. In my experience, large SVG graphics are a rendering bottleneck, especially if there is a lot of scripting happening. If you do not need any interactivity with the elements of the graphic, so it is nothing more than a large background image, I would recommend using a tile map based on PNG images; that is the standard way in jump'n'run games with huge »worlds«. You gain performance in two points:
Rendering is faster,
You can »lazy load« tiles via AJAX, depending on visibility, so users don't have to download the »whole world« at startup.
Additionally you could use something like PIXI.js to render with WebGL, which will push performance drastically, and with it comes support for tile maps and sprite sheets.
If you insist on the advantages of vector graphics (scaling, interactivity), then you need to find a way to hide as many elements as possible to keep the frame rate high.

onscroll - browser rendering optimization - layers hack - will-change

I've an application with a scrollable element that contains a list of items.
I'd like to optimize the rendering for jank-free scrolling, with some tricks like those described in http://aerotwist.com/blog/on-translate3d-and-layer-creation-hacks/ for example.
I'd just like to know: where should I force the browser to create layers? Am I supposed to create a rendering layer around each of my list items?
I'd also like to know why the browser isn't able to do this on its own, because when an element is scrollable, it makes sense to me that we will move the content of this element up and down without changing the rendering of the inner content, right? So why doesn't the browser create a layer for the inner content of any scrollable element?
By the way, is this layer creation hack consistent across browsers?
Edit:
I've noted that it is now possible to indicate to the browser that some changes will happen.
I could use, for example: will-change: scroll-position; according to this article
However, I still don't understand why the browser needs this, because if we set overflow-y: auto;
or overflow-y: scroll;, it seems obvious that the scroll position is expected to change.
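For completeness, the hints discussed here look like this in CSS (class names are mine; `translate3d` is the older layer-creation hack from the linked article, `will-change` the newer property):

```css
/* Modern hint: tell the browser the scroll position will change,
   so it can promote the scrollable content to a layer up front. */
.scroll-list {
  overflow-y: auto;
  will-change: scroll-position;
}

/* Older layer-creation hack from the linked article: */
.scroll-list-legacy {
  transform: translate3d(0, 0, 0);
}
```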
If you are scrolling a list of elements that are 'static' (they don't change their relative position, size, opacity, etc. while you scroll), there isn't much you can do to improve scrolling performance. That's because your scrollable content is already promoted to a separate layer. You can easily test it yourself:
open this demo (it doesn't use translate3d nor will-change),
enable 'Show composited layer borders' in the DevTools
and observe the result.
Orange border around the scrollable content indicates that it is on a separate layer.
If you are experiencing janky scrolling then try to narrow your issue down (e.g. using Timeline in the DevTools). translateZ(0) is not a silver bullet for all performance problems.

browser repaints and performance

I have a single-page app that uses a lot of CSS3 shadows. The app consists of 8 panels that represent pages. Only 1 panel is visible at a time, while the other 7 are hidden (style display:none;). The user clicks on the menu to move from panel to panel.
Google's Speed Tracer shows that about 75% of resources are spent on repaints. My question is this: do browser repaints affect A) only visible elements of the page or B) every element, whether visible or not?
If it's B then there's not much I can do. If it's A then I could clear the html of the hidden panels and have inner DOM elements for only the visible panel with the goal of reducing repaint time.
Let me know.
The way to answer performance questions is to perform benchmarks. If you want to know how much performance overhead there is to display:none elements, do the following:
Test 1: Load the page with all the display:none elements, measure the repaint speed.
Test 2: Modify the page so that the display:none elements are removed completely, load this page, and measure the repaint speed.
The difference between the two tests is the repaint overhead of display:none elements. Hopefully it will be minimal.
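To put numbers on "measure the repaint speed", one rough approach is to record requestAnimationFrame timestamps during a run of each variant and compare the average frame interval (the helper name is mine):

```javascript
// Average frame interval in ms from a list of requestAnimationFrame
// timestamps; a rough way to compare the two benchmark variants.
function averageFrameInterval(timestamps) {
  if (timestamps.length < 2) return 0;
  let total = 0;
  for (let i = 1; i < timestamps.length; i++) {
    total += timestamps[i] - timestamps[i - 1];
  }
  return total / (timestamps.length - 1);
}

// Collection sketch (browser-only):
// const samples = [];
// function tick(t) {
//   samples.push(t);
//   if (samples.length < 120) requestAnimationFrame(tick);
// }
// requestAnimationFrame(tick);
// Anything well above ~16.7 ms means dropped frames at 60 Hz.
```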
Hidden elements shouldn't repaint. Are you animating the panels? Animating (especially translating) an element that has shadows really puts a load on repainting. Also, the bigger the shadow, the longer it takes to repaint. I haven't done exact measurements, just drawing from personal experience; the 'jank' gets pretty obvious pretty fast, at least on older machines. Maybe try a data-URI background image instead.