Does it impact performance to have classes that are not used for styling on elements?
E.g.:
<div class="translatable">...</div>
where .translatable is used to find all the elements that are going to have their content dynamically altered in certain situations.
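To make the pattern concrete, here is a minimal sketch of that kind of class-as-hook lookup. Plain objects stand in for DOM nodes so it runs anywhere; the element data is invented for illustration. In a browser this is what document.getElementsByClassName('translatable') does for you:

```javascript
// Simplified model of a class-based lookup: the "translatable" class
// carries no styles, it only marks elements for a script to find later.
function byClassName(elements, name) {
  return elements.filter(el => el.className.split(/\s+/).includes(name));
}

const page = [
  { className: 'translatable',           text: 'Hello' },
  { className: 'sidebar',                text: 'About' },
  { className: 'translatable highlight', text: 'Bye' },
];

const toTranslate = byClassName(page, 'translatable');
// toTranslate holds the two marked elements ('Hello' and 'Bye')
```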
These classes increase the document load time (more text = more time) and have a very tiny impact on the time required to interpret any class reference (I assume class names are in hashtables and an extra name could cause such a hashtable to be allocated a little larger).
So... there will be an impact, but unless your unused classes make up a significant percentage of your CSS, it will be hard to see or measure. I can't see worrying about a single class.
If you are using it purely for lookups later, it should be fine. But if you have a large document and then start updating the styles tied to that specific class, you are going to hit performance issues when the browser does a reflow and repaint.
Stoyan Stefanov of Yahoo! explains it quite well on his blog http://www.phpied.com/rendering-repaint-reflowrelayout-restyle/
Related
There was a recommendation by Google PageSpeed that asked web developers to Use efficient CSS selectors:
Avoiding inefficient key selectors that match large numbers of
elements can speed up page rendering.
Details
As the browser parses HTML, it constructs an internal document tree
representing all the elements to be displayed. It then matches
elements to styles specified in various stylesheets, according to the
standard CSS cascade, inheritance, and ordering rules. In Mozilla's
implementation (and probably others as well), for each element, the
CSS engine searches through style rules to find a match. The engine
evaluates each rule from right to left, starting from the rightmost
selector (called the "key") and moving through each selector until it
finds a match or discards the rule. (The "selector" is the document
element to which the rule should apply.)
According to this system, the fewer rules the engine has to evaluate
the better. [...]. After that, for pages that contain large numbers of
elements and/or large numbers of CSS rules, optimizing the definitions
of the rules themselves can enhance performance as well. The key to
optimizing rules lies in defining rules that are as specific as
possible and that avoid unnecessary redundancy, to allow the style
engine to quickly find matches without spending time evaluating rules
that don't apply.
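That right-to-left evaluation can be illustrated with a toy matcher. The data structures and helper names below are invented for the sketch (real engines are vastly more sophisticated): to match "div p.intro", the key selector p.intro is checked against the element itself first, then the ancestor chain is walked looking for a div.

```javascript
// Matches a single simple selector part like "p", ".intro" or "p.intro"
// against an element modeled as a plain {tag, classes, parent} object.
function matchesSimple(el, part) {
  const [tag, cls] = part.split('.');
  return (!tag || el.tag === tag) && (!cls || el.classes.includes(cls));
}

// Right-to-left matching of a descendant selector like "div p.intro".
function matchesDescendant(el, selector) {
  const parts = selector.trim().split(/\s+/);
  // The key (rightmost) selector must match the element itself...
  if (!matchesSimple(el, parts[parts.length - 1])) return false;
  // ...then each remaining part must match some ancestor, right to left.
  let ancestor = el.parent;
  for (let i = parts.length - 2; i >= 0; i--) {
    while (ancestor && !matchesSimple(ancestor, parts[i])) ancestor = ancestor.parent;
    if (!ancestor) return false;
    ancestor = ancestor.parent;
  }
  return true;
}

const div = { tag: 'div', classes: [], parent: null };
const p   = { tag: 'p', classes: ['intro'], parent: div };
// matchesDescendant(p, 'div p.intro') → true
// matchesDescendant(p, 'ul p.intro')  → false (no ul ancestor)
```

Notice how a rule whose key never matches is discarded immediately, which is why the rightmost selector matters most.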
This recommendation has been removed from current Page Speed Insights rules. Now I am wondering why this rule was removed. Did browsers get efficient at matching CSS rules in the meantime? And is this recommendation valid anymore?
In Feb 2011, Webkit core developer Antti Koivisto made several improvements to CSS selector performance in Webkit.
Antti Koivisto taught the CSS Style Selector to skip over sibling selectors and to sort faster, which brought some minor improvements, after which he landed two more awesome patches: one enabling ancestor identifier filtering for tree building, which halved the remaining time spent in style matching over a typical page load, and a fast path for simple selectors that sped up matching by another 50% on some websites.
CSS Selector Performance has changed! (For the better) by Nicole Sullivan runs through these improvements in greater detail. In summary -
According to Antti, direct and indirect adjacent combinators can still be slow, however, ancestor filters and rule hashes can lower the impact as those selectors will only rarely be matched. He also says that there is still a lot of room for webkit to optimize pseudo classes and elements, but regardless they are much faster than trying to do the same thing with JavaScript and DOM manipulations. In fact, though there is still room for improvement, he says:
“Used in moderation pretty much everything will perform just fine from the style matching perspective.”
While browsers are much faster at matching CSS selectors, it's worth reiterating that CSS selectors should still be optimised (eg. kept as 'flat' as possible) to reduce file sizes and avoid specificity issues.
Here's a thorough article (which is dated early 2014)
I am quoting Benjamin Poulain, a WebKit Engineer who had a lot to say about the CSS selectors performance test:
~10% of the time is spent in the rasterizer.
~21% of the time is spent on the first layout.
~48% of the time is spent in the parser and DOM tree creation.
~8% is spent on style resolution.
~5% is spent on collecting the style – this is what we should be testing and what should take most of the time.
(The remaining time is spread over many, many little functions.)
And he continues:
“I completely agree it is useless to optimize selectors upfront, but
for completely different reasons:
It is practically impossible to predict the final performance impact
of a given selector by just examining the selectors. In the engine,
selectors are reordered, split, collected and compiled. To know the
final performance of a given selectors, you would have to know in
which bucket the selector was collected, how it is compiled, and
finally what does the DOM tree looks like.
All of that is very different between the various engines, making the
whole process even less predictable.
The second argument I have against web developers optimizing selectors
is that they will likely make things worse. The amount of
misinformation about selectors is larger than correct cross-browser
information. The chance of someone doing the right thing is pretty
low.
In practice, people discover performance problems with CSS and start
removing rules one by one until the problem go away. I think that is
the right way to go about this, it is easy and will lead to correct
outcome.”
There are approaches, BEM for example, which model the CSS as flat as possible to minimize DOM-hierarchy dependency and to decouple web components, so they can be "moved" across the DOM and work regardless.
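A minimal sketch of the BEM naming idea (the class names here are invented for illustration): every selector is a single class, so matching cost and specificity stay flat no matter where the component sits in the DOM.

```css
/* Instead of location-dependent selectors like ".sidebar ul li a": */
.menu { }                 /* block */
.menu__item { }           /* element of the block */
.menu__item--active { }   /* modifier */
```

Because each rule's key selector is a lone class, the engine's rule hashes resolve it in one lookup, and the component styles the same way wherever it is placed.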
Maybe it's because writing CSS for CMSes or frameworks is more common now, and it's hard then to avoid using general CSS selectors, which keep the complexity of the stylesheet down.
Also, modern browsers are really fast at rendering CSS. Even with huge stylesheets on IE9, it did not feel like the rendering was slow. (I must admit I tested on a good computer. Maybe there are benchmarks out there).
Anyway, I think you must write very inefficient CSS to slow down Chrome or Firefox...
There's a two-year-old post on performance: Which CSS selectors or rules can significantly affect front-end layout / rendering performance in the real world?
I like his one-liner conclusion : Anything within the limits of "yeah, this CSS makes sense" is okay.
Do unused class names on HTML elements impede rendering performance (when there is no corresponding rule in the stylesheet)?
E.g.: I have a number of game types that each have a fixed number set; some game types require the number set to be styled differently. The game-type key is added to the parent of all game types so that the number set can be styled per game type if required, although most use the default styling and as such have unused classes.
Not in terms of active performance. It only gives the document itself more data weight to download, and even that may be trivial once you factor in browser caching.
Class selectors are matched lazily rather than being pulled wholesale into the browser's rendering set; they are only searched for when they are needed. If they are never used, they will not impact the performance of your website.
One final note, though: if you use complex selector chains (.abc .def:nth-child(1) .ghi), it can take some browsers a fraction of a millisecond longer to work out what is going on. You really need to benchmark those situations yourself, and the results may differ strongly per browser.
The browser will have to read those, yes, and this will certainly take a little time, but it should not be anywhere near enough time to affect performance in any way you would notice (unless we're talking hundreds or thousands of unused classes maybe). I would consider this a micro-optimization and move on.
In order to track an almost entirely dynamic DOM layout I am considering using the HTML5 data attribute to track elements.
Would placing one on each DOM element begin to affect load performance, or negatively affect other searching mechanisms such as getElementById or $('#selector')?
It will not affect any other searching mechanism. As far as load performance goes, if you were to measure it down to the microsecond, sure: the more markup gets rendered, the slower it will be. But if you're talking about data- attributes, the difference is probably negligible.
I was reading today about OOCSS, which claims that the approach has two benefits:
Shorter CSS = Better performance
Better maintainability
I agree with the second point, but the first gives me pause: the approach makes the CSS shorter by adding more classes to the HTML, which increases reusability. Yet the CSS file for the whole website can be cached in the browser, while the HTML of each page is different.
My question is: how can a shorter CSS file improve overall site performance when it adds more bytes (classes) to the HTML of every page, given that the CSS is a single file downloaded once into the cache?
By simplifying CSS selectors, keeping the properties DRY and using class attributes in HTML, reflows and repaints will (in theory) be light-weight and therefore increase the smoothness and overall performance of the site.
Reflows and repaints occur when:
Resizing the window
Changing the font
Adding or removing a stylesheet
Content changes, such as a user typing text in an input box
Activation of CSS pseudo classes such as :hover (in IE the activation of the pseudo class of a sibling)
Manipulating the class attribute
A script manipulating the DOM
Calculating offsetWidth and offsetHeight
Setting a property of the style attribute
(above list copied from Reflows & Repaints: CSS Performance making your JavaScript slow? by Nicole Sullivan, creator of OOCSS)
Also watch this video to see reflows and repaints in action: http://www.youtube.com/watch?v=ZTnIxIA5KGw (about 30 seconds of your time)
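The layout-thrashing pattern behind several of those triggers can be sketched with a toy model. A stub "document" stands in for the browser here, so the reflow counts are the model's, not a real browser's; the shape of the result (interleaved reads and writes force repeated reflows, batching avoids them) is the point:

```javascript
// Toy "document": layout state is global, as in a real browser.
// Any style write dirties the layout; reading offsetWidth on a dirty
// layout triggers one reflow for the whole document.
function makeDoc(n) {
  const doc = { dirty: false, reflows: 0 };
  const els = Array.from({ length: n }, () => ({
    w: 100,
    setWidth(w) { doc.dirty = true; this.w = w; },
    get offsetWidth() {
      if (doc.dirty) { doc.reflows++; doc.dirty = false; }
      return this.w;
    }
  }));
  return { doc, els };
}

// Interleaved read/write: every read after a write forces a reflow.
const a = makeDoc(5);
a.els.forEach(el => el.setWidth(el.offsetWidth + 10));
// a.doc.reflows === 4 (only the very first read sees a clean layout)

// Batched: do all reads first, then all writes.
const b = makeDoc(5);
const widths = b.els.map(el => el.offsetWidth);
b.els.forEach((el, i) => el.setWidth(widths[i] + 10));
// b.doc.reflows === 0 (layout was clean before the reads began)
```

The same read-then-write batching applies verbatim to real DOM code that mixes style assignments with offsetWidth/offsetHeight reads.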
That said, easily parsed CSS will also improve your site's responsiveness (as in smoothness), not just the quality of maintainable code.
Obviously this doesn't have any meaningful answer - what's the definition of fast? How many bytes is too much?
The short answer is that if you are gzipping your html, caching things correctly and making sensible reuse of things, then it makes no meaningful difference.
If you are worried about adding some extra CSS classes, then remove all of your optional closing tags such as </li>, as well as your </body> and </html>. Also, for any attribute values that are single words and don't contain any of the problematic characters, drop the surrounding quotes. Those changes should balance out adding the classes.
(In case that sounded a little snarky, I would actually recommend doing that in your caching layer - something like this will do the job:
$page_content = str_replace(array("</option>","</td>","</tr>","</th>","</dt>","</dd>","</li>","</body>","</html>"),"",$page_content);
$page_content = preg_replace('/(href|src|id|class|name|type|rel|sizes|lang|title|itemtype|itemprop)=(\"|\')([^\"\'\`=<>\s]+)(\"|\')/i', '$1=$3', $page_content);
$page_content = preg_replace('!\s+!', ' ', $page_content);
)
The performance gains from "shorter css" are twofold:
Smaller style sheet
Shorter selectors
Long CSS selectors are inefficient. Steve Souders (among others) has written extensively about CSS selector performance. More efficient selectors probably more than offset the few extra bytes that multiple classes cost.
Using a CSS meta-language like LESS or Sass, especially if you employ @extend or mixins, gives you the best of all worlds.
HTML allows an element to have multiple classes:
<div class="cat persian happy big"> Nibbles </div>
But is there a limit on how many classes are allowed per item?
You're only limited by the maximum length of an (X)HTML attribute's value, something covered well by this answer.
Browsers are often very forgiving of standards violations, so individual browsers may allow much longer class attributes. Additionally you are likely able to add a practically infinite number of classes to a DOM element via JavaScript, limited by the amount of memory available to the browser.
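If you want to probe those limits yourself, the string side is trivial to generate. The helper below is a made-up sketch; in a browser you would assign the result to el.className and see whether styling and el.classList still behave:

```javascript
// Build a class attribute containing n distinct (invented) class names.
// The "limit" in practice is memory and attribute length, not a spec number.
function classAttr(n) {
  return Array.from({ length: n }, (_, i) => 'c' + i).join(' ');
}

const attr = classAttr(4000);
// attr.split(' ').length === 4000; attr starts with "c0 c1 c2 ..."
```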
For all intents and purposes, there is no limit. I'm assuming you're asking out of curiosity; it goes without saying that if you're seriously worried about hitting this limit, you've done something very wrong.
No, I don't think I have ever come across any such limit.
EDIT: Sorry for the casual remark.
According to the specifications there isn't any limit, but someone has tried to reach one in practice: Opera and Safari supported well over 4,000 classes, and Firefox at least 2,000!
Source: http://kilianvalkhof.com/2008/css-xhtml/maximum-number-of-supported-classes-per-element/
There is no technical limit (barring the amount of memory the browser may be consuming), but one should think twice before putting loads of classes on any element, as the browser will have to parse all of them, apply the matching styles and render the page.
Also, if you need to search the DOM for elements of a particular class and the elements carry loads of classes, you may well see a performance hit while the JavaScript engine parses them all.