What's the difference between Polymer's shady DOM vs shadow DOM?

I'm having issues using shadow DOM for one of the web components (paper-stepper), and it requires the use of shady DOM instead. I'm not sure what the differences are or why that is the case.

Here's a good explanation of why.
TL;DR:
Shadow DOM:
Shadow DOM works by hiding the scoped DOM trees from the traditional
tree walking functions and accessors (childNodes, children, firstChild
and so on). These accessors return only the elements in your scope.
What this means is that it hides a layer of complexity from you. One example I found online involves the <video></video> tag: inside it are the video controls, but they are abstracted away and you cannot see them. This is what shadow DOM does, but for all web components.
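You can see this in a browser console; a quick sketch, assuming the page contains a <video controls> element with no authored fallback content inside the tag:
// The built-in playback controls live in the video element's user-agent
// shadow tree, so ordinary tree-walking accessors don't see them:
var video = document.querySelector('video');
console.log(video.children.length); // 0, even though controls are rendered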
Shadow DOM sounds nice, but there are limitations:
- It’s a lot of code.
- It’s slow to indirect all the DOM API.
- Structures like NodeList can simply not be emulated.
- There are certain accessors that cannot be overwritten (for example, window.document and window.document.body).
- The polyfill returns objects that are not actually Nodes, but Node proxies, which can be very confusing.
This is where shady DOM comes in.
Shady DOM:
Shady DOM is a super-fast shim for shadow DOM that provides
tree-scoping, but has drawbacks. Most importantly, one must use the
shady DOM APIs to work with scoped trees.
With shady DOM you don't get an abstracted view of the components; you can see everything. However, shady DOM lets you examine how the tree would look if shadow DOM were being used instead, by running this:
var arrayOfNodes = Polymer.dom(YOUR_SELECTOR_GOES_HERE).children;
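As a rough sketch of the contrast (my-element is a hypothetical element with local DOM):
var el = document.querySelector('my-element');
// Plain accessor: under shady DOM this exposes the full, composed tree.
console.log(el.children);
// Polymer wrapper: the children as they would appear under real shadow DOM.
console.log(Polymer.dom(el).children);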
So, taking all this information about how the different DOMs work into consideration, it seems the paper-stepper web component requires access to the whole tree to work properly. Since shadow DOM abstracts away the inner elements, you have to use shady DOM, or refactor the code so that the inner elements don't need to be accessed from outside the abstraction.

Related

FontAwesome SVG + JS with pseudo-elements performance issue on Select2

I'm using the FontAwesome 5 package, with the SVG+JS implementation and the data-search-pseudo-elements option.
I'm in a context where I use the Select2 plug-in to display a <select> element containing nearly 600 options (for a timezone selection). When I try to open the select to choose an option, it takes a very long time to open (which doesn't happen when using the CSS framework, or when pseudo-elements are disabled)!
A look in the browser's performance panel suggests that the FontAwesome script is responsible for this, even though there are no pseudo-elements in the markup generated by Select2.
Is there any way to improve FontAwesome's performance, or to avoid its activation for some HTML elements?
As long as you have data-search-pseudo-elements enabled, Font Awesome will scan the DOM when changes are made, looking for any pseudo-elements that represent icons that should be converted into <svg> elements.
Unfortunately, a scenario like the one you've described is the Achilles heel of this feature. Scanning the DOM for all possible pseudo-elements can be slow when there are many DOM elements. And the MutationObserver causes re-scans whenever the DOM changes, which sounds like what is happening when you open that select control.
So it's probably best to avoid SVG/JS with pseudo-elements in a situation like this.
While I would not recommend putting more effort into a workaround, if you're up against a wall and for some reason have a requirement to keep using SVG/JS and pseudo-elements together like this, here are two possibilities:
If you don't need the MutationObserver to watch for changes, then you could disable it altogether using the Configuration API. For example, add data-observe-mutations="false" to your <script> tag.
If you do need the MutationObserver to watch for changes elsewhere in the DOM, but not on this select control, then after disabling the MutationObserver on load (using the above), you can kick it off programmatically on a smaller region of the DOM using the dom.watch() API with an observeMutationsRoot parameter that is more narrowly scoped. By default, the MutationObserver, when enabled, watches everything under document.body; this is a way to make it watch a smaller region of the DOM instead.
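As a sketch of that second option, assuming the SVG+JS script exposes the global FontAwesome object and that data-observe-mutations="false" is set on its <script> tag; the #icon-region selector is hypothetical:
// Re-enable mutation watching, but only on a narrow region of the DOM,
// so churn inside the Select2 dropdown doesn't trigger re-scans:
FontAwesome.dom.watch({
  observeMutationsRoot: document.querySelector('#icon-region')
});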
If you have a requirement to support pseudo-elements, especially in a DOM with many elements, and even more so if that DOM changes a lot, it's almost certainly going to be best for you to use the CSS/Webfont technology.

How to navigate across shadow DOMs recursively

I want to validate a custom Polymer element. To do this, I want to access all my nested Polymer elements from JavaScript to check whether they are valid.
I can't find an easy way to do this.
this.querySelectorAll does not find my inputs that are nested inside other Polymer elements. It seems I can't use "/deep/" in these selectors.
Is there an easy way to do this? Or do I have to write a recursive JavaScript method that calls querySelectorAll on every element with a shadow root? (I guess performance will get ugly...)
Thanks for your help.
If there is no fast solution, I will probably try the other way around (have my inputs register with the parent).
Answer:
element.querySelectorAll() will find some elements when using /deep/; however, it only goes so far (one shadow DOM level deep). This would indeed necessitate recursive calls from each element node.
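A minimal sketch of such a recursive walk, assuming open shadow roots (closed roots are not reachable this way):
function queryAllDeep(selector, root) {
  root = root || document;
  // Matches in the current scope...
  var results = Array.prototype.slice.call(root.querySelectorAll(selector));
  // ...plus matches inside any descendant's shadow root, recursively.
  var all = root.querySelectorAll('*');
  for (var i = 0; i < all.length; i++) {
    if (all[i].shadowRoot) {
      results = results.concat(queryAllDeep(selector, all[i].shadowRoot));
    }
  }
  return results;
}

// Usage: find every input, however deeply nested in shadow trees.
var inputs = queryAllDeep('input');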
Note:
This type of behavior largely goes against the core tenets of HTML (i.e. that the web page works no matter how well-formed the content is). In other words, all elements are valid no matter their placement.
As an example, I have made a custom element that only renders specific child elements and hides all others. This still keeps in line with the above tenet, as an element's base rendering is controlled by the element/agent, but allows for the developer/designer to customize its presentation aside from the standard presentation.

Is it possible query all elements including shadow dom with Polymer?

For example, let's say we want to call querySelectorAll('canvas') to get all canvases in the document, including the ones in shadow DOM. Is that possible with Polymer?
No. For a period of time there was a proposal whereby you could use the /deep/ combinator, but it was found to be bad for encapsulation and has been deprecated. Code that relies upon it will break.
Instead, if you need to, you can take an element and look into its shadow root specifically and query within it.
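For example (myElement is a hypothetical reference, and its shadow root must be open):
// Query the document's own tree, then one element's shadow tree:
var pageCanvases = document.querySelectorAll('canvas');
var shadowCanvases = myElement.shadowRoot.querySelectorAll('canvas');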

Which Polymer lifecycle event should be used to modify contained elements before they render?

I'm writing a <my-codeblock> element, in which I'd like to strip leading and trailing whitespace from the content. Of the Polymer lifecycle events, which is the best one to use to traverse the contents of the custom element and modify them?
I definitely want to get the modification done before the first paint, and it would be nice to help the polyfill/browser avoid extra work when distributing the nodes I'm going to modify into the shadow dom.
Ironically, I'm working on this exact element as we speak :)
attached() is typically the best place to access light DOM children or parent elements, or to interrogate distributed nodes. From the FAQ:
How do I access the DOM in a <content>?
Why do elements report zero (light DOM) children at created/ready time
When is the best time to access an element’s parent node?
Something that hasn't made it into the documentation yet is the domReady callback. If you add domReady to the element, Polymer calls it when the element's initial set of children is guaranteed to exist. If you need to handle dynamically added/removed children, add a MutationObserver.
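A minimal sketch in the Polymer 0.5 style this answer describes; the trimming logic is illustrative, not the asker's actual element:
Polymer('my-codeblock', {
  domReady: function() {
    // The element's initial children are guaranteed to exist here, so it is
    // safe to strip leading/trailing whitespace before relying on them.
    this.textContent = this.textContent.trim();
  }
});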

Why Shadow DOM when we have iframes?

I've heard of shadow DOM, which seems to solve the problem of encapsulation in web widget development. DOM and CSS rules get encapsulated, which is good for maintainability. But isn't this what iframes are for? What problems with iframes made it necessary for the W3C to come up with shadow DOM and HTML5 Web Components?
Today, iframes are commonly used to ensure separate scope and styling. Examples include Google Maps embeds and YouTube videos.
However, iframes are designed to embed another full document within the current HTML document. This means accessing values in a given DOM element in an iframe from the parent document is a hassle by design. The DOM elements are in a completely separate context, so you need to traverse the iframe’s DOM to access the values you’re looking for. Contrast this with web components which offer an elegant way to expose a clean API for accessing the values of custom elements.
Imagine creating a page using a set of 5 iframes that each contain one component. Each component would need a separate URL to host the iframe’s content. The resulting markup would be littered with iframe tags, yielding markup with low semantic meaning that is also clunky to read and manage. In contrast, web components support declaring rich semantic tags for each component. These tags operate as first class citizens in HTML. This aids the reader (in other words, the maintenance developer).
In summary, while both iframes and the shadow DOM provide encapsulation, only the shadow DOM was designed for use with web components and thus avoids the excessive separation, setup overhead, and clunky markup that occurs with iframes.
iframes are used as just encapsulation objects...
with the exception of SVG (more on that later), today’s Web platform
offers only one built-in mechanism to isolate one chunk of code from
another — and it ain’t pretty. Yup, I am talking about iframes. For
most encapsulation needs, frames are too heavy and restrictive.
Shadow DOM provides better and easier encapsulation by letting you attach a separate, scoped DOM tree to an element.
For example, imagine you build a widget (as I have) that is used across websites.
Your widget might be affected by the CSS on the page and look horrible, whereas with shadow DOM it will not :)
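A small sketch of that isolation, using the current standard attachShadow API (the #widget host element is hypothetical):
// Page stylesheet selectors don't match nodes inside the shadow root,
// and this <style> doesn't affect elements outside it:
var host = document.querySelector('#widget');
var shadow = host.attachShadow({ mode: 'open' });
shadow.innerHTML =
  '<style>p { color: rebeccapurple; }</style>' +
  '<p>This paragraph ignores the page\'s p rules.</p>';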
Here is an excellent article on the topic:
What the Heck is Shadow DOM?