WebComponents/Polymer - Lifecycle Callback Ordering in Trees

Let's say I have the following html:
<my-element-one>
  <my-element-two>
    <my-element-three></my-element-three>
  </my-element-two>
</my-element-one>
Now, suppose this is parsed into a DocumentFragment, which I then insert into the document. In what order will the attachedCallbacks of these custom elements fire? Will they consistently fire depth first (three, two, one)? Or will they fire from top to bottom (one, two, three)? Or is the order entirely undetermined? And if I later remove the entire tree, in what order will the detachedCallbacks fire?
Finally, is this behavior consistent between the polyfill and the W3C spec's intended behavior? I've read through a bunch of the spec and haven't found a clear explanation on how this ordering should play out.

Although I assume your original question is about Custom Elements in general, I've put together an example using Polymer which tries to replicate the tree ordering you're interested in:
http://jsbin.com/yisaqe/3/edit
In this case, we see that the lifecycle callbacks are executed from top to bottom (one, two, three) rather than depth first (three, two, one).
If you remove the entire tree later on, the detached callbacks are similarly executed in top-to-bottom order (one, two, three - see the console):
http://jsbin.com/mejija/1/edit
I assume that this is consistent between the polyfill and the spec's intended behaviour, but I haven't been able to ascertain from the spec whether it is meant to differ. I hope that at least these proofs of concept are useful.
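For reference, here is a minimal sketch of the kind of logging those examples rely on, written against the (since-deprecated) Custom Elements v0 API that the attachedCallback/detachedCallback names come from; the jsbins use Polymer, so treat this as an approximation rather than their exact source:

['my-element-one', 'my-element-two', 'my-element-three'].forEach(function (name) {
  var proto = Object.create(HTMLElement.prototype);
  proto.attachedCallback = function () { console.log('attached:', name); };
  proto.detachedCallback = function () { console.log('detached:', name); };
  document.registerElement(name, { prototype: proto });
});

// Build the nested tree inside a fragment, then insert it.
var fragment = document.createDocumentFragment();
var one = document.createElement('my-element-one');
var two = document.createElement('my-element-two');
var three = document.createElement('my-element-three');
two.appendChild(three);
one.appendChild(two);
fragment.appendChild(one);

document.body.appendChild(fragment); // logs attached: one, two, three (top to bottom)
document.body.removeChild(one);      // logs detached: one, two, three as well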

Related

Aren't DOM searches for elements with classes depth first?

I was under the impression that when a browser (generally) searches for an element that has a class, it performs a depth-first search.
Recently I was asked to put some code together for a colleague, and asked to identify forms on a page with the substring of 'webform' in the class. I knew there was a form on a page I tested and used the following JS:
document.querySelector("[class*=webform]")
However, this returned the body element of the page, whose class attribute had the substring 'webform' in it. Generally (this question being browser-dependent), is searching the DOM for elements with a certain class depth-first? Is it entirely implementation- or browser-dependent (i.e. will querySelector use one method while another function uses a different one)?
Many thanks.
#hungerstar is right. Apologies for the brain fart; it seems I need to brush up on my trees a little!
So, in conclusion, it is indeed depth-first. Great!
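To make the traversal order concrete: querySelector returns the first match in document order, which is a depth-first, pre-order walk. A small sketch (the class names are made up) of why the body wins over a nested form:

<body class="page-webform-contact">
  <form class="webform-client-form">...</form>
</body>

// Document order visits <body> before its descendants, so the body
// itself is the first element whose class contains 'webform':
document.querySelector("[class*=webform]");     // -> <body>

// Restricting the selector to forms skips the body:
document.querySelector("form[class*=webform]"); // -> <form>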

Comparing different DOM nodes: which ones are more performant?

While trying to reduce the number of DOM nodes, I did some research on this but have not found any comparative numbers. For example, is it better to use two DOM elements or two pseudo-elements? Is it better to use 20-character text nodes instead of 10-character DOM elements (assuming an element carries far more extra state than a text node - aren't they cached or something?)?
The goal is to make a big DOM tree manageable (mostly a table with some structure in each cell, about 30,000 DOM elements in total).
I've read the W3C specs on pseudo-elements, but did not find any useful info.
So are there any common rules, or can this only be discovered through benchmarks?
I've seen this question as well, but it did not help much, as my question is about comparing different kinds of nodes - which should be preferred to improve performance?
And yes, I know about the cell-reusing approach, but that also depends on the complexity of the cells (in IE11 scrolling lags a lot, much more than just rendering the entire structure at once).
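Since there do not seem to be published numbers, a quick micro-benchmark is probably the way to answer this for a concrete page. A rough sketch (the 30,000 count matches the question; the child structure is made up for illustration):

function timeBuild(label, makeChild) {
  var container = document.createElement('div');
  var start = performance.now();
  for (var i = 0; i < 30000; i++) {
    container.appendChild(makeChild(i));
  }
  document.body.appendChild(container);
  void container.offsetHeight; // force layout so its cost is included in the measurement
  console.log(label, (performance.now() - start).toFixed(1) + ' ms');
  document.body.removeChild(container);
}

// Variant 1: one element per item.
timeBuild('elements', function (i) {
  var span = document.createElement('span');
  span.textContent = 'cell ' + i;
  return span;
});

// Variant 2: one text node per item.
timeBuild('text nodes', function (i) {
  return document.createTextNode('cell ' + i + ' ');
});

Numbers like these are only meaningful per browser and per page structure, which is why general rules are hard to state.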

How to navigate across shadow DOMs recursively

I want to validate a custom Polymer element. To do this, I want to access, in JavaScript, all of my nested Polymer elements to check whether they are valid.
I can't find an easy way to do this.
this.querySelectorAll does not find my inputs that are nested inside other Polymer elements. It seems I can't use "/deep/" in these selectors.
Is there an easy way to do this? Or do I have to write a recursive JavaScript method that calls querySelectorAll on every element with a shadow root? (I guess performance will get ugly...)
Thanks for your help.
If there is no fast solution, I will probably try the other way around (having my inputs register with the parent).
Answer:
element.querySelectorAll() will find some elements when using /deep/; however, it only goes so far (one shadow DOM level). This does indeed necessitate recursive calls from each element node.
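A sketch of that recursive approach (assuming open shadow roots, so element.shadowRoot is accessible; the function name queryDeep is made up):

function queryDeep(root, selector) {
  var results = Array.prototype.slice.call(root.querySelectorAll(selector));
  // Descend into every shadow root under this root and repeat the query.
  var all = root.querySelectorAll('*');
  for (var i = 0; i < all.length; i++) {
    if (all[i].shadowRoot) {
      results = results.concat(queryDeep(all[i].shadowRoot, selector));
    }
  }
  return results;
}

// Usage: collect every input, however deeply nested in shadow trees.
var inputs = queryDeep(document, 'input');

As noted above, this visits every element under every shadow root, so on large trees the cost adds up.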
Note:
This type of behavior largely goes against the core tenets of HTML (i.e. that the web page works no matter how well-formed the content is). In other words, all elements are valid no matter their placement.
As an example, I have made a custom element that only renders specific child elements and hides all others. This still keeps in line with the above tenet, as an element's base rendering is controlled by the element/agent, but allows for the developer/designer to customize its presentation aside from the standard presentation.
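For what it's worth, a sketch of how such an element could be built with Shadow DOM v0 content projection (the element and child names here are made up): children matching the select filter are rendered, and everything else has no insertion point, so it never displays.

var proto = Object.create(HTMLElement.prototype);
proto.createdCallback = function () {
  var shadow = this.createShadowRoot();
  // Only <allowed-child> children are projected into the shadow tree;
  // all other light-DOM children have no insertion point and stay hidden.
  shadow.innerHTML = '<content select="allowed-child"></content>';
};
document.registerElement('picky-container', { prototype: proto });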

How to find out what element is going to be extended while registering my own in Polymer

I recently asked How to extend multiple elements with Polymer and it turned out, in fact, you can't really. The idea was to create a web component that can be applied to different elements to "decorate" them.
Addy Osmani answered this question with a few approaches to handle that use case.
One of them was:
The alternative (if you strictly want to do this all in one custom element, which imo, makes this less clean) is to do something like checking against the type of element being extended, which could either be done in the manner you linked to or by checking as part of your element registration process.
Setting aside the fact that this approach might be less clean, my question is:
How can I find out which element is being extended while I'm registering my own?
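One concrete way to do the check Addy describes is to capture the extended tag at registration time by registering the same behaviour once per host type. A rough sketch in plain Custom Elements v0 for brevity (the helper name registerDecoratorFor and the tag list are made up; the same idea carries over to Polymer's registration):

function registerDecoratorFor(tags, behavior) {
  tags.forEach(function (tag) {
    // Inherit from the native element's prototype so the type extension is valid.
    var proto = Object.create(Object.getPrototypeOf(document.createElement(tag)));
    proto.createdCallback = function () {
      behavior.call(this, tag); // 'tag' is the element being extended
    };
    document.registerElement('my-decorator-' + tag, { prototype: proto, extends: tag });
  });
}

registerDecoratorFor(['button', 'input'], function (extendedTag) {
  console.log('decorating a', extendedTag, this);
});

// Usage in markup: <button is="my-decorator-button"> or <input is="my-decorator-input">.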

Are DOM nodes synchronous?

I was wondering whether DOM node attributes are synchronous in terms of styling information. While reading an article on the subject, I came across the following line:
Scripts asking for style information, like "offsetHeight" can trigger incremental layout synchronously.
The article seems to indicate that there is a "dirty node" system that will pause script execution until the document has been laid out. So, given a dirty node n, if n.offsetHeight is accessed from JavaScript, the article suggests that n.offsetHeight will not return until the offset height has been fully reified. Is my understanding of this correct? Can I rely on the browser to always give me the current, stable version of any attached DOM element?
Put succinctly, if I modify some styling on a node (using the style attribute, class names, dynamic css, whatever else), and then read some property that depends on said styling, can I always be certain that the value I get back will be the value of the node with my previous styling applied? If this is not the case, how can I know when my styling changes have been applied?
When you read information from the DOM elements, you will always get the current value, and properties that rely on other properties or other elements will always be correctly calculated when you read them.
When you change the DOM so that the layout changes, the elements are not all recalculated immediately when you make the change. That would just be wasteful if you then change something else that requires another recalculation. The layout remains uncalculated as long as there is no need for it. If you read a property that depends on that recalculation, it will be performed before the value is returned.
So, by planning how you set and read properties, you can avoid unnecessary recalculations.
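To illustrate the last point, a small sketch (the .box elements are hypothetical) of how interleaving writes and reads forces a synchronous layout on every iteration, while batching needs at most one:

var boxes = document.querySelectorAll('.box');

// Interleaved: each offsetHeight read flushes the pending style change,
// forcing a synchronous layout per iteration.
boxes.forEach(function (el) {
  el.style.width = '100px';      // write (invalidates layout)
  console.log(el.offsetHeight);  // read (forces recalculation now)
});

// Batched: do all the writes first, then all the reads.
boxes.forEach(function (el) { el.style.width = '100px'; });
boxes.forEach(function (el) { console.log(el.offsetHeight); });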