How to navigate across shadow DOMs recursively - Polymer

I want to validate a custom Polymer element. To do this, I want to access, in JavaScript, all my nested Polymer elements to see if they are valid.
I can't find an easy way to do this.
this.querySelectorAll does not find my inputs that are nested inside other Polymer elements. It seems I can't use "/deep/" in these selectors.
Is there an easy way to do this? Or do I have to write a recursive JavaScript method that calls querySelectorAll on every element that has a shadow root? (I guess performance will get ugly...)
Thanks for your help.
If there is no fast solution, I will probably try the other way around (have my inputs register themselves with the parent).

Answer:
element.querySelectorAll() will find some elements when using /deep/; however, it only goes so far (one shadow DOM level). This would indeed necessitate recursive calls from each element that hosts a shadow root.
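A rough sketch of such a recursion (assuming open shadow roots, so that element.shadowRoot is accessible):

    // Walk every open shadow root manually, since /deep/ only pierces one
    // level and is on its way out anyway.
    function querySelectorAllDeep(selector, root) {
      root = root || document;
      var results = Array.prototype.slice.call(root.querySelectorAll(selector));
      var hosts = root.querySelectorAll('*');
      for (var i = 0; i < hosts.length; i++) {
        if (hosts[i].shadowRoot) {
          results = results.concat(querySelectorAllDeep(selector, hosts[i].shadowRoot));
        }
      }
      return results;
    }

    // Usage: collect every <input>, however deeply nested in shadow trees.
    var inputs = querySelectorAllDeep('input');

In practice a validation pass like this only runs on demand (e.g. on submit), so the cost is usually acceptable even on larger trees.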
Note:
This type of behavior largely goes against one of the core tenets of HTML (i.e. that the web page works no matter how well-formed the content is). In other words, all elements are valid no matter their placement.
As an example, I have made a custom element that only renders specific child elements and hides all others. This still keeps in line with the above tenet, as an element's base rendering is controlled by the element/agent, but it allows the developer/designer to customize its presentation beyond the standard one.

Related

FontAwesome SVG + JS with pseudo-elements performance issue on Select2

I'm using the Font Awesome 5 package, with the SVG + JS implementation and the "data-search-pseudo-elements" option.
I'm in a context where I use the "Select2" plug-in to display a <select> element that contains nearly 600 options (for a timezone selection). But when I try to open the select to choose an option, it takes a very, very long time to open (which doesn't happen when using the CSS framework, or when pseudo-elements are disabled)!
A quick look at the browser's performance panel seems to show that the Font Awesome script is responsible for this, even though there are no pseudo-elements in the markup generated by Select2.
Is there any way to improve Font Awesome's performance, or to avoid activating it for some HTML elements?
As long as you have data-search-pseudo-elements enabled, Font Awesome will scan the DOM when changes are made, looking for any pseudo-elements that represent icons that should be converted into <svg> elements.
Unfortunately, a scenario like the one you've described is the Achilles heel of this feature. Scanning the DOM for all possible pseudo-elements can be slow when there are many DOM elements. And the MutationObserver causes re-scans to occur whenever the DOM changes, which sounds like exactly what is happening when you open that select control.
So it's probably best to avoid SVG/JS with pseudo-elements in a situation like this.
While I would not recommend putting more effort into a workaround, if you're up against a wall and for some reason have a requirement to continue using SVG/JS and pseudo-elements together like this, then here are two possibilities:
If you don't need the MutationObserver to watch for changes, then you could disable it altogether using the Configuration API. For example, add data-observe-mutations="false" to your <script> tag.
If you do need the MutationObserver to watch for changes elsewhere in the DOM, but not on this select control, then after disabling the MutationObserver on load (using the above), you could kick it off programmatically on a smaller region of the DOM using the dom.watch() API with an observeMutationsRoot parameter that is more narrowly scoped. By default, the MutationObserver, when enabled, watches everything under a root of document.body, but this way you can make it work on a smaller region of the DOM.
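As a minimal sketch of that second option (the #icon-area id is hypothetical; point it at whatever region of your page actually contains pseudo-element icons), with data-observe-mutations="false" set on the Font Awesome <script> tag:

    // Sketch only: re-enable watching, but scoped to a small region instead of
    // document.body. Assumes the SVG/JS build exposes window.FontAwesome and
    // that #icon-area is the (hypothetical) container holding your icons.
    var iconRegion = document.getElementById('icon-area');
    FontAwesome.dom.watch({
      observeMutationsRoot: iconRegion
    });

That way the Select2 dropdown can churn through its ~600 options without triggering Font Awesome re-scans.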
If you have a requirement to support pseudo-elements, and especially if you need to support that in a DOM with many elements, and especially especially if the DOM is changing a lot, it's almost certainly going to be best for you to use the CSS/Webfont technology.

Different ID of element in different browser instance (python-selenium-chrome)

I was locating the elements for Selenium through the inspect function of my actual browser (Chrome) and never had any issues. Now I had a case where the located element couldn't be found, and I figured out that in my Selenium Chrome instance the element has a different ID than in my "normal" one, which is why I can't locate it.
This is the case with some other elements too, and I can't spot the pattern.
In Chrome's Incognito mode I get the same values as in my normal browser, and the IDs that my Selenium browser gets are the same every time I launch the program.
Does anybody have an explanation for this?
Is this common practice among web developers, and what's the way to go about this issue in future projects?
Do I always have to run my Selenium browser first and then extract the elements' IDs from there?
Although locating elements by id is the preferred way, it is not the only one; there are more options, like:
name attribute
link text (for <a> elements)
partial link text (for <a> elements)
HTML tag name
class attribute
CSS selector
XPath selector
The last one, XPath, is the most powerful, as it's almost a programming language in itself. Unlike other selector strategies, XPath selectors have full awareness of the DOM: they can look up any attribute, text, or parent/child object, traverse axes, and if that's not enough you can use functions and operators to precisely select whatever element is needed.
With regards to dynamic IDs: it's quite a common practice when the page is not deterministic and the content is dynamic. Theoretically you can ask your application developers to come up with a custom HTML attribute which will be used for automation and maybe user tracking, but if for some reason that is not possible, you will have to define another way of locating the element, as in the sketch below.
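The question uses the Python bindings, but the locator strategies are identical across bindings; here is an illustrative sketch using the JavaScript selenium-webdriver package (all locator values are made up; the Python equivalents live in selenium.webdriver.common.by.By):

    // Illustrative only: hypothetical page and locator values.
    const { Builder, By } = require('selenium-webdriver');

    async function demo() {
      const driver = await new Builder().forBrowser('chrome').build();
      await driver.get('https://example.com/login');

      await driver.findElement(By.name('username'));                     // name attribute
      await driver.findElement(By.linkText('Forgot password?'));         // link text
      await driver.findElement(By.partialLinkText('Forgot'));            // partial link text
      await driver.findElement(By.tagName('form'));                      // HTML tag name
      await driver.findElement(By.className('btn-primary'));             // class attribute
      await driver.findElement(By.css('form#login input[type=submit]')); // CSS selector
      await driver.findElement(By.xpath("//label[text()='Email']/following-sibling::input")); // XPath

      await driver.quit();
    }

    demo().catch(console.error);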

Are there technical reasons not to use undefined custom elements for everything?

Some people seem to like to rewrite
<div id="homepage">[...]</div>
as
<app-homepage>[...]</app-homepage>
Are there any technical or spec-related reasons not to do this? Mind that I am talking purely about changing this at the level of the HTML and CSS; the elements have not been defined using the custom elements API.
Tl;dr: Don't do it. Use the custom element spec for what it's made for. Hacking together your HTML syntax is not what it's made for.
Semantics
First of all, custom elements in general can break the semantics of the DOM structure. When custom elements are used properly you receive a lot of power in return, but this way you give up semantics without gaining any benefit. Instead, use the proper HTML5 elements like <header>, <article>, etc.
Undefined custom element state
According to the custom elements spec such elements have a custom element state of
"undefined" (not defined, not custom)
Now, the way the HTML spec works, any element that isn't recognized has a defined behavior: it is simply created as a plain, undefined HTML element instance.
Until the custom elements spec, it was quite dangerous to invent such elements, because there was a risk that a future version of HTML would implement them, but now all element names containing a - (dash) are reserved for custom elements.
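You can see the difference in the console (a quick sketch; the element names are just examples):

    // A name without a dash that HTML doesn't recognize becomes HTMLUnknownElement,
    // while a valid custom element name (with a dash) becomes a plain HTMLElement
    // in the "undefined" state until customElements.define() is called for it.
    console.log(document.createElement('foo') instanceof HTMLUnknownElement);          // true
    console.log(document.createElement('app-homepage') instanceof HTMLUnknownElement); // false
    console.log(document.createElement('app-homepage') instanceof HTMLElement);        // true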
Does that mean that you are entirely safe? No, because you put yourself in the same namespace as all other custom elements. And unlike with external stylesheets there is no proper way to namespace them, so if you wish to do something like this you will have to write code like
<my-app-name-homepage>[...]</my-app-name-homepage>
and even then you still end up with an element with an undefined state.

Is it possible to query all elements, including the shadow DOM, with Polymer?

For example, let's say we want to do querySelectorAll('canvas') to get all canvases in the document, including the ones in the shadow DOM. Is that possible with Polymer?
No. For a period of time there was a proposal whereby you could use the /deep/ combinator, but it was found to be bad for encapsulation and has been deprecated. Code that relies upon it will break.
Instead, if you need to, you can take an element and look into its shadow root specifically and query within it.
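For example (a minimal sketch; <my-chart> is a hypothetical element, and its shadow root must be open for this to work):

    // Query inside one specific element's shadow root.
    var chart = document.querySelector('my-chart');
    var canvases = chart && chart.shadowRoot
      ? chart.shadowRoot.querySelectorAll('canvas')
      : [];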

How to extend multiple elements with Polymer

I know there is this question on multiple inheritance/composition. However, it seems like that question is more about how to reuse functionality from multiple existing elements in other elements. And obviously, the solution for that is mixins.
I would like to know how I can actually "decorate" existing elements without really borrowing functionality from them. We know there is this extends property one can use to extend an existing element with Polymer.
So making a normal <button> behave like a mega-button is as simple as attaching <button is="mega-button"> and writing a component for it. But it turns out that it's not possible to extend multiple elements. So something like extends="foo bar" doesn't work. What if I want to build a web component that can actually be applied to different elements?
For example, I don't want to only extend <button> elements with mega-button but probably also an <a> element so that it looks like and behaves like a mega-button too?
The mixin approach doesn't really help here (as far as I get it), because mixins do nothing more than provide shared logic for different web components. That means you create multiple components and reuse the logic that is packed into a mixin.
What I need is a way to create one web component that can be applied to multiple elements.
Any idea how to solve that?
UPDATE
Addy answered with some approaches to handle that use case. Here's a follow up question based on one approach
How to find out what element is going to be extended, while registering my own in Polymer
And another one on Is it possible to share mixins across web components (and imports) in Polymer?
UPDATE 2
I've written an article that sums up my experiences and learnings about inheritance and composition with Polymer: http://pascalprecht.github.io/2014/07/14/inheritance-and-composition-with-polymer/
If you need just a single import that supports being applied to multiple elements, your import could include multiple element definitions, which may or may not take advantage of Polymer.mixin in order to share functionality between your decorating elements.
So pascal-decorator.html could contain Polymer element definitions for <pascal-span> and <pascal-button>, both of which mix in logic from some object defined within pascal-decorator.html. You can then do <button is="pascal-button"> and <span is="pascal-span"> whilst the logic for doing so remains inside the same import.
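A rough sketch of that shape, using the Polymer 0.x registration API that was current at the time (the element names, the shared object, and its methods are all made up, so treat this as an outline rather than drop-in code):

    // Inside pascal-decorator.html. Shared behaviour both decorators mix in.
    var megaBehaviour = {
      activate: function () {
        this.classList.add('mega'); // whatever "mega" means for your design
      }
    };

    // Registered from a <polymer-element name="pascal-button" extends="button"> definition.
    Polymer('pascal-button', Polymer.mixin({
      ready: function () { this.activate(); }
    }, megaBehaviour));

    // Registered from a <polymer-element name="pascal-span" extends="span"> definition.
    Polymer('pascal-span', Polymer.mixin({
      ready: function () { this.activate(); }
    }, megaBehaviour));

Consumers can then write <button is="pascal-button"> or <span is="pascal-span"> and get the same behaviour from a single import.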
The alternative (if you strictly want to do this all in one custom element, which, imo, makes this less clean) is to do something like checking against the type of element being extended, which could be done either in the manner you linked to or by checking as part of your element registration process.
In general, I personally prefer to figure out what logic I may need to share between elements that could be decorated, isolate that functionality into an element, and then just import it into dedicated elements that have knowledge about the tag (e.g. <addy-button>, <addy-video>, etc.).