Certain elements, like iron-list, require template elements as children.
However, Polymer 3 strips nested templates of their content.
How are these seemingly conflicting principles supposed to work together?
See the jsbin examples at https://www.webcomponents.org/element/@polymer/iron-list (they don't work due to the empty nested template).
The preserve-content attribute disables data binding inside the template, so that's not a viable solution.
What's the reason for this template-stripping anyway? (The docs just say "better performance".)
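For reference, this is the kind of markup in question; iron-list expects the child <template> to supply the row bindings:
<iron-list items="[[items]]" as="item">
  <template>
    <div>[[item.name]]</div>
  </template>
</iron-list>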
The given examples have been updated and work now.
It seems like the combination of v3 elements and v2 polymer-core caused the failure.
Also, my personal guess is that template-based elements manually run their child templates through Templatizer to stamp the template's content, even though the template's own content appears empty.
Templatizer seems to access the cached content that is noted in the docs.
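For what it's worth, here is a minimal sketch of how an element can stamp a child template in Polymer 3 with the documented templatize utility; whether iron-list's internals actually look like this is my assumption:

import {templatize} from '@polymer/polymer/lib/utils/templatize.js';

// Sketch only: `host` stands for the custom element that owns the child <template>.
const template = host.querySelector('template');
const TemplateClass = templatize(template, host, {});          // build an instance constructor
const instance = new TemplateClass({item: {name: 'example'}}); // stamp with initial bindings
host.appendChild(instance.root);                               // instance.root is a DocumentFragment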
I am using angular-ui-select on a website where the styled select fields are configured with a custom tag named ui-select. This works great, but running a W3C validation leads to this error:
Element ui-select not allowed as child of element div in this context. (Suppressing further errors from this subtree.)
Here's an example:
<!doctype html>
<html lang="en">
<head><title>x</title></head>
<body>
<div>
<ui-select></ui-select>
</div>
</body></html>
I understand that <ui-select> is not expected to be there, but how can I handle this better?
Can I wrap it in a different tag, or is there a different approach for ui-select instead of using HTML markup?
W3C HTML5 validator maintainer here. The short answer with regard to the validator's current behavior is: the validator is going to emit errors for any custom elements you use in documents, and right now there's no way for you as a user to work around that. It's going to continue that way for some time longer, until we get around to figuring out a solution.
We're having some ongoing discussions about how to solve this. Changing the validator to simply ignore any element name with a hyphen is not viable as a complete solution, because then we could not practically check any child elements it might have; we'd have to ignore the entire subtree, since doing otherwise would lead to other errors. So that's far short of an ideal solution.
Anyway, I'd love to find a good way to solve this, so if others have ideas I'd like to hear them. Two good places to send ideas/proposals on this are the public-webapps@w3.org mailing list https://lists.w3.org/Archives/Public/public-webapps/ and the whatwg@whatwg.org mailing list https://whatwg.org/mailing-list#specs
One idea I've thought of myself is that we could have the validator treat all custom elements the same way it currently treats the <div> element (as far as where it's allowed in a document and what child elements it's allowed to contain). That's also short of ideal, but at least it would give us a way to check for errors in descendant elements in the custom element's subtree.
Update 2017-02-06: the W3C HTML Checker now supports custom elements
So, I added support for custom elements to the W3C HTML Checker (validator) on 2016-12-16 and a few days later refined it to do more detailed checking for prohibited names.
The trick I ended up figuring out to implement it in the checker architecture (which is at its core a RelaxNG grammar/schema-based validator) was to add a pre-processing filter that takes any elements that have a hyphen in their element name and puts them in a separate XML namespace.
Then I updated the RelaxNG schema to allow any elements from that XML namespace anywhere. (Which is ironic because I pretty much hate XML namespaces and all the problems they cause.)
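As a rough sketch of the naming rule involved (my own illustrative JavaScript, not the checker's actual code): a tag name counts as a custom-element name if it contains a hyphen and is not one of the hyphenated names already used by SVG and MathML, which the Custom Elements spec excludes:

var reservedNames = [
  'annotation-xml', 'color-profile', 'font-face', 'font-face-src',
  'font-face-uri', 'font-face-format', 'font-face-name', 'missing-glyph'
];
function isCustomElementName(tagName) {
  var name = tagName.toLowerCase();
  return name.indexOf('-') !== -1 && reservedNames.indexOf(name) === -1;
}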
So we’re now looking at doing something similar for custom-attribute names—probably just by defining those as being any attribute names that contain a hyphen (like custom-element names).
But the HTML checker can’t be changed to allow custom-attribute names until the HTML spec is updated to allow them. For that, see the proposal being discussed in the HTML-spec issue tracker.
That's indeed a long-known issue with AngularJS.
A few things you can do:
Instead of using the element <ui-select>, you can use <div ui-select>, but the validator will still flag the attribute.
An attribute prefixed with x- or data- will pass, but I am not sure ui-select supports that; see the example below.
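For example (whether ui-select accepts the data- prefixed spelling is exactly the part I'm unsure about):

<div ui-select ng-model="vm.selected"></div>
<div data-ui-select data-ng-model="vm.selected"></div>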
W3C HTML validation is useful, but I think it's mostly important for HTML emails so they don't get screened as spam. It's also good for search engines, but really not that critical.
If you look at 'why validate', the reasons are mostly for cleanliness, ease of debugging, and overall good practice.
Angular (un?)fortunately expands the realm of possibilities for HTML5, in a way that, naturally, deviates from the latest specifications for HTML.
We are having the same problem using Knockout custom components.
http://knockoutjs.com/documentation/component-overview.html
I added a suggestion for a minor enhancement to the validator for users who want to use custom elements even though the specification is not yet final (http://w3c.github.io/webcomponents/spec/custom/#custom-tag-example):
https://github.com/validator/validator/issues/94
I want to validate a custom Polymer element. To do this, I want to access all of my nested Polymer elements from JavaScript and check whether they are valid.
I can't find an easy way to do this.
this.querySelectorAll does not find my inputs that are nested in other Polymer elements. It seems I can't use "/deep/" in these selectors.
Is there an easy way to do this? Or do I have to write a recursive JavaScript method that calls querySelectorAll on every element with a shadow root? (I guess performance will get ugly...)
Thanks for your help.
If there is no fast solution, I will probably try it the other way around (have my inputs register themselves with the parent).
Answer:
element.querySelectorAll() will find some elements when using /deep/; however, it only goes one shadow DOM level deep. This does indeed necessitate recursive calls from each element node.
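A minimal sketch of that recursive approach (collectDeep is a name I made up; it walks every open shadow root instead of relying on the deprecated /deep/ combinator):

function collectDeep(root, selector, found) {
  found = found || [];
  found.push.apply(found, root.querySelectorAll(selector));
  var all = root.querySelectorAll('*');
  for (var i = 0; i < all.length; i++) {
    if (all[i].shadowRoot) {
      collectDeep(all[i].shadowRoot, selector, found);
    }
  }
  return found;
}

// Usage: gather every nested input, however deeply it is stamped.
var inputs = collectDeep(document, 'input');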
Note:
This type of behavior largely goes against the core tenets of HTML (i.e., that the web page still works regardless of how well-formed the content is). In other words, all elements are valid no matter their placement.
As an example, I have made a custom element that only renders specific child elements and hides all others. This still keeps in line with the above tenet, as an element's base rendering is controlled by the element/agent, but allows for the developer/designer to customize its presentation aside from the standard presentation.
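A rough sketch of what I mean (the element and child names are invented), using the Custom Elements API:

class FilteredList extends HTMLElement {
  connectedCallback() {
    // Hide every child that is not one of the elements this component knows how to render.
    Array.from(this.children).forEach(function (child) {
      if (child.tagName.toLowerCase() !== 'item-row') {
        child.style.display = 'none';
      }
    });
  }
}
customElements.define('filtered-list', FilteredList);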
We're working in a component (module)/template framework.
There is only one template per page; it defines the basic structure and layout, and the HEAD area is defined there.
Now, many of our components(modules) include some concept of pagination.
Thus, it's desirable to use rel="next" and rel="prev" links in the head of the document.
The problem is that the template is not (and cannot be) aware of the component that provides pagination. They are 100% completely decoupled.
By the time the component runs, the head part of the page has typically already been flushed.
It's just a limitation of the framework.
Placing the links in the BODY (where the component/module renders) will not achieve the correct result, since Google ignores them unless they are in the head.
Can anyone think of an approach or work-around to this issue?
You could send HTTP Link headers instead:
Link: <http://www.example.com/favorite-books/everything-on-one-page>; rel="canonical"
Link: <http://www.example.com/favorite-books/page-1>; rel="prev"
Link: <http://www.example.com/favorite-books/page-3>; rel="next"
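For example, with a plain Node.js response object (illustrative only; the question doesn't say what server stack serves these pages), the headers need to be set before any of the body is flushed:

// Illustrative Node.js sketch; an array value makes Node send one Link header per entry.
res.setHeader('Link', [
  '<http://www.example.com/favorite-books/page-1>; rel="prev"',
  '<http://www.example.com/favorite-books/page-3>; rel="next"'
]);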
According to Google’s documentation about their usage of canonical, it’s supported.
While support isn’t mentioned on Google’s documentation about their usage of prev/next, support was confirmed in a thread on their product forums.
According to Google, the default option is to do nothing: "Leave whatever you have exactly as-is. Paginated content exists throughout the web and we’ll continue to strive to give searchers the best result, regardless of the page’s rel=”next”/rel=”prev” HTML markup—or lack thereof."
Other, more up-to-date mentions of the issue suggest the advice hasn't changed:
Video about pagination
Infinite scroll
Given the current HTML5 specs, which allow creating custom HTML elements (as long as their name contains a dash), and the fact that Web Components seem to be a feature that's here to stay, I'd like to know why creating your own custom HTML elements is frowned upon.
Note that I'm not asking whether to use Web Components, which are still a moving target and, even with great polyfills like Polymer, might not be ready for production yet. I'm asking about creating your own custom HTML tags and styling them, without attaching any JS APIs to them.
Short answer: I haven't heard any very compelling reasons to avoid them.
However, here are some recurring arguments I've heard made:
Doesn't work in old IE (just document.createElement("my-tag"); should fix that).
Global namespace clashes (same applies to class names, and custom elements in general).
CSS selector performance (doh, this is just about the last thing you should worry about).
Separation of functionality, meaning and presentation. This is actually the only argument I've heard that IMHO has any valid basis to it. You're of course better off with semantic HTML (search engines and all that), but if you were going to use a div for it otherwise, I don't see why you couldn't use a custom tag instead.
One of the arguments against custom tags is their implied incompatibility with screen readers. This issue can be resolved with WAI-ARIA attributes.
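For example (the tag and label here are made up for illustration), a custom element can be given an explicit role and accessible name so assistive technology can announce and operate it:

<my-toggle role="switch" aria-checked="false" aria-label="Enable notifications" tabindex="0"></my-toggle>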
There is an issue in IE11 that breaks table layout if a custom element without a display property is inserted inside a table cell. Check the plunker code. Therefore, it's safest to declare a display for all new elements explicitly, for example like so:
new-element {
display: block;
}
According to many recent HTML specs, when we use custom attributes (meaning any attributes not defined in the spec), we should prefix them with data-. However, I see no reason to have to do this (unless you require perfectly valid HTML, obviously). Pretty much all current browsers correctly ignore custom attributes, meaning no conflicts except with identically named attributes from other people's code, and we can avoid even that with custom prefixes or something similar (as suggested on the AngularJS directive page). What, if any, other benefits are there? This question has been asked before, at least twice, but both are pretty old.
I forget where I read it, but some guide said custom HTML tags need dashes, and single-word tags aren't valid. First of all, why? Second, should we do this, and why (besides being valid)? Would there be any problem with underscores or camelCase, etc.? Also, conflicts with existing elements shouldn't be a problem, if, like with data attributes, you prefix or suffix them, etc. See the Angular directive page again.
I'm sure all these questions have been asked before, but I'm combining them into one. Is that a good idea (quick, someone ask on Meta)?
The data-* attributes have two advantages:
It is a convention meaning other programmers will understand quickly that it is a custom attribute.
You get a DOM JavaScript API for free: HTMLElement.dataset. If you use jQuery, it leverages this to populate the keys and values you find with .data().
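A quick illustration (the element id and data values here are made up):

<div id="user" data-user-id="42" data-plan="pro"></div>

var el = document.getElementById('user');
el.dataset.userId;        // "42" -- data-user-id is camelCased by the dataset API
el.dataset.plan;          // "pro"
$('#user').data('plan');  // "pro" -- jQuery reads the same attributes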
There are two basic reasons for the - in custom element names:
It is a quick way for the HTML parser to know it is a custom element instead of a standard element.
You don't run into the issue of a new standard element being added with the same name, which would cause a conflict if you register a custom JavaScript prototype for the DOM element.
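As a quick illustration with the current Custom Elements API (the older document.registerElement enforced the same naming rule), the registry rejects names without a hyphen outright:

class MyWidget extends HTMLElement {}
customElements.define('my-widget', MyWidget);  // OK: the name contains a hyphen
// customElements.define('widget', MyWidget); // throws a "SyntaxError" DOMException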
Should you use your own custom element name? Right now it is so new that you shouldn't expect it to be fully supported. Let's say it does work. You have to balance the extra complexity against the benefit. If you can get away with a class name, then use a class name. But if you need a whole new element with a custom JavaScript DOM prototype, then you may have a valid use for it.